CN110169779A - Driver visual characteristic analysis method based on an eye-movement vision model - Google Patents


Info

Publication number
CN110169779A
CN110169779A (application CN201910231249.6A)
Authority
CN
China
Prior art keywords
model
vor
okr
visual
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910231249.6A
Other languages
Chinese (zh)
Inventor
沈强儒
汤天培
张珂峰
吴坤
曹慧
成军
邱礼平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong University
Original Assignee
Nantong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nantong University filed Critical Nantong University
Priority to CN201910231249.6A priority Critical patent/CN110169779A/en
Publication of CN110169779A publication Critical patent/CN110169779A/en
Pending legal-status Critical Current


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for determining or recording eye movement
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/18: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state, for vehicle drivers or machine operators
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7246: Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Mathematical Physics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Fuzzy Systems (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Eye Examination Apparatus (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a driver visual characteristic analysis method based on an eye-movement vision model, comprising the following steps. Step 1: develop a combined VOR+OKR model from the VOR and OKR models. Step 2: carry out three experiments: driving without visual stimulus (VS), driving with VS, and driving with VS plus an n-back task. Step 3: perform a 2×2 factorial analysis on the collected data; eye and head movements are captured with Smart Eye Pro and Fastrak equipment, and the data are scanned and screened for blink points; a two-way analysis of variance is then carried out. The model of the invention, combining VOR and OKR, can simulate human eye movement and can reflect the influence of mental load on the driver while driving. In addition, the model shows higher precision, reduces the influence of optical flow, and adapts well to continually changing gaze in the presence of involuntary eye movements.

Description

Driver visual characteristic analysis method based on an eye-movement vision model
Technical field
The invention belongs to the field of traffic safety, and in particular relates to a driver visual characteristic analysis method based on an eye-movement vision model.
Background technique
The driver is an important safety factor in today's road traffic. According to data from the World Health Organization, the global road traffic death toll in 2015 was 1.25 million. Analyses of global driver risk management show that human error accounts for 90% of all road accidents, and more than 90% of those accidents are caused by drivers "looking but failing to see". A main cause of this situation is that drivers are distracted by mobile phones, text messages, billboards, electronic devices, passengers, and so on. According to the US National Highway Traffic Safety Administration, driver distraction is of three types: visual, manual, and cognitive. The present invention is concerned with distraction caused by mental workload.
In previous studies, several methods have been used to assess driver distraction caused by mental workload. In 1992, Backs and Walrath attempted to estimate mental workload from the pupillary response. Although they did not develop a successful method for capturing driver distraction, they confirmed the importance of eye movement in quantifying cognitive workload. In 2016, Faure, Lobjois and Benguigui found a relationship between mental load and blink behavior.
However, blink behavior cannot clearly reflect mental load while driving under dual-task conditions. One method with great potential for online evaluation of driver distraction is to use the difference between simulated and observed eye movements. In their research, Shibata et al. attempted to combine a vestibulo-ocular reflex model with an optokinetic model. Their model was effective in a narrow-field-of-view experiment in their driving simulator, which displayed only the road, so the influence of optical flow and gaze changes was very small. Later, Usui et al. found a relationship between driver distraction and eye movement. However, if the model is to be applied to practical driving situations involving involuntary eye movements, it needs to be improved.
Summary of the invention
Object of the invention: in order to overcome the deficiencies of the prior art, the present invention provides a driver visual characteristic analysis method based on an eye-movement vision model.
Technical solution: a driver visual characteristic analysis method based on an eye-movement vision model, comprising the following three main steps:
Step 1: develop a VOR+OKR model by combining the VOR and OKR models;
Step 2: carry out three experiments: driving without visual stimulus (VS), driving with VS, and driving with VS plus an n-back task, in which: visual stimulus: simulated trees are placed beside the test track of the driving simulator to help induce abundant optical flow; driving without VS: subjects are required to drive on the designed track without any simulated objects; driving with VS: subjects are required to drive on the same road with simulated trees; driving with VS and the n-back task: subjects are required to drive along the same route with simulated trees on both sides of the road, while completing each task response within two seconds by pressing the appropriate button on the steering wheel;
Step 3: perform a 2×2 factorial analysis on the data collected in the experiments; eye and head movements are captured with Smart Eye Pro and Fastrak equipment, and the data are scanned and screened for blink points. The 2×2 factorial design is: VOR model vs. VOR+OKR model, without visual stimulus vs. with visual stimulus. The VOR model is used to simulate eye movement in two cases: driving without VS, and driving with VS but without mental workload (MW). The VOR+OKR model is used to simulate eye movement in three cases: driving without VS, driving with VS but without MW, and driving with VS and MW. A two-way analysis of variance is then carried out.
Preferably, in the VOR model of Step 1, a genetic algorithm is used to identify each subject's parameter set (ka, kf, kfw, kw, ki, kp), which is applied to the eye-movement simulation in both the VOR and VOR+OKR models. For the OKR model, the OKR parameter of all subjects (kwv = 1) remains unchanged.
Preferably, in the VOR+OKR model of Step 1: on the one hand, in order to study the interaction of the semicircular canals and the otoliths, the VOR model proposed by Merfeld and Zupan is used; on the other hand, a negative feedback loop that is essential for image stabilization on the retina is created, in which VOR serves only as a useful supplement, compensating for the limited bandwidth of OKR during high-frequency/high-speed head rotation;
The developed model contains seven parameters: four parameters of the VOR model (ka, kf, kfw, kw), one parameter of the OKR model (kwv), and two parameters of the final common path (ki, kp);
First, the angular velocity x and linear acceleration a of head movement are taken as the input information of the VOR model. The four free parameters weight the feedback errors between the actual sensory measurements and the sensory measurements predicted by the internal model; these feedback errors are converted into estimates of motion and orientation. Two parameters, kw and ka, are linear feedback parameters: kw, corresponding to the VOR slow-phase velocity, weights the difference between the predicted and actual semicircular canal (SCC) signals; ka, used to derive the acceleration estimate, weights the difference between the actual and expected otolith signals. In contrast, kf and kfw act as feedback on nonlinear error terms: kf weights the difference between the actual and expected otolith directions and is used to estimate the direction of gravity, while kfw represents the rotation of the gravity cue and is used to adjust the angular velocity estimate;
Second, for the OKR model, it is assumed that the eye-movement measurement passed through a low-pass filter corresponds to the visual input of the OKR model; the filter is also used to compensate for voluntary eye movement. The visual sensor (VIS) then processes the visual input (xv: angular velocity) to produce an estimate of the visual perception parameter. This estimate is compared with the expected visual perception parameter (<vis>) evaluated using the internal model of the visual sensor. The difference between the visual sensor's estimated visual perception parameter and the internal model's expectation is then weighted by the residual weighting parameter (kv) and added to the rate of change of the estimated state;
The mathematical relationships by which VOR and OKR interact in the VOR+OKR model are as follows:
Sensor dynamics: following the observer theory of multisensory interaction modeling, each visual sensor is represented as a 3×3 identity matrix; each sensor converts the visual input (xv) into a visual perception estimate according to: av = VISv * xv
Internal sensor dynamics: <av> = <VISv> * <xv>
Error calculation: ev = kv * (av - <av>)
Fourth, after the eye movement is calculated, the final common path proposed by Robinson is applied. The final common path contains two parameters, which are related to the different types of muscle in the eye muscles.
Preferably, in the experiments of Step 2, subjects are required to sit in a six-degree-of-freedom driving simulator and drive around a simulated route. The simulator is controlled by Carsim, which can simulate the dynamic behavior of the vehicle. In the experiments, Carsim is controlled through Matlab Simulink, moving the seat in the vertical and horizontal planes at a fixed frequency;
Several licensed drivers participated in the experiment. Each participant drove the track three times: without VS, with VS, and with VS and MW. Each participant was required to follow four steps:
(1) training laps;
(2) driving without visual stimulus;
(3) driving with visual stimulus;
(4) driving with visual stimulus and mental workload;
Between each step, participants rested for about 3 minutes to release psychological and physical pressure;
Eye movements were collected on the simulator using a Smart Eye Pro system with four cameras; to collect head movement information, a Fastrak electromagnetic tracker was used;
The experiment used a designed track containing straight sections, right and left turns, and a narrow zone. In the VS condition, trees were placed around the road, 18 m apart;
In order to generate head and eye movements, the seat moves while driving under the control of Carsim and Matlab Simulink. The seat vibrates in two directions, vertical and horizontal: under the control of Carsim, the pitch motion is a random perturbation, while the horizontal motion arises naturally from the forces of driving.
Preferably, in the n-back task: in order to impose MW during driving, an n-back digit memory task is used. A digit is presented verbally every two seconds; when the presented digit is the same as the previous one, the subject is required to press the "Yes" button, and when it is different, the "No" button.
Advantageous effects: the present invention combines the vestibulo-ocular reflex (VOR) model and the optokinetic reflex (OKR) model. On the basis of a 2×2 factorial analysis (VOR-only model vs. VOR+OKR model, with vs. without visual stimulus), the VOR+OKR model shows better performance than the VOR model, with a smaller mean square error. In addition, an experiment was carried out in which an n-back task was used to simulate mental load while driving; under mental load, the mean square error between the simulated and observed eye movements becomes larger. The results show that the VOR+OKR model can be used for the evaluation of driver distraction.
The model of the invention, combining VOR and OKR, can simulate human eye movement and can reflect the influence of mental load on the driver while driving. In addition, the model shows higher precision, reduces the influence of optical flow, and adapts well to continually changing gaze in the presence of involuntary eye movements.
The model of the invention also has considerable potential to be developed into an online model for the automatic detection of driver distraction. By combining this method with other methods, a system for monitoring driving behavior could quickly be developed, which could reduce the number of accidents by warning distracted drivers.
Detailed description of the invention
Fig. 1 is the data analysis flow chart for the experimental data of the invention;
Fig. 2 is the two-way analysis of variance chart of the invention;
Fig. 3 is the VOR+OKR model schematic of the invention;
Fig. 4 is track schematic diagram one of the invention;
Fig. 5 is track schematic diagram two of the invention;
Fig. 6 is the input with which the Carsim model of the invention generates the vibration;
Fig. 7 is the n-back digit memory task of the invention;
Fig. 8 is the match between VOR and VOR+OKR of the invention in response time and amplitude;
Fig. 9 is the box plot of the invention comparing mean square error results;
Fig. 10 is the eye movement of the invention simulated by the VOR model and observed;
Fig. 11 is the head movement of the invention used as input for the VOR and OKR model simulation;
Fig. 12 is the box plot of the invention comparing mean square error results between simulation and measurement;
Fig. 13 shows the observed and predicted vertical motion of the invention with and without MW.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below, so that those skilled in the art can better understand the advantages and features of the invention and so that the scope of protection of the invention is clearly defined. The embodiments described are only some, not all, of the embodiments of the invention; all other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative work shall fall within the scope of protection of the present invention.
Embodiment
1. Introduction
In the present invention, by simulating eye movements, driver distraction can be estimated even under changing gaze and visual stimulus.
Eye movement can be estimated from head movement: when a person focuses on a target and turns the head to the left, the eyes move in the opposite direction to stabilize the optical image according to the input of the vestibular organ. The vestibular system, the sensing mechanism of the inner ear, is the main source of the sense of balance and spatial orientation. The system consists of two parts: the otoliths and the semicircular canals.
The VOR model can be used to simulate eye movements caused by head movement. VOR models have been proposed in multiple studies and have many applications. In the present invention, we use the VOR model proposed by Merfeld and Zupan, which reflects the interaction between the otoliths and the semicircular canals. In this model, head movement, represented by linear acceleration and angular velocity, is the input, and eye movement is the output.
Since the Merfeld and Zupan model does not include the first-order lag characteristic of the eye muscles, we combine it with Robinson's model as the final common path section. Overall, our model includes four parameters to compensate for individual differences in VOR characteristics and two parameters to compensate for individual differences in eye muscle characteristics.
The model has been used by other researchers to study various aspects of driving behavior. For example, Obinata and colleagues used the VOR model to determine the relationship between measured eye movement and subjective evaluations of simulated braking motion; in that study, the observed eye movements were compared with VOR-modeled eye movements to measure passenger comfort. Although the researchers found a connection, the number of subjects was insufficient to draw firm conclusions, and the parameter identification carried out with a hybrid genetic algorithm left a gap between simulation and measurement. Obinata's research group also used the VOR model to evaluate mental load and driver distraction in terms of memory decision load, and proposed a new method of quantifying mental load through VOR. However, they did not account for changes in gaze direction.
The optokinetic response is a mechanism for stabilizing the eyes in the visual scene, and several researchers have modeled it. In mathematical models, the interaction between VOR and OKR is a feedback loop based on head position, which uses visual information as the input of the OKR model. By changing the optokinetic input with magnifying lenses, the effect of active head rotation can be enhanced. Schweigart and Mergner further studied the relationship between spatial visual patterns and subjects' head movements using active and passive head movement. Based on this information, the present invention develops a negative feedback loop to stabilize the image on the retina, in which VOR serves only as a useful supplement to compensate for the limited bandwidth of OKR in high-frequency/high-speed head rotation.
In modeling orientation and perception, Newman provided static and dynamic visual perception information through four independent visual sensors (visual velocity, position, angular velocity and gravity), handling the interaction of vision and the vestibular system in another way. In his model, the visual perception estimate (avv) is generated by the visual sensor (VISv), whose input is the visual input (xv). This estimate is compared with the expected visual perception estimate (<avv>), which is calculated from the internal model of the visual sensor (<VISv>). The residual weighting parameter (kv) weights the sensor conflict (ev).
Although VOR-OKR models have been widely used to simulate eye movement in daily life, no study has successfully combined VOR and OKR to simulate eye movement while driving.
The present analysis supports the hypothesis that driver distraction can be captured by simulating eye movement. However, so far no researcher has successfully simulated involuntary eye movement under changing gaze and environmental stimulation, which is necessary for drivers under natural conditions.
For the VOR model alone, error accumulates from the physical motion of driving, and eye movement cannot be simulated together with the surrounding environment. On the other hand, OKR alone cannot make use of the vehicle's vibration, and that ever-present vibration makes driver distraction impossible to capture.
In order to improve the eye-movement simulation, the effects of VOR and OKR must be combined to ensure the stability of the eye in space and in the visual scene.
The present invention aims to evaluate driver distraction caused by mental workload and its relationship with the surrounding environment, and sets forth a new method of combining the VOR and OKR models to develop an eye-movement simulation that can be applied to practical driving situations.
1.1 Overview
In order to develop an eye-movement-based driver distraction assessment model, we divided the work into three main steps. First, we combined the VOR and OKR models to develop one model; the details of model development are given in Section 1.2. Second, we carried out the three experiments described in Section 3: driving without visual stimulus (VS), driving with VS, and driving with VS and an n-back task, as follows. The details of the n-back task are described in Section 2.3.
Visual stimulus: simulated trees are placed beside the test track of the driving simulator to help induce abundant optical flow.
Driving without VS: subjects were required to drive on the designed track without any simulated objects.
Driving with VS: subjects were required to drive on the same road with simulated trees.
Driving with VS and the n-back task: subjects were required to drive along the same route with simulated trees on both sides of the road, while completing each task response within two seconds by pressing the appropriate button on the steering wheel.
Finally, the data collected in the experiments were analyzed as follows (Fig. 1). Eye and head movements were captured with Smart Eye Pro and Fastrak equipment; the data were scanned and screened for blink points.
A 2×2 factorial analysis was performed (VOR model vs. VOR+OKR model, without vs. with visual stimulus). The VOR model was used to simulate eye movement in two cases: driving without VS, and driving with VS but without mental workload (MW). The VOR+OKR model was used to simulate eye movement in three cases: driving without VS, driving with VS but without MW, and driving with VS and MW. A two-way analysis of variance was then carried out (Fig. 2).
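The 2×2 factorial comparison described above can be sketched as a balanced two-way analysis of variance. The following is a minimal illustration only, not the patent's actual analysis pipeline: the ANOVA is computed by hand with NumPy, and the cell values (standing in for per-subject mean square errors) are invented for the example.

```python
import numpy as np

def two_way_anova(cells):
    """Balanced two-way ANOVA with replication.

    cells: 2x2 nested list; cells[i][j] holds the replicates for
    factor A level i (model) and factor B level j (visual stimulus).
    Returns the F statistics for A, B and the A x B interaction.
    """
    data = np.array(cells, dtype=float)          # shape (a, b, n)
    a, b, n = data.shape
    grand = data.mean()
    mean_a = data.mean(axis=(1, 2))              # marginal means of factor A
    mean_b = data.mean(axis=(0, 2))              # marginal means of factor B
    mean_ab = data.mean(axis=2)                  # cell means

    ss_a = b * n * np.sum((mean_a - grand) ** 2)
    ss_b = a * n * np.sum((mean_b - grand) ** 2)
    ss_ab = n * np.sum((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
    ss_err = np.sum((data - mean_ab[:, :, None]) ** 2)

    ms_err = ss_err / (a * b * (n - 1))
    return (ss_a / (a - 1) / ms_err,
            ss_b / (b - 1) / ms_err,
            ss_ab / ((a - 1) * (b - 1)) / ms_err)

# Invented MSE values: rows = VOR-only vs VOR+OKR, columns = without/with VS,
# three "subjects" per cell.
f_model, f_vs, f_inter = two_way_anova(
    [[[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]],
     [[5.0, 6.0, 7.0], [5.0, 6.0, 7.0]]])
print(f_model, f_vs, f_inter)   # 48.0 0.0 0.0
```

With these synthetic data the model factor carries all the variance, so its F ratio is large while the stimulus and interaction F ratios are zero; real per-subject MSE data would be substituted for the invented cells.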
1.1.1 Parameter identification
For the VOR model, a previously proposed genetic algorithm is used to identify each subject's parameter set (ka, kf, kfw, kw, ki, kp), which is applied to the eye-movement simulation in both the VOR and VOR+OKR models.
For the OKR model in this research, as in Schweigart et al. (1997) and Newman (2009), the OKR parameter of all subjects (kwv = 1) remains unchanged.
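The genetic-algorithm identification could be sketched as follows. The patent does not specify the algorithm's details, so this is a toy illustration under loud assumptions: a single gain stands in for the full parameter set (ka, kf, kfw, kw, ki, kp), the "VOR simulation" is just a sign-inverted gain on head velocity, and all function names and data are hypothetical.

```python
import random

def fitness(params, head_motion, measured_eye):
    # Toy stand-in for the VOR simulation: eye velocity modeled as a
    # sign-inverted, scaled copy of head velocity (gain = params[0]).
    mse = 0.0
    for h, e in zip(head_motion, measured_eye):
        sim = -params[0] * h
        mse += (sim - e) ** 2
    return mse / len(head_motion)

def genetic_fit(head_motion, measured_eye, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    # Each individual is a one-element parameter list (the toy gain).
    pop = [[rng.uniform(0.0, 2.0)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, head_motion, measured_eye))
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            # averaging crossover plus gaussian mutation
            children.append([(a[0] + b[0]) / 2 + rng.gauss(0, 0.05)])
        pop = parents + children
    return min(pop, key=lambda p: fitness(p, head_motion, measured_eye))

# Synthetic data: "measured" eye velocity generated with a true gain of 0.9.
head = [0.1 * i for i in range(50)]
eye = [-0.9 * h for h in head]
best = genetic_fit(head, eye)
```

In the patent's setting the fitness would instead be the mean square error between the full model simulation and the measured eye movement of each subject.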
1.1.2 Testing the model
The results of the first two experiments were subjected to a 2×2 factorial analysis (without VS vs. with VS; VOR model vs. VOR+OKR model). The main purpose was to verify that the new model produces good eye-movement results both in the presence and in the absence of VS.
1.1.3 The influence of MW on eye movement
In order to verify the application of the new model to drivers under MW, the results of the last two conditions (driving with VS, and driving with VS and MW) were compared.
1.1.4 Simulating eye movement under driving conditions
As mentioned in the introduction, some studies have explored the influence of mental workload on involuntary eye movement. However, in that research the VOR model alone simulated involuntary eye movement only under unchanging gaze, which cannot be applied in an actual vehicle where eye gaze and visual information change rapidly.
While driving, vibration occurs mainly in the vertical direction, caused by the road surface, and is transmitted to the driver's head. Involuntary VOR eye movements therefore occur continuously. In contrast, smooth pursuit rarely appears in the vertical direction when the driver is focused on the road, while saccades and smooth pursuit are common in the horizontal direction as the driver checks the traffic environment. Based on this evidence, in order to simulate eye movement while driving, the VOR and OKR models must be combined. Furthermore, because of the driving conditions, this research focuses only on simulating vertical eye movement to assess the driver's cognitive distraction.
1.2 Development of the eye simulation model
In order to study the interaction of the semicircular canals and the otoliths, we use the VOR model proposed by Merfeld and Zupan. On the other hand, similar to the method described by Newman and colleagues, a negative feedback loop that is essential for image stabilization on the retina is created, in which VOR serves only as a useful supplement, compensating for the limited bandwidth of OKR during high-frequency/high-speed head rotation.
The developed model contains seven parameters: four parameters of the VOR model (ka, kf, kfw, kw), one parameter of the OKR model (kwv), and two parameters of the final common path (ki, kp).
First, the angular velocity (x) and linear acceleration (a) of head movement are taken as the input information of the VOR model. The four free parameters weight the feedback errors between the actual sensory measurements and the sensory measurements predicted by the internal model; the purpose is to convert these feedback errors into estimates of motion and orientation. Two parameters, kw and ka, are linear feedback parameters: kw, corresponding to the VOR slow-phase velocity, weights the difference between the predicted and actual SCC signals; ka, used to derive the acceleration estimate, weights the difference between the actual and expected otolith signals. In contrast, kf and kfw act as feedback on nonlinear error terms: kf weights the difference between the actual and expected otolith directions and is used to estimate the direction of gravity, while kfw represents the rotation of the gravity cue and is used to adjust the angular velocity estimate.
Second, for the OKR model, since the target can change over time while driving, it is difficult to obtain the needed information from vision research. We therefore assume that the eye-movement measurement passed through a low-pass filter corresponds to the visual input of the OKR model; the filter is also used to compensate for voluntary eye movement. The visual sensor (VIS) then processes the visual input (xv: angular velocity) to produce an estimate of the visual perception parameter. This estimate is compared with the expected visual perception parameter (<vis>) evaluated using the internal model of the visual sensor. The difference between the visual sensor's estimated visual perception parameter and the internal model's expectation is then weighted by the residual weighting parameter (kv) and added to the rate of change of the estimated state.
As shown in the VOR+OKR model of Fig. 3, the mathematical relationships by which VOR and OKR interact are as follows:
Sensor dynamics: following the observer theory of multisensory interaction modeling, each visual sensor is represented as a 3×3 identity matrix; each sensor converts the visual input (xv) into a visual perception estimate according to: av = VISv * xv
Internal sensor dynamics: <av> = <VISv> * <xv>
Error calculation: ev = kv * (av - <av>)
Third, after the eye movement is calculated, the final common path proposed by Robinson is applied. The final common path contains two parameters, which are related to the different types of muscle in the eye muscles.
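The weighted-error feedback structure of Section 1.2 can be illustrated with a strongly simplified, one-dimensional, discrete-time sketch. This is not the Merfeld and Zupan model: it keeps only a single angular-velocity state, a linear vestibular error (playing the role of kw), a low-pass-filtered visual error (playing the role of kv), and a sign inversion standing in for the final common path; the nonlinear kf/kfw gravity terms are omitted and every gain and time constant is invented.

```python
def simulate_observer(scc_signal, visual_signal, kw=0.5, kv=0.3, tau=0.2, dt=0.01):
    """One-state sketch: the estimated head angular velocity is driven by
    weighted vestibular and low-pass-filtered visual error terms, and the
    simulated eye velocity counter-rotates against the estimate."""
    omega_hat = 0.0      # internal estimate of head angular velocity
    vis_lp = 0.0         # low-pass-filtered visual input (OKR channel)
    alpha = dt / (tau + dt)
    history = []
    for scc, vis in zip(scc_signal, visual_signal):
        vis_lp += alpha * (vis - vis_lp)          # first-order low-pass filter
        e_vest = scc - omega_hat                  # vestibular sensory conflict
        e_vis = vis_lp - omega_hat                # visual sensory conflict
        omega_hat += kw * e_vest + kv * e_vis     # weighted error feedback
        history.append(-omega_hat)                # eye counter-rotates (VOR)
    return history

# Constant head rotation of 1 rad/s seen by both channels: the estimate
# settles near 1, so the simulated eye velocity settles near -1.
eye = simulate_observer([1.0] * 200, [1.0] * 200)
```

The point of the sketch is only the structure: two sensory-conflict terms, each weighted by its own gain, jointly correct a single internal state estimate.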
2. Experimental setup
In this experiment, subjects were required to sit in a six-degree-of-freedom driving simulator and drive around a simulated route. The simulator is controlled by Carsim, which can simulate the dynamic behavior of the vehicle. In these experiments, Carsim was controlled through Matlab Simulink, moving the seat in the vertical and horizontal planes at a fixed frequency.
28 licensed drivers participated in the experiment (average age: 39; most subjects commuted daily, and all subjects had at least 3 years of driving experience).
Each participant drove the track three times: without VS, with VS, and with VS and MW. Each participant was required to follow four steps:
(1) training laps;
(2) driving without visual stimulus;
(3) driving with visual stimulus;
(4) driving with visual stimulus and mental workload.
Between each step, participants rested for about 3 minutes to release psychological and physical pressure.
Eye movements were collected on the simulator using a Smart Eye Pro system with four cameras. The device is non-invasive, simple to install, and provides data using video cameras. To collect head movement information, we used a Fastrak electromagnetic tracker.
2.1 Design
We designed a track containing straight sections, right and left turns, and a narrow zone. In the VS condition, trees were placed around the road, about 18 m apart (Figs. 4 and 5).
2.2 Seat vibration
To induce head and eye movement, the seat moved while driving under the control of CarSim and Matlab Simulink. The seat vibrated in two directions, vertical and horizontal: under CarSim control, the pitch motion was a random perturbation, while the horizontal motion arose naturally from the forces of driving. The vibration input generated by the CarSim model is shown in Fig. 6.
2.3 n-back task
To impose MW during driving, an n-back digit memory task was used (Fig. 7). In our n-back task, a digit was presented verbally to the participant every two seconds. When the digit matched the previous one, the subject was asked to press the "Yes" button; when it differed, the "No" button. Both buttons were mounted on the steering wheel and were therefore easy to press.
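The response rule of the task can be sketched in a few lines. This assumes the 1-back variant implied by "matches the previous number"; the function name and the None-for-the-first-digit convention are illustrative choices, not part of the patent.

```python
def one_back_answers(digits):
    """Expected button presses for the 1-back digit task: 'yes' when the
    digit presented every two seconds matches the previous one, 'no'
    otherwise.  The first digit has no predecessor, so no response (None).
    """
    answers = [None]
    for prev, cur in zip(digits, digits[1:]):
        answers.append('yes' if cur == prev else 'no')
    return answers
```

For example, the digit stream 3, 3, 5, 5, 1 calls for yes, no, yes, no after the first digit.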
3. Results
3.1 VOR model and VOR+OKR model
Without visual stimulus, both the VOR and the VOR+OKR model performed well, matching the measured eye movement in both response time and amplitude (Fig. 8). In other words, the drivers were absorbed in driving and showed a good eye-movement response without VS.
In addition, boxplots were drawn to compare the mean-square-error results (Fig. 9). Each box summarizes the mean-square-error data of all subjects for one condition and one model. For example, the first box (w/o VS) represents the mean square error between the eye movement simulated by the VOR model and the eye movement measured from all subjects without visual stimulus. The results show that the new model works well in all conditions, with a smaller mean square error than the VOR-only model.
The mean-square-error distributions in Fig. 9 also show the following.
The boxes of the VOR+OKR model, with or without VS, are lower than those of the VOR-only model.
This indicates that the new model combining VOR and OKR performs better than the model based on VOR alone. The boxes with VS are higher than those without VS, especially for the VOR model: its median mean square error rises from 0.29 deg to 0.32 deg, whereas with the OKR model added it rises only slightly, from 0.21 deg to 0.23 deg. These results indicate that the developed model improves accuracy (a lower median mean square error under the same condition; p < 0.05) and reduces the influence of VS on the eye-movement simulation (the new model reduces the increase in the median caused by VS).
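The per-subject score behind each box is simply the mean square error between the simulated and measured eye-angle traces, sampled at the same instants. A minimal sketch (the text does not specify whether the reported medians are MSE or RMSE in degrees; this sketch computes plain MSE):

```python
import numpy as np

def mean_square_error(simulated, measured):
    """Mean square error between a model-simulated eye trace and the
    measured one.  Both inputs are equal-length sequences of eye angles
    sampled at the same time points."""
    simulated = np.asarray(simulated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.mean((simulated - measured) ** 2))
```

One such value per subject, model, and condition yields the distributions summarized by the boxplots.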
To examine the influence of VS on eye movement more closely, we plotted the eye movement simulated by the VOR model together with the observed eye movement on one chart (Fig. 10). It shows that, under VS, the eye movement simulated by the VOR model mismatches the observed eye movement in both frequency and time response. The VOR+OKR model, on the other hand, shows a good match.
In addition, we ran a 2 × 2 analysis of variance (VOR model vs. VOR+OKR model; without VS vs. with VS):
Dependent variable: the mean square error between model prediction and measurement for each subject.
Independent variables: model and driving condition (with or without visual stimulus).
The null hypotheses of the analysis of variance are as follows:
1. The mean-square-error results of the two models (VOR and VOR+OKR) are equal.
2. The mean-square-error results of the two conditions (with and without visual stimulus) are equal.
3. There is no interaction between model and visual stimulus.
The results are shown in Table 1:
Consistent with the earlier results, the analysis of variance again shows a statistically significant model effect (F = 14.08, p < 0.005), confirming that the VOR+OKR model outperforms the VOR model both with and without VS. The VS effect, by contrast, is not statistically significant (F = 2.46, p > 0.005), possibly because the proposed model reduces the influence of optic flow on eye movement while driving, or because it improves accuracy when head movement is tracked.
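For a balanced design like this one, the 2 × 2 ANOVA reduces to a few sums of squares. The sketch below is self-contained and illustrative only: the cell layout, factor names, and example numbers are assumptions; the patent reports only the F values, not the raw data.

```python
import numpy as np

def two_way_anova_f(cells):
    """F statistics for a balanced two-factor ANOVA with 2 levels each.

    cells maps (model, vs) -> equal-length array of per-subject mean
    square errors.  With one degree of freedom per effect, each F is
    simply SS_effect / MS_error.  Returns (F_model, F_vs, F_interaction).
    """
    data = {k: np.asarray(v, dtype=float) for k, v in cells.items()}
    n = len(next(iter(data.values())))                  # subjects per cell
    A = sorted({k[0] for k in data})                    # factor: model
    B = sorted({k[1] for k in data})                    # factor: vs
    grand = np.mean([x for v in data.values() for x in v])
    mean_a = {a: np.mean(np.concatenate([data[(a, b)] for b in B])) for a in A}
    mean_b = {b: np.mean(np.concatenate([data[(a, b)] for a in A])) for b in B}
    ss_a = n * len(B) * sum((mean_a[a] - grand) ** 2 for a in A)
    ss_b = n * len(A) * sum((mean_b[b] - grand) ** 2 for b in B)
    ss_ab = n * sum((data[(a, b)].mean() - mean_a[a] - mean_b[b] + grand) ** 2
                    for a in A for b in B)
    ss_err = sum(((v - v.mean()) ** 2).sum() for v in data.values())
    ms_err = ss_err / (len(data) * (n - 1))             # error degrees of freedom
    return ss_a / ms_err, ss_b / ms_err, ss_ab / ms_err

# Example: a strong model effect, no VS effect, no interaction.
cells = {('VOR', 'w/o vs'): [2.0, 2.2], ('VOR', 'w/ vs'): [2.0, 2.2],
         ('VOR+OKR', 'w/o vs'): [1.0, 1.2], ('VOR+OKR', 'w/ vs'): [1.0, 1.2]}
f_model, f_vs, f_inter = two_way_anova_f(cells)
```

A statistics package would normally be used instead; the explicit sums make the structure of Table 1 visible.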
Head movement
To explain the driver's eye movement in depth, the input of the eye-movement simulation model must be considered. Fig. 11 shows the head movement used to drive the VOR and OKR model simulation.
As shown in Fig. 11, the driver's head movement consists of two basic components: movement induced by vehicle vibration (the cause of the VOR) and movement determined by the driver (voluntary). In the vertical direction, head movement is almost entirely caused by high-frequency vehicle vibration. In the horizontal direction, by contrast, the head moves mostly according to the driver's decisions, orienting toward targets to acquire visual information. Hence, when Obinata et al. applied the VOR model in both the horizontal and vertical directions, the vertical simulation performed better than the horizontal one. Moreover, even in the vertical direction, where smooth pursuit or other head-movement types remain, adding the OKR model makes the eye-movement simulation more accurate, as shown in the previous section.
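The two head-motion components described above (slow voluntary orienting vs. fast vibration-induced movement) can be separated, for illustration, with a simple low-pass filter. The moving-average filter, its window length, and the example frequencies are my assumptions; the patent does not specify a decomposition procedure.

```python
import numpy as np

def split_head_motion(signal, window=25):
    """Split a head-angle trace into a slow (voluntary) component and a
    fast (vehicle-vibration) remainder using a moving-average low-pass.
    Returns (voluntary, vibration) with voluntary + vibration == signal.
    """
    signal = np.asarray(signal, dtype=float)
    kernel = np.ones(window) / window
    voluntary = np.convolve(signal, kernel, mode='same')
    return voluntary, signal - voluntary

# Example: a 0.2 Hz voluntary sweep plus 10 Hz vehicle vibration, 100 Hz sampling.
t = np.arange(0, 5, 0.01)
slow = np.sin(2 * np.pi * 0.2 * t)
fast = 0.3 * np.sin(2 * np.pi * 10 * t)
voluntary, vibration = split_head_motion(slow + fast)
```

The low-pass output tracks the voluntary sweep while the remainder captures the vibration, mirroring the horizontal/vertical distinction made in the text.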
Table 1. ANOVA results.
3.2 Influence of mental workload on eye movement
Having established that the new VOR+OKR model outperforms the VOR-only model, we applied the developed model to simulate eye movement with and without MW. Boxplots comparing the mean square error between simulation and measurement were drawn (Fig. 12). Each box summarizes the mean-square-error data of all subjects under one condition: with (w/) or without (w/o) mental workload (MW). Judging by the difference between the observed and predicted eye movement, the results show that without MW the eye movement is more consistent, with a lower median mean square error (0.23) than with MW (0.28).
To examine eye movement while driving with MW in more detail, the measured and predicted vertical eye movements with and without MW were plotted (Fig. 13). As shown in Fig. 13, when driving with MW the driver's eye movement mismatches the prediction in both frequency and time response. In addition, the eye movement of a driver with MW shows more peaks than the same driving behavior without MW.
A t-test (without MW vs. with MW) was used to check the difference between the predicted and observed eye movements (Table 2).
Dependent variable: the mean square error between model prediction and measurement for each subject;
Independent variable: driving condition (with or without mental workload);
The null hypothesis of the t-test is as follows:
1. The mean-square-error results of the two conditions (with and without mental workload) are equal.
The analysis shows that workload has a significant effect on the difference between the simulated and observed eye movement (p < 0.05).
Table 2. t-test results.
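The statistic behind the with-MW vs. without-MW comparison is a paired t over the same subjects. A stdlib-only sketch (the data layout is an assumption; the patent reports only the significance level):

```python
import math

def paired_t(with_mw, without_mw):
    """Paired t statistic for per-subject mean square errors measured
    with and without mental workload.  A positive t means the error is
    larger with MW, as the medians in the text suggest (0.28 vs 0.23).
    """
    diffs = [a - b for a, b in zip(with_mw, without_mw)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

The p-value then follows from the t distribution with n - 1 degrees of freedom (e.g. via `scipy.stats`).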
4. Discussion
On the basis of our results, we conclude that eye movement is influenced by mental workload. However, to develop a tool for online detection of driver distraction, several aspects of the method and equipment still need to be improved.
4.1 Methodology
In the eye-tracking field, several studies have assessed the distraction caused by mental workload by examining pupil diameter, eyelid movement, blink duration, and so on. Each of these techniques has advantages and disadvantages. For example, the influence of workload on pupil diameter has been noted.
However, pupil size is highly sensitive to lighting conditions, which change markedly while driving. Other researchers have proposed estimating cognitive workload by measuring blink duration; one study showed, however, that when an operator must process a large amount of visual information, blinking is uncorrelated with cognitive load.
Obinata and colleagues proposed a new method for online evaluation of driver distraction. Their research used a VOR model to simulate eye movement on the basis of head movement, and the consistency between the predicted and observed eye movements served as an index of distraction. However, those studies did not consider natural eye movement or the influence of visual information.
In the present invention, our results reconfirm that the difference between the predicted and observed eye movement is related to driver distraction. In addition, to build a model that fully simulates eye movement, we improved the parameter identification method to accelerate the process and improve accuracy. Adding the OKR model, meanwhile, reduces the influence of visual information on the eye-movement simulation. We therefore intend to replace the offline mean-square-error method with an online method that computes the correlation between the simulated and the measured eye movement.
4.2 Equipment
In the present invention, we also noted individual differences in eye-movement dynamics, which means that the model parameters must be identified for each individual. To achieve online detection of driver distraction, the trend of each parameter should first be identified from a large sample, and the identification time can then be reduced by fixing those parameters that vary little between subjects. We also expect that, with the development of computing, the parameter identification time can be shortened from the current 10 minutes to less than 1 minute.
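The per-subject identification can be sketched as a toy evolutionary search over a single gain. This is only in the spirit of the genetic-algorithm identification mentioned in the claims, which fits six parameters (ka, kf, kfw, kw, ki, kp); the population size, mutation scale, bounds, and fitness here are all my assumptions.

```python
import random

def identify_gain(measured, simulate, bounds=(0.0, 2.0),
                  pop=20, gens=30, seed=0):
    """Toy evolutionary search for one model gain.  `simulate` maps a
    candidate gain to a predicted eye trace; fitness is the mean square
    error against `measured`.  Elitist selection plus Gaussian mutation.
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def mse(g):
        pred = simulate(g)
        return sum((p - m) ** 2 for p, m in zip(pred, measured)) / len(measured)

    population = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=mse)
        elite = population[: pop // 4]                       # selection
        children = [min(hi, max(lo, rng.choice(elite) + rng.gauss(0, 0.05)))
                    for _ in range(pop - len(elite))]        # mutation
        population = elite + children
    return min(population, key=mse)

# Toy example: recover a gain of 0.7 from a noiseless linear 'model'.
stimulus = [1.0, 2.0, 3.0, 4.0]
measured = [0.7 * x for x in stimulus]
best = identify_gain(measured, lambda g: [g * x for x in stimulus])
```

A real run would replace the lambda with the full VOR+OKR simulation and search the six-dimensional parameter space.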
In the present invention, Fastrak and Smart Eye equipment was used to capture head and eye movement; this equipment is both expensive and difficult to install. With the development of technology, however, it can now be replaced by compact devices that record both head and eye movement.
4.3 Limitations
In the present invention, a driving simulator was used under limited conditions. To confirm the stability of the model, it needs to be tested in complex visual environments, both virtual and real. Moreover, because the sample size was limited, this study did not analyze the influence of driver characteristics. Future research should pay more attention to individual differences in the model parameters and combine the eye-movement method with other methods of assessing driver and vehicle behavior to obtain a more accurate evaluation of driver distraction.
5. Conclusions
The model combining VOR and OKR can simulate human eye movement and reflect the influence of mental load on the driver while driving. In addition, the model shows higher accuracy, reduces the influence of optic flow, and adapts well to continually changing gaze even in the presence of involuntary eye movement.
The new model also has considerable potential to develop into an online model for automatic detection of driver distraction. By combining this method with others, a driving-behavior monitoring system could soon be developed that reduces the number of accidents by warning distracted drivers.

Claims (5)

1. A driver visual characteristics analysis method based on an eye-movement vision model, characterized by comprising the following three main steps:
Step 1: develop a VOR+OKR model by combining the VOR and OKR models;
Step 2: conduct three experiments: driving without visual stimulus (VS), driving with VS, and driving with VS plus an n-back task, wherein: visual stimulus: simulated trees are placed beside the test track of the driving simulator to help induce abundant optic flow; driving without VS: subjects are asked to drive on the designed track without any simulated objects; driving with VS: subjects are asked to drive on the same road with simulated trees; driving with VS and the n-back task: subjects are asked to drive along the same route with simulated trees on both roadsides while completing a digit task every two seconds by pressing the appropriate button on the steering wheel;
Step 3: perform a 2 × 2 factorial analysis of the collected data; capture eye and head movement with Smart Eye Pro and Fastrak equipment, and screen the data for scanning and blink points; the 2 × 2 factorial analysis comprises: VOR model vs. VOR+OKR model, and without visual stimulus vs. with visual stimulus; the VOR model is used to simulate eye movement in two cases: driving without VS, and driving without VS and without mental workload (MW); the VOR+OKR model is used to simulate eye movement in three cases: driving without VS, driving with VS without MW, and driving with VS with MW; a two-way analysis of variance is then performed.
2. The driver visual characteristics analysis method based on an eye-movement vision model according to claim 1, characterized in that: in the VOR model of step 1, a genetic algorithm is used to identify each subject's parameter set (ka, kf, kfw, kw, ki, kp), which is applied to the eye-movement simulation in the VOR and VOR+OKR models respectively; for the OKR model, the OKR parameter (kwv = 1) of all subjects is kept constant.
3. The driver visual characteristics analysis method based on an eye-movement vision model according to claim 1, characterized in that: in the VOR+OKR model of step 1: on the one hand, to study the interaction of the semicircular canals and otoliths, the VOR model proposed by Merfeld and Zupan is used; on the other hand, a negative feedback loop essential for image stabilization on the retina is created, for which the VOR serves as a useful complement, compensating for the limited bandwidth of the OKR during high-frequency, high-speed head rotation;
The developed model contains seven parameters: four parameters of the VOR model (ka, kf, kfw, kw), one parameter of the OKR model (kwv), and two parameters of the final common path (ki, kp);
First, the angular velocity x of the head movement and the linear acceleration a serve as the input information of the VOR model; the four free parameters weight the feedback errors between the sensory measurements and the sensory measurements predicted by the internal model, converting these feedback errors into estimates of motion and orientation; two parameters, kw and ka, are linear feedback parameters: kw weights the difference between the actual and expected SCC signals, corresponding to the VOR slow-phase velocity, and ka weights the difference between the actual and expected otolith signals to derive the acceleration estimate; in contrast, kf and kfw serve as feedback for the nonlinear error terms: kf represents the difference between the actual and expected otolith directions and is used to estimate the direction of gravity, while kfw represents the rotation of the gravity cue and adjusts the angular-velocity estimate;
Second, for the OKR model, it is assumed that the eye-movement measurement passed through a low-pass filter corresponds to the visual input of the OKR model, the filter also serving to compensate for voluntary eye movement; the visual sensor (VIS) then processes the visual input (xv: angular velocity) to generate an estimate of the visual perception parameter; this estimate is compared with the expected visual perception parameter evaluated by the internal model of the visual sensor (<vis>); the difference between the sensor's estimated visual perception parameter and that of the internal model is then weighted by the remaining weighting parameter (kv) and added to the rate of change of the estimated state;
The mathematical relationship of the VOR-OKR interaction in the VOR+OKR model is as follows:
Sensor dynamics: based on the observer theory of multisensory interaction modeling, each visual sensor is represented as a 3×3 identity matrix; each sensor converts the visual input (xv) into a visual perception estimate according to the formula:
Internal sensor dynamics:
Error calculation:
Third, after the eye movement is computed, the final common path proposed by Robinson is applied; the final common path contains two parameters, which correspond to the different types of muscle among the extraocular muscles.
4. The driver visual characteristics analysis method based on an eye-movement vision model according to claim 1, characterized in that: in the experiment of step 2, subjects are seated in a six-degree-of-freedom driving simulator and asked to drive along a simulated route; the simulator is controlled by CarSim, which reproduces vehicle dynamic behavior; in the experiment, CarSim is controlled from Matlab Simulink so that the seat moves in the vertical and horizontal planes at fixed frequencies;
Several subjects holding driving licenses take part in the experiment; each participant drives the track three times: without VS, with VS, and with VS plus MW; each participant follows four steps:
(1) training laps;
(2) driving without visual stimulus;
(3) driving with visual stimulus;
(4) driving with visual stimulus and mental workload;
Between steps, participants rest for about 3 minutes to release mental and physical stress;
Eye movements are collected on the simulator with a four-camera Smart Eye Pro system; to collect head-movement information, a Fastrak electromagnetic tracker is used;
The experiment uses a driving track comprising a straight section, right and left turns, and a narrow zone; when VS is present, trees are placed along the road 18 m apart;
To induce head and eye movement while driving, the seat moves under the control of CarSim and Matlab Simulink; the seat vibrates in two directions, vertical and horizontal: under CarSim control the pitch motion is a random perturbation, while the horizontal motion arises naturally from the forces of driving.
5. The driver visual characteristics analysis method based on an eye-movement vision model according to claim 1, characterized in that: in the n-back task: to impose MW during driving, an n-back digit memory task is used, in which a digit is presented verbally to the participant every two seconds; when the digit matches the previous one, the subject is asked to press the "Yes" button, and when it differs, the "No" button.
CN201910231249.6A 2019-03-26 2019-03-26 A kind of Visual Characteristics Analysis of Drivers method based on eye movement vision mode Pending CN110169779A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910231249.6A CN110169779A (en) 2019-03-26 2019-03-26 A kind of Visual Characteristics Analysis of Drivers method based on eye movement vision mode

Publications (1)

Publication Number Publication Date
CN110169779A true CN110169779A (en) 2019-08-27

Family

ID=67688992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910231249.6A Pending CN110169779A (en) 2019-03-26 2019-03-26 A kind of Visual Characteristics Analysis of Drivers method based on eye movement vision mode

Country Status (1)

Country Link
CN (1) CN110169779A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926404A (en) * 2021-01-29 2021-06-08 吉林大学 Active interactive human-vehicle passing system and method
WO2021124140A1 (en) * 2019-12-17 2021-06-24 Indian Institute Of Science System and method for monitoring cognitive load of a driver of a vehicle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160007849A1 (en) * 2014-07-08 2016-01-14 Krueger Wesley W O Systems and methods for the measurement of vestibulo-ocular reflex to improve human performance in an occupational environment
US20160262608A1 (en) * 2014-07-08 2016-09-15 Krueger Wesley W O Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANH SON LE ET AL: "Towards online detection of driver distraction: Eye-movement simulation based on a combination of vestibulo–ocular reflex and optokinetic reflex models", 《TRANSPORTATION RESEARCH PART F》 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190827