CN112806996A - Driver distraction multi-channel assessment method and system under L3-level automatic driving condition - Google Patents

Driver distraction multi-channel assessment method and system under L3-level automatic driving condition

Info

Publication number
CN112806996A
CN112806996A (application CN202110035155.9A)
Authority
CN
China
Prior art keywords
distraction
driver
visual
electroencephalogram
attention
Prior art date
Legal status
Pending
Application number
CN202110035155.9A
Other languages
Chinese (zh)
Inventor
马艳丽
朱洁玉
张亚平
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202110035155.9A
Publication of CN112806996A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/18 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state for vehicle drivers or machine operators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/168 Evaluating attention deficit, hyperactivity
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Abstract

The invention relates to a driver distraction multi-channel assessment method under an L3-level automatic driving condition, which comprises the following steps: step one, monitoring and collecting various index information data of a driver in real time; step two, respectively calculating the driver electroencephalogram signal sample entropy and the attention buffering time from the collected data; step three, inputting the electroencephalogram signal sample entropy into an AdaBoost classifier and outputting an electroencephalogram distraction judgment result, comparing the attention buffering time with each grade threshold and outputting the visual distraction influence grade, and comprehensively judging the distraction grade of the driver with a fuzzy logic method. Under the L3-level automatic driving condition, the invention uses the two indexes of electroencephalogram and visual characteristics to judge the driver's distraction state, thereby reducing traffic accidents caused by control-right switching that is delayed by driver distraction.

Description

Driver distraction multi-channel assessment method and system under L3-level automatic driving condition
Technical Field
The invention belongs to the field of traffic safety, and particularly relates to a driver distraction multi-channel assessment method and system under an L3-level automatic driving condition.
Background
In recent years, automatic driving has become a new direction for the development of automotive technology. According to the National Highway Traffic Safety Administration (NHTSA) standard, automatic driving is classified into 5 levels. The higher the automation level, the less attention the driver needs to devote to environmental monitoring and system operation, and the more the driver engages in other non-driving tasks. Under the L3-level automatic driving condition, intelligent transportation system terminals such as travel information systems, vehicle control systems and fatigue driving monitors have come into wide use in automobiles; while these terminals reduce the driving load, they also divert the driver's attention. Under the L3-level automatic driving condition, the driver does not need to monitor the automobile continuously and can engage in non-driving tasks such as making phone calls, reading maps, entering text and chatting on WeChat; these task demands occupy the driver's visual, cognitive and motor resources to different degrees and disperse the driver's attention. Driver distraction caused by the occupation of multi-channel resources directly affects the driver's take-over capability and take-over performance and poses a serious hidden danger to driving safety during the take-over process, so developing driver distraction evaluation under the L3-level automatic driving condition has become one of the important tasks of active traffic safety.
Most existing driving distraction evaluation methods use a single type of evaluation index to analyze, under non-automatic driving conditions, how related factors affect driver distraction and to determine whether the driver is distracted. Under the L3-level automatic driving condition, the driver does not need to monitor the vehicle continuously and only needs to take over when the automatic driving system fails or the environment outside the vehicle becomes complex, so the driver can participate in more non-driving-related tasks. These non-driving-related tasks disperse the driver's attention and directly affect the driver's take-over capability during the take-over process. Therefore, using the driver's multi-channel characteristics to evaluate distraction under L3-level automatic driving and to determine the driving distraction levels under different characteristics is of great significance for improving the safe take-over of automatically driven vehicles.
Disclosure of Invention
The invention aims to solve the traffic safety problem caused by driver distraction during driving take-over under automatic driving conditions, and accordingly provides a driver distraction multi-channel assessment method and system under the L3-level automatic driving condition.
The invention relates to a driver distraction multi-channel assessment method under an L3-level automatic driving condition, which comprises the following steps:
step one, monitoring and collecting various index information data of a driver in real time:
step two, respectively calculating the driver electroencephalogram signal sample entropy and the attention buffering time according to the collected data:
performing feature fusion processing on the obtained EEG signal sample to obtain EEG sample entropy;
the EEG sample entropy algorithm is as follows:
For an N-point time series of the original data sampled at equal time intervals, X_N = {x(1), x(2), x(3), ..., x(N)}:
(1) Construct the set of m-dimensional vectors X(1), X(2), ..., X(N-m+1), where
X(i) = [x(i), x(i+1), ..., x(i+m-1)];
(2) Define the distance d[X(i), X(j)] between the vectors X(i) and X(j) as the largest absolute difference between their corresponding elements, namely
d[X(i), X(j)] = max_{k=0,...,m-1} |x(i+k) - x(j+k)|;
(3) For 1 ≤ i ≤ N-m+1 and a tolerance r, count the number of j (j ≠ i) for which d[X(i), X(j)] < r, denoted N^m(i), and compute its ratio to the total number of distances:
B_i^m(r) = N^m(i) / (N - m),
where r is 0.1 to 0.25 times the standard deviation of the original data;
(4) Average over all i and denote the result φ^m(r):
φ^m(r) = (1 / (N - m + 1)) · Σ_{i=1}^{N-m+1} B_i^m(r);
(5) Increase the dimension m by 1 to m+1 and repeat the above process to obtain φ^{m+1}(r);
(6) The sample entropy of this sequence is:
SampEn(N, m, r) = -ln[φ^{m+1}(r) / φ^m(r)].
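For illustration only, the following Python sketch implements steps (1) to (6) above; the function name, the NumPy dependency and the 256 Hz example window are assumptions, since the patent does not specify an implementation.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(N, m, r) of a 1-D series, following steps (1)-(6) above."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)  # tolerance: 0.1-0.25 times the standard deviation

    def phi(dim):
        # Step (1): build the dim-dimensional vectors X(1) ... X(N-dim+1)
        vecs = np.array([x[i:i + dim] for i in range(N - dim + 1)])
        ratios = []
        for i in range(len(vecs)):
            # Step (2): Chebyshev distance from X(i) to every other vector
            d = np.max(np.abs(vecs - vecs[i]), axis=1)
            d = np.delete(d, i)                    # exclude the self-comparison
            ratios.append(np.sum(d < r) / len(d))  # step (3): ratio for this i
        return np.mean(ratios)                     # step (4): phi^dim(r)

    # Steps (5)-(6): repeat with dimension m+1 and take the negative log ratio
    return -np.log(phi(m + 1) / phi(m))

# Example: entropy of a 10 s EEG window at an assumed 256 Hz sampling rate
eeg_window = np.random.randn(2560)
print(sample_entropy(eeg_window, m=2, r_factor=0.2))
```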
Information is collected by an eye tracker to detect whether the driver's gaze has left the driving-related field of view, and the attention buffering time is calculated; the driving-related field of view is defined as the intersection between a 90-degree sector and the vehicle window.
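The patent does not detail how the attention buffering time evolves over the gaze samples; the sketch below assumes an AttenD-style buffer that fills while the gaze stays inside the driving-related field of view and drains while it is outside, capped at 2 s. All names and parameters are hypothetical.

```python
def attention_buffer(gaze_in_fov, dt=1/60, max_buffer=2.0):
    """Trace of the attention buffer (in seconds) from 60 Hz gaze samples.

    gaze_in_fov: sequence of booleans, True when the gaze point lies inside
    the driving-related field of view (the 90-degree sector / window area).
    Assumed dynamics: the buffer fills while the gaze is inside, drains while
    it is outside, and is capped at max_buffer seconds.
    """
    level = max_buffer
    trace = []
    for inside in gaze_in_fov:
        level += dt if inside else -dt
        level = min(max(level, 0.0), max_buffer)
        trace.append(level)
    return trace

# Example: 1 s looking at the road, then 3 s looking away (hypothetical samples)
samples = [True] * 60 + [False] * 180
print(attention_buffer(samples)[-1])  # the buffer has drained to 0 s
```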
Step three: the electroencephalogram signal sample entropy is input into an AdaBoost classifier, which outputs the electroencephalogram distraction judgment result; the attention buffering time is compared with the grade thresholds, and the visual distraction influence grade is output; the driver's distraction grade is then comprehensively judged with a fuzzy logic method. Based on the electroencephalogram sample entropy, the AdaBoost classifier divides the driver's distraction into three grades, namely mild distraction, moderate distraction and deep distraction, and is implemented as follows:
The 5-dimensional inputs x_i ∈ R^5 with the corresponding distraction levels y_i ∈ {1, 2, 3} are used as learning data; the weak classifiers h_t(x) (t = 1, ..., T) are weighted by their reliabilities α_t and combined into a strong classifier H(x);
(1) Initialize the weights D_1(i) = 1/N; the weight distribution at learning round t is D_t(i);
(2) Based on the distribution D_t, find the weak classifier h_t(x): X → Y that minimizes the error value;
(3) Calculate the reliability α_t from the error rate:
α_t = (1/2) · ln[(1 - ε_t) / ε_t]
where ε_t is the error value;
(4) Update the distribution:
D_{t+1}(i) = D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]) / Z_t
where Z_t is the normalization factor,
Z_t = Σ_i D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]);
(5) Use the reliabilities of all weak classifiers to make a weighted majority decision and obtain the strong classifier H(x):
H(x) = argmax_y Σ_{t=1}^{T} α_t · I[h_t(x) = y].
If the classification result is y_1, the corresponding state is deep electroencephalogram distraction of the driver;
if the classification result is y_2, the corresponding state is moderate electroencephalogram distraction of the driver;
if the classification result is y_3, the corresponding state is mild electroencephalogram distraction of the driver.
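As a hedged illustration, the sketch below uses scikit-learn's AdaBoostClassifier as a stand-in for the boosting procedure of steps (1) to (5); the 5-dimensional sample-entropy features and the training labels are random placeholders, not data from the patent.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical training data: each row is a 5-dimensional sample-entropy
# feature vector x_i (one value per brain-wave band: delta, theta, alpha,
# beta, gamma); labels are 1 = deep, 2 = moderate, 3 = mild EEG distraction.
rng = np.random.default_rng(0)
X_train = rng.random((200, 5))
y_train = rng.integers(1, 4, size=200)

# Decision stumps (the default weak learner) are combined by weighted
# majority voting, as in steps (1)-(5) above.
clf = AdaBoostClassifier(n_estimators=50)
clf.fit(X_train, y_train)

# Classify a new 5-dimensional sample-entropy vector (hypothetical values).
x_new = np.array([[0.8, 0.6, 0.7, 0.9, 0.5]])
print(clf.predict(x_new))  # prints the predicted distraction level, e.g. [2]
```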
The visual distraction influence degree is calculated from the attention buffering time, and the driver's distraction is divided into three grades, namely mild distraction, moderate distraction and deep distraction; the visual distraction influence degree is:
INC_dis = (1/n) · Σ_{i=1}^{n} Buffer_i
where: INC_dis is the degree of visual distraction influence; n is the total number of visual output samples; Buffer_i is the visual buffer value at time i/60 s;
When the attention buffering time is 0 s, the visual distraction influence degree is recorded as INC_1, and the corresponding state is deep visual distraction of the driver;
when the attention buffering time is 0 to 1 s, the visual distraction influence degree is recorded as INC_2, and the corresponding state is moderate visual distraction of the driver;
when the attention buffering time is 1 to 2 s, the visual distraction influence degree is recorded as INC_3, and the corresponding state is mild visual distraction of the driver;
when the attention buffering time is greater than 2 s, the visual distraction influence degree is recorded as INC_n, and the corresponding state is normal driving.
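A minimal sketch of the threshold comparison that maps the attention buffering time to the visual distraction influence grade; the function name and label strings are illustrative, not identifiers from the patent.

```python
def visual_distraction_level(attention_buffer_time):
    """Map the attention buffering time (seconds) to the visual distraction
    influence grade described above; the string labels are illustrative."""
    if attention_buffer_time <= 0.0:
        return "INC1"   # deep visual distraction
    elif attention_buffer_time <= 1.0:
        return "INC2"   # moderate visual distraction
    elif attention_buffer_time <= 2.0:
        return "INC3"   # mild visual distraction
    else:
        return "INCn"   # normal driving

print(visual_distraction_level(0.5))  # INC2, i.e. moderate visual distraction
```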
Fuzzy values are obtained from the electroencephalogram and visual characteristic variables, the system rule base is evaluated with an inference method, and the driver's distraction level is comprehensively judged; mild driver distraction is recorded as SD, moderate driver distraction as MD, and deep driver distraction as LD.
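The concrete rule base appears later as Table 1, which the original publication reproduces only as an image, so the entries below are hypothetical placeholders. The sketch also reduces the fuzzy inference to a crisp rule lookup purely to show how the two channel outputs could be combined into SD, MD or LD; the normal-driving visual state is omitted for brevity.

```python
# Placeholder rule base: the patent's actual rules are given in Table 1,
# which is only reproduced as an image, so these entries are hypothetical.
# Keys are (EEG distraction level, visual distraction level).
RULE_BASE = {
    ("deep", "deep"): "LD", ("deep", "moderate"): "LD", ("deep", "mild"): "MD",
    ("moderate", "deep"): "LD", ("moderate", "moderate"): "MD", ("moderate", "mild"): "MD",
    ("mild", "deep"): "MD", ("mild", "moderate"): "MD", ("mild", "mild"): "SD",
}

def fuse_distraction(eeg_level, visual_level):
    """Comprehensive distraction grade (SD / MD / LD) from the two channels."""
    return RULE_BASE[(eeg_level, visual_level)]

print(fuse_distraction("moderate", "mild"))  # MD under these placeholder rules
```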
The invention also relates to a driver distraction multi-channel assessment system under the L3-level automatic driving condition, which comprises an information acquisition device, an information processing device and a driver distraction grade judging device.
Advantageous effects
Under the L3-level automatic driving condition, the invention judges the driver's distraction state using the two indexes of electroencephalogram and visual characteristics and comprehensively judges the driver's distraction grade from the electroencephalogram and visual distraction judgment results, so that the degree of driver distraction can be predicted in advance, thereby reducing traffic accidents caused by control-right switching that is delayed by driver distraction.
Drawings
FIG. 1 is a schematic diagram of the system of the present invention;
FIG. 2 is a schematic view of the driving-related field of view of the present invention;
FIG. 3 is a driving vision distraction category diagram of the present invention;
FIG. 4 is a flow chart of the method of the present invention.
Detailed Description
The present embodiment will be described below with reference to fig. 1 to 4.
The invention discloses a driver distraction multi-channel assessment method under an L3-level automatic driving condition, which comprises the following steps:
step one, monitoring and collecting various index information data of a driver in real time:
An electroencephalograph records the 5 standard brain waves, which are divided by frequency into delta, theta, alpha, beta and gamma waves; a front-facing infrared camera is mounted on the instrument panel and aimed at the driver, a notebook-computer-side program performs the monitoring, the distraction prompt start time and the distraction end time are recorded, and the fixation time and fixation frequency are collected separately for the road surface area, the distraction-related area, the rearview mirror and the instrument panel area.
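A sketch of separating the raw EEG into the five named bands with band-pass filters; the cutoff frequencies are the conventional ranges, and the SciPy usage and 256 Hz sampling rate are assumptions, since the patent only names the bands.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional band boundaries in Hz (the patent names the bands but not the cutoffs).
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 45),
}

def split_into_bands(eeg, fs=256):
    """Return band-limited copies of a raw EEG channel, one per standard band."""
    nyq = fs / 2
    out = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / nyq, hi / nyq], btype="band")
        out[name] = filtfilt(b, a, eeg)
    return out

# Example: 10 s of simulated raw EEG at an assumed 256 Hz sampling rate
raw = np.random.randn(2560)
bands = split_into_bands(raw)
print({name: sig.shape for name, sig in bands.items()})
```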
Step two, respectively calculating the driver electroencephalogram signal sample entropy and the attention buffering time according to the collected data:
performing feature fusion processing on the obtained EEG signal sample to obtain EEG sample entropy;
the EEG sample entropy algorithm is as follows:
For an N-point time series of the original data sampled at equal time intervals, X_N = {x(1), x(2), x(3), ..., x(N)}:
(1) Construct the set of m-dimensional vectors X(1), X(2), ..., X(N-m+1), where
X(i) = [x(i), x(i+1), ..., x(i+m-1)];
(2) Define the distance d[X(i), X(j)] between the vectors X(i) and X(j) as the largest absolute difference between their corresponding elements, namely
d[X(i), X(j)] = max_{k=0,...,m-1} |x(i+k) - x(j+k)|;
(3) For 1 ≤ i ≤ N-m+1 and a tolerance r, count the number of j (j ≠ i) for which d[X(i), X(j)] < r, denoted N^m(i), and compute its ratio to the total number of distances:
B_i^m(r) = N^m(i) / (N - m),
where r is 0.1 to 0.25 times the standard deviation of the original data;
(4) Average over all i and denote the result φ^m(r):
φ^m(r) = (1 / (N - m + 1)) · Σ_{i=1}^{N-m+1} B_i^m(r);
(5) Increase the dimension m by 1 to m+1 and repeat the above process to obtain φ^{m+1}(r);
(6) The sample entropy of this sequence is:
SampEn(N, m, r) = -ln[φ^{m+1}(r) / φ^m(r)].
Information is collected by an eye tracker to detect whether the driver's gaze has left the driving-related field of view, and the attention buffering time is calculated; the driving-related field of view is defined as the intersection between a 90-degree sector and the vehicle window.
Step three: the electroencephalogram signal sample entropy is input into an AdaBoost classifier, which outputs the electroencephalogram distraction judgment result; the attention buffering time is compared with the grade thresholds, and the visual distraction influence grade is output; the driver's distraction grade is then comprehensively judged with a fuzzy logic method. Based on the electroencephalogram sample entropy, the AdaBoost classifier divides the driver's distraction into three grades, namely mild distraction, moderate distraction and deep distraction, and is implemented as follows:
The 5-dimensional inputs x_i ∈ R^5 with the corresponding distraction levels y_i ∈ {1, 2, 3} are used as learning data; the weak classifiers h_t(x) (t = 1, ..., T) are weighted by their reliabilities α_t and combined into a strong classifier H(x);
(1) Initialize the weights D_1(i) = 1/N; the weight distribution at learning round t is D_t(i);
(2) Based on the distribution D_t, find the weak classifier h_t(x): X → Y that minimizes the error value;
(3) Calculate the reliability α_t from the error rate:
α_t = (1/2) · ln[(1 - ε_t) / ε_t]
where ε_t is the error value;
(4) Update the distribution:
D_{t+1}(i) = D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]) / Z_t
where Z_t is the normalization factor,
Z_t = Σ_i D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]);
(5) Use the reliabilities of all weak classifiers to make a weighted majority decision and obtain the strong classifier H(x):
H(x) = argmax_y Σ_{t=1}^{T} α_t · I[h_t(x) = y].
If the classification result is y_1, the corresponding state is deep electroencephalogram distraction of the driver;
if the classification result is y_2, the corresponding state is moderate electroencephalogram distraction of the driver;
if the classification result is y_3, the corresponding state is mild electroencephalogram distraction of the driver.
The visual distraction influence degree is calculated from the attention buffering time, and the driver's distraction is divided into three grades, namely mild distraction, moderate distraction and deep distraction; the visual distraction influence degree is:
INC_dis = (1/n) · Σ_{i=1}^{n} Buffer_i
where: INC_dis is the degree of visual distraction influence; n is the total number of visual output samples; Buffer_i is the visual buffer value at time i/60 s;
When the attention buffering time is 0 s, the visual distraction influence degree is recorded as INC_1, and the corresponding state is deep visual distraction of the driver;
when the attention buffering time is 0 to 1 s, the visual distraction influence degree is recorded as INC_2, and the corresponding state is moderate visual distraction of the driver;
when the attention buffering time is 1 to 2 s, the visual distraction influence degree is recorded as INC_3, and the corresponding state is mild visual distraction of the driver;
when the attention buffering time is greater than 2 s, the visual distraction influence degree is recorded as INC_n, and the corresponding state is normal driving.
Fuzzy values are obtained from the electroencephalogram and visual characteristic variables, the system rule base is evaluated with an inference method, and the driver's distraction level is comprehensively judged; mild driver distraction is recorded as SD, moderate driver distraction as MD, and deep driver distraction as LD.
The invention also relates to a driver distraction multi-channel assessment system under the L3-level automatic driving condition, which comprises an information acquisition device, an information processing device and a driver distraction grade judging device.
1. The information acquisition device comprises an eye tracker, an electroencephalograph and a wireless transmitter. The electroencephalograph monitors the driver's electroencephalogram indexes in real time, the eye tracker monitors the driver's visual characteristic indexes in real time, and the wireless transmitter sends the acquired electroencephalogram and visual characteristic index information to the information processing device. The electroencephalograph records the 5 standard electroencephalogram waves, which are divided by frequency into δ, θ, α, β and γ waves. The eye tracker is a Smart Eye Pro with a sampling frequency of 60 Hz; front-facing infrared cameras are mounted on the instrument panel and aimed at the driver's position, a notebook-computer-side program performs the monitoring, the distraction prompt start time and the distraction end time are recorded, and indicators such as the fixation time and fixation frequency are acquired for the road surface area, the distraction-related area, the rearview mirror, the instrument panel area and so on.
2. The information processing device comprises a wireless transmitter, a wireless receiver, an electroencephalogram signal processing module and a visual attention processing module. The wireless receiver receives the driver electroencephalogram and eye movement indexes acquired by the information acquisition device; the electroencephalogram signal processing module calculates the driver's electroencephalogram signal sample entropy, and the visual attention processing module calculates the attention buffering time. Based on the information collected by the eye tracker, the visual attention processing module detects whether the driver's gaze has left the driving-related field of view, defined as the intersection between a 90° sector and the vehicle window, and calculates the attention buffering time. The wireless transmitter sends the processed information to the driver distraction grade judging device.
3. The driver distraction grade judging device comprises a wireless receiver, an electroencephalogram distraction judgment submodule, a visual distraction judgment submodule and a comprehensive driver distraction judgment module. The wireless receiver receives the driver electroencephalogram sample entropy, attention buffering time and related information sent by the information processing device; the electroencephalogram distraction judgment submodule uses an AdaBoost classifier to divide the driver's distraction into mild, moderate and deep distraction according to the received electroencephalogram signal sample entropy; the visual distraction judgment submodule compares the attention buffering time with the grade thresholds to divide the driver's distraction into mild, moderate and deep distraction; the comprehensive driver distraction judgment module takes the electroencephalogram distraction sub-result and the visual distraction sub-result as inputs and comprehensively judges the driver's distraction grade with a fuzzy logic method; the fuzzy logic rule base is shown in Table 1.
Table 1. Comprehensive driver distraction judgment results based on fuzzy logic rules
Examples
In the embodiment, a driving simulator and related software are used to design the simulation scenes and run the simulated experiments, while electroencephalographs and eye trackers collect the drivers' behavior data.
The scene design module in the software is used for establishing driving simulation scenes of vehicles, pedestrians, roads, roadside buildings and the like.
The Simcar software in the system is used for simulation and modeling and runs the virtual operation of the vehicle. The electroencephalograph mainly comprises an electroencephalogram signal amplifier and an electrode cap; the Smart Eye Pro eye tracker samples at 60 Hz, and a front-facing infrared camera is installed on the instrument panel to record the driver's eye movement characteristics. The experiment is designed with 5 distraction tasks: navigation, WeChat voice, WeChat text messaging, listening to music and making phone calls.
The electroencephalogram distraction judgment rule of the driver is as follows:
If the classification result is y_1, the corresponding state is deep electroencephalogram distraction of the driver;
if the classification result is y_2, the corresponding state is moderate electroencephalogram distraction of the driver;
if the classification result is y_3, the corresponding state is mild electroencephalogram distraction of the driver.
The visual distraction judgment rule of the driver is as follows:
When the visual distraction influence degree is INC_1, the corresponding state is deep visual distraction of the driver;
when the visual distraction influence degree is INC_2, the corresponding state is moderate visual distraction of the driver;
when the visual distraction influence degree is INC_3, the corresponding state is mild visual distraction of the driver;
when the visual distraction influence degree is INC_n, the corresponding state is normal driving.
Finally, the driver's distraction state is comprehensively judged with the fuzzy logic rules, and the judged distraction degree is classified as SD, MD or LD.
With and without the method, 20 drivers were recruited to carry out 40 tests each on the automatic driving simulator; the driver distraction discrimination effect is obvious, and the number of traffic accidents caused by insufficient take-over capability due to driver distraction is reduced by 60 percent on average.
The above-mentioned embodiments are only preferred embodiments of the present invention, and are not intended to limit the embodiments of the present invention, and those skilled in the art can easily make various changes and modifications according to the main concept and spirit of the present invention, so the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A driver distraction multichannel assessment method under L3-level automatic driving conditions is characterized by comprising the following steps:
monitoring and collecting various index information data of a driver in real time;
step two, respectively calculating the driver electroencephalogram signal sample entropy and attention buffering time according to the collected data;
inputting the electroencephalogram signal sample entropy into an AdaBoost classifier, and outputting an electroencephalogram distraction judgment result; comparing the attention buffer time with each grade threshold value, and outputting the visual distraction influence grade; and comprehensively judging the distraction grade of the driver by adopting a fuzzy logic method.
2. The method for multichannel assessment of driver distraction under L3-level automatic driving conditions according to claim 1, wherein in step one, 5 standard brain waves are recorded by an electroencephalograph and divided by frequency into delta, theta, alpha, beta and gamma waves; a front infrared camera is mounted on the dashboard and aimed at the driver, a notebook computer-side program is used for monitoring, the distraction prompt start time and the distraction end time are recorded, and the fixation time and fixation frequency are collected separately for the road surface area, the distraction-related area, the rearview mirror and the instrument panel area.
3. The method for multichannel assessment of driver distraction under the L3-level automatic driving condition of claim 1, wherein in step two, feature fusion processing is performed on the obtained electroencephalogram signal samples to obtain EEG sample entropy;
the EEG sample entropy algorithm is as follows:
For an N-point time series of the original data sampled at equal time intervals, X_N = {x(1), x(2), x(3), ..., x(N)}:
(1) Construct the set of m-dimensional vectors X(1), X(2), ..., X(N-m+1), where
X(i) = [x(i), x(i+1), ..., x(i+m-1)];
(2) Define the distance d[X(i), X(j)] between the vectors X(i) and X(j) as the largest absolute difference between their corresponding elements, namely
d[X(i), X(j)] = max_{k=0,...,m-1} |x(i+k) - x(j+k)|;
(3) For 1 ≤ i ≤ N-m+1 and a tolerance r, count the number of j (j ≠ i) for which d[X(i), X(j)] < r, denoted N^m(i), and compute its ratio to the total number of distances:
B_i^m(r) = N^m(i) / (N - m),
where r is 0.1 to 0.25 times the standard deviation of the original data;
(4) Average over all i and denote the result φ^m(r):
φ^m(r) = (1 / (N - m + 1)) · Σ_{i=1}^{N-m+1} B_i^m(r);
(5) Increase the dimension m by 1 to m+1 and repeat the above process to obtain φ^{m+1}(r);
(6) The sample entropy of this sequence is:
SampEn(N, m, r) = -ln[φ^{m+1}(r) / φ^m(r)].
4. the method for multichannel assessment of driver distraction under L3-level automatic driving conditions according to claim 1, wherein in step two, the information is collected by an eye tracker, it is detected whether the driver is far away from the driving-related field of view region, the attention buffer time is calculated, and the field of view region is defined as the intersection between the 90 ° sector and the window.
5. The method for multichannel estimation of driver distraction under the L3-level automatic driving condition according to claim 1, wherein in the third step, the electroencephalogram signal sample entropy is input into an AdaBoost classifier, and the driver distraction is classified into three levels of mild distraction, moderate distraction and deep distraction, and the AdaBoost classifier is implemented as follows:
The 5-dimensional inputs x_i ∈ R^5 with the corresponding distraction levels y_i ∈ {1, 2, 3} are used as learning data; the weak classifiers h_t(x) (t = 1, ..., T) are weighted by their reliabilities α_t and combined into a strong classifier H(x);
(1) Initialize the weights D_1(i) = 1/N; the weight distribution at learning round t is D_t(i);
(2) Based on the distribution D_t, find the weak classifier h_t(x): X → Y that minimizes the error value;
(3) Calculate the reliability α_t from the error rate:
α_t = (1/2) · ln[(1 - ε_t) / ε_t]
where ε_t is the error value;
(4) Update the distribution:
D_{t+1}(i) = D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]) / Z_t
where Z_t is the normalization factor,
Z_t = Σ_i D_t(i) · exp(α_t · I[h_t(x_i) ≠ y_i]);
(5) Use the reliabilities of all weak classifiers to make a weighted majority decision and obtain the strong classifier H(x):
H(x) = argmax_y Σ_{t=1}^{T} α_t · I[h_t(x) = y].
If the classification result is y_1, the corresponding state is deep electroencephalogram distraction of the driver;
if the classification result is y_2, the corresponding state is moderate electroencephalogram distraction of the driver;
if the classification result is y_3, the corresponding state is mild electroencephalogram distraction of the driver.
6. The method for multichannel assessment of driver distraction under L3-level automatic driving conditions according to claim 1, wherein in step three, the visual distraction influence degree is calculated from the attention buffering time, and the driver's distraction is divided into three levels, namely mild distraction, moderate distraction and deep distraction; the visual distraction influence degree is:
INC_dis = (1/n) · Σ_{i=1}^{n} Buffer_i
where: INC_dis is the degree of visual distraction influence; n is the total number of visual output samples; Buffer_i is the visual buffer value at time i/60 s;
When the attention buffering time is 0 s, the visual distraction influence degree is recorded as INC_1, and the corresponding state is deep visual distraction of the driver;
when the attention buffering time is 0 to 1 s, the visual distraction influence degree is recorded as INC_2, and the corresponding state is moderate visual distraction of the driver;
when the attention buffering time is 1 to 2 s, the visual distraction influence degree is recorded as INC_3, and the corresponding state is mild visual distraction of the driver;
when the attention buffering time is greater than 2 s, the visual distraction influence degree is recorded as INC_n, and the corresponding state is normal driving.
7. The multi-channel assessment method for driver distraction under the L3-level automatic driving condition as claimed in claim 1, wherein in step three, fuzzy values are obtained from the brain electrical and visual characteristic variables, a system rule base is assessed by using an inference method, and the level of driver distraction is comprehensively judged; the driver mild distraction is recorded as SD, the driver moderate distraction is recorded as MD, and the driver deep distraction is recorded as LD.
8. A driver distraction multi-channel assessment system under the L3-level automatic driving condition, implementing the method according to any one of claims 1 to 7, characterized by comprising an information acquisition device, an information processing device and a driver distraction grade judging device.
9. The driver distraction multi-channel assessment system under the L3-level automatic driving condition according to claim 8, wherein the information acquisition device comprises an eye tracker, an electroencephalograph and a wireless transmitter; the electroencephalograph is used for monitoring the driver's electroencephalogram indexes in real time, the eye tracker is used for monitoring the driver's visual characteristic indexes in real time, and the wireless transmitter transmits the acquired electroencephalogram and visual characteristic index information to the information processing device.
10. The driver distraction multi-channel assessment system under the L3-level automatic driving condition according to claim 8, wherein the information processing device comprises a wireless transmitter, a wireless receiver, an electroencephalogram signal processing module and a visual attention processing module; the wireless receiver receives the driver electroencephalogram and eye movement indexes acquired by the information acquisition device, the electroencephalogram signal processing module calculates the driver's electroencephalogram signal sample entropy, the visual attention processing module calculates the attention buffering time, and the wireless transmitter sends the processed information to the driver distraction grade judging device;
the driver distraction grade judging device comprises a wireless receiver, an electroencephalogram distraction judgment submodule, a visual distraction judgment submodule and a comprehensive driver distraction judgment module; the wireless receiver receives the driver electroencephalogram sample entropy, attention buffering time and related information sent by the information processing device; the electroencephalogram distraction judgment submodule uses an AdaBoost classifier to divide the driver's distraction into mild, moderate and deep distraction according to the received electroencephalogram signal sample entropy; the visual distraction judgment submodule compares the attention buffering time with the grade thresholds to divide the driver's distraction into mild, moderate and deep distraction; and the comprehensive driver distraction judgment module takes the electroencephalogram distraction sub-result and the visual distraction sub-result as inputs and comprehensively judges the driver's distraction grade with a fuzzy logic method.
CN202110035155.9A 2021-01-12 2021-01-12 Driver distraction multi-channel assessment method and system under L3-level automatic driving condition Pending CN112806996A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110035155.9A CN112806996A (en) 2021-01-12 2021-01-12 Driver distraction multi-channel assessment method and system under L3-level automatic driving condition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110035155.9A CN112806996A (en) 2021-01-12 2021-01-12 Driver distraction multi-channel assessment method and system under L3-level automatic driving condition

Publications (1)

Publication Number Publication Date
CN112806996A true CN112806996A (en) 2021-05-18

Family

ID=75868872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110035155.9A Pending CN112806996A (en) 2021-01-12 2021-01-12 Driver distraction multi-channel assessment method and system under L3-level automatic driving condition

Country Status (1)

Country Link
CN (1) CN112806996A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103818256A (en) * 2012-11-16 2014-05-28 西安众智惠泽光电科技有限公司 Automobile fatigue-driving real-time alert system
CN110871809A (en) * 2014-06-23 2020-03-10 本田技研工业株式会社 Method for controlling a vehicle system in a motor vehicle
CN107007278A (en) * 2017-04-25 2017-08-04 中国科学院苏州生物医学工程技术研究所 Sleep mode automatically based on multi-parameter Fusion Features method by stages
US20190168771A1 (en) * 2017-12-04 2019-06-06 Lear Corporation Distractedness sensing system
CN108537161A (en) * 2018-03-30 2018-09-14 南京理工大学 A kind of driving of view-based access control model characteristic is divert one's attention detection method
CN108877150A (en) * 2018-07-05 2018-11-23 张屹然 A kind of tired driver driving monitoring device based on biological brain electrical chip and tired distinguished number
CN109480872A (en) * 2018-11-08 2019-03-19 哈尔滨工业大学 Driving fatigue detection method based on EEG signals frequency band energy than feature
CN110811649A (en) * 2019-10-31 2020-02-21 太原理工大学 Fatigue driving detection method based on bioelectricity and behavior characteristic fusion
CN111460950A (en) * 2020-03-25 2020-07-28 西安工业大学 Cognitive distraction method based on head-eye evidence fusion in natural driving conversation behavior
GB202008797D0 (en) * 2020-06-10 2020-07-22 Virtual Vehicle Res Gmbh Method for detecting safety relevant driving distraction
CN111832416A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Andrei Aksjonov et al.: "Detection and Evaluation of Driver Distraction Using Machine Learning and Fuzzy Logic", IEEE Transactions on Intelligent Transportation Systems *

Similar Documents

Publication Publication Date Title
CN107531244B (en) Information processing system, information processing method, and recording medium
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN112793576B (en) Lane change decision method and system based on rule and machine learning fusion
CN113257023B (en) L3-level automatic driving risk assessment and takeover early warning method and system
CN112644506A (en) Method for detecting driver driving distraction based on model long-time memory neural network LSTM-NN
CN115690750A (en) Driver distraction detection method and device
Yang et al. Classification and evaluation of driving behavior safety levels: A driving simulation study
CN110620760A (en) FlexRay bus fusion intrusion detection method and detection device for SVM (support vector machine) and Bayesian network
CN111062300A (en) Driving state detection method, device, equipment and computer readable storage medium
CN112806996A (en) Driver distraction multi-channel assessment method and system under L3-level automatic driving condition
Wu et al. Fuzzy logic based driving behavior monitoring using hidden markov models
CN111775948B (en) Driving behavior analysis method and device
CN116955943A (en) Driving distraction state identification method based on eye movement sequence space-time semantic feature analysis
CN110555425A (en) Video stream real-time pedestrian detection method
Yuan et al. Predicting drivers’ eyes-off-road duration in different driving scenarios
KR101405785B1 (en) System for assigning automobile level and method thereof
CN114943956A (en) Driving distraction identification method and system under multiple scenes and vehicle
CN113989780A (en) Sign board detection method and device
CN112660141A (en) Method for identifying driver driving distraction behavior through driving behavior data
Jiřina et al. Identification of driver's drowsiness using driving information and EEG
CN112785863B (en) Merging decision classification early warning method based on K-Means and entropy weighting
US20230045706A1 (en) System for displaying attention to nearby vehicles and method for providing an alarm using the same
US20220410708A1 (en) Method and system for adaptively processing vehicle information
CN114973212A (en) Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention
Zhao et al. Distraction pattern classification and comparisons under different conditions in the full-touch HMI mode

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-05-18)