WO2021017329A1 - Method and device for detecting driver distraction - Google Patents

Method and device for detecting driver distraction

Info

Publication number
WO2021017329A1
WO2021017329A1 (PCT/CN2019/120566; related application CN2019120566W)
Authority
WO
WIPO (PCT)
Prior art keywords
distraction
driver
data
eeg
detection result
Prior art date
Application number
PCT/CN2019/120566
Other languages
English (en)
French (fr)
Inventor
李国法
颜伟荃
赖伟鉴
陈耀昱
杨一帆
李盛龙
谢恒
李晓航
Original Assignee
深圳大学 (Shenzhen University)
Priority date
Filing date
Publication date
Application filed by 深圳大学 (Shenzhen University)
Priority to US16/629,944 (published as US20220175287A1)
Publication of WO2021017329A1


Classifications

    • A61B5/18: Devices for psychotechnics; testing reaction times; evaluating the psychological state of vehicle drivers or machine operators
    • A61B5/168: Evaluating attention deficit, hyperactivity
    • A61B5/316: Detecting, measuring or recording bioelectric or biomagnetic signals; specific diagnostic modalities
    • A61B5/369: Electroencephalography [EEG]
    • A61B5/372: Analysis of electroencephalograms
    • A61B5/7235: Details of waveform analysis
    • A61B5/725: Waveform analysis using specific filters, e.g. Kalman or adaptive filters
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, involving training the classification device
    • A61B5/746: Alarms related to a physiological condition, e.g. setting alarm thresholds or avoiding false alarms
    • A61B2503/22: Motor vehicle operators, e.g. drivers, pilots, captains
    • B60Q9/00: Arrangement or adaptation of signal devices, e.g. haptic signalling
    • B60Q9/008: Signal devices for anti-collision purposes
    • G06F18/2148: Generating training patterns; bootstrap methods, e.g. boosting cascade
    • G06F18/217: Validation; performance evaluation; active pattern learning techniques
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06V10/82: Image or video recognition or understanding using neural networks

Definitions

  • This application belongs to the field of computer application technology, and in particular relates to a method and device for detecting driver distraction.
  • Support Vector Machine (SVM): SVM finds the support vectors by solving a quadratic program, and that quadratic program involves computations on an m-order matrix, where m is the number of samples. When m is large, storing and computing this matrix consumes a great deal of memory and computing time. As a result, prior-art driver-distraction detection is slow and inaccurate.
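  • To make the scaling problem concrete, here is a back-of-envelope sketch (not from the patent; the 8-byte entry size is an assumption) of how the memory for the m-order kernel matrix grows with the number of samples m:

```python
# Memory needed to store the dense m x m kernel matrix that an SVM's
# quadratic program operates on, assuming 8-byte double-precision entries.

def kernel_matrix_bytes(m, bytes_per_entry=8):
    """Memory to hold a dense m x m kernel matrix."""
    return m * m * bytes_per_entry

small = kernel_matrix_bytes(10_000)    # 10k samples
large = kernel_matrix_bytes(100_000)   # 100k samples

print(small / 1e9, "GB")  # 0.8 GB
print(large / 1e9, "GB")  # 80.0 GB
```

Doubling m quadruples the memory (O(m^2) growth), which is why SVM training becomes impractical for large EEG sample sets.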
  • In view of this, the embodiments of the present application provide a method and a device for detecting driver distraction, which can solve the problems of low detection efficiency and poor accuracy in the prior art.
  • In a first aspect, an embodiment of the present application provides a method for detecting driver distraction, including: acquiring the driver's EEG data; preprocessing the EEG data and inputting it into a pre-trained distraction detection model to obtain the driver's distraction detection result, where the distraction detection model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels; and sending the distraction detection result to the vehicle-mounted terminal associated with the driver's identity information, where the distraction detection result is used to trigger the vehicle-mounted terminal to generate driving reminder information.
  • In a second aspect, an embodiment of the present application provides a device for detecting driver distraction, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the following steps: preprocessing the driver's EEG data and inputting it into the pre-trained distraction detection model to obtain the driver's distraction detection result, where the distraction detection model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels; and sending the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information, where the distraction detection result is used to trigger the vehicle-mounted terminal to generate driving reminder information.
  • In a third aspect, an embodiment of the present application provides a device for detecting driver distraction, including: an acquisition unit for acquiring the driver's EEG data; and a detection unit for preprocessing the EEG data and inputting it into the pre-trained distraction detection model to obtain the driver's distraction detection result, where the distraction detection model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program; the computer program includes program instructions that, when executed by a processor, cause the processor to execute the method of the first aspect described above.
  • In a fifth aspect, the embodiments of the present application provide a computer program product which, when run on a terminal device, causes the terminal device to execute the method for detecting driver distraction described in any one of the implementations of the first aspect.
  • FIG. 1 is a flowchart of a method for detecting driver distraction provided by Embodiment 1 of the present application;
  • FIG. 2 is a flowchart of a method for detecting driver distraction provided by Embodiment 2 of the present application;
  • FIG. 3 is a schematic diagram of the model training and detection application provided by Embodiment 2 of the present application;
  • FIG. 4 is a schematic diagram of the EEG data preprocessing flow provided by Embodiment 2 of the present application;
  • FIG. 5 is a diagram of the electrode positions of the collection device provided by Embodiment 2 of the present application;
  • FIG. 6 is a schematic diagram of artifact analysis provided by Embodiment 2 of the present application;
  • FIG. 7 is a schematic diagram of large-noise selection and removal in the EEG provided by Embodiment 2 of the present application;
  • FIG. 8 is a schematic diagram of a recurrent neural network for time-series driving-distraction prediction provided by Embodiment 2 of the present application;
  • FIG. 9 is a schematic diagram of the recurrent structure of the gated recurrent unit provided by Embodiment 2 of the present application;
  • FIG. 10 is a graph of the detection results of three network structures provided by Embodiment 2 of the present application;
  • FIG. 11 is a schematic diagram of a device for detecting driver distraction provided by Embodiment 3 of the present application;
  • FIG. 12 is a schematic diagram of a device for detecting driver distraction provided by Embodiment 4 of the present application.
  • A driver's driving performance often has a great impact on local traffic conditions: unsafe driving, fatigued driving, and distracted driving all pose serious threats to road safety, and many researchers have studied the impact of distraction on road safety. If the driver's distracted or fatigued state can be predicted in advance, the driver can be warned in dangerous situations, providing greater protection for road traffic safety and a theoretical basis for its study.
  • Research on predicting the driver's driving state therefore has a positive effect on the safety of the traffic system: besides alleviating urban traffic pressure, it effectively reduces the incidence of traffic accidents, and it can support the handover between automated and manual driving in the driver-assistance systems of future cars. The main goal of such research is to improve driving safety and traffic safety.
  • This embodiment focuses on a driving-state prediction method: the driver's preprocessed EEG signal is used as the input feature, and the driving state is recognized through the convolutional neural network, so that the driver's driving-state information can be predicted and early warnings of dangerous driving behavior can be given, reducing traffic accidents and improving driving safety.
  • It also provides a new idea for the processing of EEG signals: given a large enough database, processing EEG directly in the time domain should still have great untapped potential.
  • The overall research first carries out the training module: the cleaned EEG data is obtained and used to train the convolution-recurrent neural network (CSRN) used in this embodiment for driver analysis, and the network's structural parameters are continuously adjusted to optimize the structure and obtain the best network parameters.
  • The adjusted network model is then applied to the vehicle system as the actual model. If an instrument for collecting EEG is available, the trained network can be used to predict driver distraction in real time and feed the result back to the driving-assistance system so it can make reasonable decisions.
  • Before convolutional networks became common, the generally used network structure was the multi-layer perceptron. A stack of fully connected layers can in principle fit any polynomial function, but the practical effect is poor: to fit a sufficiently complex function, the multi-layer perceptron needs a very large number of parameters, which not only increases the difficulty of training but also makes it very easy to overfit.
  • If the input is an image, each pixel is connected to every neuron in the next layer, which makes the network overly sensitive to position and weakens its generalization ability. Moreover, because the input size is fixed, images of different sizes must be cropped and transformed to the specified size before they can be input; otherwise the network needs to be retrained.
  • A convolutional neural network is a type of feedforward neural network that includes convolution calculations and has a deep structure; it is one of the representative algorithms of deep learning. Each convolutional layer has a convolution kernel of a specified size that slides over the entire input according to a given stride, so the network's sensitivity to position is reduced and inputs of different sizes can be accommodated.
  • Convolutional neural networks have been shown by many experiments to be very effective at feature extraction, and most image recognition technologies are based on them. The network structure of this application also uses convolutional layers and has achieved good results.
  • In this embodiment, a recurrent neural network is trained on the sample data; it includes a convolution-recurrent structure. The first three layers are convolutional: the data in each layer passes through convolution, pooling, batch normalization, and activation before reaching the next layer. The output of the convolutional network is used as the input of the gated recurrent unit, which produces a feature vector of preset length, such as a 128-dimensional feature vector; this vector is input into the fully connected layers, and the final output indicates whether the driver is currently in a distracted state.
  • After step S102, the method may further include: if the distraction detection result indicates that the driver is distracted, sending the distraction detection result to a driving-assistance device preset in the vehicle to assist the driver in driving safely.
  • The driving-assistance device of this embodiment is used to assist the driver; for example, when the driver is distracted, it can give a corresponding reminder or provide safety protection, such as raising the level of safety protection.
  • The recurrent neural network obtained through the above training detects the current driver's EEG data and obtains the driver's distraction detection result, which is sent to the driving-assistance device to assist the driver in driving safely.
  • S103 Send the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information; the distraction detection result is used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result.
  • A vehicle-mounted terminal is installed in the vehicle and is triggered to generate driving reminder information according to the distraction detection result. Specifically, after the driver's distraction is detected, driving reminder information is generated, such as voice information reminding the driver to concentrate on driving, or music played to relieve driving fatigue; this is not limited here.
  • In summary: the driver's EEG data is obtained; the EEG data is preprocessed and input into the pre-trained distraction detection model to obtain the driver's distraction detection result; the model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels; and the distraction detection result is sent to the vehicle terminal associated with the driver's identity information, triggering it to generate driving reminder information.
  • By running the trained recurrent neural network on the driver's EEG data obtained in real time, it is judged whether the driver is distracted, and the preset on-board terminal responds accordingly when distraction is detected. This improves the accuracy and efficiency of driver-distraction detection and reduces the probability of traffic accidents.
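  • The acquire-preprocess-detect-remind flow described above can be sketched as follows; every function name here is a hypothetical stub, not an API from the patent, and the toy threshold rule merely stands in for the trained network:

```python
# Minimal sketch of the detection flow (S101-S103), with made-up stub
# functions standing in for the real EEG device, model, and vehicle terminal.

def preprocess(eeg):
    # Stand-in for artifact removal, filtering and slicing.
    return eeg

def distraction_model(eeg):
    # Stand-in for the trained convolution-recurrent network:
    # returns True when the driver is judged distracted (toy threshold rule).
    return max(eeg) - min(eeg) > 1.0

def detect_and_remind(eeg, driver_id):
    """Acquire EEG -> preprocess -> detect -> notify the on-board terminal."""
    distracted = distraction_model(preprocess(eeg))
    if distracted:
        return {"driver": driver_id, "distracted": True,
                "reminder": "Please concentrate on driving."}
    return {"driver": driver_id, "distracted": False, "reminder": None}

print(detect_and_remind([0.0, 0.2, 1.5], "driver-01"))
print(detect_and_remind([0.0, 0.1, 0.2], "driver-01"))
```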
  • FIG. 2 is a flowchart of a method for detecting driver distraction provided by Embodiment 2 of the present application.
  • the execution body of the method for detecting driver distraction in this embodiment is a device with a function of detecting driver distraction, including but not limited to devices such as computers, servers, tablets, or terminals.
  • the method for detecting driver distraction as shown in the figure may include the following steps:
  • S201 in this embodiment is identical to S101 in the embodiment corresponding to FIG. 1.
  • FIG. 3 is a schematic diagram of the model training and detection application provided by this embodiment. EEG sample collection and EEG preprocessing are used to obtain cleaned EEG data, which is used to train the CSRN network; the network parameters are continuously adjusted to optimize the structure and obtain the best parameters, that is, a CSRN network with fixed parameter weights.
  • This adjusted network model is applied to the vehicle system as the actual model: EEG equipment collects real-time EEG data, and the trained CSRN network detects the driver's distraction status in real time, finally feeding the result back to the vehicle, for example to the pre-installed driving-assistance device, for reasonable decision-making and regulation.
  • This embodiment aims to use the powerful computing performance of today's computers to process time-domain EEG signals directly with neural networks. Although the recognition rate on the final test set is only 85%, comparable to traditional methods, neural networks are often better at processing big data. In the actual tests, the experimenters built a large database by collecting 18 hours of data from the subjects, and believe the potential of using neural networks to process EEG signals is huge.
  • S203 Preprocess the EEG sample data to obtain preprocessed data.
  • FIG. 4 is a schematic diagram of the EEG data preprocessing process.
  • The EEG signal is very weak and must be captured by an amplifier with a very high amplification factor. In practical applications, EEG tends to have a low signal-to-noise ratio: in addition to high-frequency noise and 50 Hz power-frequency noise, clutter with frequencies similar to the EEG itself is mixed into the signal. Such clutter is often referred to as artifacts; in this embodiment the artifacts can include electrooculogram (EOG) artifacts, electromyographic (EMG) artifacts, and ECG artifacts. An EEG signal from which artifacts have not been removed has a very low signal-to-noise ratio and cannot be used directly.
  • step S203 includes:
  • S2031 Obtain the identification information of the collection point corresponding to the EEG sample data, and determine the first position information of the electrode corresponding to the identification information of the collection point on the data collection device.
  • FIG. 5 is a diagram of the electrode positions of the collection device used in this embodiment, where C3 to C5, Cp3 to Cp5, F3 to F4, Fc1 to Fc2, Fp1 to Fp2, O1 to O2, P3 to P4, T4 to T5, Tp7 to Tp8, and the other marks in the figure indicate the electrode labels at the different collection positions on the collection device.
  • The collection device in this embodiment may be an EEG cap. Since different models of EEG cap differ in the number and position of their electrodes, the EEG electrode position information must be input before principal component analysis of the EEG can be performed.
  • Before this, the method further includes: downsampling the EEG sample data, and passing the downsampled data through a low-pass filter with a preset cutoff frequency to obtain filtered EEG sample data. The sampling frequency of most EEG devices is very high, and a low-pass filter with an upper cutoff frequency of 50 Hz filters out irrelevant high-frequency noise and power-frequency noise.
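  • A minimal sketch of these two steps, using naive decimation and a moving average as a stand-in low-pass filter (the patent does not specify the filter design; the rates and window size here are made up):

```python
# Decimate a high-rate EEG channel, then smooth it with a crude
# moving-average low-pass filter.

def downsample(signal, factor):
    """Keep every `factor`-th sample (naive decimation)."""
    return signal[::factor]

def moving_average(signal, window=5):
    """Crude low-pass filter: average over a trailing window."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

raw = [float(i % 10) for i in range(1000)]  # fake 1000-sample channel
low_rate = downsample(raw, 4)               # e.g. 1000 Hz -> 250 Hz
smoothed = moving_average(low_rate)
print(len(low_rate), len(smoothed))  # 250 250
```

In practice a proper filter (e.g. FIR/Butterworth) would replace the moving average, but the data flow is the same.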
  • S2032 Determine, according to the first position information, second position information of the emission source corresponding to the collection point on the brain surface.
  • Since the electrode positions of the EEG cap are chosen artificially, an electrode represents only a receiving point for EEG, not an emission source; each sampled signal is a superposition of multiple emission sources. The EEG emission sources (the second position information) therefore need to be relocated from the electrode position information (the first position information). The first position information indicates the positions of the electrodes on the data-acquisition device, and the second position information indicates the positions of the EEG emission sources.
  • Specifically, step S2032 includes: determining the electrode corresponding to the first position information on the data-acquisition device, and then determining the second position information of the emission source corresponding to that electrode; the emission source is the area of the brain surface that generates the EEG sample data.
  • S2033: Remove artifacts in the EEG sample data according to the second position information, and slice the data according to a preset slicing period to obtain the preprocessed data; the artifacts are the EEG sample data corresponding to the set positions to be removed.
  • In addition to relocating the emission sources from the electrode positions as in S2032, the independent component analysis (ICA) method can locate some artifact emission sources, allowing those artifacts to be removed.
  • The working principle of independent component analysis is as follows: assume that n emission sources in the brain are emitting EEG signals at the same time, and that an EEG cap with n electrodes collects the signals from these n emission sources; after a period of time, a set of observation data mixed from the n EEG emission sources is obtained.
  • FIG. 6 is a schematic diagram of the artifact analysis provided by this embodiment. After performing the component analysis operation, the 30 recalculated emission sources can be obtained. Even if there is a gap between them and the real sources, the de-artifacting plug-in of MATLAB (Matrix Laboratory) can be used to remove the artifacts: the 30 relocated sources are shown in FIG. 6, and the sources to be removed can be selected and removed directly.
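  • The linear mixing model behind this source-separation step can be illustrated with a toy two-source, two-electrode example. Here the mixing matrix is known in advance, so unmixing is simply matrix inversion; real independent component analysis must estimate both the matrix and the sources from the data, and all numbers below are invented:

```python
# Each electrode records a linear mixture x = A.s of the underlying sources.
# Two sources: s[0] = brain activity, s[1] = eye-blink artifact.
A = [[1.0, 0.8],   # electrode 1 weights
     [0.5, 1.2]]   # electrode 2 weights

def mix(A, s):
    return [A[0][0]*s[0] + A[0][1]*s[1],
            A[1][0]*s[0] + A[1][1]*s[1]]

def unmix(A, x):
    # Invert the known 2x2 mixing matrix to recover the sources.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [( A[1][1]*x[0] - A[0][1]*x[1]) / det,
            (-A[1][0]*x[0] + A[0][0]*x[1]) / det]

s = [2.0, 5.0]        # true sources at one time instant
x = mix(A, s)         # what the electrodes record
est = unmix(A, x)     # recovered sources
est[1] = 0.0          # zero out the artifact source
clean = mix(A, est)   # re-project to electrode space
print(clean)          # electrode signals with only the brain source left
```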
  • EEG data reflects dynamic changes of brain potential, and a DC signal carries no brain information, so the DC component must be eliminated in EEG analysis. Baseline drift also occurs in the large-noise-removal step, so removal of the DC component is completed as the last step of EEG preprocessing: the current DC component of each channel is obtained by averaging that channel's data, and subtracting this average from the data removes the DC component.
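  • A minimal sketch of this DC-removal step, subtracting each channel's mean from that channel (example values are made up):

```python
# Remove the DC component per channel by subtracting the channel mean.

def remove_dc(channels):
    """channels: list of per-channel sample lists; returns zero-mean copies."""
    cleaned = []
    for ch in channels:
        mean = sum(ch) / len(ch)
        cleaned.append([v - mean for v in ch])
    return cleaned

eeg = [[1.0, 2.0, 3.0],      # channel with a DC offset of +2
       [10.0, 10.0, 10.0]]   # pure DC channel
zeroed = remove_dc(eeg)
print(zeroed)  # [[-1.0, 0.0, 1.0], [0.0, 0.0, 0.0]]
```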
  • Because the time-domain EEG signal is too long to be input directly into the neural network for training, it is cut into slices, and each slice is labeled with the corresponding state: distracted driving or normal driving.
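  • The slicing-and-labelling step can be sketched as follows; the 200-sample slice length and the majority-vote labelling rule are illustrative assumptions, not requirements of the patent:

```python
# Cut a long recording into fixed-length slices, labelling each slice
# by the majority driving state among its samples.

def slice_with_labels(signal, labels, slice_len=200):
    """labels[i] is the driving state ('distracted'/'normal') of sample i."""
    pieces = []
    for start in range(0, len(signal) - slice_len + 1, slice_len):
        seg = signal[start:start + slice_len]
        lab_window = labels[start:start + slice_len]
        label = max(set(lab_window), key=lab_window.count)  # majority vote
        pieces.append((seg, label))
    return pieces

sig = [0.0] * 600
labs = ["normal"] * 400 + ["distracted"] * 200
out = slice_with_labels(sig, labs)
print(len(out), [lab for _, lab in out])  # 3 ['normal', 'normal', 'distracted']
```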
  • S204 Input the preprocessed data into a preset recurrent neural network for training, optimize the parameters of the recurrent neural network, and obtain the distraction detection model.
  • The EEG signal is special: at a single point in time it carries spatial information (the EEG signals from different locations in the brain), and across time it carries temporal information (a real time-domain signal). This embodiment therefore combines the advantages of convolutional and recurrent neural networks: the first few layers use convolution to extract the spatial characteristics of a single time point, and the processed data is then input to a gated recurrent network sensitive to time series to find the temporal characteristics of the data. The 128-length feature vector finally obtained is fed to the fully connected state-classification layers.
  • FIG. 8 is a schematic diagram of the recurrent neural network for time-series driving-distraction prediction provided by this embodiment. The first three layers are convolutional, and the data in each layer passes through convolution, pooling, batch normalization, and activation before reaching the next layer.
  • Specifically: the preprocessed data of size b × 200 × 30 is first passed through the first-layer convolution kernel (3 × 3 × 3) and first-layer pooling window (1 × 2 × 2) to obtain data of size b × 5 × 6 × 200 × 1; this is input into the second-layer convolution kernel (3 × 3 × 3) and second-layer pooling window (1 × 1 × 2) to obtain data of size b × 2 × 3 × 100 × 64; it is then input into the third-layer convolution kernel (2 × 1 × 3) and third-layer pooling window (1 × 1 × 2) to obtain data of size b × 1 × 1 × 25 × 512.
  • The recurrent neural network in this embodiment includes a gated recurrent unit; the output of the convolutional network serves as the input of the gated recurrent unit. In this embodiment, the gated recurrent unit is configured with an input size of 512, a hidden-layer size of 128, and 4 layers.
  • After the gated recurrent unit, a 128-dimensional feature vector is obtained, which is fed into the fully connected layers to produce the final output.
  • The three fully connected layers are b×128, b×64, and b×16; the final output data is b×2, which gives the detection result of whether the driver is in a distracted state.
  • Further, step S204 includes: inputting the preprocessed data into the recurrent neural network for convolution to obtain a convolution result; inputting the convolution result into a preset gated recurrent unit to obtain a feature vector; and inputting the feature vector into a preset fully connected layer to obtain a detection result. The parameters of the recurrent neural network are then optimized according to the difference between the detection result and the corresponding distraction result label, giving the distraction detection model.
  • The gated recurrent unit is used to control the direction of data flow and the amount of data flowing through the recurrent neural network.
  • FIG. 9 is a schematic diagram of the loop structure of the gated recurrent unit provided by this embodiment.
  • In the gated recurrent unit, x_t represents the input at the current moment and h_{t-1} represents the output at the previous moment. Each recurrent unit contains two gates: an update gate z_t and a reset gate r_t.
  • The update gate controls the extent to which state information from the previous moment is carried into the current state; the larger the value of the update gate, the more previous-moment state information is brought in.
  • The reset gate controls the degree to which the state information of the previous moment is ignored; the smaller the value of the reset gate, the more is ignored.
  • Compared with a traditional recurrent neural network, this structure transmits information from earlier in the sequence to later steps more effectively: when a traditional recurrent network is trained over a very deep sequence, the earlier information has already been forgotten, whereas the gated recurrent unit can control which information is retained and which is ignored, so it performs better.
  • S205: After preprocessing the EEG data, input it into a pre-trained distraction detection model to obtain the driver's distraction detection result; the distraction detection model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels.
  • In this embodiment, a recurrent neural network is used to train on the sample data.
  • The recurrent neural network in this embodiment has a convolution-recurrent structure.
  • The first three layers are all convolutional networks.
  • The data in each layer passes through convolution, pooling, batch normalization, and activation before reaching the next layer.
  • The output of the convolutional network serves as the input of the gated recurrent unit.
  • After the gated recurrent unit, a 128-dimensional feature vector is obtained; it is fed into the fully connected layers to produce the final output, which indicates whether the driver is currently in a distracted state.
  • FIG. 10 is a graph of the detection results of the three network structures provided in this embodiment.
  • Table 2 includes the recognition performance of each of the three networks.
  • The true positive rate represents the proportion of positive cases correctly classified, and the false positive rate represents the proportion of negative cases incorrectly identified as positive.
  • The three networks are all 7-layer networks: the convolutional neural network has no recurrent units and is insensitive to time series, while the recurrent neural network has no convolution nodes and is insensitive to the spatial distribution of the EEG electrodes.
  • The convolution-recurrent model absorbs the characteristics of both the convolutional model and the recurrent model, so its performance is the best, reaching a recognition accuracy of 85%, while the convolutional model and the recurrent model achieve only 78% and 76%, respectively.
  • S206: Send the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information; the distraction detection result is used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result.
  • A vehicle-mounted terminal is installed on the vehicle and is triggered by the distraction detection result to generate driving reminder information. Specifically, after the driver's distraction is detected, driving reminder information such as a voice prompt is generated to remind the driver to concentrate on driving, or music is played to relieve the driver's driving fatigue; this is not limited here.
  • FIG. 11 is a schematic diagram of a device for detecting distraction of a driver provided in the third embodiment of the present application.
  • the device 1100 for detecting driver distraction may be a mobile terminal such as a smart phone or a tablet computer.
  • The device 1100 for detecting driver distraction in this embodiment includes units for executing the steps in the embodiment corresponding to FIG. 1. For details, please refer to the related descriptions in the embodiment corresponding to FIG. 1, which will not be repeated here.
  • the device 1100 for detecting driver distraction in this embodiment includes:
  • The obtaining unit 1101 is used to obtain the driver's EEG data;
  • The detection unit 1102 is configured to preprocess the EEG data and then input it into the pre-trained distraction detection model to obtain the driver's distraction detection result; the distraction detection model is obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels;
  • The sending unit 1103 is configured to send the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information; the distraction detection result is used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result.
  • FIG. 12 is a schematic diagram of a device for detecting distraction of a driver provided in the fourth embodiment of the present application.
  • the device 1200 for detecting driver distraction in this embodiment as shown in FIG. 12 may include a processor 1201, a memory 1202, and a computer program 1203 stored in the memory 1202 and running on the processor 1201.
  • When the processor 1201 executes the computer program 1203, the steps in the above embodiments of the method for detecting driver distraction are implemented.
  • the memory 1202 is used to store a computer program, and the computer program includes program instructions.
  • The processor 1201 is configured to execute the program instructions stored in the memory 1202; specifically, the processor 1201 is configured to call the program instructions to perform the following operations:
  • obtaining the driver's EEG data;
  • preprocessing the EEG data and then inputting it into the pre-trained distraction detection model to obtain the driver's distraction detection result, the distraction detection model being obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels;
  • sending the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information, the distraction detection result being used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result.
  • The processor 1201 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 1202 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1201. A part of the memory 1202 may also include a non-volatile random access memory. For example, the memory 1202 may also store device type information.
  • The processor 1201, the memory 1202, and the computer program 1203 described in the embodiments of the present application can execute the implementations described in the first and second embodiments of the method for detecting driver distraction provided in the embodiments of the present application, and can also execute the implementation of the terminal described in the embodiments of the present application, which will not be repeated here.
  • A computer-readable storage medium stores a computer program; the computer program includes program instructions which, when executed by a processor, implement the following:
  • obtaining the driver's EEG data;
  • preprocessing the EEG data and then inputting it into the pre-trained distraction detection model to obtain the driver's distraction detection result, the distraction detection model being obtained by training a preset recurrent neural network with EEG sample data and its corresponding distraction result labels;
  • sending the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information, the distraction detection result being used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result.
  • the computer-readable storage medium may be the internal storage unit of the terminal described in any of the foregoing embodiments, such as the hard disk or memory of the terminal.
  • The computer-readable storage medium may also be an external storage device of the terminal, such as a plug-in hard disk equipped on the terminal, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, etc.
  • the computer-readable storage medium may also include both an internal storage unit of the terminal and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the terminal.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The technical solution of this application, in essence, or the part that contributes beyond the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method described in each embodiment of the present application.
  • The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Developmental Disabilities (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Engineering & Computer Science (AREA)
  • Physiology (AREA)
  • Social Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Fuzzy Systems (AREA)
  • Computational Linguistics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)

Abstract

A method for detecting driver distraction, comprising: acquiring EEG data of a driver; preprocessing the EEG data and then inputting it into a pre-trained distraction detection model to obtain a distraction detection result for the driver, the distraction detection model being obtained by training a preset recurrent neural network with EEG sample data and corresponding distraction result labels; and sending the distraction detection result to a vehicle-mounted terminal associated with the driver's identity information, the distraction detection result being used to trigger the vehicle-mounted terminal to generate driving reminder information according to the distraction detection result. By using the trained recurrent neural network to analyze the driver's EEG data acquired in real time, the method determines whether the driver is distracted and, when distraction is detected, performs corresponding processing through the preset vehicle-mounted terminal, which improves the accuracy and efficiency of driver distraction detection and thereby reduces the probability of traffic accidents.

Description

一种检测驾驶员分心的方法及装置
本申请要求于2019年08月01日在中国专利局提交的、申请号为201910707858.4、发明名称为“一种检测驾驶员分心的方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请属于计算机应用技术领域,尤其涉及一种检测驾驶员分心的方法及装置。
背景技术
如今汽车使用率日渐升高,虽然汽车的出现极大地方便了社会,但也带来了巨大的交通隐患,特别是交通事故。自2015年来,中国的汽车交通事故率大大增大。这让我们不得不敲响警钟。其中,分心驾驶占据着交通安全非常大的一块,据美国国家高速公路安全管理局的实际路上驾驶实验发现,近80%的碰撞和65%的临界碰撞均与分心驾驶相关。而且随着目前车载娱乐设备,手机等设备的普及,导致驾驶分心的因素也变得越来越多,越来越普遍。
现有技术中通过支持向量机(Support Vector Machine,SVM)来检测驾驶员是否分心驾驶,但由于SVM是借助二次规划来求解支持向量,而求解二次规划将涉及m阶矩阵的计算,m为样本的个数,当m数目很大时该矩阵的存储和计算将耗费大量的机器内存和运算时间。因此,现有技术中在对驾驶员进行分心检测时,存在检测效率较低且不精确的问题。
发明内容
本申请实施例提供了检测驾驶员分心的方法及装置,可以解决现有技术中在对驾驶员进行分心检测时,存在检测效率较低且不精确问题。
第一方面,本申请实施例提供了一种检测驾驶员分心的方法,包括:
获取驾驶员的脑电数据;将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
应理解,通过根据训练得到的循环神经网络检测实时获取到的驾驶员的脑电数据,判断驾驶员是否分心,并在检测到分心时通过预设的车载终端进行对应的处理,提高了驾驶员分心检测的精确度和效率,进而降低交通事故的发生概率。
第二方面,本申请实施例提供了一种检测驾驶员分心的装置,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机 程序时实现以下步骤:
获取驾驶员的脑电数据;
将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
第三方面,本申请实施例提供了一种检测驾驶员分心的装置,包括:
获取单元,用于获取驾驶员的脑电数据;
检测单元,用于将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
发送单元,用于将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时使所述处理器执行上述第一方面的方法。
第五方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述的检测驾驶员分心的方法。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例一提供的检测驾驶员分心的方法的流程图;
图2是本申请实施例二提供的检测驾驶员分心的方法的流程图;
图3是本申请实施例二提供的模型训练和检测应用示意图;
图4是本申请实施例二提供的脑电数据预处理流程示意图;
图5是本申请实施例二提供的采集装置的电极位置图;
图6是本申请实施例二提供的伪迹分析示意图;
图7是本申请实施例二提供的脑电中的大噪声及选取去除示意图;
图8是本申请实施例二提供的时序驾驶分心预测循环神经网络示意图;
图9是本申请实施例二提供的门控循环单元的循环结构示意图;
图10是本申请实施例二提供的三种网络结构的检测结果曲线图;
图11是本申请实施例三提供的检测驾驶员分心的装置的示意图;
图12是本申请实施例四提供的检测驾驶员分心的装置的示意图。
具体实施方式
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。
参见图1,图1是本申请实施例一提供的一种检测驾驶员分心的方法的流程图。本实施例中检测驾驶员分心的方法的执行主体为具有检测驾驶员分心的功能的装置,包括但不限于计算机、服务器、平板电脑或者终端等装置。如图所示的检测驾驶员分心的方法可以包括以下步骤:
S101:获取驾驶员的脑电数据。
如今汽车使用率日渐升高,虽然汽车的出现极大地方便了社会,但也带来了巨大的交通隐患,特别是交通事故。自2015年来,中国的汽车交通事故率大大增大,这让我们不得不敲响警钟。其中,分心驾驶占据着交通安全非常大的一块,据美国国家高速公路安全管理局的实际路上驾驶实验发现,近80%的碰撞和65%的临界碰撞均与分心驾驶相关。因此,分心驾驶的检测显得尤为重要。而且随着目前车载娱乐设备,手机等设备的普及,导致驾驶分心的因素也变得越来越多,越来越普遍。因此有必要对驾驶员的分心状态进行检测,提高道路的安全性。作为汽车的操纵者,驾驶员的驾驶表现往往会对局部交通情况有很大的影响。不安全的驾驶方式,疲劳驾驶,分心驾驶都会为道路安全带来极大的威胁。许多研究者针对分心对路面安全的影响做了研究。对如果能够提前预测驾驶员的分心状态,疲劳状态等,即可在危险情况下对驾驶员进行提醒,为道路交通安全提供更大的保障,同时为道路交通安全性提供理论依据。驾驶员所处的驾驶状态的预测研究对交通系统的安全性具有积极作用。除了缓和城市交通压力、有效降低交通事故发生率。也能够在将来汽车的辅助驾驶系统中,自动驾驶和手动驾驶的交接权上发挥作用。
为解决驾驶分心问题,前人提出许多检测人目前精神状态的办法。驾驶员驾驶状态预测研究主要是为了提高驾驶安全和交通安全。本实施例主要是对驾驶状态预测方法的研究,以驾驶员的经过预处理的脑电信号作为输入特征,通过卷积神经网络识别驾驶状态,从而预测驾驶员的驾驶状态信息,为危险驾驶行为做出预警,减少交通事故的发生,提高驾驶安全性,同时还为脑电信号的处理提供一种新的思路,如果有足够大的数据库支撑,时域信号处理脑 电应该仍有很大的挖掘潜力。
S102:将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到。
传统的脑电分析往往是将数据从时域转换成频域进行分析,然而在这种情况下,时域的时间信号就被破坏,即使是其他的一些改良的方法也或多或少的破坏了时域的信息,简化的信息未必能够真实的反映脑电的全部。本实施例希望能够利用现如今计算机强大的计算性能,通过神经网络来直接处理时域的脑电信号,虽然最后测试集识别率只有85%,与传统方法相当,但是神经网络往往更擅长于处理大数据,本次实验收集的脑电数据并不多,只有18个小时的数据,通过建立一个庞大的数据库支撑,相信利用神经网络处理脑电信号的潜力是巨大的。
整体的研究先进行训练模块,通过样本采集以及预处理,得到清洗后的脑电数据,用其训练本实施例中用于检测驾驶员分析的卷积-循环神经网络CSRN,并且不断的调整网络结构参数以优化网络结构,得到一个最好的网络参数,这个调整好的网络模型作为实际的模型应用到车载系统上,若有一个采集脑电的仪器,利用这个训练好的网络即可实时预测驾驶员分心状态,并且反馈到辅助驾驶系统,做出合理的决策。
在卷积神经网络发展之前,一般使用的网络结构是多层的感知器,理论上一个多层的全连接层也能够拟合任何的多项式函数,然而实际上效果却不好,因为为了拟合一个足够复杂的函数,多层感知器需要非常庞大的参数来支撑,这不但增大了训练的难度,而且非常容易陷入过拟合的现象,除此之外,如果输入是一张图像,那么每个像素点都会连接到下一层的每一个神经元,这导致了网络对位置的敏感度太高,泛化能力弱,一旦同一个目标出现在不同的区域,网络需要重新训练,并且对于不同大小的图像,网络的输入是固定的,必须裁剪变换成指定大小的图像才能进行输入。
由于多层感知器的诸多缺点,卷积神经网络应运而生。卷积神经网络是一类包含卷积计算且具有深度结构的前馈神经网络,是深度学习的代表算法之一。在每一个卷积层中,都有一个指定大小的卷积核,这个卷积核根据给定的步长,完成对整个数据的卷积操作,因此可以认为网络对位置的敏感度降低,且兼容不同大小的数据。卷积神经网络已被诸多实验证明在特征提取方面效果非常优秀,如今许多图像识别的技术也是基于卷积神经网络,本应用的网络结构也是采用卷积层,取得了很好的效果。
本实施例中采用循环神经网络对样本数据进行训练,在本实施例中的循环神经网络中包括卷积-循环的结构。具体的,前三层网络都是卷积网络,每一层网络中的数据经过卷积、池化、批规范化、激活后到达下一层,卷积网络的输出作为门控循环单元的输入,经过门控循环单元后得到预设长度的特征向量,例如128数字位的特征向量,输入到全连接层中最终得到输出,检测驾驶员当前是否处于分心状态。
进一步的,步骤S102之后,还可以包括:若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
本实施例中在车辆上预设有辅助驾驶装置,本实施例的辅助驾驶装置用于辅助驾驶员进行驾驶,比如,当驾驶员分心时,可以进行相应的提醒,或者进行安全保护,例如提升安全保护等级等。在通过上述训练得到的循环神经网络检测当前驾驶员的脑电数据,得到驾驶员的分心检测结果时,当分心检测结果为驾驶员分心,则将该分心检测结果发送至辅助驾驶装置,以辅助驾驶员安全驾驶。
S103:将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
本实施例中的车辆上安装有车载终端,用于触发车载终端根据分心检测结果生成驾驶提醒信息。具体的,在检测到驾驶员分心之后,生成驾驶提醒信息,例如语音信息,以提醒驾驶员专心驾驶,或者播放音乐,缓解驾驶员的驾驶疲劳,此处不做限定。
上述方案,通过获取驾驶员的脑电数据;将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。通过根据训练得到的循环神经网络检测实时获取到的驾驶员的脑电数据,判断驾驶员是否分心,并在检测到分心时通过预设的车载终端进行对应的处理,提高了驾驶员分心检测的精确度和效率,进而降低交通事故的发生概率。
参见图2,图2是本申请实施例二提供的一种检测驾驶员分心的方法的流程图。本实施例中检测驾驶员分心的方法的执行主体为具有检测驾驶员分心的功能的装置,包括但不限于计算机、服务器、平板电脑或者终端等装置。如图所示的检测驾驶员分心的方法可以包括以下步骤:
S201:获取驾驶员的脑电数据。
在本实施例中S201与图1对应的实施例中S101的实现方式完全相同,具体可参考图1对应的实施例中的S101的相关描述,在此不再赘述。
请一并参阅图3,图3是本实施例提供的模型训练和检测应用示意图,其中,在训练时,通过脑电样本采集以及脑电预处理,得到清洗后的脑电数据,用其训练CSRN网络,并且不断的调整优化网络参数以优化网络结构,得到一个最好的网络参数,即得到参数权重固定的CSRN网络,这个调整好的网络模型作为实际的模型应用到车载系统上,通过一个采集脑电的脑电设备来获取实时脑电数据,利用这个训练好的CSRN网络即可实时检测并获取驾驶员分心状态,最后反馈到车辆,例如车辆中预置的辅助驾驶装置,做出合理的决策和调控。
S202:获取所述脑电样本数据。
本实施例希望能够利用现如今计算机强大的计算性能,通过神经网络来直接处理时域的 脑电信号,虽然最后测试集识别率只有85%,与传统方法相当,但是神经网络往往更擅长于处理大数据,本实施例通过收集的脑电数据,来建立一个数据库支撑。在实际试验过程中,实验者通过采集被测者18个小时的数据,建立一个庞大的数据库支撑,相信利用神经网络处理脑电信号的潜力是巨大的。
S203:对所述脑电样本数据进行预处理,得到预处理数据。
整体的研究先进行训练模块,通过样本采集以及预处理,得到清洗后的脑电数据,用其训练CSRN网络,并且不断的调整网络结构参数以优化网络结构,得到一个最好的网络参数,这个调整好的网络模型作为实际的模型应用到车载系统上,若有一个采集脑电的仪器,利用这个训练好的网络即可实时预测驾驶员分心状态,并且反馈到辅助驾驶系统,做出合理的决策。
请一并参阅图4,图4为脑电数据预处理流程示意图。脑电信号十分微弱,需要通过极高放大率的放大器才能捕捉到脑电信号。实际应用中脑电往往会有较低的信噪比,除了高频率的噪声和50Hz的工频噪声以外,与脑电频率相近的杂波也会混入脑电信号当中,这些混入其中的杂波常常被称为伪迹,本实施例的伪迹可以包括眼电伪迹、肌电伪迹以及心电伪迹等,未经过去除伪迹的脑电信号信噪比很低,没办法直接使用,因此必须经过预处理步骤,脑电预处理主要流程中,通过导入数据、数据降采样、导入脑电位置信息、脑电主成分分析、去除脑电伪迹、去除大噪声、去除基线最后进行时序数据切片,得到预处理数据。
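The preprocessing flow just described (import the data, downsample, attach electrode position information, principal/independent component analysis, artifact removal, large-noise removal, baseline removal, and time-series slicing) is naturally expressed as an ordered chain of cleaning steps. The sketch below is illustrative only; the step functions are simple placeholders rather than the application's implementation, and a real pipeline would wrap an EEG toolbox such as the MATLAB EEGLAB plugins mentioned later.

```python
def run_pipeline(raw_eeg, steps):
    """Apply each preprocessing step, in order, to one EEG channel."""
    data = raw_eeg
    for name, step in steps:
        data = step(data)  # each step returns cleaned data for the next step
    return data

# Placeholder steps standing in for the stages named in the text.
steps = [
    ("remove_baseline", lambda ch: [x - sum(ch) / len(ch) for x in ch]),
    ("slice_2_samples", lambda ch: [ch[i:i + 2] for i in range(0, len(ch), 2)]),
]

epochs = run_pipeline([1.0, 3.0, 1.0, 3.0], steps)  # -> [[-1.0, 1.0], [-1.0, 1.0]]
```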
进一步的,步骤S203包括:
S2031:获取所述脑电样本数据对应的采集点的标识信息,确定在数据采集装置上与所述采集点的标识信息对应的电极的第一位置信息。
请一并参阅图5,图5为本实施例用到的采集装置的电极位置图,其中,C3~C5、Cp3~Cp5、F4~F4、Fc1~Fc2、Fp1~Fp2、O1~O2、P3~P4、T4~T5、Tp7~Tp8等图中所有的标识都用于表示采集装置上不同采集位置处对应的电极标识,本实施例中的采集装置可以为脑电帽。由于不同型号的脑电帽的电极数目,位置等都不一样,因此对于脑电的数据,需要输入一个脑电的电极位置信息,从而进行脑电的主成分分析。
进一步的,步骤S2031之前,还包括:对所述脑电样本数据进行降频处理;将降频处理之后的脑电样本数据通过预设频率的低通滤波器,得到滤波之后的脑电样本数据。
具体的,大部分脑电设备的采样频率都很高,在这里我们将脑电数据降频至100Hz以减少计算量,另外将数据通过一个预设频率的低通滤波器,例如上限截止频率为50Hz的低通滤波器,滤去无关的高频噪声和工频噪声。
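As a rough illustration of this downsampling-and-filtering step, the sketch below decimates a signal and applies a crude moving-average smoother. It is a didactic stand-in only: a real implementation would apply a proper anti-aliasing low-pass filter (for example a filter with a 50 Hz upper cutoff, as in the text) before decimating.

```python
def moving_average(signal, k):
    """Crude low-pass: average over a k-sample window centered on each sample."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def downsample(signal, factor):
    """Keep every `factor`-th sample, e.g. 500 Hz -> 100 Hz with factor 5."""
    return signal[::factor]

smoothed = moving_average([1.0, 1.0, 1.0], 3)  # constant signal stays constant
reduced = downsample([1, 2, 3, 4, 5, 6], 2)    # -> [1, 3, 5]
```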
S2032:根据所述第一位置信息,确定所述采集点在大脑表层上对应的发射源的第二位置信息。
由于脑电帽的电极是人为决定的,它仅代表了脑电的接受源,并不代表脑电的发射源。脑电的每一个采样电极都来源于多个发射源的叠加效应,因此需要通过脑电位置信息(即第 一位置信息)对脑电发射信号源(即第二位置信息)进行重新定位。
需要说明的是,本实施例中为了便于区别和体现电极位置和大脑皮层发射源位置之间的不同和联系,通过第一位置信息来表示数据采集装置上采集电极的位置信息,通过第二位置信息来表示脑电发射源对应的位置信息。
进一步的,步骤S2032包括:确定在所述数据采集装置上所述第一位置信息对应的电极;确定所述电极对应的发射源的第二位置信息;所述发射源为大脑表层上生成所述脑电样本数据的区域。
具体的,本实施例中在确定了第一位置信息之后,根据第一位置信息确定数据采集装置上所述第一位置信息对应的电极,再确定电极对应的发射源的第二位置信息,本实施例中的发射源用于表示大脑表层上生成脑电样本数据的区域。
S2033:根据所述第二位置信息去除所述脑电样本数据中的伪迹,并根据预设的切片时段进行切片,得到所述预处理数据;所述伪迹为设定的待去除位置处对应的脑电样本数据。
由于脑电帽的电极是人为决定的,它仅代表了脑电的接受源,并不代表脑电的发射源。脑电的每一个采样电极都来源于多个发射源的叠加效应,因此需要通过1.2中的脑电位置信息对脑电发射信号源进行重新定位。另外通过独立成分分析方法还能够定位到一些伪迹的发射源,从而进行伪迹的去除工作。
独立成分分析的工作原理如下:在本实施例中,可以假设大脑中同时有n个发射源正在发射脑电信号,而实验采用了n个电极的脑电帽来采集这个n个放射源发射的信号,经过一段时间,可以得到一组数据
x ∈ {x^(i); i = 1, 2, …, m},其中,m表示采样数。
假设n个脑电发射源为:
s = (s_1, s_2, ..., s_n)^T, s ∈ R^n,其中每一个维度都是一个独立的源,令A为一个未知的混合矩阵,用来叠加脑电发射信号,即:
x = As
由于A和s都是未知的,需要通过x推出s,这个过程也被称为盲源信号分离。令W = A^(-1),则s^(i) = Wx^(i)。假设随机变量s有概率密度函数p_s(s)(连续值是概率密度函数,离散值是概率)。为了简便,再假设s是实数,还有一个随机变量x = As,A和x都是实数。令p_x(x)是x的概率密度。设概率密度函数为p(x),其对应的累积分布函数为F(x),其中关于p_x(x)的推导公式为:
F_x(x) = P(X ≤ x) = P(As ≤ x) = P(s ≤ Wx) = F_s(Wx)
p_x(x) = F'_x(x) = F'_s(Wx) = p_s(Wx)·|W|
接着可以利用最大似然估计来计算参数W。假定每个s_i有概率密度p_s,那么给定时刻原信号的联合分布就是:
p(s) = ∏_{i=1}^{n} p_s(s_i)
这个式子有一个假设前提:每个信号源发出的信号独立。由p_x(x)的推导公式得到:
p(x) = ∏_{i=1}^{n} p_s(w_i x)·|W|,其中w_i为W的第i行。
如果没有先验知识,无法求得W和s,因此需要知道p_s(s),即选取一个概率密度函数赋给s。由于概率密度函数p(x)由累积分布函数F(x)求导得到,且常规的F(x)需要满足两个性质:函数单调递增且其值域范围为[0,1],而sigmoid函数满足此条件。因此假定s的累积分布函数符合sigmoid函数:
g(s) = 1 / (1 + e^(−s))
求导后即:
p_s(s) = g'(s) = g(s)(1 − g(s))
在知道了p_s(s)以后,只需要确定W,因此在给定脑电采样数据x的情况下,求出对数似然估计:
ℓ(W) = ∑_{i=1}^{m} ( ∑_{j=1}^{n} log g'(w_j x^(i)) + log|W| )
接下来就可以对W进行求导和迭代,只需指定学习率α,即可得到W的更新公式:
W := W + α · ( (1 − 2g(Wx^(i))) (x^(i))^T + (W^T)^(−1) )
在本次实验中,计算完独立成分分析,即可得到30个计算出来的脑电发射源,进行下一步的伪迹去除。
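The weight update derived above can be written out for a toy two-channel case. The sketch below performs one stochastic gradient-ascent step of the log-likelihood with a sigmoid source prior; it is a didactic illustration of the update rule, not the 30-channel computation used in the experiment.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ica_step(W, x, alpha):
    """One gradient-ascent step for a 2-source, 2-channel unmixing matrix W.

    Implements W <- W + alpha * ((1 - 2 g(W x)) x^T + (W^T)^-1).
    """
    # estimated sources s = W x
    s = [W[0][0] * x[0] + W[0][1] * x[1],
         W[1][0] * x[0] + W[1][1] * x[1]]
    # score term (1 - 2 g(s_i)) x_j
    g = [1.0 - 2.0 * sigmoid(si) for si in s]
    grad = [[g[i] * x[j] for j in range(2)] for i in range(2)]
    # (W^T)^-1 written out explicitly for the 2x2 case
    det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
    wt_inv = [[W[1][1] / det, -W[1][0] / det],
              [-W[0][1] / det, W[0][0] / det]]
    return [[W[i][j] + alpha * (grad[i][j] + wt_inv[i][j]) for j in range(2)]
            for i in range(2)]
```

With the identity matrix and a zero sample, the score term vanishes and only the `(W^T)^-1` term moves the weights.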
请一并参阅图6,图6为本实施例提供的伪迹分析示意图。在进行了主成分分析运算后,即可获取计算出的30个新的发射源,即使与真实源有差距,在这里可以利用矩阵实验室(Matrix Laboratory,MATLAB)的去伪迹插件对伪迹进行去除,其中,重新定位的30个源如图6所示,可以直接选取需要去除的源直接去除。
请一并参阅图7,图7为本实施例提供的脑电中的大噪声及选取去除示意图。在实际应用中,往往会有一些无法避免的情况导致被试进行一些大幅度的晃动,或者电极掉落,当出现这种情况的时候脑电往往会有一个巨大的波形抖动,需要人工去除波形,可以通过MATLAB中脑电图处理插件,选取不需要的波形直接去除。
脑电数据反映的是一个动态的大脑电位变化,直流信号并不能够反映大脑的信息,因此在脑电信号分析中,直流分量需要进行剔除。另外,在去除大噪声的步骤中也会出现基线漂移的现象,因此去除直流分量在脑电预处理的最后一步完成,可以通过计算脑电每一个通道的数据平均值获取当前直流分量,将所有数据减去该分量即可去除直流分量。
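The DC removal described here amounts to computing each channel's mean and subtracting it, as in this minimal sketch:

```python
def remove_dc(channels):
    """Subtract each channel's mean so only the dynamic EEG component remains."""
    cleaned = []
    for ch in channels:
        mean = sum(ch) / len(ch)  # current DC component of this channel
        cleaned.append([v - mean for v in ch])
    return cleaned

# A channel with mean 2.0 becomes zero-mean after DC removal.
result = remove_dc([[1.0, 3.0]])  # -> [[-1.0, 1.0]]
```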
脑电的时域信号太长,无法直接输入到神经网络当中训练,在这里我们将脑电数据切分成短时的时间序列,在预设时段内根据预设的切片周期进行切片,例如,从15分钟一段的数据切分为2秒一段的数据,减少神经网络计算量且提高网络的实时性,每一份标记为对应的状态,包括处于驾驶分心状态和正常驾驶状态。
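Slicing a long recording into fixed 2-second windows at 100 Hz (200 samples per epoch, matching the b×200×30 network input) can be sketched as follows; the function name and the handling of trailing samples are illustrative assumptions.

```python
def slice_epochs(channel, fs=100, seconds=2):
    """Cut one channel into non-overlapping epochs of fs * seconds samples."""
    win = fs * seconds
    n = len(channel) // win  # any trailing partial window is dropped
    return [channel[i * win:(i + 1) * win] for i in range(n)]

epochs = slice_epochs(list(range(1000)))  # 1000 samples -> 5 epochs of 200
```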
S204:将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型。
在卷积神经网络发展之前,一般使用的网络结构是多层的感知器,理论上一个多层的全连接层也能够拟合任何的多项式函数,然而实际上效果却不好,因为为了拟合一个足够复杂的函数,多层感知器需要非常庞大的参数来支撑,这不但增大了训练的难度,而且非常容易陷入过拟合的现象,除此之外,如果输入是一张图像,那么每个像素点都会连接到下一层的每一个神经元,这导致了网络对位置的敏感度太高,泛化能力弱,一旦同一个目标出现在不同的区域,网络需要重新训练,并且对于不同大小的图像,网络的输入是固定的,必须裁剪变换成指定大小的图像才能进行输入。由于多层感知器的诸多缺点,卷积神经网络应运而生。卷积神经网络是一类包含卷积计算且具有深度结构的前馈神经网络,是深度学习的代表算法之一。在每一个卷积层中,都有一个指定大小的卷积核,这个卷积核根据给定的步长,完成对整个数据的卷积操作,因此可以认为网络对位置的敏感度降低,且兼容不同大小的数据。
卷积神经网络已被诸多实验证明在特征提取方面效果非常优秀,如今许多图像识别的技术也是基于卷积神经网络,本应用的网络结构也是采用卷积层,取得了很好的效果。
脑电信号比较特殊,在同一个时间点上,它拥有空间的信息,脑部不同位置发出的脑电波信号,又有着时间上的信息,即时域信号,因此本实施例集中了卷积神经网络以及循环神经网络的优势,在网络的前几层利用卷积神经网络提取单个时间点的空间特征,再将处理过的数据输入到对时间序列敏感的门控循环网络,寻找数据的时间特征,最终得到的128长度的特征向量进行全连接层的状态分类网络。
请一并参阅图8,图8为本实施例提供的时序驾驶分心预测循环神经网络示意图。在图中前三层网络都是卷积网络,每一层网络中的数据经过卷积、池化、批规范化、激活后到达下一层。具体的,先输入b×200×30的预处理数据,经过第一层卷积核3×3×3和第一层池化窗口1×2×2,得到b×5×6×200×1的数据;再将其输入第二层卷积核3×3×3和第二层池化窗口1×1×2,得到b×2×3×100×64的数据;再将其输入第三层卷积核2×1×3和第二层池化窗口1×1×2,得到b×1×1×25×512的数据。进一步的,本实施例中的循环神经网络中包括门控制循环单元,卷积网络的输出作为门控循环单元的输入,本实施例中的门控制单元中的门控制循环节点设定为输入数据512为、隐藏层128位以及层数为4层的结构,经过门控循环单元后得到128长度的特征向量,输入到全连接层中最终得到输出,三层全连接层分别为b×128、b×64以及b×16,最终得到的输出数据为b×2,最后得到驾驶员是否处于分心状态的检测结果。
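The per-layer ordering described above (convolution, then pooling, then batch normalization, then activation) can be expressed as a simple composition. The operations below are identity-style scalar placeholders chosen only to show the order of application, not real convolution kernels.

```python
def layer(data, conv, pool, norm, act):
    """One CSRN-style layer: convolution -> pooling -> batch norm -> activation."""
    return act(norm(pool(conv(data))))

def conv_stack(data, layers):
    """Run the data through each (conv, pool, norm, act) layer in turn."""
    for conv, pool, norm, act in layers:
        data = layer(data, conv, pool, norm, act)
    return data

# Toy placeholders: "conv" adds 1, "pool" doubles, norm and act are identity.
toy = [(lambda x: x + 1, lambda x: x * 2, lambda x: x, lambda x: x)]
out = conv_stack(3, toy)  # (3 + 1) * 2 = 8
```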
进一步的,步骤S204包括:将所述预处理数据输入所述循环神经网络中进行卷积得到卷积结果,并将所述卷积结果输入预设的门控循环单元得到特征向量,将所述特征向量输入预设的全连接层得到检测结果;根据所述检测结果与其对应的分心结果标签之间的差异值对所述循环神经网络的参数进行优化,得到所述分心检测模型;所述门控循环单元用于控制所述循环神经网络中的数据流转方向和流转数据量。
具体的,在处理时间信号或者其他序列信号的时候,很容易可以发现传统神经网络和卷积神经网络的不足。一段序列,如一篇文章,很有可能上一个词和下一个词有关联,甚至于上一段和下一段有联系,传统神经网络无法构建这种联系。卷积神经网络虽然可以构建相邻区域的联系,抓取特征,但是一旦超过卷积核的范围,就无法提取这种特征了,这在长序列中是一个致命的缺点,循环神经网络则很好的解决了这个问题。
请一并参阅图9,图9为本实施例提供的门控循环单元的循环结构示意图。其中,门控循环单元x t代表着当前时刻的输入x,h t-1代表着上一个时刻的输出。而每一个循环单元中有两个门,分别为更新门z t和重置门r t。更新门用于控制前一时刻的状态信息被带入到当前状态中的程度,更新门的值越大说明前一时刻的状态信息带入越多。重置门用于控制忽略前一时刻的状态信息的程度,重置门的值越小说明忽略得越多。相比起传统的神经网络,这种结构能够更好传递前面序列的信息到后面,因为传统循环神经网络当训练到很深层的时候,靠前面的信息已经被忽略了,而门控循环单元能够控制保留的信息和忽略的信息,因此在循环神经网络中表现的更好。
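The update-gate/reset-gate behavior of FIG. 9 can be sketched as a scalar GRU cell. All weights here are illustrative scalars (a real GRU uses weight matrices and bias terms):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x_t, h_prev, w):
    """One scalar GRU step: x_t is the current input, h_prev is h_{t-1}."""
    # update gate z_t: how much previous state is carried into the current state
    z = sigmoid(w["wz_x"] * x_t + w["wz_h"] * h_prev)
    # reset gate r_t: how much of the previous state is ignored
    r = sigmoid(w["wr_x"] * x_t + w["wr_h"] * h_prev)
    # candidate state built from the reset-scaled previous output
    h_tilde = math.tanh(w["wh_x"] * x_t + w["wh_h"] * (r * h_prev))
    # blend previous state and candidate according to the update gate
    return (1.0 - z) * h_prev + z * h_tilde
```

With all weights zero, both gates sit at 0.5 and the candidate state is 0, so the output is simply half the previous state.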
S205:将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到。
本实施例中采用循环神经网络对样本数据进行训练,在本实施例中的循环神经网络中包括卷积-循环的结构。具体的,前三层网络都是卷积网络,每一层网络中的数据经过卷积、池化、批规范化、激活后到达下一层,卷积网络的输出作为门控循环单元的输入,经过门控循环单元后得到128长度的特征向量,输入到全连接层中最终得到输出,检测驾驶员当前是否处于分心状态。
请一并参阅图10与表2,图10为本实施例中提供的三种网络结构的检测结果曲线图,表2中包括了三种网络各自的识别表现。其中真阳性率代表将正例分对的比例,假阳性率代表将负例错误识别为正例的比例。在本实施例中,我们采取了三种网络结构进行比较,其中包括我们最终的卷积-循环网络,以及卷积神经网络和循环神经网络。三种网络均为7层的网络,其中卷积神经网络没有加入循环单元,对时间序列不敏感,而循环神经网络没有加入卷积节点,对脑电空间位置分布不敏感,卷积-循环模型吸收了卷积模型和循环模型两者的特点,因此表现最佳,达到了85%的识别准确率,而卷积模型和循环模型则只有78%和76%的识别准确率。
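The true positive rate and false positive rate reported in Table 2 are computed as follows; this is a minimal sketch for binary distracted/normal labels (1 = distracted is an assumed convention).

```python
def tpr_fpr(y_true, y_pred):
    """True positive rate and false positive rate for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fn), fp / (fp + tn)

rates = tpr_fpr([1, 1, 0, 0], [1, 0, 1, 0])  # -> (0.5, 0.5)
```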
S206:将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
本实施例中的车辆上安装有车载终端,用于触发车载终端根据分心检测结果生成驾驶提醒信息。具体的,在检测到驾驶员分心之后,生成驾驶提醒信息,例如语音信息,以提醒驾驶员专心驾驶,或者播放音乐,缓解驾驶员的驾驶疲劳,此处不做限定。
参见图11,图11是本申请实施例三提供的一种检测驾驶员分心的装置的示意图。检测驾驶员分心的装置1100可以为智能手机、平板电脑等移动终端。本实施例的检测驾驶员分心的装置1100包括的各单元用于执行图1对应的实施例中的各步骤,具体请参阅图1及图1对应的实施例中的相关描述,此处不赘述。本实施例的检测驾驶员分心的装置1100包括:
获取单元1101,用于获取驾驶员的脑电数据;
检测单元1102,用于将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
发送单元1103,用于将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
参见图12,图12是本申请实施例四提供的一种检测驾驶员分心的装置的示意图。如图12所示的本实施例中的检测驾驶员分心的装置1200可以包括:处理器1201、存储器1202以及存储在存储器1202中并可在处理器1201上运行的计算机程序1203。处理器1201执行计算机程序1203时实现上述各个检测驾驶员分心的方法实施例中的步骤。存储器1202用于存储计算机程序,所述计算机程序包括程序指令。处理器1201用于执行存储器1202存储的程序指令。其中,处理器1201被配置用于调用所述程序指令执行以下操作:
处理器1201用于:
获取驾驶员的脑电数据;
将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
应当理解,在本申请实施例中,所称处理器1201可以是中央处理单元(Central Processing Unit,CPU),该处理器还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
该存储器1202可以包括只读存储器和随机存取存储器,并向处理器1201提供指令和数据。存储器1202的一部分还可以包括非易失性随机存取存储器。例如,存储器1202还可以存储设备类型的信息。
具体实现中,本申请实施例中所描述的处理器1201、存储器1202、计算机程序1203可执行本申请实施例提供的检测驾驶员分心的方法的第一实施例和第二实施例中所描述的实现方式,也可执行本申请实施例所描述的终端的实现方式,在此不再赘述。
在本申请的另一实施例中提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令被处理器执行时实现:
获取驾驶员的脑电数据;
将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
所述计算机可读存储介质可以是前述任一实施例所述的终端的内部存储单元,例如终端的硬盘或内存。所述计算机可读存储介质也可以是所述终端的外部存储设备,例如所述终端上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,所述计算机可读存储介质还可以既包括所述终端的内部存储单元也包括外部存储设备。所述计算机可读存储介质用于存储所述计算机程序及所述终端所需的其他程序和数据。所述计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到各种等效的修改或替换,这些修改或替换都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护 范围为准。

Claims (20)

  1. 一种检测驾驶员分心的方法,其特征在于,包括:
    获取驾驶员的脑电数据;
    将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
    将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
  2. 如权利要求1所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之前,还包括:
    获取所述脑电样本数据;
    对所述脑电样本数据进行预处理,得到预处理数据;
    将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型。
  3. 如权利要求2所述的检测驾驶员分心的方法,其特征在于,所述将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型,包括:
    将所述预处理数据输入所述循环神经网络中进行卷积得到卷积结果,并将所述卷积结果输入预设的门控循环单元得到特征向量,将所述特征向量输入预设的全连接层得到检测结果;根据所述检测结果与其对应的分心结果标签之间的差异值对所述循环神经网络的参数进行优化,得到所述分心检测模型;所述门控循环单元用于控制所述循环神经网络中的数据流转方向和流转数据量。
  4. 如权利要求2所述的检测驾驶员分心的方法,其特征在于,所述对所述脑电样本数据进行预处理,得到预处理数据,包括:
    获取所述脑电样本数据对应的采集点的标识信息,确定在数据采集装置上与所述采集点的标识信息对应的电极的第一位置信息;
    根据所述第一位置信息,确定所述采集点在大脑表层上对应的发射源的第二位置信息;
    根据所述第二位置信息去除所述脑电样本数据中的伪迹,并根据预设的切片时段进行切片,得到所述预处理数据;所述伪迹为设定的待去除位置处对应的脑电样本数据。
  5. 如权利要求4所述的检测驾驶员分心的方法,其特征在于,所述获取所述脑电样本数据对应的采集点的标识信息,确定在数据采集装置上与所述采集点的标识信息对应的电极的第一位置信息之前,还包括:
    对所述脑电样本数据进行降频处理;
    将降频处理之后的脑电样本数据通过预设频率的低通滤波器,得到滤波之后的脑电样本数据。
  6. 如权利要求1所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之后,还包括:
    若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
  7. 如权利要求2所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之后,还包括:
    若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
  8. 如权利要求3所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之后,还包括:
    若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
  9. 如权利要求4所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之后,还包括:
    若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
  10. 如权利要求5所述的检测驾驶员分心的方法,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之后,还包括:
    若所述分心检测结果为所述驾驶员分心,则将所述分心检测结果发送至所述车辆中预设的辅助驾驶装置,用于辅助所述驾驶员安全驾驶。
  11. 一种检测驾驶员分心的装置,其特征在于,包括:
    获取单元,用于获取驾驶员的脑电数据;
    检测单元,用于将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
    发送单元,用于将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
  12. 如权利要求11所述的检测驾驶员分心的装置,其特征在于,所述检测驾驶员分心的装置,还包括:
    样本获取单元,用于获取所述脑电样本数据;
    预处理单元,用于对所述脑电样本数据进行预处理,得到预处理数据;
    训练单元,用于将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型。
  13. 一种检测驾驶员分心的装置,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如下步骤:
    获取驾驶员的脑电数据;
    将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
    将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
  14. 如权利要求13所述的检测驾驶员分心的装置,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之前,还包括:
    获取所述脑电样本数据;
    对所述脑电样本数据进行预处理,得到预处理数据;
    将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型。
  15. 如权利要求14所述的检测驾驶员分心的装置,其特征在于,所述将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型,包括:
    将所述预处理数据输入所述循环神经网络中进行卷积得到卷积结果,并将所述卷积结果输入预设的门控循环单元得到特征向量,将所述特征向量输入预设的全连接层得到检测结果;根据所述检测结果与其对应的分心结果标签之间的差异值对所述循环神经网络的参数进行优化,得到所述分心检测模型;所述门控循环单元用于控制所述循环神经网络中的数据流转方向和流转数据量。
  16. 如权利要求14所述的检测驾驶员分心的装置,其特征在于,所述对所述脑电样本数据进行预处理,得到预处理数据,包括:
    获取所述脑电样本数据对应的采集点的标识信息,确定在数据采集装置上与所述采集点的标识信息对应的电极的第一位置信息;
    根据所述第一位置信息,确定所述采集点在大脑表层上对应的发射源的第二位置信息;
    根据所述第二位置信息去除所述脑电样本数据中的伪迹,并根据预设的切片时段进行切片,得到所述预处理数据;所述伪迹为设定的待去除位置处对应的脑电样本数据。
  17. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如下步骤:
    获取驾驶员的脑电数据;
    将所述脑电数据预处理后,再输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果;所述分心检测模型通过脑电样本数据及其对应的分心结果标签,对预设的循环神经网络进行训练得到;
    将所述分心检测结果发送至与所述驾驶员的身份信息关联的车载终端;所述分心检测结果用于触发所述车载终端根据所述分心检测结果生成驾驶提醒信息。
  18. 如权利要求17所述的计算机可读存储介质,其特征在于,所述将所述脑电数据输入预先训练得到的分心检测模型中,得到所述驾驶员的分心检测结果之前,还包括:
    获取所述脑电样本数据;
    对所述脑电样本数据进行预处理,得到预处理数据;
    将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型。
  19. 如权利要求18所述的计算机可读存储介质,其特征在于,所述将所述预处理数据输入预设的循环神经网络中进行训练,优化所述循环神经网络的参数,得到所述分心检测模型,包括:
    将所述预处理数据输入所述循环神经网络中进行卷积得到卷积结果,并将所述卷积结果输入预设的门控循环单元得到特征向量,将所述特征向量输入预设的全连接层得到检测结果;根据所述检测结果与其对应的分心结果标签之间的差异值对所述循环神经网络的参数进行优化,得到所述分心检测模型;所述门控循环单元用于控制所述循环神经网络中的数据流转方向和流转数据量。
  20. 如权利要求18所述的计算机可读存储介质,其特征在于,所述对所述脑电样本数据进行预处理,得到预处理数据,包括:
    获取所述脑电样本数据对应的采集点的标识信息,确定在数据采集装置上与所述采集点的标识信息对应的电极的第一位置信息;
    根据所述第一位置信息,确定所述采集点在大脑表层上对应的发射源的第二位置信息;
    根据所述第二位置信息去除所述脑电样本数据中的伪迹,并根据预设的切片时段进行切片,得到所述预处理数据;所述伪迹为设定的待去除位置处对应的脑电样本数据。
PCT/CN2019/120566 2019-08-01 2019-11-25 一种检测驾驶员分心的方法及装置 WO2021017329A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/629,944 US20220175287A1 (en) 2019-08-01 2019-11-25 Method and device for detecting driver distraction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910707858.4A CN110575163B (zh) 2019-08-01 2019-08-01 Method and device for detecting driver distraction
CN201910707858.4 2019-08-01

Publications (1)

Publication Number Publication Date
WO2021017329A1 true WO2021017329A1 (zh) 2021-02-04

Family

ID=68810910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120566 WO2021017329A1 (zh) 2019-08-01 2019-11-25 Method and device for detecting driver distraction

Country Status (3)

Country Link
US (1) US20220175287A1 (zh)
CN (1) CN110575163B (zh)
WO (1) WO2021017329A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113171095A (zh) * 2021-04-23 2021-07-27 Harbin Institute of Technology Hierarchical driver cognitive distraction detection system
CN114463726A (zh) * 2022-01-07 2022-05-10 所托(杭州)汽车智能设备有限公司 Fatigue driving discrimination method and related apparatus

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
CN111516700A (zh) * 2020-05-11 2020-08-11 Anhui University Fine-grained driver distraction monitoring method and system
CN111860427B (zh) * 2020-07-30 2022-07-01 Chongqing University of Posts and Telecommunications Driving distraction recognition method based on a lightweight quasi-eight-dimensional convolutional neural network
CN111984118A (zh) * 2020-08-14 2020-11-24 Southeast University Method for decoding electromyographic signals from EEG signals based on a complex-valued recurrent neural network
CN111985403B (zh) * 2020-08-20 2024-07-02 中再云图技术有限公司 Distracted driving detection method based on facial pose estimation and gaze deviation
CN112180927B (zh) * 2020-09-27 2021-11-26 Anhui Jianghuai Automobile Group Corp., Ltd. Automatic driving time-domain construction method, device, storage medium, and apparatus
CN112329714A (zh) * 2020-11-25 2021-02-05 浙江天行健智能科技有限公司 GM-HMM-based modeling method for recognizing driver distraction during high-speed driving
CN113177482A (zh) * 2021-04-30 2021-07-27 University of Science and Technology of China Cross-subject EEG signal classification method based on minimum class confusion
CN113256981B (zh) * 2021-06-09 2021-09-21 天津所托瑞安汽车科技有限公司 Alarm analysis method, apparatus, device, and medium based on vehicle driving data
CN113254648B (zh) * 2021-06-22 2021-10-22 Jinan University Text sentiment analysis method based on multi-level graph pooling
CN114255454A (zh) * 2021-12-16 2022-03-29 Hangzhou Dianzi University Training method for a distraction detection model, and distraction detection method and apparatus
CN116712091A (zh) * 2023-06-14 2023-09-08 Beijing Jiaotong University Distracted driving state recognition method and system based on EEG complex networks
CN117541865B (zh) * 2023-11-14 2024-06-04 China University of Mining and Technology Identity analysis and mobile phone usage detection method based on coarse-grained depth estimation

Citations (6)

Publication number Priority date Publication date Assignee Title
JP2008204056A (ja) * 2007-02-19 2008-09-04 Tokai Rika Co Ltd Driving support device
CN107334481A (zh) * 2017-05-15 2017-11-10 Tsinghua University Driving distraction detection method and system
CN107961007A (zh) * 2018-01-05 2018-04-27 Chongqing University of Posts and Telecommunications EEG recognition method combining a convolutional neural network and a long short-term memory network
CN108776788A (zh) * 2018-06-05 2018-11-09 University of Electronic Science and Technology of China Brainwave-based recognition method
CN109770925A (zh) * 2019-02-03 2019-05-21 Minjiang University Fatigue detection method based on a deep spatiotemporal network
CN109820525A (zh) * 2019-01-23 2019-05-31 Wuyi University Driving fatigue recognition method based on a CNN-LSTM deep learning model

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
US8301108B2 (en) * 2002-11-04 2012-10-30 Naboulsi Mouhamad A Safety control system for vehicles
TWI446297B (zh) * 2007-12-28 2014-07-21 Chung Yuan Christian University Drowsiness recognition system
US11137832B2 (en) * 2012-12-13 2021-10-05 Eyesight Mobile Technologies, LTD. Systems and methods to predict a user action within a vehicle
US9636063B2 (en) * 2014-03-18 2017-05-02 J. Kimo Arbas System and method to detect alertness of machine operator
US11836802B2 (en) * 2014-04-15 2023-12-05 Speedgauge, Inc. Vehicle operation analytics, feedback, and enhancement
US9283847B2 (en) * 2014-05-05 2016-03-15 State Farm Mutual Automobile Insurance Company System and method to monitor and alert vehicle operator of impairment
KR20160035466A (ko) * 2014-09-23 2016-03-31 Hyundai Motor Company System and method for driver emergency support using a wearable smart device
US9771081B2 (en) * 2014-09-29 2017-09-26 The Boeing Company System for fatigue detection using a suite of physiological measurement devices
US10705519B2 (en) * 2016-04-25 2020-07-07 Transportation Ip Holdings, Llc Distributed vehicle system control system and method
US10467488B2 (en) * 2016-11-21 2019-11-05 TeleLingo Method to analyze attention margin and to prevent inattentive and unsafe driving
US10922566B2 (en) * 2017-05-09 2021-02-16 Affectiva, Inc. Cognitive state evaluation for vehicle navigation
US20190092337A1 (en) * 2017-09-22 2019-03-28 Aurora Flight Sciences Corporation System for Monitoring an Operator
CN108309290A (zh) * 2018-02-24 2018-07-24 South China University of Technology Automatic removal method for electromyographic artifacts in single-channel EEG signals
CN109009092B (zh) * 2018-06-15 2020-06-02 Donghua University Method for removing noise artifacts from EEG signals
CN109157214A (zh) * 2018-09-11 2019-01-08 Henan University of Technology Online ocular artifact removal method suitable for single-channel EEG signals
US20200241525A1 (en) * 2019-01-27 2020-07-30 Human Autonomous Solutions LLC Computer-based apparatus system for assessing, predicting, correcting, recovering, and reducing risk arising from an operator's deficient situation awareness
WO2020204884A1 (en) * 2019-03-29 2020-10-08 Huawei Technologies Co Ltd. Personalized routing based on driver fatigue map
CN109820503A (zh) * 2019-04-10 2019-05-31 Hefei University of Technology Method for synchronously removing multiple artifacts from single-channel EEG signals
US10744936B1 (en) * 2019-06-10 2020-08-18 Ambarella International Lp Using camera data to automatically change the tint of transparent materials
US20210403022A1 (en) * 2019-07-05 2021-12-30 Lg Electronics Inc. Method for controlling vehicle and intelligent computing apparatus controlling the vehicle
CN111460892A (zh) * 2020-03-02 2020-07-28 Wuyi University Training method, classification method, and system for an electroencephalogram pattern classification model
US20230271617A1 (en) * 2022-02-25 2023-08-31 Hong Kong Productivity Council Risky driving prediction method and system based on brain-computer interface, and electronic device

Non-Patent Citations (1)

Title
XU, Lei: "Study on Driving Based on Human Rhythmic and Physiological Signals", Thesis, no. 7, 1 December 2017 (2017-12-01), pages 1-70, XP009525868 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN113171095A (zh) * 2021-04-23 2021-07-27 Harbin Institute of Technology Hierarchical driver cognitive distraction detection system
CN113171095B (zh) * 2021-04-23 2022-02-08 Harbin Institute of Technology Hierarchical driver cognitive distraction detection system
CN114463726A (zh) * 2022-01-07 2022-05-10 所托(杭州)汽车智能设备有限公司 Fatigue driving discrimination method and related apparatus

Also Published As

Publication number Publication date
CN110575163B (zh) 2021-01-29
CN110575163A (zh) 2019-12-17
US20220175287A1 (en) 2022-06-09

Similar Documents

Publication Publication Date Title
WO2021017329A1 (zh) Method and device for detecting driver distraction
US10322728B1 (en) Method for distress and road rage detection
US10157441B2 (en) Hierarchical system for detecting object with parallel architecture and hierarchical method thereof
CN110047487B (zh) Wake-up method and apparatus for an in-vehicle voice device, vehicle, and machine-readable medium
CN109993093B (zh) Road rage monitoring method, system, device, and medium based on facial and respiratory features
CN105261153A (zh) Vehicle driving monitoring method and apparatus
US9424743B2 (en) Real-time traffic detection
Wu et al. Driving behaviour‐based event data recorder
WO2019119515A1 (zh) 人脸分析、过滤方法、装置、嵌入式设备、介质和集成电路
Ma et al. Real time drowsiness detection based on lateral distance using wavelet transform and neural network
Yan et al. Recognizing driver inattention by convolutional neural networks
Hou et al. A lightweight framework for abnormal driving behavior detection
Jiang et al. Analytical comparison of two emotion classification models based on convolutional neural networks
CN113705427B (zh) Fatigue driving monitoring and early-warning method and system based on an automotive-grade SoC chip
CN106446822A (zh) Blink detection method based on circle fitting
Ali et al. Intelligent and secure real-time auto-stop car system using deep-learning models
CN107334481B (zh) Driving distraction detection method and system
Reddy et al. Soft Computing Techniques for Driver Alertness
CN112489363A (zh) Rear-approaching-vehicle warning method, device, and storage medium based on smart wireless earphones
CN116206289A (zh) Cross-domain driver fatigue detection method, apparatus, terminal, and storage medium
CN106384096B (zh) Fatigue driving monitoring method based on blink detection
Yuan et al. Research on vehicle detection algorithm of driver assistance system based on vision
Shariff et al. Detection of wet road surfaces from acoustic signals using scalogram and optimized AlexNet
TW202326624A (zh) Embedded deep-learning multi-scale object detection and real-time remote-region localization device and method
CN111098709B (zh) Unlocking and starting method and system for a safe driving system

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 23/08/2022)