CN117643470A - Fatigue driving detection method and device based on electroencephalogram interpretation - Google Patents

Fatigue driving detection method and device based on electroencephalogram interpretation

Info

Publication number
CN117643470A
Authority
CN
China
Prior art keywords
data, fatigue, decision tree, model, fatigue driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410121969.8A
Other languages
Chinese (zh)
Other versions
CN117643470B (en)
Inventor
Han Xinchuan (韩鑫传)
Zou Qin (邹勤)
Zhou Jian (周剑)
Du Bo (杜博)
Wang Zhongyuan (王中元)
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202410121969.8A
Publication of CN117643470A
Application granted
Publication of CN117643470B
Legal status: Active

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a fatigue driving detection method and device based on electroencephalogram interpretation. Electroencephalogram data are recorded and, combined with inertial sensor data, used to train a fatigue driving judgment strategy through a convolutional neural network and a recurrent neural network model. The inputs of the strategy are the potential sequence signal detected by a scalp potential sensor and the three-dimensional acceleration signal of the inertial sensor, and the output is a label classifying the fatigue state features. The fatigue state during driving is innovatively detected by combining the electroencephalogram signal with the acceleration signal; fusing the two modal signals remarkably improves the detection precision of fatigue driving, effectively reducing the misjudgment rate and the likelihood of fatigue driving accidents.

Description

Fatigue driving detection method and device based on electroencephalogram interpretation
Technical Field
The invention relates to the field of artificial intelligence, in particular to a fatigue driving detection method and device based on electroencephalogram interpretation.
Background
Fatigue driving detection based on electroencephalogram interpretation refers to analyzing a driver's brain wave signals to identify whether the driver is in a fatigue state and to issue an early warning of fatigue driving, thereby improving road traffic safety and reducing the likelihood of accidents.
At present, deep learning technology, represented by convolutional neural networks, is developing rapidly, and fatigue driving detection methods based on deep learning have been proposed one after another. Relying on the strong fitting capacity of neural networks, these methods achieve good results. The existing fatigue driving recognition technology closest to the present invention is video-based fatigue recognition, which mainly adopts video facial feature mapping (Facial Feature Mapping, FFM) as the input feature for fatigue detection. This method first requires collecting the driver's facial video data: typically, cameras are mounted inside the vehicle to capture the driver's facial expression and motion. During data acquisition, the system extracts facial features from each video frame, including facial key points, expressions, eye states, mouth movements, etc. These features help capture physiological changes in the driver, particularly fatigue-related features. The extracted facial features are mapped to the input layer of a neural network; this process is called Facial Feature Mapping (FFM). The neural network is trained with a large amount of labeled data containing facial features associated with fatigue states, yielding a neural network model that classifies the fatigue state. After training, the model can be used for real-time fatigue driving detection, judging from the driver's real-time facial features whether the driver is fatigued. Once the system detects signs of fatigue, a corresponding warning system can be triggered to alert the driver to take the necessary rest or action.
Facial feature mapping is simple to extract and can represent characteristics such as the driver's fatigue level reasonably well, but video-based fatigue recognition systems still have significant weaknesses.
First, facial feature mapping as a fatigue recognition feature relies only on the state of the driver's facial features, and it struggles to reflect the driver's fatigue state effectively in two cases. One is the facial state under severe lighting changes: for example, when entering a tunnel from an ordinary highway, the light changes from bright to dark, the facial features change greatly with the light, and robust feature extraction becomes a great challenge. The other is the facial feature state of particular groups of people: for example, some people have very small eyes, so recognizing a "squinting" feature by the magnitude of the eye opening angle introduces uncertainty into the fatigue determination. Second, existing fatigue driving detection based on image video and deep learning is prone to misjudgment. When fatigue is computed from facial and trunk feature mapping, certain voluntary facial and trunk movements of the driver, such as normal blinking or lowering the head, are recognized as fatigue. How to improve the accuracy of fatigue driving detection and reduce misjudgments caused by the driver's facial expressions or body movements is a very difficult problem.
Brain wave data are a person's brain rhythm wave data recorded by a scalp potential sensor, and their quality is determined by the accuracy of the sensor. Brain rhythm wave data from different moments of the same recording are closely related in time sequence. By extracting and analyzing the relationships and differences of brain wave features between different time periods, the law by which brain waves reflect the fatigue state can be estimated, so the relationship between the driver's brain rhythm wave data and the driver's fatigue state can be fully exploited to improve the accuracy of fatigue state identification. Research on fatigue driving recognition methods based on brain wave interpretation therefore has important practical significance.
Disclosure of Invention
To improve the accuracy of fatigue driving recognition, the invention provides a fatigue driving recognition and detection method and device based on brain wave interpretation. The input is the potential sequence signal detected by a scalp potential sensor and the acceleration signal of an inertial sensor, and the output is a label classifying the fatigue state features.
The fatigue driving detection method based on electroencephalogram interpretation comprises the following steps:
acquiring brain wave signals of a plurality of drivers in a non-fatigue driving state and a fatigue driving state and head acceleration data synchronous with the brain wave signals;
respectively calculating the mean of the brain wave signals and head acceleration data in the non-fatigue state, and subtracting these non-fatigue means from the brain wave signals and head acceleration data in the fatigue state to obtain a brain wave data set and an acceleration data set;
constructing a fatigue state detection network comprising a signal feature extraction module and a decision tree discrimination module; the signal feature extraction module takes the brain wave data set and the acceleration data set as input, describes the characteristics of the brain wave and acceleration signals, and feeds these features to the decision tree discrimination module, which outputs the driving fatigue state value;
and training the fatigue state detection network by using the brain wave data set and the acceleration data set to obtain a fatigue driving detection model.
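The mean-subtraction preprocessing in the steps above can be sketched as follows. This is a minimal illustration, assuming signals are stored as lists of per-channel values; the function and variable names (`channel_mean`, `eeg_rest`, etc.) are illustrative, not from the patent.

```python
# Sketch of the claimed preprocessing: average the non-fatigue samples,
# then subtract that mean from each fatigue-state sample to obtain the
# difference data set fed to the detection network.

def channel_mean(samples):
    """Element-wise mean over a list of equal-length signal vectors."""
    n = len(samples)
    return [sum(vals) / n for vals in zip(*samples)]

def difference_dataset(fatigue_samples, rest_mean):
    """Subtract the non-fatigue mean from every fatigue-state sample."""
    return [[v - m for v, m in zip(sample, rest_mean)]
            for sample in fatigue_samples]

eeg_rest = [[1.0, 2.0], [3.0, 4.0]]      # toy non-fatigue EEG samples
eeg_fatigue = [[2.0, 3.0], [4.0, 6.0]]   # toy fatigue EEG samples

rest_mean = channel_mean(eeg_rest)        # → [2.0, 3.0]
eeg_diff = difference_dataset(eeg_fatigue, rest_mean)
print(eeg_diff)                           # → [[0.0, 0.0], [2.0, 3.0]]
```

The same two functions would be applied unchanged to the acceleration samples to produce the acceleration difference set.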
Further, the brain wave signals are acquired by a scalp potential sensor, and the head acceleration data are acquired by a gyroscope.
Further, the feature extraction module consists of a convolutional neural network and a recurrent neural network: the convolutional network captures the spatial features of the signals and extracts feature maps at different scales, and the recurrent neural network captures the temporal correlation between the features.
Preferably, the convolutional network is a feature-encoding network employing a generic backbone, including but not limited to VGG16, ResNet34, and ResNet101.
Further, the brain wave signals and the head acceleration data are subjected to dimension assembly and then input into a feature extraction module.
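The "dimension assembly" step can be illustrated as a simple row-wise stacking of the per-second 2×32 EEG block and 1×32 acceleration block described later in the embodiment; this sketch is an assumption about the assembly layout, not the patent's exact implementation.

```python
# Illustrative sketch: stack the 2x32 EEG block and 1x32 acceleration
# block row-wise into a single 3x32 input for the feature extractor.

def assemble(eeg_block, accel_block):
    """Concatenate EEG rows (2x32) and acceleration row (1x32) into 3x32."""
    assert all(len(row) == 32 for row in eeg_block + accel_block)
    return eeg_block + accel_block  # row-wise concatenation

eeg_block = [[0.0] * 32, [1.0] * 32]   # two EEG channels, 32 samples/s
accel_block = [[2.0] * 32]             # one acceleration channel
combined = assemble(eeg_block, accel_block)
print(len(combined), len(combined[0]))  # → 3 32
```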
Further, the decision tree discrimination module is based on a random forest model, and the specific creation process is as follows:
creating an empty random forest model and determining the number of decision trees in the ensemble; for each decision tree, performing Bootstrap sampling with replacement from the data set to create a random data subset, and for each subset randomly selecting a portion of the features to reduce the correlation between decision trees; training a decision tree regression model for each tree using the selected data subset and feature subset, each tree learning to predict the fatigue state from the features; repeating these steps a number of times to create multiple decision trees and construct a random forest; using each decision tree to predict, so that each tree produces a fatigue state value for each input sample; and integrating the predictions of all decision trees to generate a continuous fatigue state prediction.
Further, the splitting process of the decision tree is based on the criterion of impurity reduction, using the mean square error to evaluate the quality of each split point.
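The MSE split criterion can be sketched as follows: a candidate split is scored by the weighted MSE (variance) of the two child nodes, and the split with the lowest score gives the largest impurity reduction. The target values here are illustrative.

```python
# Sketch of MSE-based split evaluation for a regression tree.

def mse(values):
    """Mean squared deviation from the mean (node impurity)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def split_score(left, right):
    """Weighted child impurity; lower score = better split."""
    n = len(left) + len(right)
    return (len(left) * mse(left) + len(right) * mse(right)) / n

# A clean split of targets [1, 1, 9, 9] at the obvious boundary...
good = split_score([1.0, 1.0], [9.0, 9.0])   # → 0.0
# ...beats a mixed split.
bad = split_score([1.0, 9.0], [1.0, 9.0])    # → 16.0
print(good, bad)
```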
Further, cross entropy is adopted as the loss function during network training to measure the difference between the model's prediction and the true label, and the Adam optimization algorithm is used to minimize the loss function and update the weights and parameters of the deep learning network.
Further, the specific process when using the trained model for detection is as follows:
the data acquired in real time are preprocessed: at each moment, the average of the real-time data received over the past five seconds is calculated, and the value at the current moment is differenced with this average to obtain a differentiated value. This step runs continuously while the device operates: at each moment the average of the past five seconds is computed and subtracted from the current value, and the resulting differentiated data are used as the input of the neural network, ensuring that the data format matches the training data. The trained model is then used for inference: the real-time differentiated data are passed to the model, which outputs a predicted value of the fatigue state.
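The rolling differencing described above can be sketched with a sliding window. This is an assumption-laden illustration: the window length, sample values, and the choice of whether the window includes the current sample are not fixed by the text.

```python
# Sketch of real-time differencing: keep a window of recent readings and
# emit the current value minus the window mean at each step.
from collections import deque

class Differencer:
    def __init__(self, window_len=5):
        self.window = deque(maxlen=window_len)  # past readings only

    def step(self, value):
        """Return value minus the mean of the recent window, then record it."""
        if self.window:
            baseline = sum(self.window) / len(self.window)
        else:
            baseline = value  # no history yet: difference is zero
        self.window.append(value)
        return value - baseline

d = Differencer(window_len=5)
outputs = [d.step(v) for v in [1.0, 1.0, 1.0, 4.0]]
print(outputs)  # → [0.0, 0.0, 0.0, 3.0]
```

In a real deployment the window would hold five seconds of sensor frames rather than single scalars, but the arithmetic is the same.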
Based on the same inventive concept, the invention also designs an electronic device, comprising:
one or more processors and two types of sensors;
a storage device for storing one or more programs;
an alarm device for issuing a warning when a fatigue state is detected;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the fatigue driving identification and detection method based on brain wave interpretation as described above.
Based on the same inventive concept, the invention also designs a computer readable medium, on which a computer program is stored, which program, when being executed by a processor, implements the fatigue driving identification and detection method based on brain wave interpretation as described above.
The invention has the advantages that:
1. The invention innovatively detects fatigue driving by combining the electroencephalogram signal with the acceleration signal; fusing the two modal signals improves detection precision and effectively reduces the misjudgment rate. A convolutional network and a recurrent network extract the spatial and temporal information of the features respectively, achieving seamless fusion of spatial and temporal information and improving detection accuracy.
2. The invention innovatively uses a random forest module, integrating the results of multiple decision trees for regression training; this sampling and integration greatly enhances the convergence speed and generalization capability of the network, thereby improving the accuracy of fatigue driving discrimination.
3. During data preprocessing, the invention adopts a differencing approach: a generally representative data distribution is estimated from a large sample, which significantly reduces the influence of noisy data.
Drawings
Fig. 1 is a schematic diagram of the detection device of the present invention as worn on a human model.
Fig. 2 is a diagram of a neural network architecture of the fatigue feature extraction module of the present invention.
FIG. 3 is a schematic diagram of a decision tree discrimination module neural network according to the present invention.
Detailed Description
To facilitate understanding and practice of the invention by those of ordinary skill in the art, the invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here are for illustration and explanation only and are not intended to limit the invention.
Example 1
The present invention provides a fatigue driving identification and detection method based on electroencephalogram interpretation, comprising the following steps:
step S1, referring to the detection device diagram in FIG. 1, fixing a sensor on the head of a test person, respectively attaching acquisition electrodes to the scalp and clamping earlobe parts to capture electrical signals related to brain electrical activity, and then transmitting the electrical signals to a data acquisition unit for processing to obtain brain wave signals reflecting the fatigue state of a driver; meanwhile, a gyroscope is used for collecting head acceleration data of a driver during brain wave detection, the acceleration data reflect movements and postures of the head at different time points, and information about the movements and postures of the driver is provided; 10000 parts of electroencephalogram data of 100 persons in fatigue driving are collected and marked as B 1 -B 10000 Acceleration data 10000 shares, labeled A 1 -A 10000 The method comprises the steps of carrying out a first treatment on the surface of the Similarly, 10000 sets of electroencephalogram data of 100 persons in the same group are collected during normal driving, and the data is marked as B' 1 -B’ 10000 Acceleration data 10000 parts, labeled A' 1 -A’ 10000 The method comprises the steps of carrying out a first treatment on the surface of the The synchronous acquisition of the data ensures the relevance between the electroencephalogram data and the acceleration data; the invention combines the simultaneous acquisition of brain wave signals and head acceleration data in the data acquisition and processing stage, which is the follow-upFatigue state detection provides a reliable basis. The detailed collection and labeling of these data ensures the accuracy of analyzing and comparing the driver's fatigue and normal brain electrical and head movement patterns. For a better understanding and implementation of the present invention, reference may be made to fig. 1. In fig. 
1, a schematic diagram of the detection device and the body worn device is shown for a better understanding of the acquisition process.
Step S2: data preprocessing. Both sensors sample thirty-two times per second: the brain wave sensor produces 2×32-dimensional data per second, and the acceleration sensor produces 1×32-dimensional data per second. The non-fatigue electroencephalogram data B'_1-B'_10000 are averaged to obtain the mean electroencephalogram data B'_a, which represents brain electrical activity in the non-fatigue driving state. Likewise, the non-fatigue acceleration data A'_1-A'_10000 are averaged to obtain the mean acceleration data A'_a, which represents head movement in the non-fatigue driving state. Subtracting B'_a from the fatigue-driving electroencephalogram data B_1-B_10000 yields the difference electroencephalogram data C_1-C_10000, which reflect how brain electrical activity in the fatigue state differs from the non-fatigue state. Likewise, subtracting the mean acceleration data A'_a from the fatigue-driving acceleration data A_1-A_10000 yields the difference acceleration data D_1-D_10000, which reflect how head motion in the fatigue state differs from the non-fatigue state. This preprocessing produces two key data sets, the difference electroencephalogram data C_1 to C_10000 and the difference acceleration data D_1 to D_10000, which serve as the input layer of the signal feature extraction module in the subsequent steps to build a model for identifying the driver's fatigue state, improving driving safety and the performance of the warning system. To this end, a neural network is employed as a powerful tool to process and analyze these data, learning patterns and features related to the fatigue state in order to identify fatigue driving more accurately.
Step S3, constructing a fatigue state detection network based on a random forest regression model, wherein the fatigue state detection network comprises a signal characteristic extraction module F and a decision tree discrimination module T;
the feature extraction module F adopts a combination of a convolutional neural network CNN and a recurrent neural network LSTM, for extracting features with information content from the original signal. The training process of this module is shown in fig. 2. The characteristic extraction module F is responsible for combining and assembling the electroencephalogram signal difference data and the acceleration signal difference data into a high-dimensional signal, and inputting the high-dimensional signal into the convolutional neural network CNN. The convolution layer is used for capturing the spatial characteristics of the signals and extracting characteristic mapping of different scales. The signals undergo downsampling twice, the convolved signal sequences are transmitted to a recurrent neural network LSTM, and the LSTM layer is used for capturing time correlation information among features, so that the time sequence of brain wave signals and acceleration signals can be fully considered. Finally, the characteristic sequence output by the LSTM layer contains the correlation characteristics of the electroencephalogram signal and the acceleration signal.
The decision tree discrimination module T comprises a random forest and consists of a plurality of decision trees. The following is the workflow of constructing the decision tree discrimination module T, as shown in fig. 3:
An empty random forest model is created, and the number of decision trees in the ensemble is determined. For each decision tree, Bootstrap sampling with replacement is performed from the dataset to create a random data subset. For each subset, a portion of the features is randomly selected to reduce the correlation between decision trees and improve the diversity of the random forest. A decision tree regression model is trained for each tree using the selected data subset and feature subset, learning to predict the fatigue state from the features. These steps are repeated a number of times to create multiple decision trees and construct the random forest. Each decision tree then makes a prediction, producing a fatigue state value for each input sample, and the predictions of all decision trees are integrated into a continuous fatigue state value reflecting the driver's possible fatigue level.
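The workflow above can be sketched in miniature. As a deliberate simplification, each "tree" here collapses to the mean of its bootstrap sample (a depth-0 tree); a real implementation would grow full MSE-split trees over random feature subsets. Names and target values are illustrative.

```python
# Minimal sketch of the random-forest workflow: bootstrap sampling with
# replacement, per-tree training, and averaging into one continuous value.
import random

def bootstrap(data, rng):
    """Sample len(data) items with replacement (Bootstrap sampling)."""
    return [rng.choice(data) for _ in data]

def train_forest(targets, n_trees, seed=0):
    rng = random.Random(seed)
    # Each depth-0 "tree" predicts the mean of its bootstrap sample.
    return [sum(s) / len(s) for s in
            (bootstrap(targets, rng) for _ in range(n_trees))]

def ensemble_predict(forest):
    """Average the per-tree predictions into one continuous value."""
    return sum(forest) / len(forest)

targets = [0.2, 0.4, 0.6, 0.8]          # illustrative fatigue labels
forest = train_forest(targets, n_trees=50)
pred = ensemble_predict(forest)
print(round(pred, 2))                    # close to the overall mean 0.5
```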
S3-1: the feature extraction module F comprises two main components, a convolutional neural network CNN and a recurrent neural network LSTM, which work together to extract informative features from the original signal. The first part of the feature extraction module is the convolutional neural network, used to extract the spatial features of the signal. A general backbone network structure such as VGG16, ResNet34, or ResNet101 may be selected here; these backbones have proven their effectiveness in computer vision and are therefore also suitable for signal processing. The input of the convolutional network is a 3×32-dimensional signal assembled from the sensor signals, comprising the electroencephalogram difference data C_1-C_10000 and the acceleration difference data D_1-D_10000, which provide raw information about brain waves and head movements. The convolution layers extract spatial features from the input signal: their filtering operations learn to identify specific brain wave or acceleration patterns, capturing characteristic patterns in the signal that may relate to the driver's state (fatigue or wakefulness). Each convolution kernel is responsible for detecting one or more features, such as head movement, frequency changes, or other relevant signal features.
The activation function introduces non-linear properties after the convolution layer that help capture complex features and correlations in the signal. For electroencephalogram data, the activation function can help the model identify different frequency components and their relative intensities. For acceleration data, the activation function may strengthen or weaken the response of different vibration modes that are related to the driver's posture or head movements.
The output of the convolution layer will be input to the pooling layer to reduce the dimensionality of the data while preserving important features. The invention sub-samples the main characteristics in the signals through the pooling operation, thereby reducing the calculation burden. For electroencephalogram data, this helps to extract the most important frequency components within the time window so that the model can focus on the most relevant information. For acceleration data, pooling can reduce the effects of noise and capture key features of head motion.
Stacking multiple convolution layers together allows the model to learn progressively higher-level, more abstract features. This can help the model understand more complex waveform patterns in the electroencephalogram data, as well as capture more complex motion patterns in the acceleration data. These stacked convolutional layers fully represent the data hierarchy, enabling the network to understand different levels of information.
In the final fully connected layer, the model integrates these different features and feature maps together to form the final decision. And the fatigue state of the driver is judged by combining the characteristics of the brain electricity data and the acceleration data. This is an advanced decision layer that fuses together different types of information to make the final decision.
The layers of the convolutional neural network work cooperatively to detect different patterns and features in the electroencephalogram and acceleration data, and two downsampling mappings then integrate these features into one 1×8-dimensional signal serving as an input node of the LSTM network. This deep learning approach helps the model better understand the complex signals and make more accurate decisions.
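The two downsampling stages can be illustrated with non-overlapping average pooling: applied twice with stride 2, it maps a 32-dimensional feature row to the 8-dimensional signal fed to the LSTM. The pooling operator and values are assumptions for illustration; the patent does not specify the exact pooling scheme.

```python
# Sketch of two downsampling stages: 32 → 16 → 8 via average pooling.

def avg_pool(seq, size=2):
    """Non-overlapping average pooling; halves the length for size=2."""
    return [sum(seq[i:i + size]) / size for i in range(0, len(seq), size)]

features = [float(i) for i in range(32)]   # 32-dim feature row
down1 = avg_pool(features)                 # 16 values
down2 = avg_pool(down1)                    # 8 values
print(len(down2), down2[0])                # → 8 1.5
```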
The output feature sequence of the convolutional neural network is input into the recurrent neural network to obtain the time-related information of the features. This embodiment employs the classical long short-term memory network LSTM because it effectively captures the chronology of sequence data. The input of the LSTM is the combined, downsampled signal extracted by the convolutional neural network, taken as one input node every five seconds. This allows the model to understand the timing of the signals, i.e., the temporal relationship between brain wave signals, head movements, and fatigue; the LSTM layer models this timing effectively and captures the characteristics of the fatigue state.
By integrating the convolutional and recurrent neural networks, the feature extraction module F is able to extract signal features including spatial and temporal features that will be used for subsequent fatigue state detection. The output of this module will become an input to the neural network, helping the neural network learn patterns and features to more accurately identify the driver's fatigue status.
S3-2: the decision tree discrimination module T constructs a random forest model composed of multiple decision trees for predicting the driver's fatigue state. First, an empty random forest model is created and the number N of decision trees it contains is determined; N is an important hyperparameter that sets the size of the ensemble. For the construction of each decision tree, a Bootstrap sample with replacement is taken from the original dataset. This means each sample may appear multiple times in the same subset, while some samples may not be included at all; the process creates multiple different data subsets, giving each tree independent training data. For each decision tree, a portion of the features is also selected at random, to reduce the correlation between trees and increase the diversity of the random forest; typically, only a portion of the available features, not all of them, is considered when splitting each node. A decision tree regression model is trained using the selected data subset and feature subset; during training, the decision tree builds its structure according to the features and labels of the data. The splitting process is based on the criterion of impurity reduction, and the invention uses the MSE to evaluate the quality of each split point. The above steps are repeated a number of times to create multiple decision trees, each trained on a different data and feature subset, thereby increasing the diversity of the model.
For each decision tree, the model is used for prediction, and each tree produces a fatigue state prediction for each input sample. Assuming there are N decision trees and the prediction of the i-th tree is y_i (i from 1 to N), the ensemble prediction, which reflects the driver's possible fatigue level, is computed by averaging the predictions of all decision trees:

EnsemblePrediction = (1/N) · Σ_{i=1}^{N} y_i
in addition, use the testThe data of the set are used for evaluating the performance of the random forest regression model obtained in the last step, and the following is sety i Is the firstiThe actual observed value of the individual samples is,is the firstiThe model predictive value of the individual samples is calculated,nis the number of samples, the mean square error MSE is:
step S4, training the deep learning network constructed in the step S3 by utilizing the electroencephalogram data C and the acceleration data D constructed in the step S2 signal to obtain a fatigue driving detection model; selecting cross entropy as a loss function to measure the difference between the prediction of the model and the real label; minimizing a loss function by using an Adam optimization algorithm, and updating weights and parameters of the deep learning network; is provided withmFor momentum variable, useΦAs an intermediate of the calculations,vfor the RMSprop variable,β 1 is the exponential decay rate of the momentum,β 2 is the exponential decay rate of RMSprop,tthe number of iterations is indicated and,learning rate is the rate of learning to be performed,εis a small constant which is used to control the temperature,gradientrepresenting the gradient of the loss function to the model parameters, updating the model parameters may be represented as follows:
the training model process is as follows, and the electroencephalogram data C and the acceleration data D are input into the deep learning network. In each iteration, the loss function value is calculated by forward propagation, and then the Adam algorithm is used to update the model parameters. This process is iterated until a predetermined number of training rounds or other stop condition is reached. During training, the validation set may be used to evaluate the performance of the model. And checking performance indexes such as accuracy, recall rate, F1 score and the like of the model to ensure the effectiveness of the model in the fatigue state detection task. Once the model training is completed and the performance meets the requirements, the model parameters are saved for subsequent fatigue state detection.
And S5, detecting the fatigue state of the driver by using the model trained in the step S4. The general workflow for performing fatigue state detection is as follows:
the device is used for collecting real-time electroencephalogram data C and acceleration data D, and the data are used as input of a model for fatigue state detection. And (3) carrying out preprocessing steps similar to training data on the data acquired in real time, calculating an average data value of the data acquired in real time, which is received in the past five seconds, for each moment, and carrying out difference between the data value at the current moment and the average value to obtain a differentiated numerical value. This step will continue as the device is running, each time an average of the past five seconds will be calculated, and the value at the current time will be subtracted from the average to obtain differentiated data as input to the neural network. Ensuring that the data format matches the training data. The deep learning model trained in step S4 is used for reasoning. The real-time differential data is transferred to the model, which outputs a predicted value of the fatigue state.
Based on the output of the model, an appropriate threshold is set to determine whether the driver is in a fatigue state. The choice of threshold depends on the output range of the model and the application requirements. If the output value of the model is above the threshold, the driver may be in a fatigue state; conversely, if the output value is below the threshold, the driver may be in a normal state. If the driver is determined to be fatigued, a warning system is triggered to alert the driver to take action, such as resting, parking or other necessary measures. The status of the driver is continuously monitored so that any potential fatigue state is responded to in time. Real-time data and detection results are recorded for subsequent analysis and improvement, and the system is checked and maintained regularly to ensure proper operation, with the model, thresholds and parameters updated as required.
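The threshold decision and continuous monitoring described above can be sketched as follows; the 0.6 threshold and the label strings are assumptions, since the document only requires that the threshold suit the model's output range:

```python
FATIGUE_THRESHOLD = 0.6   # assumed value; choose per the model's output range

def assess(prediction, threshold=FATIGUE_THRESHOLD):
    """Map the model's continuous fatigue prediction to a decision."""
    if prediction > threshold:
        return "FATIGUED"   # trigger the warning system (rest, park, ...)
    return "NORMAL"         # no alert; keep monitoring

def monitor(predictions, threshold=FATIGUE_THRESHOLD):
    """Continuously assess predictions and log them for later analysis."""
    return [(p, assess(p, threshold)) for p in predictions]

log = monitor([0.2, 0.55, 0.85, 0.4])
```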
Through the above process, a fatigue state detection system can be implemented, improving driver safety and reducing the potential danger of fatigue driving.
The method combines a convolutional network and a recurrent network to construct and train brain wave and acceleration signal feature extraction models, improving fatigue feature extraction and identification; a random forest regression model is then used to judge the fatigue state, improving the accuracy and robustness of driver fatigue state identification.
Example two
Based on the same inventive concept, the invention also provides an electronic device comprising: one or more processors; a storage means for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in embodiment one.
Because the device described in the second embodiment of the present invention is an electronic device for implementing the fatigue driving detection method based on electroencephalogram interpretation of the first embodiment, a person skilled in the art can, based on the method described in the first embodiment, understand the specific structure and variations of the electronic device, and the description thereof is therefore omitted here. All electronic devices adopted by the method of the embodiments of the invention fall within the intended scope of protection.
Example three
Based on the same inventive concept, the present invention also provides a computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method described in embodiment one.
Since the device described in the third embodiment of the present invention is a computer readable medium for implementing the fatigue driving detection method based on electroencephalogram interpretation of the first embodiment, a person skilled in the art can, based on the method described in the first embodiment, understand the specific structure and variations of the computer readable medium, and it is therefore not described further here. All computer readable media adopted by the method of the embodiments of the invention fall within the intended scope of protection.
The specific embodiments described herein are offered by way of example only to illustrate the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or substitutions thereof without departing from the spirit of the invention or exceeding the scope of the invention as defined in the accompanying claims.

Claims (10)

1. A fatigue driving detection method based on electroencephalogram interpretation, characterized by comprising the following steps:
acquiring brain wave signals of a plurality of drivers in a non-fatigue driving state and a fatigue driving state, and head acceleration data synchronized with the brain wave signals;
respectively calculating the average values of the brain wave signals and the head acceleration data in the non-fatigue state, and differencing the brain wave signals and the head acceleration data in the fatigue state against the corresponding non-fatigue averages to obtain a brain wave data set and an acceleration data set;
constructing a fatigue state detection network, wherein the network comprises a signal feature extraction module and a decision tree discrimination module; the inputs of the signal feature extraction module are the brain wave data set and the acceleration data set, from which it describes the features of the brain wave and acceleration signals; these features are input into the decision tree discrimination module, which outputs a driving fatigue state value;
training a fatigue state detection network by utilizing the brain wave data set and the acceleration data set to obtain a fatigue driving detection model;
and detecting the fatigue state of the driver by using the trained model.
2. The fatigue driving detection method based on electroencephalogram interpretation according to claim 1, characterized in that: the feature extraction module consists of a convolutional neural network and a recurrent neural network; the convolutional network captures the spatial features of the signals and extracts feature maps at different scales, while the recurrent neural network captures the temporal correlation information among the features.
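A bare-bones NumPy sketch of this division of labor — a 1-D convolution producing local (spatial) feature maps, followed by a simple recurrence over those maps for temporal correlation. All shapes, kernels and the input signal are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution over one signal: local spatial feature maps."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return np.tanh(windows @ kernels.T)      # shape (time_steps, n_kernels)

def rnn_last_state(feats, W_h, W_x):
    """Elman-style recurrence: temporal correlation across feature maps."""
    h = np.zeros(W_h.shape[0])
    for f in feats:                          # one step per feature-map row
        h = np.tanh(W_h @ h + W_x @ f)
    return h                                 # final hidden state as features

signal = np.sin(np.linspace(0.0, 6.0, 50))  # stand-in for one EEG channel
kernels = rng.normal(size=(4, 5))           # 4 random kernels of width 5
feats = conv1d(signal, kernels)             # (46, 4) feature maps
h = rnn_last_state(feats, 0.1 * rng.normal(size=(8, 8)), rng.normal(size=(8, 4)))
```

In practice the convolutional part would be a trained backbone rather than random kernels, and the recurrent state would feed the decision tree discrimination module.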
3. The fatigue driving detection method based on electroencephalogram interpretation according to claim 2, characterized in that: the convolutional network is a feature-encoding network employing a generic backbone network including, but not limited to, VGG16, ResNet34 and ResNet101.
4. The fatigue driving detection method based on electroencephalogram interpretation according to claim 1, characterized in that: the brain wave signals and the head acceleration data are dimensionally assembled and then input into the feature extraction module.
5. The fatigue driving detection method based on electroencephalogram interpretation according to claim 1, characterized in that: the decision tree discrimination module is based on a random forest model, created as follows:
creating an empty random forest model and determining the number of decision trees in the ensemble; for each decision tree, performing Bootstrap sampling with replacement from the dataset to create that tree's random data subset, and randomly selecting a portion of the features for each subset to reduce the correlation between trees; training a decision tree regression model on each selected data subset and feature subset, each tree learning to predict the fatigue state from the features; repeating the above steps to create a plurality of decision trees and construct the random forest; when predicting with the model, each decision tree generates a fatigue state predicted value for each input sample, and the predictions of all decision trees are integrated to generate a continuous predicted value of the fatigue state.
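The construction in this claim (bootstrap samples, random feature selection, per-tree regression, averaged predictions) follows the standard random-forest recipe. The toy sketch below uses depth-1 trees and invented data purely for illustration:

```python
import random

def bootstrap(data, rng):
    # Bootstrap sampling with replacement from the dataset.
    return [rng.choice(data) for _ in data]

def train_stump(data, feature):
    # Depth-1 regression tree on one randomly chosen feature: split at the
    # sample median, predict the mean fatigue target on each side.
    xs = sorted(x[feature] for x, _ in data)
    split = xs[len(xs) // 2]
    left = [y for x, y in data if x[feature] <= split]
    right = [y for x, y in data if x[feature] > split]
    lmean = sum(left) / len(left) if left else 0.0
    rmean = sum(right) / len(right) if right else lmean
    return feature, split, lmean, rmean

def stump_predict(tree, x):
    feature, split, lmean, rmean = tree
    return lmean if x[feature] <= split else rmean

def build_forest(data, n_trees, n_features, rng):
    # Each tree sees its own bootstrap sample and one random feature,
    # reducing the correlation between trees.
    return [train_stump(bootstrap(data, rng), rng.randrange(n_features))
            for _ in range(n_trees)]

def forest_predict(forest, x):
    # Integrate all trees' outputs into one continuous fatigue value.
    return sum(stump_predict(t, x) for t in forest) / len(forest)

rng = random.Random(0)
# Invented samples: feature 0 drives the fatigue target, feature 1 is noise.
data = [((i / 10.0, rng.random()), 1.0 if i >= 5 else 0.0) for i in range(10)]
forest = build_forest(data, n_trees=25, n_features=2, rng=rng)
high = forest_predict(forest, (0.9, 0.5))   # fatigued-looking input
low = forest_predict(forest, (0.1, 0.5))    # alert-looking input
```

A production system would instead use full-depth trees with MSE-based splitting, as in claim 6, but the ensemble logic is the same.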
6. The fatigue driving detection method based on electroencephalogram interpretation according to claim 5, characterized in that:
the splitting process of the decision tree uses the mean square error as an impurity-reduction criterion to evaluate the quality of candidate split points.
7. The fatigue driving detection method based on electroencephalogram interpretation according to claim 1, characterized in that: in the network training process, cross entropy is adopted as the loss function to measure the difference between the model's predictions and the true labels; the Adam optimization algorithm is used to minimize the loss function and update the weights and parameters of the deep learning network.
8. The fatigue driving detection method based on electroencephalogram interpretation according to claim 1, characterized in that the specific process of detection with the trained model is as follows:
the data acquired in real time are preprocessed: for each moment, the average of the data received over the past several seconds is calculated, and the value at the current moment is differenced against that average to obtain a differenced value; this step is performed continuously while the device operates, with the average of the past five seconds calculated at each moment and subtracted from the current value; the differenced data serve as input to the neural network, ensuring that the data format matches the training data; the trained model is then used for inference, the real-time differenced data being passed to the model, which outputs a predicted value of the fatigue state.
9. An electronic device, comprising:
one or more processors;
a storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
10. A computer readable medium having a computer program stored thereon, characterized by: the program, when executed by a processor, implements the method of any of claims 1-8.
CN202410121969.8A 2024-01-30 2024-01-30 Fatigue driving detection method and device based on electroencephalogram interpretation Active CN117643470B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410121969.8A CN117643470B (en) 2024-01-30 2024-01-30 Fatigue driving detection method and device based on electroencephalogram interpretation


Publications (2)

Publication Number Publication Date
CN117643470A true CN117643470A (en) 2024-03-05
CN117643470B CN117643470B (en) 2024-04-26

Family

ID=90043731


Country Status (1)

Country Link
CN (1) CN117643470B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040236235A1 (en) * 2003-05-21 2004-11-25 Delta Tooling Co., Ltd. Human condition evaluation system, computer program, and computer-readable record medium
JP2012065853A (en) * 2010-09-24 2012-04-05 Sleep System Kenkyusho:Kk Sleep level determining device and sleep level determining method
CN107822623A (en) * 2017-10-11 2018-03-23 燕山大学 A kind of driver fatigue and Expression and Action method based on multi-source physiologic information
CN109820525A (en) * 2019-01-23 2019-05-31 五邑大学 A kind of driving fatigue recognition methods based on CNN-LSTM deep learning model
CN111603158A (en) * 2020-04-21 2020-09-01 苏州乐达纳米科技有限公司 Fatigue driving warning method and system based on electrophysiological signal artificial intelligence analysis
CN112101152A (en) * 2020-09-01 2020-12-18 西安电子科技大学 Electroencephalogram emotion recognition method and system, computer equipment and wearable equipment
CN113907758A (en) * 2021-12-13 2022-01-11 深圳市心流科技有限公司 Driver fatigue detection method, device, equipment and storage medium
CN116616794A (en) * 2023-05-18 2023-08-22 中国人民解放军海军特色医学中心 Underwater operator fatigue adjustment method and system based on electroencephalogram signals
CN116975781A (en) * 2023-08-04 2023-10-31 重庆邮电大学 Automatic driving vehicle behavior decision system and method


Also Published As

Publication number Publication date
CN117643470B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
Ghoddoosian et al. A realistic dataset and baseline temporal model for early drowsiness detection
Xing et al. Identification and analysis of driver postures for in-vehicle driving activities and secondary tasks recognition
Omerustaoglu et al. Distracted driver detection by combining in-vehicle and image data using deep learning
Sigari et al. A driver face monitoring system for fatigue and distraction detection
Li et al. Fall detection for elderly person care using convolutional neural networks
Khodairy et al. Driving behavior classification based on oversampled signals of smartphone embedded sensors using an optimized stacked-LSTM neural networks
CN108764059A (en) A kind of Human bodys' response method and system based on neural network
CN110717389B (en) Driver fatigue detection method based on generation countermeasure and long-short term memory network
CN106846729A (en) A kind of fall detection method and system based on convolutional neural networks
Hossain et al. Automatic driver distraction detection using deep convolutional neural networks
WO2008127465A1 (en) Real-time driving danger level prediction
WO2017211395A1 (en) Control device, system and method for determining the perceptual load of a visual and dynamic driving scene
Celona et al. A multi-task CNN framework for driver face monitoring
Kassem et al. Yawn based driver fatigue level prediction
CN116956222A (en) Multi-complexity behavior recognition system and method based on self-adaptive feature extraction
Biju et al. Drowsy driver detection using two stage convolutional neural networks
Lamaazi et al. Smart edge-based driver drowsiness detection in mobile crowdsourcing
Li et al. Monitoring and alerting of crane operator fatigue using hybrid deep neural networks in the prefabricated products assembly process
CN117643470B (en) Fatigue driving detection method and device based on electroencephalogram interpretation
Suriani et al. Smartphone sensor accelerometer data for human activity recognition using spiking neural network
CN117041972A (en) Channel-space-time attention self-coding based anomaly detection method for vehicle networking sensor
Utomo et al. Driver fatigue prediction using different sensor data with deep learning
CN115438705A (en) Human body action prediction method based on wearable equipment
Hong et al. Towards drowsiness driving detection based on multi-feature fusion and LSTM networks
Priyanka et al. A Review on Drowsiness Prediction System using Deep Learning Approaches

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant