CN114343661B - Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium - Google Patents


Info

Publication number
CN114343661B
CN114343661B (application CN202210214221.3A)
Authority
CN
China
Prior art keywords
time
information
feature
data set
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210214221.3A
Other languages
Chinese (zh)
Other versions
CN114343661A (en)
Inventor
张二田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University filed Critical Southwest Jiaotong University
Priority to CN202210214221.3A
Publication of CN114343661A
Application granted
Publication of CN114343661B

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/162 Testing reaction times
    • A61B5/18 Devices for psychotechnics for vehicle drivers or machine operators
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7207 Noise prevention, reduction or removal of noise induced by motion artifacts
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification involving training the classification device
    • A61B2503/22 Evaluating particular types of persons: motor vehicle operators, e.g. drivers, pilots, captains
    • G06F18/25 Pattern recognition: fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/02 Neural networks
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Physiology (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Psychology (AREA)
  • Software Systems (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Social Psychology (AREA)
  • Educational Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Fuzzy Systems (AREA)
  • Cardiology (AREA)

Abstract

The invention provides a method, a device and equipment for estimating the reaction time of a high-speed rail driver, and a readable storage medium, relating to the technical field of high-speed rail trains. The method comprises: acquiring first information, wherein the first information comprises a real-time electroencephalogram signal in a first time period and a real-time electrocardio signal in a second time period; acquiring an original data set; preprocessing the first information and the original data set respectively; performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set; establishing and training a two-stage feature fusion estimation model according to the feature data set; and taking the real-time feature set as input information of the two-stage feature fusion estimation model and solving the model to obtain the reaction time of the high-speed rail driver. By fusing features, the method makes full use of the intrinsic correlation information between the features, thereby improving the accuracy of the reaction-time estimate.

Description

Method, device and equipment for estimating reaction time of high-speed rail driver and readable storage medium
Technical Field
The invention relates to the technical field of high-speed rail trains, in particular to a method, a device, equipment and a readable storage medium for estimating the reaction time of a high-speed rail driver.
Background
Reduced alertness of high-speed rail (HSR) drivers is one of the major factors leading to accidents, and the reaction time (RT) of a driver to a detected signal is an objective indicator of alertness. However, the prior art provides no method for estimating the reaction time of a high-speed rail driver.
Disclosure of Invention
The invention aims to provide a method, a device, equipment and a readable storage medium for estimating the reaction time of a high-speed rail driver, so as to address the above problem. To achieve this purpose, the invention adopts the following technical scheme:
in a first aspect, the present application provides a method for estimating a reaction time of a high-speed rail driver, including: acquiring first information, wherein the first information comprises a real-time electroencephalogram signal in a first time period and a real-time electrocardiosignal in a second time period, and the cut-off time of the first time period and the cut-off time of the second time period are both current time; acquiring an original data set, wherein the original data set comprises at least three historical data sets, and the historical data sets comprise historical electroencephalogram signals, historical electrocardiosignals and reaction time; respectively preprocessing the first information and the original data set, and updating the first information and the original data set into preprocessed data; performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set; establishing and training a two-stage feature fusion estimation model according to the feature data set; and taking the real-time feature set as input information of the two-stage feature fusion estimation model, and solving the two-stage feature fusion estimation model to obtain the reaction time of the high-speed rail driver.
In a second aspect, the present application also provides a high-speed rail driver reaction time estimation apparatus, including: the first acquisition unit is used for acquiring first information, wherein the first information comprises a real-time electroencephalogram (EEG) signal in a first time period and a real-time Electrocardiosignal (ECG) in a second time period, and the cut-off time of the first time period and the cut-off time of the second time period are both current time; the second acquisition unit is used for acquiring an original data set, wherein the original data set comprises at least three historical data sets, and the historical data sets comprise historical electroencephalogram signals, historical electrocardiosignals and reaction time; the preprocessing unit is used for respectively preprocessing the first information and the original data set and updating the first information and the original data set into preprocessed data; the characteristic extraction unit is used for carrying out characteristic extraction processing on the first information and the original data set to obtain a real-time characteristic set and a characteristic data set; the model establishing unit is used for establishing and training a two-stage feature fusion estimation model according to the feature data set; and the time estimation unit is used for taking the real-time feature set as input information of the two-stage feature fusion estimation model and solving the two-stage feature fusion estimation model to obtain the reaction time of the high-speed rail driver.
In a third aspect, the present application also provides a high-speed rail driver reaction time estimation apparatus, including:
a memory for storing a computer program;
a processor for implementing the steps of the high-speed rail driver reaction time estimation method when executing the computer program.
In a fourth aspect, the present application further provides a readable storage medium, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the above method for estimating the reaction time of a high-speed rail driver.
The invention has the beneficial effects that:
the method and the device have the advantages that the features are fused in the two-stage feature fusion estimation model, and the internal correlation information among the features is fully utilized, so that the accuracy of the estimation of the reaction time of a high-speed rail driver is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic flow chart of a method for estimating the reaction time of a driver in a high-speed rail according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a reaction time estimation apparatus for a high-speed rail driver according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of the high-speed rail driver reaction time estimation device according to the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In the prior art, to prevent a decline in driver alertness during operation, high-speed railways in China require the driver to step on a response pedal at least once every 30 seconds while driving; otherwise an emergency stop system is triggered.
The method embodiment was carried out in a fixed-base CRH380A driving simulator. The simulator consists of an immersive video system, a digital sound simulation system and a lighting simulation system, which together realistically reproduce the background environment of a running high-speed train. As in a real cab, the console is equipped with an operation control panel, a power control handle, a pantograph button, and so on. A stimulation box capable of presenting random signals is placed in front of the console, within the subject's field of view but without obstructing the driving sight line. The RT of the high-speed rail driver to a random stimulus was collected with a USB button. EEG data were recorded by a 64-channel Neuroscan amplifier, with the electrodes placed according to the international 10-20 system. ECG data were collected with a BIOPAC MP150.
In the example, 30 high-speed rail drivers were invited to participate. The section of the Shanghai-Kunming high-speed railway between Kaili South station and Pingpan South station was chosen as the simulated driving route. The driving task started from Kaili South, ran to Pingpan South, and returned to Kaili South after a short rest (about 3 minutes) at Pingpan South, without stopping at any other station. Each driver was required to control the train throughout the run.
Example 1:
the embodiment provides a method for estimating the reaction time of a high-speed rail driver.
Referring to fig. 1, the method includes step S100, step S200, step S300, step S400, step S500 and step S600.
S100, acquiring first information, wherein the first information comprises a real-time electroencephalogram signal in a first time period and a real-time electrocardio signal in a second time period, the ending times of the first time period and the second time period are both the current time, the starting time of the first time period is different from the starting time of the second time period, and the real-time electroencephalogram signal and the real-time electrocardio signal are both acquired from the same high-speed rail driver by physiological signal acquisition equipment.
It should be noted that, in the present application, the electroencephalogram signal is an EEG and the electrocardio signal is an ECG. The historical electroencephalogram signals and historical electrocardio signals mentioned in the subsequent steps are likewise EEG and ECG, differing only in sampling time; they are named separately in this application to avoid confusion. In the present application, the time length of the first time period of the EEG is 5 s and the time length of the second time period is 20 s. Because brain activity in the central and parieto-occipital regions is highly correlated with alertness, nine highly sensitive EEG channels are selected in this embodiment to extract features for alertness estimation: C1, CZ, C2, P1, PZ, P2, PO3, POZ and PO4. The estimation method thus takes 5 s of EEG and 20 s of ECG as input to estimate the reaction time. A sketch of assembling these input windows follows.
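For illustration only, the sketch below (Python with NumPy; the array layout and function name are hypothetical, not from the patent) shows how the two real-time input windows could be cut from a continuous recording:

```python
import numpy as np

FS = 250  # sampling rate after preprocessing (Hz), per step S310
EEG_CHANNELS = ["C1", "CZ", "C2", "P1", "PZ", "P2", "PO3", "POZ", "PO4"]

def latest_windows(eeg, ecg, channel_names, fs=FS):
    """Cut the most recent 5 s of EEG (nine selected channels) and the
    most recent 20 s of ECG, both ending at the current time."""
    idx = [channel_names.index(ch) for ch in EEG_CHANNELS]
    eeg_win = eeg[idx, -5 * fs:]   # shape (9, 1250): first information, EEG part
    ecg_win = ecg[-20 * fs:]       # shape (5000,):  first information, ECG part
    return eeg_win, ecg_win
```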
S200, obtaining an original data set, wherein the original data set comprises at least three historical data sets, and the historical data sets comprise historical electroencephalogram signals, historical electrocardiosignals and reaction time.
It should be noted that the reaction time is the time from when the stimulation box lights up to when the high-speed rail driver presses the USB button. A corresponding 5 s EEG, 20 s ECG and reaction time in the historical data set form one subset of the historical data set, and each such subset is one sample.
S300, preprocessing the first information and the original data set respectively, and updating the first information and the original data set into preprocessed data.
S400, performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set.
It should be noted that the real-time feature set mentioned in this step includes an electroencephalogram feature set and an electrocardio feature set, and the feature data set correspondingly includes an electroencephalogram feature set and an electrocardio feature set for the historical data.
And S500, establishing and training a two-stage feature fusion estimation model according to the feature data set.
S600, the real-time feature set is used as input information of the two-stage feature fusion estimation model, and the two-stage feature fusion estimation model is solved to obtain the reaction time of the high-speed rail driver.
In the method, the intrinsic correlation information among the features is fully utilized through the fusion features, so that the accuracy of the estimation of the reaction time of a high-speed rail driver in the method is improved.
In some specific embodiments, step S300 of the method includes step S310, step S320 and step S330. It should be noted that the preprocessing in S310-S330 below is described for the first information; the original data set is processed in the same way in this application.
S310, performing down-sampling processing on the first information by using a preset frequency threshold value to obtain the down-sampled first information.
It should be noted that, in this step, the preset frequency threshold is 250 Hz.
S320, performing primary filtering processing in a preset frequency range on the first information to obtain first information after primary filtering, wherein the primary filtering processing is performed by a Butterworth band-pass filter with a preset frequency width.
Note that the predetermined frequency range in this step is 0.01 to 30 Hz; applying a Butterworth band-pass filter in this range removes baseline wander and high-frequency artifacts from the original signal.
S330, performing secondary filtering processing on the real-time electroencephalogram signal and updating the real-time electroencephalogram signal into the data after the secondary filtering processing, wherein the secondary filtering processing is performed by a spatial filter.
Note that, in this step, artifacts caused by blinking in the electroencephalogram signal are removed by a spatial filter.
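A minimal preprocessing sketch with SciPy is given below. The patent fixes the 250 Hz target rate and the 0.01-30 Hz Butterworth band-pass, but not the filter order or the exact spatial filter, so the 4th-order filter and the common-average-reference stand-in are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess(sig, fs_in, fs_out=250, band=(0.01, 30.0), order=4):
    """S310 + S320: down-sample to 250 Hz, then remove baseline wander and
    high-frequency artifacts with a Butterworth band-pass (0.01-30 Hz)."""
    sig = resample_poly(sig, fs_out, fs_in, axis=-1)  # S310, integer rates assumed
    sos = butter(order, band, btype="bandpass", fs=fs_out, output="sos")
    return sosfiltfilt(sos, sig, axis=-1)             # S320, zero-phase filtering

def spatial_filter(eeg):
    """S330 placeholder: the patent removes blink artifacts with a spatial
    filter but does not name it; common average referencing is a stand-in."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```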
In some specific embodiments, step S400 of the method includes step S410, step S420 and step S430. It should be noted that the processing in S410-S430 below is described for the electroencephalogram signal in the first information; the electroencephalogram signals of the original data set are processed in the same way.
S410, carrying out first segmentation on the real-time electroencephalogram signal to obtain at least two sections of first sub-signals, wherein the overlapping percentage of the two adjacent sections of first sub-signals is a preset numerical value, and the time length of the first sub-signals is a preset first time length.
It should be noted that this can be implemented with the prior-art Hamming window technique. In the present application the overlap percentage is 50% and the time length of each first sub-signal is 2 seconds. That is, to extract the EEG features of one channel, the filtered EEG signal is divided into 9 segments with a 2-second Hamming window, with the overlap percentage set to 50%.
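A segmentation sketch follows (Python). Note that the quoted segment counts (9 segments from 5 s, and 39 from 20 s later in S440) imply a hop shorter than the stated 50%; the sketch implements the 50% overlap as written:

```python
import numpy as np

def segment(sig, fs=250, win_s=2.0, overlap=0.5):
    """S410 / S440: split a single-channel signal into 2-s
    Hamming-windowed segments with 50% overlap."""
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))
    hamming = np.hamming(win)
    starts = range(0, len(sig) - win + 1, step)
    return np.stack([sig[s:s + win] * hamming for s in starts])
```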
And S420, performing FFT conversion on each section of first sub-signal to obtain a frequency domain corresponding to each first sub-signal.
And S430, calculating according to the frequency domain corresponding to each first sub-signal to obtain an electroencephalogram feature set.
It should be noted that four electroencephalogram features are calculated in this step, and the specific calculation includes step S431, step S432, step S433 and step S434; the same processing is performed for every first sub-signal.
And S431, calculating according to the frequency domain corresponding to the first sub-signal to obtain second information, wherein the second information comprises the power spectral density of the first sub-signal in the first frequency domain range, and the second information is used as a subset in the electroencephalogram feature set.
It should be noted that the first frequency domain range calculated in this application is the α band (7-13 Hz); the power spectral density itself is prior art and is not described in detail in this application.
S432, calculating according to the frequency domain corresponding to the first sub-signal to obtain third information, wherein the third information comprises the power spectral density of the first sub-signal in the second frequency domain range, and the third information is used as a subset in the electroencephalogram feature set.
It should be noted that the second frequency domain range calculated in this application is the β band (13-30 Hz); the power spectral density is prior art and is not described in detail in this application.
And S433, dividing the second information and the third information to obtain fourth information, and taking the fourth information as a subset in the electroencephalogram feature set.
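A sketch of these three band-power features (the second, third and fourth information) is given below; the periodogram estimator and averaging over the band are assumptions, since the patent only says "power spectral density":

```python
import numpy as np
from scipy.signal import periodogram

def band_power(seg, fs, lo, hi):
    """Mean PSD of one segment inside [lo, hi] Hz (averaging over the
    band is an assumption; the patent does not say sum or mean)."""
    freqs, psd = periodogram(seg, fs=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def eeg_band_features(seg, fs=250):
    alpha = band_power(seg, fs, 7.0, 13.0)   # second information (alpha band)
    beta = band_power(seg, fs, 13.0, 30.0)   # third information (beta band)
    return alpha, beta, alpha / beta         # fourth information: alpha/beta ratio
```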
S434, calculating fifth information according to the frequency domain corresponding to the first sub-signal, wherein the fifth information comprises a sample entropy corresponding to the first sub-signal, and the fifth information is used as a subset in the electroencephalogram feature set.
It should be noted that the sample entropy in this application is calculated as follows:

$$\mathrm{SampEn}(m, r, N) = -\ln\frac{A^{m+1}(r)}{B^{m}(r)}$$

where $A^{m+1}(r)$ and $B^{m}(r)$ respectively represent the probabilities that two sequences match for $m+1$ and $m$ points at tolerance level $r$, and $N$ is the length of the data segment. To calculate the SampEn of the EEG signals, the embedding dimension $m$ is set to 2 and the tolerance $r$ is set to 0.2.
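A direct sketch of this computation with m = 2 follows; scaling the tolerance by the segment's standard deviation (r = 0.2 x SD) is a common convention and an assumption here:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, N) = -ln(A / B), where B and A count template pairs
    matching for m and m+1 points under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()            # assumption: r = 0.2 * segment SD
    n = len(x) - m                    # same template count for m and m+1

    def matches(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)[:n]
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=-1)
        return (d <= r).sum() - n     # exclude self-matches

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf
```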
In some specific embodiments, step S400 of the method includes step S440, step S450, step S460 and step S470. It should be noted that the processing in S440-S470 below is described for the electrocardio signal in the first information; the electrocardio signals of the original data set are processed in the same way.
S440, carrying out first segmentation on the real-time electrocardiosignal to obtain at least two sections of second sub-signals, wherein the overlapping percentage of the two adjacent sections of second sub-signals is a preset value, and the time length of the second sub-signals is a preset first time length.
In this step, the processing of the electrocardio signal is the same as that of the electroencephalogram signal, except that the time length of the electrocardio signal is 20 s, so it is divided into 39 segments with a 2-second window.
S450, FFT conversion is carried out on each section of second sub-signals to obtain a frequency domain corresponding to each second sub-signal.
And S460, calculating to obtain a first electrocardiogram feature set according to the frequency domain corresponding to each second sub-signal, and taking the first electrocardiogram feature set as a subset of the electrocardiogram feature set.
And S470, calculating to obtain a second electrocardiogram feature set according to each second sub-signal, and taking the second electrocardiogram feature set as a subset of the electrocardiogram feature set.
It should be noted that step S460 includes step S461, step S462, step S463 and step S464, and the same processing is performed for every second sub-signal.
S461, sixth information is obtained by calculation according to the frequency domain corresponding to the second sub-signal, where the sixth information includes the power spectral density of the second sub-signal at low frequency, and the sixth information is used as a subset in the real-time feature set.
And S462, calculating to obtain seventh information according to the frequency domain corresponding to the second sub-signal, wherein the seventh information includes the power spectral density of the second sub-signal at high frequency, and the seventh information is used as a subset in the real-time feature set.
S463, the power spectral density of the second sub-signal at the low frequency is divided by the power spectral density of the high frequency to obtain eighth information, and the eighth information is used as a subset in the real-time feature set.
And S464, calculating ninth information according to the frequency domain corresponding to each second sub-signal, wherein the ninth information comprises the sample entropy corresponding to the second sub-signal, and the ninth information is taken as a subset in the real-time feature set.
Step S470 includes step S471, step S472, step S473 and step S474, and the same processing is performed for every second sub-signal.
And S471, calculating according to the Hamilton-Tompkins algorithm and the second sub-signal to obtain tenth information, wherein the tenth information comprises all R peaks of the second sub-signal.
And S472, calculating to obtain an average R peak value interval according to the tenth information.
And S473, calculating eleventh information according to the tenth information, wherein the eleventh information comprises the standard deviation of the normal intervals of the R peak values.
And S474, calculating to obtain twelfth information according to the tenth information, wherein the twelfth information comprises the percentage of differences between consecutive R-peak intervals that are larger than a preset second time length, and the average R-peak interval, the eleventh information and the twelfth information are used as subsets of the second electrocardio feature set.
In the above-described S460 to S470, 7 features are generated for each electrocardiographic signal.
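The sketch below computes the time-domain part of these features. The patent specifies the Hamilton-Tompkins detector; scipy.signal.find_peaks is only a crude stand-in here, and the 50 ms threshold for the "preset second time length" (a pNN50-style feature) is an assumption:

```python
import numpy as np
from scipy.signal import find_peaks

def ecg_time_features(ecg, fs=250, nn_thresh_s=0.05):
    """S471-S474: R peaks, mean R-R interval, standard deviation of the
    R-R intervals (eleventh information), and percentage of successive
    R-R differences above a threshold (twelfth information)."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          prominence=ecg.std())        # stand-in R-peak detector
    rr = np.diff(peaks) / fs                           # R-R intervals in seconds
    mean_rr = rr.mean()                                # average R-peak interval
    sdnn = rr.std()                                    # eleventh information
    pnn = 100.0 * (np.abs(np.diff(rr)) > nn_thresh_s).mean()  # twelfth information
    return mean_rr, sdnn, pnn
```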
In order to realize two-stage feature fusion of the reaction time of the high-speed rail driver and make full use of the internal association information between the features in the method, step S500 comprises step S510, step S520, step S530, step S540 and step S550.
And S510, standardizing each element in the characteristic data set, and updating the characteristic data set into standardized data, wherein each element in the updated characteristic data set is located between a preset maximum value and a preset minimum value.
The preset maximum value mentioned in this step is 1 and the preset minimum value is -1; this normalization reduces the individual differences in the data.
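A minimal sketch of this per-feature scaling:

```python
import numpy as np

def scale_to_range(features, lo=-1.0, hi=1.0):
    """S510: min-max scale each feature column of a (samples, dims)
    matrix into [lo, hi], reducing inter-individual differences."""
    fmin, fmax = features.min(axis=0), features.max(axis=0)
    return lo + (features - fmin) * (hi - lo) / (fmax - fmin)
```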
S520, establishing a convolution neural network model and a long-term and short-term memory artificial neural network model.
And S530, taking the characteristic data set as input information of the convolutional neural network model, and solving the convolutional neural network model to obtain a fusion characteristic data set.
It should be noted that, in this step, the feature set extracted from the 5 s EEG and the feature set extracted from the 20 s ECG are used as input information of the CNN, and the fused features are extracted by the CNN model.
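The patent does not disclose the CNN architecture; the PyTorch sketch below, with hypothetical layer sizes, only illustrates the idea of concatenating the EEG and ECG feature vectors and letting a small 1-D convolutional network produce the fused feature vector:

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Feature-level fusion (S530): EEG and ECG feature vectors are
    concatenated and compressed into a fused feature vector."""
    def __init__(self, fused_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.out = nn.Linear(16, fused_dim)

    def forward(self, eeg_feats, ecg_feats):           # (B, d1), (B, d2)
        x = torch.cat([eeg_feats, ecg_feats], dim=1).unsqueeze(1)  # (B, 1, d1+d2)
        return self.out(self.net(x).squeeze(-1))       # (B, fused_dim)
```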
And S540, training a long-term and short-term memory artificial neural network model by respectively utilizing the fusion characteristic data set, the electroencephalogram characteristic set and the electrocardio characteristic set to obtain a fusion characteristic neural network model, an electroencephalogram characteristic neural network model and an electrocardio characteristic neural network model.
S550, establishing a fusion estimation model, wherein input information of the fusion estimation model comprises an estimation result of the fusion characteristic neural network model, an estimation result of the electroencephalogram characteristic neural network model and an estimation result of the electrocardio characteristic neural network model, and output information of the fusion estimation model comprises the reaction time of a high-speed rail driver.
In the method, a CNN algorithm is added at the feature level: fusion information is extracted from the EEG and ECG features by the CNN, which further improves the accuracy and reliability of the estimation. Because the electrocardio and electroencephalogram features come from different sensors and represent alertness fluctuations from different angles, the CNN effectively fuses them, and the fused features contain intrinsic correlation information that can be used to evaluate driver alertness.
Meanwhile, since high-speed trains run at speeds exceeding 350 km/h, estimation speed should also be considered in a high-speed rail driver alertness estimation system. In the method, to avoid increasing the computational burden, the three feature sets are input into three long short-term memory (LSTM) neural networks in parallel.
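One of the three parallel branches might look as follows (PyTorch, hypothetical hidden size); the same class would be instantiated separately for the fusion, EEG and ECG feature sequences:

```python
import torch.nn as nn

class RTBranch(nn.Module):
    """S540: an LSTM maps a feature sequence (segments over time)
    to a single reaction-time estimate."""
    def __init__(self, feat_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                  # seq: (B, T, feat_dim)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1]).squeeze(-1)  # (B,) estimated RT
```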
It should be further noted that, in step S600, obtaining the reaction time of the high-speed rail driver includes:
inputting the electroencephalogram feature set and the electrocardio feature set in the real-time feature set into a CNN algorithm to obtain a fusion feature set;
respectively inputting the fusion characteristic set, the electroencephalogram characteristic set and the electrocardio characteristic set into a fusion characteristic neural network model, an electroencephalogram characteristic neural network model and an electrocardio characteristic neural network model to obtain corresponding reaction time estimation results;
and inputting the estimation result of the fusion characteristic neural network model, the estimation result of the electroencephalogram characteristic neural network model and the estimation result of the electrocardio characteristic neural network model into the fusion estimation model, and solving to obtain the reaction time of the high-speed rail driver.
Also in the present method, step S550 includes step S551 and step S552.
S551, establishing a reliability decision mathematical model based on the Pauta criterion (3σ rule), wherein the input information of the reliability decision mathematical model is the estimation result of the fusion characteristic neural network model, the estimation result of the electroencephalogram characteristic neural network model and the estimation result of the electrocardio characteristic neural network model.
It should be noted that in the method, reliability evaluation is performed on the three estimation results by means of the Pauta criterion, and an estimate judged inconsistent is assigned a reliability of 0, thereby removing suspicious data.
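A sketch of the 3σ check is given below; the patent does not say how the reference mean and standard deviation are obtained, so taking them from training-set residual statistics is an assumption:

```python
import numpy as np

def reliability_weights(estimates, mu, sigma):
    """S551, Pauta criterion (3-sigma rule): a branch estimate farther
    than three standard deviations from the reference mean is treated
    as unreliable and given weight 0, otherwise weight 1."""
    est = np.asarray(estimates, dtype=float)
    return (np.abs(est - mu) <= 3.0 * sigma).astype(float)
```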
S552, establishing a decision fusion model based on the linear regression principle, taking the output of the reliability decision mathematical model as the input information of the decision fusion model, wherein the output information of the decision fusion model is the reaction time of the high-speed rail driver.
It should be noted that in the method the estimation results of the different models are fused by linear regression; the linear-regression calculation itself is prior art and is not described here. In this way a reliability evaluation is added to the decision-level fusion to remove abnormal data, which effectively improves the reliability of the proposed model, especially when features are invalid or a sensor fails.
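A decision-level fusion sketch with scikit-learn follows; zeroing out an unreliable branch estimate before the regression is one possible reading of "reliability 0", not necessarily the patent's exact mechanism:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_decision_fusion(branch_estimates, true_rt):
    """S552: fit a linear regression mapping the three branch estimates
    (fusion, EEG, ECG) to the observed reaction time on training data."""
    return LinearRegression().fit(branch_estimates, true_rt)  # (n, 3) -> (n,)

def fuse(model, estimates, weights):
    """Apply the reliability weights from S551, then predict the final RT."""
    masked = np.asarray(estimates) * np.asarray(weights)      # unreliable -> 0
    return model.predict(masked.reshape(1, -1))[0]
```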
For ease of understanding, a total of 1200 valid samples collected in simulated HSR driving were used to train and test the proposed model. The RT samples of all drivers ranged from 603 ms to 1355 ms, with an average of 890.25 (196.5) ms. Of these, 1000 samples were used to train the model and the remaining 200 served as test samples. To evaluate the alertness estimation of the proposed method, the outputs of the three LSTM models were compared. The mean absolute error (MAE) and root mean square error (RMSE) reflect the distance between the estimated and observed values, and the Pearson correlation coefficient (PCC) represents the consistency of the trend between the estimated RT and the observed ground truth.
TABLE 2 Estimation performance achieved when using different fusion strategies

| Model | Data set | MAE (ms) | RMSE (ms) | PCC | Running time (s) |
| --- | --- | --- | --- | --- | --- |
| LSTM | EEG | 92.76 | 128.61 | 0.71 | 0.05 |
| LSTM | ECG | 145.62 | 208.35 | 0.64 | 0.03 |
| LSTM | Fusion feature set | 83.54 | 117.36 | 0.78 | 0.07 |
| Two-stage feature fusion estimation model | All three of the above | 81.03 | 112.33 | 0.82 | 0.07 |
As shown in Table 2, the proposed two-stage feature fusion estimation model achieves better performance than the comparison models on all evaluation measures. Its best MAE, RMSE and PCC are 81.03 ms, 112.33 ms and 0.82, respectively, indicating that the developed model can effectively assess the alertness of high-speed rail drivers by estimating their RT. Using the feature-level and decision-level fusion strategy improves the estimation performance by about 12%. For RT estimation without any information fusion technique, EEG features show better performance than ECG features: the best MAE, RMSE and PCC obtained by the LSTM network with EEG features are 92.76 ms, 128.61 ms and 0.71, respectively, versus 135.62 ms, 180.35 ms and 0.64 with only ECG features. By fusing the EEG and ECG features with the CNN, the MAE of the LSTM improves by 9.9%, the RMSE by 9%, and the PCC by 10.8%. The reason for this improvement is that the features from different sensors represent alertness fluctuations from different angles, and the CNN fuses these features effectively. With decision-level fusion, the estimation performance improves further: the MAE decreases by 2.51 ms, the RMSE decreases by 5.03 ms, and the PCC increases by 0.04. Fusing the results of different models is an ensemble-learning approach that improves the generalization ability and reliability of the adopted model when estimating the RTs of HSR drivers, while feature-level fusion contributes more to estimation accuracy.
Meanwhile, the results in Table 2 show that adding the decision-level fusion module with a simple regression algorithm has little influence on computation speed. Another factor affecting computation speed is the input dimension. When the EEG features, ECG features and fusion features were input together into a single LSTM, the MAE was 84.42 ms, the RMSE 116.21 ms and the PCC 0.79, performance close to that of the model with CNN-extracted fusion features. However, the running time was 0.11 s, 43% longer than the proposed parallel processing strategy. Efficiency and accuracy are two competing factors that must be balanced, and the comparison shows that the parallel processing used in the proposed model ensures both.
Example 2:
as shown in fig. 2, the present embodiment provides a high-speed railway driver reaction time estimation apparatus, which includes:
the first obtaining unit 1 is configured to obtain first information, where the first information includes a real-time electroencephalogram signal in a first time period and a real-time electrocardiosignal in a second time period, and ending times of the first time period and the second time period are current times.
And the second acquisition unit 2 is used for acquiring an original data set, wherein the original data set comprises at least three historical data sets, and the historical data sets comprise historical electroencephalogram signals, historical electrocardiosignals and reaction time.
And the preprocessing unit 3 is used for respectively preprocessing the first information and the original data set and updating the first information and the original data set into preprocessed data.
And the feature extraction unit 4 is used for performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set.
And the model establishing unit 5 is used for establishing and training a two-stage feature fusion estimation model according to the feature data set.
And the time estimation unit 6 is used for solving the two-stage feature fusion estimation model to obtain the reaction time of the high-speed rail driver by taking the real-time feature set as input information of the two-stage feature fusion estimation model.
In some specific implementations, the pre-treatment unit 3 comprises:
the resampling unit 31 is configured to perform downsampling processing on the first information by using a preset frequency threshold, so as to obtain downsampled first information.
The primary filtering unit 32 is configured to perform primary filtering processing in a preset frequency range on the first information to obtain first information after primary filtering, where the primary filtering processing is performed by a butterworth band-pass filter with a preset frequency width.
And the secondary filtering unit 33 is configured to perform secondary filtering processing on the real-time electroencephalogram signal and update it into the data after the secondary filtering processing, wherein the secondary filtering processing is spatial filtering.
In some specific implementations, the feature extraction unit 4 includes:
the first segmentation unit 41 is configured to perform first segmentation on the live electroencephalogram signal to obtain at least two segments of first sub-signals, where an overlap percentage of two adjacent segments of the first sub-signals is a preset value, and a time length of the first sub-signals is a preset first time length.
The first converting unit 42 is configured to perform FFT conversion on each segment of the first sub-signal to obtain a frequency domain corresponding to each first sub-signal.
And the first frequency domain calculating unit 43 is configured to calculate an electroencephalogram feature set according to the frequency domain corresponding to each first sub-signal.
Wherein, the first frequency domain calculating unit 43 includes:
the first calculating unit 431 is configured to obtain second information through calculation according to a frequency domain corresponding to the first sub-signal, where the second information includes a power spectral density of the first sub-signal in a first frequency domain range, and the second information is used as a subset in the electroencephalogram feature set.
The second calculating unit 432 is configured to obtain third information through calculation according to a frequency domain corresponding to the first sub-signal, where the third information includes a power spectral density of the first sub-signal in a second frequency domain range, and the third information is used as a subset in the electroencephalogram feature set.
And the third calculating unit 433 is configured to divide the second information and the third information to obtain fourth information, and use the fourth information as a subset of the electroencephalogram feature set.
And a fourth calculating unit 434, configured to calculate fifth information according to the frequency domain corresponding to the first sub-signal, where the fifth information includes the sample entropy corresponding to the first sub-signal, and uses the fifth information as a subset in the electroencephalogram feature set.
In some specific implementations, the feature extraction unit 4 further includes:
the second segmentation unit 44 is configured to perform first segmentation on the real-time electrocardiographic signal to obtain at least two segments of second sub-signals, where an overlap percentage of two adjacent segments of second sub-signals is a preset value, and a time length of the second sub-signals is a preset first time length.
And a second converting unit 45, configured to perform FFT conversion on each segment of the second sub-signal to obtain a frequency domain corresponding to each second sub-signal.
And the second frequency domain calculating unit 46 is configured to calculate a first electrocardiographic feature set according to the frequency domain corresponding to each second sub-signal, and use the first electrocardiographic feature set as a subset of the electrocardiographic feature set.
And the fifth calculating unit 47 is configured to calculate a second electrocardiographic feature set according to each second sub-signal, and use the second electrocardiographic feature set as a subset of the electrocardiographic feature set.
Wherein, the second frequency domain calculating unit 46 includes:
the sixth calculating unit 461 is configured to obtain sixth information according to the frequency domain corresponding to the second sub-signal, where the sixth information includes the power spectral density of the second sub-signal at low frequency, and the sixth information is used as a subset in the real-time feature set.
The seventh calculating unit 462 is configured to obtain seventh information according to the frequency domain corresponding to the second sub-signal, where the seventh information includes the power spectral density of the second sub-signal at high frequency, and is used as a subset in the real-time feature set.
The eighth calculating unit 463, configured to divide the power spectral density of the second sub-signal at the low frequency by the power spectral density of the high frequency to obtain eighth information, and use the eighth information as a subset in the real-time feature set.
A ninth calculating unit 464, configured to calculate ninth information according to the frequency domain corresponding to each second sub-signal, where the ninth information includes the sample entropy corresponding to the second sub-signal, and uses the ninth information as a subset in the real-time feature set.
Wherein, the fifth calculating unit 47 includes:
a tenth calculating unit 471, configured to calculate tenth information according to the Hamilton-Tompkins algorithm and the second sub-signal, where the tenth information includes all R peaks of the second sub-signal.
An eleventh calculating unit 472, configured to calculate an average R peak interval according to the tenth information.
A twelfth calculating unit 473, configured to calculate eleventh information from the tenth information, where the eleventh information includes a standard deviation of the R peak normal interval.
A thirteenth calculating unit 474, configured to calculate twelfth information according to the tenth information, where the twelfth information includes the percentage of differences between consecutive R-peak intervals that are larger than a preset second time length, and the average R-peak interval, the eleventh information and the twelfth information are used as a subset of the second electrocardio feature set.
In some specific implementations, the model building unit 5 further includes:
and a normalizing unit 51, configured to perform normalization processing on each element in the feature data set, and update the feature data set to be normalized data, where each element in the updated feature data set is located between a preset maximum value and a preset minimum value.
And the model establishing unit 52 is used for establishing a convolutional neural network model and a long-term and short-term memory artificial neural network model.
And the extraction model unit 53 is configured to use the feature data set as input information of the convolutional neural network model, and solve the convolutional neural network model to obtain a fused feature data set.
And the model training unit 54 is used for training a long-term and short-term memory artificial neural network model by respectively utilizing the fusion characteristic data set, the electroencephalogram characteristic set and the electrocardio characteristic set to obtain a fusion characteristic neural network model, an electroencephalogram characteristic neural network model and an electrocardio characteristic neural network model.
And the fusion model establishing unit 55 is used for establishing a fusion estimation model, input information of the fusion estimation model comprises an estimation result of the fusion characteristic neural network model, an estimation result of the electroencephalogram characteristic neural network model and an estimation result of the electrocardio characteristic neural network model, and output information of the fusion estimation model comprises the reaction time of a high-speed rail driver.
In some specific implementations, building the fusion model unit 55 includes:
and the establishment reliability decision unit 551 is used for establishing a reliability decision mathematical model based on the Lauda criterion, and the input information of the reliability decision mathematical model is the estimation result of the fusion characteristic neural network model, the estimation result of the electroencephalogram characteristic neural network model and the estimation result of the electrocardio characteristic neural network model.
And the linear regression unit 552 is used for establishing a decision fusion model based on the linear regression principle, taking the output of the reliability decision mathematical model as the input information of the decision fusion model, wherein the output information of the decision fusion model is the reaction time of the high-speed rail driver.
It should be noted that, regarding the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Example 3:
corresponding to the above method embodiment, the present embodiment further provides a high-speed railway driver reaction time estimation device, and a high-speed railway driver reaction time estimation device described below and a high-speed railway driver reaction time estimation method described above can be referred to correspondingly.
Fig. 3 is a block diagram illustrating a high-speed rail driver reaction time estimation apparatus 800 according to an exemplary embodiment. As shown in fig. 3, the high-speed railway driver reaction time estimation apparatus 800 may include: a processor 801, a memory 802. The high-speed rail driver reaction time estimation device 800 may further include one or more of a multimedia component 803, an I/O interface 804, and a communication component 805.
The processor 801 is configured to control the overall operation of the high-speed railway driver reaction time estimation apparatus 800, so as to complete all or part of the steps of the high-speed railway driver reaction time estimation method. The memory 802 is used to store various types of data to support operation of the driver reaction time estimation device 800, such data may include, for example, instructions for any application or method operating on the driver reaction time estimation device 800, as well as application-related data, such as contact data, messaging, pictures, audio, video, and so forth. The Memory 802 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. The multimedia components 803 may include screen and audio components. Wherein the screen may be, for example, a touch screen and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 802 or transmitted through the communication component 805. The audio assembly further comprises at least one speaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 805 is used for wired or wireless communication between the high-speed rail driver reaction time estimation device 800 and other devices. Wireless communication, such as Wi-Fi, bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so that the corresponding communication component 805 may include: Wi-Fi module, bluetooth module, NFC module.
In an exemplary embodiment, the high-speed rail driver reaction time estimation apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components for performing the high-speed rail driver reaction time estimation method described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the high-speed rail driver reaction time estimation method described above is also provided. For example, the computer readable storage medium may be the memory 802 described above including program instructions executable by the processor 801 of the high-speed rail driver reaction time estimation apparatus 800 to perform the high-speed rail driver reaction time estimation method described above.
Example 4:
In accordance with the above method embodiments, this embodiment also provides a readable storage medium; the readable storage medium described below and the high-speed rail driver reaction time estimation method described above may be cross-referenced with each other.
A readable storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the high-speed rail driver reaction time estimation method of the above method embodiments.
The readable storage medium may be a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any of various other readable storage media capable of storing program code.
The above description covers only preferred and specific embodiments of the present invention and is not intended to limit it. Any modification, equivalent replacement, improvement, or substitution that a person skilled in the art can readily conceive within the technical scope and within the spirit and principle of the present invention shall fall within the protection scope of the present invention, which shall be subject to the protection scope of the claims.

Claims (8)

1. A method for estimating a reaction time of a high-speed rail driver, comprising:
acquiring first information, wherein the first information comprises a real-time electroencephalogram (EEG) signal in a first time period and a real-time electrocardiogram (ECG) signal in a second time period, and the end times of the first time period and the second time period are both the current time;
acquiring an original data set, wherein the original data set comprises at least three historical data sets, and each historical data set comprises historical EEG signals, historical ECG signals and reaction times;
respectively preprocessing the first information and the original data set, and updating the first information and the original data set into the preprocessed data;
performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set, wherein the feature data set comprises an EEG feature set and an ECG feature set;
establishing and training a two-stage feature fusion estimation model according to the feature data set;
taking the real-time feature set as input information of the two-stage feature fusion estimation model, and solving the two-stage feature fusion estimation model to obtain the reaction time of the high-speed rail driver;
wherein the establishing and training of the two-stage feature fusion estimation model according to the feature data set comprises:
normalizing each element in the feature data set, and updating the feature data set into the normalized data, wherein each element in the updated feature data set lies between a preset maximum value and a preset minimum value;
establishing a convolutional neural network model and a long short-term memory (LSTM) artificial neural network model;
taking the feature data set as input information of the convolutional neural network model, and solving the convolutional neural network model to obtain a fusion feature data set;
training one LSTM artificial neural network model with each of the fusion feature data set, the EEG feature set and the ECG feature set, to obtain a fusion feature neural network model, an EEG feature neural network model and an ECG feature neural network model;
and establishing a fusion estimation model, wherein input information of the fusion estimation model comprises an estimation result of the fusion feature neural network model, an estimation result of the EEG feature neural network model and an estimation result of the ECG feature neural network model, and output information of the fusion estimation model comprises the reaction time of the high-speed rail driver.
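As an editorial illustration of the two-stage structure recited in claim 1 (stage one: a convolutional network producing fused features; stage two: three LSTM branches whose estimates are combined by a final fusion model), a minimal sketch follows. PyTorch, all layer sizes, and the learned linear combination in `fuse` are assumptions of this sketch, not details taken from the patent.

```python
# Minimal sketch of the claimed two-stage feature fusion estimator.
# PyTorch, all dimensions, and the final linear fusion are assumptions.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Stage 1: 1-D CNN mapping concatenated EEG+ECG feature sequences
    to a fused feature sequence."""
    def __init__(self, in_channels: int, fused_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, fused_channels, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                 # x: (batch, channels, time)
        return self.net(x)

class LSTMRegressor(nn.Module):
    """Stage 2 branch: LSTM that regresses a reaction time from a sequence."""
    def __init__(self, in_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # estimate from the last time step

class TwoStageFusionEstimator(nn.Module):
    """Combines the fused-feature, EEG-only and ECG-only branch estimates
    into one reaction-time estimate, mirroring the claim's fusion model."""
    def __init__(self, eeg_dim: int, ecg_dim: int):
        super().__init__()
        self.cnn = FusionCNN(eeg_dim + ecg_dim)
        self.fused_branch = LSTMRegressor(32)
        self.eeg_branch = LSTMRegressor(eeg_dim)
        self.ecg_branch = LSTMRegressor(ecg_dim)
        self.fuse = nn.Linear(3, 1)       # assumed: learned weighting

    def forward(self, eeg, ecg):          # both: (batch, time, features)
        fused = self.cnn(torch.cat([eeg, ecg], dim=2).transpose(1, 2))
        t_fused = self.fused_branch(fused.transpose(1, 2))
        return self.fuse(torch.cat(
            [t_fused, self.eeg_branch(eeg), self.ecg_branch(ecg)], dim=1))
```

In this reading, the min-max normalization of the feature data set (the first training step of claim 1) would be applied to `eeg` and `ecg` before they enter the model.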
2. The method for estimating a reaction time of a high-speed rail driver according to claim 1, wherein the respectively preprocessing the first information and the original data set comprises:
performing down-sampling processing on the first information by using a preset frequency threshold to obtain the down-sampled first information;
performing first-stage filtering processing in a preset frequency range on the first information to obtain the first-stage-filtered first information, wherein the first-stage filtering processing is performed by a Butterworth band-pass filter with a preset frequency width;
and performing second-stage filtering processing on the real-time EEG signal, and updating the real-time EEG signal into the second-stage-filtered data, wherein the second-stage filtering processing is spatial filtering.
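As a non-authoritative sketch of the preprocessing chain in claim 2 (down-sampling to a preset frequency threshold, a Butterworth band-pass of preset width, then a spatial filter for the EEG), the following uses SciPy. The 200 Hz threshold, the 0.5-45 Hz band, the filter order, and the choice of a common average reference as the spatial filter are assumed values, since the claim leaves all of them as presets.

```python
# Sketch of claim 2's preprocessing; the threshold, band, order and the
# CAR spatial filter are assumptions, not values stated in the patent.
import numpy as np
from scipy.signal import butter, decimate, sosfiltfilt

def preprocess(x: np.ndarray, fs: float, target_fs: float = 200.0,
               band: tuple = (0.5, 45.0), order: int = 4) -> np.ndarray:
    """Down-sample, then apply a zero-phase Butterworth band-pass filter."""
    if fs > target_fs:
        q = int(fs // target_fs)
        x = decimate(x, q)                 # anti-aliased down-sampling
        fs /= q
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def common_average_reference(eeg: np.ndarray) -> np.ndarray:
    """One simple spatial filter (CAR): subtract the cross-channel mean.
    eeg has shape (channels, samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)
```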
3. The method for estimating a reaction time of a high-speed rail driver according to claim 1, wherein the real-time feature set comprises an EEG feature set, and the performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set comprises:
performing first segmentation on the real-time EEG signal to obtain at least two first sub-signals, wherein the overlap percentage of two adjacent first sub-signals is a preset value, and the time length of each first sub-signal is a preset first time length;
performing a fast Fourier transform (FFT) on each first sub-signal to obtain a frequency-domain representation corresponding to each first sub-signal;
and calculating an EEG feature set according to the frequency-domain representation corresponding to each first sub-signal.
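A minimal sketch of the windowed FFT feature extraction of claim 3 follows. The 2 s window, 50% overlap, and the classical delta/theta/alpha/beta band edges are illustrative assumptions, since the claim leaves the segment length, overlap percentage, and exact frequency-domain features as presets.

```python
# Sketch of claim 3's segmentation + FFT feature extraction.
# Window length, overlap and band edges are assumed values.
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def eeg_band_features(x: np.ndarray, fs: float,
                      win_s: float = 2.0, overlap: float = 0.5) -> np.ndarray:
    """Split x into overlapping windows, FFT each window, and return
    per-window band powers with shape (n_windows, n_bands)."""
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    feats = []
    for start in range(0, len(x) - win + 1, step):
        power = np.abs(np.fft.rfft(x[start:start + win])) ** 2
        feats.append([power[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in BANDS.values()])
    return np.asarray(feats)
```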
4. A high-speed rail driver reaction time estimation apparatus, comprising:
a first acquisition unit, used for acquiring first information, wherein the first information comprises a real-time electroencephalogram (EEG) signal in a first time period and a real-time electrocardiogram (ECG) signal in a second time period, and the end times of the first time period and the second time period are both the current time;
a second acquisition unit, used for acquiring an original data set, wherein the original data set comprises at least three historical data sets, and each historical data set comprises historical EEG signals, historical ECG signals and reaction times;
a preprocessing unit, used for respectively preprocessing the first information and the original data set and updating the first information and the original data set into the preprocessed data;
a feature extraction unit, used for performing feature extraction processing on the first information and the original data set to obtain a real-time feature set and a feature data set, wherein the feature data set comprises an EEG feature set and an ECG feature set;
a model establishing unit, used for establishing and training a two-stage feature fusion estimation model according to the feature data set;
and a time estimation unit, used for taking the real-time feature set as input information of the two-stage feature fusion estimation model and solving the two-stage feature fusion estimation model to obtain the reaction time of the high-speed rail driver;
wherein the model establishing unit further comprises:
a normalization unit, used for normalizing each element in the feature data set and updating the feature data set into the normalized data, wherein each element in the updated feature data set lies between a preset maximum value and a preset minimum value;
a network establishing unit, used for establishing a convolutional neural network model and a long short-term memory (LSTM) artificial neural network model;
an extraction model unit, used for taking the feature data set as input information of the convolutional neural network model and solving the convolutional neural network model to obtain a fusion feature data set;
a model training unit, used for training one LSTM artificial neural network model with each of the fusion feature data set, the EEG feature set and the ECG feature set, to obtain a fusion feature neural network model, an EEG feature neural network model and an ECG feature neural network model;
and a fusion model establishing unit, used for establishing a fusion estimation model, wherein input information of the fusion estimation model comprises an estimation result of the fusion feature neural network model, an estimation result of the EEG feature neural network model and an estimation result of the ECG feature neural network model, and output information of the fusion estimation model comprises the reaction time of the high-speed rail driver.
5. The high-speed rail driver reaction time estimation apparatus according to claim 4, wherein the preprocessing unit comprises:
a resampling unit, used for performing down-sampling processing on the first information by using a preset frequency threshold to obtain the down-sampled first information;
a first-stage filtering unit, used for performing first-stage filtering processing in a preset frequency range on the first information to obtain the first-stage-filtered first information, wherein the first-stage filtering processing is performed by a Butterworth band-pass filter with a preset frequency width;
and a second-stage filtering unit, used for performing second-stage filtering processing on the real-time EEG signal and updating the real-time EEG signal into the second-stage-filtered data, wherein the second-stage filtering processing is spatial filtering.
6. The high-speed rail driver reaction time estimation apparatus according to claim 4, wherein the real-time feature set comprises an EEG feature set, and the feature extraction unit comprises:
a first segmentation unit, used for performing first segmentation on the real-time EEG signal to obtain at least two first sub-signals, wherein the overlap percentage of two adjacent first sub-signals is a preset value, and the time length of each first sub-signal is a preset first time length;
a first conversion unit, used for performing a fast Fourier transform (FFT) on each first sub-signal to obtain a frequency-domain representation corresponding to each first sub-signal;
and a first frequency-domain calculation unit, used for calculating an EEG feature set according to the frequency-domain representation corresponding to each first sub-signal.
7. A high-speed rail driver reaction time estimation apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the high-speed railway driver reaction time estimation method as claimed in any one of claims 1 to 3 when executing the computer program.
8. A readable storage medium, wherein a computer program is stored on the readable storage medium, and the computer program, when executed by a processor, implements the steps of the high-speed rail driver reaction time estimation method according to any one of claims 1 to 3.
CN202210214221.3A 2022-03-07 2022-03-07 Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium Active CN114343661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210214221.3A CN114343661B (en) 2022-03-07 2022-03-07 Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium

Publications (2)

Publication Number Publication Date
CN114343661A CN114343661A (en) 2022-04-15
CN114343661B (en) 2022-05-27

Family

ID=81094725

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821968B * 2022-05-09 2022-09-13 Southwest Jiaotong University Intervention method, device and equipment for fatigue driving of EMU drivers, and readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765876A * 2018-05-31 2018-11-06 Northeastern University Driving fatigue deep analysis and early warning system and method based on multimodal signals
CN110555388A * 2019-08-06 2019-12-10 Zhejiang University CNN- and LSTM-based method for constructing an intracardiac abnormal excitation point localization model
US11506888B2 * 2019-09-20 2022-11-22 Nvidia Corp. Driver gaze tracking system for use in vehicles
CN111460892A * 2020-03-02 2020-07-28 Wuyi University Electroencephalogram pattern classification model training method, classification method and system
US11751784B2 * 2020-03-18 2023-09-12 Ford Global Technologies, Llc Systems and methods for detecting drowsiness in a driver of a vehicle
US11568655B2 * 2020-03-26 2023-01-31 Intel Corporation Methods and devices for triggering vehicular actions based on passenger actions
CN111407260B * 2020-03-30 2021-07-20 South China University of Technology EEG- and ECG-based fatigue detection method with an ECG sensor embedded in the steering wheel
CN112528815A * 2020-12-05 2021-03-19 Xidian University Fatigue driving detection method based on multimodal information fusion

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107095670A * 2017-05-27 2017-08-29 Southwest Jiaotong University Driver reaction time prediction method
CN108846332A * 2018-05-30 2018-11-20 Southwest Jiaotong University Railway driver behavior recognition method based on CLSTA
CN109820525A * 2019-01-23 2019-05-31 Wuyi University Driving fatigue recognition method based on a CNN-LSTM deep learning model
WO2020205655A1 * 2019-03-29 2020-10-08 Intel Corporation Autonomous vehicle system
WO2020253965A1 * 2019-06-20 2020-12-24 Toyota Motor Europe Control device, system and method for determining perceptual load of a visual and dynamic driving scene in real time
CN110660194A * 2019-09-05 2020-01-07 Shenzhen Desay Microelectronics Technology Co., Ltd. Driving monitoring and early warning method and system
CN112149908A * 2020-09-28 2020-12-29 Shenzhen OneConnect Smart Technology Co., Ltd. Vehicle driving prediction method, system, computer device and readable storage medium
CN113591525A * 2020-10-27 2021-11-02 Lanhai (Fujian) Information Technology Co., Ltd. Driver road rage recognition method deeply fusing facial expressions and voice
CN112617835A * 2020-12-17 2021-04-09 Nanjing University of Posts and Telecommunications Multi-feature fusion fatigue detection method based on transfer learning
CN215820948U * 2021-03-12 2022-02-15 Fudan University Driver state monitoring device
CN113420624A * 2021-06-11 2021-09-21 Central China Normal University Non-contact fatigue detection method and system
CN113415285A * 2021-07-07 2021-09-21 Southwest Jiaotong University Driver alertness assessment method and system
CN113743471A * 2021-08-05 2021-12-03 Jinan University Driving evaluation method and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Han SY, Kwak NS, (...), Lee SW. Classification of pilots' mental states using a multimodal deep learning network. Biocybernetics and Biomedical Engineering, 2019-12-27, Vol. 40, No. 1, pp. 324-336. *
Guo ZZ, Pan YF, (...), Dong N. Recognizing Hazard Perception in a Visual Blind Area Based on EEG Features. IEEE Access, 2020. *
Liu Yu. Research on driver mental workload detection based on CNN-LSTM. China Master's Theses Full-text Database, Engineering Science and Technology I, 2021-08-15, No. 8, pp. 7-11 and 47-64. *
Guo Zizheng, Tan Qian, Wu Zhimin, Pan Yufan, Zhang Jun. Research on a method for predicting EMU drivers' reaction time to emergencies based on EEG signals. Journal of the China Railway Society, 2018-12-15, Vol. 40, No. 12, pp. 1-6. *
Wen Chao, Li Jin, Li Zhongcan, Zhi Lijun, Tian Rui. A review of machine learning applications in railway train dispatching and rescheduling. Journal of Transportation Engineering and Information, 2021-09-24, Vol. 20, No. 1, pp. 1-14. *

Similar Documents

Publication Publication Date Title
Rigas et al. Real-time driver's stress event detection
Pardey et al. A new approach to the analysis of the human sleep/wakefulness continuum
CN112869711B (en) Automatic sleep staging and migration method based on deep neural network
CN102119857B Electroencephalogram detection system and method for fatigue driving based on the matching pursuit algorithm
CN109646022B (en) Child attention assessment system and method thereof
CN105893765B Tiered diagnosis and treatment analysis and data visualization system based on ECharts
CN102469948A (en) A system for vehicle security, personalization and cardiac activity monitoring of a driver
CN107252313A Safe driving monitoring method and system, automobile, and readable storage medium
CN111951637B (en) Task-context-associated unmanned aerial vehicle pilot visual attention distribution mode extraction method
CN114343661B (en) Method, device and equipment for estimating reaction time of driver in high-speed rail and readable storage medium
Wan et al. On‐road experimental study on driving anger identification model based on physiological features by ROC curve analysis
CN108420429A Automatic electroencephalogram epilepsy identification method based on multi-view deep feature fusion
CN109044280B (en) Sleep staging method and related equipment
CN106913333B (en) Method for selecting sensitivity characteristic index of continuous attention level
WO2014020134A1 (en) Device, method and application for establishing a current load level
CN104490391A (en) Combatant state monitoring system based on electroencephalogram signals
CN114391846A (en) Emotion recognition method and system based on filtering type feature selection
CN113633296A (en) Reaction time prediction model construction method, device, equipment and readable storage medium
CN116340332A (en) Method and device for updating scene library of vehicle-mounted intelligent system and vehicle
CN116386845A (en) Schizophrenia diagnosis system based on convolutional neural network and facial dynamic video
Wang et al. EEG-based emergency braking intention prediction for brain-controlled driving considering one electrode falling-off
CN116115240A (en) Electroencephalogram emotion recognition method based on multi-branch chart convolution network
Tan et al. Detecting driver's distraction using long-term recurrent convolutional network
CN115908076A (en) Home-based old-age care environment improvement method based on historical multidimensional data stream and active feedback
CN113191191A (en) Community epidemic situation management method and system based on user habit analysis

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant