CN114652326A - Real-time brain fatigue monitoring device based on deep learning and data processing method - Google Patents
- Publication number
- CN114652326A (application number CN202210114279.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- module
- convolution
- brain
- dual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/1455—Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters
- A61B5/369—Electroencephalography [EEG]
- A61B5/372—Analysis of electroencephalograms
- G06F18/24137—Classification techniques; distances to cluster centroids
- G06N3/045—Neural networks; combinations of networks
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
- G06N3/08—Neural networks; learning methods
- G06F2218/02—Pattern recognition for signal processing; preprocessing
- G06F2218/08—Pattern recognition for signal processing; feature extraction
- G06F2218/12—Pattern recognition for signal processing; classification, matching
Abstract
The invention discloses a real-time brain fatigue monitoring device based on deep learning and a data processing method. A highly integrated, portable forehead-wearable device is secured around the head with an elastic soft band; it collects blood oxygen signals from the forehead of the brain in real time and displays a real-time topographic map of brain functional activity. Data preprocessing comprises motion-artifact removal, moving-average and Butterworth band-pass filtering, and the modified Lambert-Beer law, yielding concentration-change data for oxygenated and deoxygenated hemoglobin. A deep-learning CNN algorithm then performs data analysis and classification on the preprocessed prefrontal blood oxygen signals, carrying out a four-class brain fatigue recognition task with the classification labels mild fatigue, severe fatigue, relaxation, and concentration, and finally obtains the brain fatigue state of the tested subject.
Description
Technical Field
The invention relates to a near-infrared forehead brain imaging fatigue detection device, and in particular to a forehead brain imaging device and a data processing method based on deep learning.
Background
Because drivers lack alertness and vigilance while driving, the proportion of traffic accidents caused by driving fatigue keeps increasing, and driving fatigue has become one of the main causes of road deaths. Serious consequences may stem from a failure to notice potential hazards, poor awareness of traffic conditions, and an inability to control the vehicle. Recent statistics show that drivers' mental fatigue poses a devastating threat to the lives of drivers and other road users. Since fatigued driving and drunk driving have long been the most serious traffic problems affecting public safety, attempts to detect both conditions have existed for many years. For drunk driving, various detection, control, and prevention methods have been adopted over the decades, such as alcohol sensors, STM32-based brain wave acquisition equipment, and analog and digital integrated circuit modules. Conventional driving fatigue detection methods, however, remain limited: electroencephalogram (EEG) techniques are easily affected by movement of the eyes or other body parts, and noise has a particularly large influence on EEG signals. Electrooculogram (EOG) analysis requires information about human eye movements such as blinking, eyelid movement, rapid eye movement (REM), and slow eye movement (SEM). These signals can be used to extract information about fatigue, but they reflect only one aspect of human fatigue and do not allow a comprehensive measurement of a driver's fatigue level.
The frontal lobe is the largest part of the brain, affecting personality, intelligence, behavior, and self-awareness, and the prefrontal cortex is responsible for behavioral planning, decision-making, emotional control, self-awareness, and independence from others. The forehead region is therefore an important site for revealing a person's degree of fatigue.
The portable near-infrared forehead brain imaging fatigue monitoring device adopts functional near-infrared spectroscopy (fNIRS), which can non-invasively and continuously measure concentration changes of oxygenated and deoxygenated hemoglobin. The near-infrared detector overcomes the flexibility disadvantages of EEG and EOG because its results are not affected by movements of body parts. Further advantages, such as non-invasiveness and portability, make it particularly suitable for fatigue detection in on-road field applications. Once the oxyhemoglobin and deoxyhemoglobin concentration changes are obtained, results can be compared more easily between different subjects, allowing a comprehensive assessment of the driving fatigue level; applied further in the medical field, this can also help determine the causes of disease in a population.
Disclosure of Invention
The invention aims to solve the technical problem of providing a portable fatigue monitoring device for monitoring the mental state of a driver in real time and a monitoring method thereof.
In order to solve the technical problem, the invention provides a real-time brain fatigue monitoring device based on deep learning, comprising a data acquisition module and a control module, each connected to a power module; the control module is connected to an upper computer through a WiFi wireless data transmission module.
The data acquisition module comprises an 8-channel detector and a biological signal acquisition chip. The 8-channel detector is composed of a dual-wavelength LED light source circuit and 6 photoelectric sensors and collects blood oxygen signals from the prefrontal lobes of the measured subject; the dual-wavelength LED light source circuit comprises 2 dual-wavelength LED light sources and a triode amplifying circuit, the dual-wavelength LED light sources having a common-anode structure with complementary light emission. The biological signal acquisition chip amplifies and analog-to-digital converts the 8 channels of detector data and sends them to the control module. The control module sends complementary PWM waves to the dual-wavelength LED light source circuit to drive the LEDs, and controls the WiFi wireless data transmission module to exchange data with the upper computer. The upper computer stores the received data as files; after data preprocessing, it analyzes the collected prefrontal blood oxygen signals with a convolutional neural network (CNN) framework based on a discriminant feature learning strategy and data enhancement to obtain the brain fatigue state of the tested subject.
Further, the invention relates to a real-time brain fatigue monitoring device based on deep learning, wherein:
the wavelengths of the dual-wavelength LED light sources are 760 nm and 850 nm respectively; the six photoelectric sensors are arranged in a two-row, three-column matrix forming two squares, with the two dual-wavelength LED light sources located at the center points of the two squares; each dual-wavelength LED light source is 30 mm from its 4 adjacent photoelectric sensors, forming 8 transmit-receive channels in sequence along the row direction.
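The probe layout above can be enumerated in a short sketch. The sensor and source labels (D1–D6, S1, S2) are hypothetical, chosen only to illustrate the 2 × 3 grid and the 2 × 4 = 8 source-detector pairings described in the text:

```python
# Hypothetical labeling of the probe layout: 6 photosensors D1..D6 in a
# 2-row x 3-column grid, with the two dual-wavelength sources S1 and S2 at
# the centers of the two squares. Each source pairs with its 4 adjacent
# sensors (30 mm away), yielding 2 x 4 = 8 transmit-receive channels.

GRID = [["D1", "D2", "D3"],   # top row of photosensors
        ["D4", "D5", "D6"]]   # bottom row of photosensors

ADJACENT = {"S1": ["D1", "D2", "D4", "D5"],   # square formed by columns 1-2
            "S2": ["D2", "D3", "D5", "D6"]}   # square formed by columns 2-3

def channels():
    """Enumerate the 8 source-detector channels of the 8-channel detector."""
    return [(src, det) for src in ("S1", "S2") for det in ADJACENT[src]]
```

Note that the two middle sensors (D2 and D5 here) are shared by both sources, which is how 6 photosensors can support 8 channels.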
The biological signal acquisition chip integrates a high common-mode-rejection-ratio analog input module for receiving the photoelectric sensor voltage signals and the complementary PWM signals, a low-noise programmable gain amplifier (PGA) for amplifying the voltage signals, and a high-resolution synchronous-sampling analog-to-digital converter (ADC) for converting the analog signals into digital signals.
The control module adopts an STM32 microprocessor. The STM32 microprocessor generates complementary PWM square waves with a 50% duty cycle, which control the two triodes in the dual-wavelength LED light source circuit to work in the amplification region and the cut-off region respectively; the two triode outputs, together with resistor voltage drops, produce two different voltages that control whether the 760 nm and 850 nm LEDs in the dual-wavelength LED light source emit light.
The two triodes are denoted triode 1 and triode 2; triode 1 is connected to the 850 nm LED of the dual-wavelength LED light source, and triode 2 to the 760 nm LED. The STM32 microprocessor generates PWM square wave 1 and PWM square wave 2. When PWM square wave 1 is at a high level of 3.3 V, PWM square wave 2 is at a low level of 0 V; triode 1 works in the amplification region and outputs 1.45 V, while triode 2 works in the cut-off region and outputs 3.3 V, so the 850 nm LED emits infrared light and the 760 nm LED does not emit light. When PWM square wave 1 is at a low level of 0 V, PWM square wave 2 is at a high level of 3.3 V; triode 1 works in the cut-off region and outputs 3.3 V, while triode 2 works in the amplification region and outputs 1.75 V, so the 850 nm LED does not emit light and the 760 nm LED emits red light.
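The complementary-PWM drive scheme can be summarized in a tiny sketch. The helper below is hypothetical (not from the patent text), but the voltages and wavelength assignments are those stated above:

```python
# Models the time-division multiplexing of the common-anode dual-wavelength
# LED: the two PWM square waves are complementary (50% duty cycle), so
# exactly one of the two LEDs is lit in each half-period.

def led_state(pwm1_high: bool) -> dict:
    """Return which LED emits and the two triode output voltages for one
    PWM phase (pwm1_high=True means PWM square wave 1 is at 3.3 V)."""
    if pwm1_high:
        # triode 1 amplifies (outputs 1.45 V), triode 2 is cut off (3.3 V):
        # the 850 nm LED emits, the 760 nm LED is dark
        return {"active_nm": 850, "triode1_v": 1.45, "triode2_v": 3.3}
    # triode 1 is cut off (3.3 V), triode 2 amplifies (outputs 1.75 V):
    # the 760 nm LED emits, the 850 nm LED is dark
    return {"active_nm": 760, "triode1_v": 3.3, "triode2_v": 1.75}
```

Alternating the two wavelengths in this way lets the single set of photosensors sample the 760 nm and 850 nm responses in successive half-periods.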
The WiFi wireless data transmission module adopts an ESP8266 module; the serial output of the STM32 microprocessor is continuously sent to the upper computer in transparent transmission mode. The ESP8266 module operates in a hybrid (Station + AP) working mode, with a maximum data transmission rate of 4.068 Mbps.
The invention also discloses a data processing method for the detection process using the above real-time brain fatigue monitoring device based on deep learning. It mainly comprises data preprocessing and data analysis: after preprocessing, a CNN framework based on a discriminant feature learning strategy and data enhancement analyzes the collected forehead blood oxygen signals to obtain the brain fatigue state of the tested subject.
Further, the data preprocessing comprises motion-artifact removal, moving-average and Butterworth band-pass filtering, and the modified Lambert-Beer law, yielding oxygenated and deoxygenated hemoglobin concentration-change data. The preprocessed data are then analyzed and classified by the CNN framework based on the discriminant feature learning strategy and data enhancement, performing a four-class brain fatigue recognition task with the classification labels mild fatigue, severe fatigue, relaxation, and concentration, finally yielding the brain fatigue state of the tested subject.
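The moving-average and modified Lambert-Beer steps can be sketched as follows. The extinction coefficients, source-detector distance `d`, and differential pathlength factor `dpf` below are illustrative placeholder values, not taken from the patent; a real pipeline would also insert a Butterworth band-pass stage (e.g. via `scipy.signal.butter`/`filtfilt`) between the two steps:

```python
import numpy as np

def moving_average(x, w=5):
    """Simple moving average used to suppress high-frequency noise."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def mbll(delta_od_760, delta_od_850, d=3.0, dpf=6.0):
    """Modified Lambert-Beer law: convert optical-density changes at the two
    wavelengths (760 nm / 850 nm) into concentration changes of oxygenated
    (HbO) and deoxygenated (HbR) hemoglobin.

    Solves, per sample: dOD(lambda) = (eps_HbO*dHbO + eps_HbR*dHbR) * d * DPF
    """
    # Illustrative extinction coefficients [HbO, HbR] per wavelength
    # (placeholder values, NOT from the patent).
    eps = np.array([[1486.6, 3843.7],   # 760 nm
                    [2526.4, 1798.6]])  # 850 nm
    rhs = np.vstack([delta_od_760, delta_od_850]) / (d * dpf)
    delta_c = np.linalg.solve(eps, rhs)  # rows: [dHbO, dHbR]
    return delta_c[0], delta_c[1]
```

Because the two wavelengths straddle the isosbestic point of hemoglobin, the 2 × 2 system is well conditioned and the per-channel concentration changes follow from a single linear solve.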
The CNN framework comprises a convolution module and a fully connected module. The convolution module consists of a temporal convolution module, a spatial convolution module, and a general convolution module. The temporal convolution module contains a convolution layer with kernel size 1×8 and stride (1, 1), which extracts temporal features from the input near-infrared sample. The spatial convolution module contains a spatial convolution layer with kernel size 8×1 and stride (1, 1), which extracts spatial features across the different channels. The general convolution module contains two convolution layers and two max-pooling layers, improving the learning capacity of the framework and integrating the extracted temporal and spatial features: the first convolution layer has kernel size 1×17 and stride (1, 1) for extracting large-scale features; the first pooling layer has pooling kernel size 1×6 and stride (1, 6); the second convolution layer has kernel size 1×7 and stride (1, 1) for extracting small-scale features; the second pooling layer has pooling kernel size 1×6 and stride (1, 6). The fully connected module comprises one fully connected layer with ReLU as the activation function, which classifies the input samples according to the features extracted by the convolution module.
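The kernel and stride settings above determine the feature-map shape that reaches the fully connected layer. A minimal shape trace, assuming 'valid' (no-padding) convolutions and an example input length of 1200 time points (the input length is an assumption, not from the patent):

```python
def conv_out(n, k, s=1):
    """Output length of a 'valid' convolution or pooling along one axis."""
    return (n - k) // s + 1

def cnn_feature_shape(channels=8, timepoints=1200):
    """Trace the (H, W) shape of one C x T sample through the convolution
    modules described above (kernel sizes and strides from the text)."""
    h, w = channels, timepoints
    w = conv_out(w, 8)        # temporal conv, kernel 1x8, stride (1, 1)
    h = conv_out(h, 8)        # spatial conv, kernel 8x1 -> collapses channels
    w = conv_out(w, 17)       # general conv 1, kernel 1x17 (large-scale)
    w = conv_out(w, 6, 6)     # max pool, kernel 1x6, stride (1, 6)
    w = conv_out(w, 7)        # general conv 2, kernel 1x7 (small-scale)
    w = conv_out(w, 6, 6)     # max pool, kernel 1x6, stride (1, 6)
    return h, w
```

With an 8 × 1200 input this yields a 1 × 31 map per filter; flattening those maps across the filters gives the feature vector fed to the fully connected layer.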
Oxyhemoglobin and deoxyhemoglobin concentration-change data are obtained with the Lambert-Beer law, and the CNN framework performs data enhancement on them with a cyclic-translation-based method. Each sample is a C×T matrix, where each row holds the data collected from one channel and each column holds the data at one sampling time point. The data enhancement steps of the cyclic translation method are as follows:
1) In the time dimension, the initial sample spans 0–T; samples are cyclically shifted along the time dimension with step size D, while the arrangement of the channels is kept unchanged;
2) the first cyclic translation sample is spliced from the segments [D, T] and [0, D], the second from [2D, T] and [0, 2D], ..., and the k-th cyclic translation sample from [kD, T] and [0, kD];
3) the new samples obtained by cyclic translation have the same size as the original samples, and new samples generated from the same sample differ only by a shift along the time dimension, so they retain the temporal and spatial characteristics of the original sample while still differing somewhat from it.
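The three steps above amount to a cyclic roll of each C×T sample along its time axis, which can be sketched in a few lines (a minimal sketch, not the patent's implementation):

```python
import numpy as np

def cyclic_shift_augment(sample, step, k):
    """Generate k augmented samples from one C x T sample by cyclic
    translation along the time axis with step size D (= `step`), keeping
    the channel arrangement unchanged (steps 1-3 above).

    np.roll(sample, -i*step, axis=1) splices [i*D, T] with [0, i*D],
    matching step 2.
    """
    return [np.roll(sample, -i * step, axis=1) for i in range(1, k + 1)]
```

Each augmented sample has the same shape and label as its source, so the augmented set can be fed to the CNN directly.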
The CNN framework uses a discriminant learning strategy to extract the feature distribution of same-class samples and thereby improve classification accuracy. An $L_{cen}$ constraint gradually reduces the distance between feature vectors of same-class samples, so that each class finally clusters near its corresponding center vector. The specific processing steps are as follows:
1) Initialize the center vectors, taking the mean of the initial feature vectors extracted from each class of samples in the acquired data as the initial value of the corresponding center vector:
$$c_j^0 = \frac{\sum_{i=1}^{N} \delta_i(y_i = j)\, f_i^0}{\sum_{i=1}^{N} \delta_i(y_i = j)}$$
where $c_j^0$ is the initial center vector for samples labeled $j$; $N$ is the total number of samples in the training set; $y_i$ is the class of the $i$-th sample in the training set; $f_i^0$ is the initial feature vector of the $i$-th training sample; and $\delta_i$ is the class selection factor.
2) Compute the selection factor $\delta_i$:
$$\delta_i(y_i = j) = \begin{cases} 1, & y_i = j \\ 0, & y_i \neq j \end{cases}$$
where $y_i$ is the class of the $i$-th sample in the training set and $j$ is a class label.
3) $L_{cen}$ is the average distance between the feature vectors of a class and their corresponding center vector, and serves as the basis for discriminating features:
$$L_{cen} = \frac{1}{N} \sum_{i=1}^{N} \left\| f_i^t - c_{y_i}^t \right\|_2^2$$
where $f_i^t$ is the feature vector of the $i$-th sample in the $t$-th iteration; $y_i$ is its class; $c_{y_i}^t$ is the corresponding center vector in the $t$-th iteration; and $N$ is the number of samples.
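The center initialization and the $L_{cen}$ constraint can be sketched numerically (a standard center-loss sketch; the exact normalization in the patent's formula is assumed):

```python
import numpy as np

def init_centers(features, labels, num_classes):
    """Step 1: initialize each class center as the mean of that class's
    initial feature vectors (the selection factor delta_i picks out the
    samples with y_i == j)."""
    return np.stack([features[labels == j].mean(axis=0)
                     for j in range(num_classes)])

def center_loss(features, labels, centers):
    """Step 3: L_cen, the mean squared distance between each sample's
    feature vector f_i and the center vector c_{y_i} of its class."""
    diffs = features - centers[labels]        # f_i - c_{y_i}
    return np.mean(np.sum(diffs ** 2, axis=1))
```

During training, $L_{cen}$ would be added to the classification loss so that gradient updates pull same-class feature vectors toward their (periodically re-estimated) centers.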
Compared with the prior art, the invention has the following beneficial effects:
The invention discloses a near-infrared forehead brain imaging fatigue detection device based on deep learning. It achieves accurate acquisition, effective identification, and correct classification of prefrontal blood oxygen signals: real-time prefrontal data from the driver are acquired and uploaded to a computing center, where a deep-learning classification algorithm identifies the driver's current mental state and gives reasonable guidance and suggestions on driving behavior.
Drawings
FIG. 1 is a block diagram showing the configuration of a brain fatigue monitoring device according to the present invention;
FIG. 2 is a topological diagram of a dual-wavelength LED light source and a photoelectric sensor in the brain fatigue monitoring device of the invention;
FIG. 3 is a PWM wave for driving a dual wavelength LED light source in accordance with the present invention;
FIG. 4 is a flow chart of the present invention for pre-processing data obtained by the detector;
FIG. 5 is a structure diagram of a CNN framework based on discriminant feature learning strategy and data enhancement method in the present invention;
fig. 6 is an experimental training paradigm in the present invention.
Detailed Description
The invention will be further described with reference to the following figures and specific examples, which are not intended to limit the invention in any way.
The design idea of the real-time brain fatigue monitoring device and method based on deep learning is as follows: a highly integrated, portable forehead wearable device is secured around the head with an elastic soft band; blood oxygen signals of the forehead are collected in real time and a real-time topographic map of brain functional activity is displayed; after data preprocessing, the collected forehead blood oxygen signals are analyzed and classified with a deep-learning convolutional neural network (CNN) algorithm, which performs a four-class brain fatigue recognition task on the preprocessed data with the classification labels mild fatigue, severe fatigue, relaxation, and concentration, finally obtaining the brain fatigue state of the tested subject.
As shown in fig. 1, the device for monitoring brain fatigue in real time based on deep learning provided by the invention comprises a data acquisition module and a control module which are respectively connected with a power module, wherein the control module is connected with an upper computer through a WiFi wireless data transmission module. The function of each module is explained in detail below.
I. Data acquisition module. The data acquisition module comprises an 8-channel detector and a biological signal acquisition chip. The 8-channel detector is composed of a dual-wavelength LED light source circuit and 6 photoelectric sensors and collects blood oxygen signals from the prefrontal lobes of the measured subject; the dual-wavelength LED light source circuit comprises 2 dual-wavelength LED light sources and a triode amplifying circuit, the dual-wavelength LED light sources having a common-anode structure with complementary light emission. The biological signal acquisition chip amplifies and analog-to-digital converts the 8 channels of detector data and sends them to the control module. As shown in fig. 2, the wavelengths of the dual-wavelength LED light sources are 760 nm and 850 nm respectively; the six photoelectric sensors are arranged in a two-row, three-column matrix forming two squares, with the two dual-wavelength LED light sources located at the center points of the two squares; each dual-wavelength LED light source is 30 mm from its 4 adjacent photoelectric sensors, forming 8 transmit-receive channels in sequence along the row direction. The biological signal acquisition chip integrates a high common-mode-rejection-ratio analog input module for receiving the photoelectric sensor voltage signals and the complementary PWM signals, a low-noise programmable gain amplifier (PGA) for amplifying the voltage signals, and a high-resolution synchronous-sampling analog-to-digital converter (ADC) for converting the analog signals into digital signals.
II. The control module. The control module sends complementary PWM waves to the dual-wavelength LED light source circuit to drive the dual-wavelength LED light sources, and controls the WiFi wireless data transmission module to exchange data with the upper computer. As shown in fig. 3, the control module adopts an STM32 microprocessor. The STM32 microprocessor generates complementary PWM square waves with a 50% duty cycle, which drive the two transistors of the triode amplifying circuit in the dual-wavelength LED light source circuit to work in the amplification region and the cut-off region respectively; the outputs of the two transistors, together with a resistive voltage drop, produce two different voltages that control whether the 760 nm and 850 nm LEDs in the dual-wavelength light source emit light. In the invention, the triode amplifying circuit is designed with NPN-type 2N3904 transistors, whose base-emitter turn-on voltage is 0.8 V. The two transistors are denoted transistor 1 and transistor 2: transistor 1 is connected to the 850 nm LED of the dual-wavelength LED light source, and transistor 2 to the 760 nm LED. The STM32 microprocessor generates PWM square wave 1 and PWM square wave 2, which are complementary square waves with a high level of 3.3 V and a low level of 0 V. When a PWM output is at the low level, the base voltage is 0 V, below the 0.8 V turn-on voltage, so the corresponding transistor works in the cut-off region, both terminals of the connected LED sit at 3.3 V, and the LED does not emit light.
When PWM square wave 1 is at the 3.3 V high level, PWM square wave 2 is at the 0 V low level; transistor 1 works in the amplification region and outputs 1.45 V, while transistor 2 works in the cut-off region and outputs 3.3 V, so the 850 nm LED emits infrared light and the 760 nm LED does not emit. When PWM square wave 1 is at the 0 V low level, PWM square wave 2 is at the 3.3 V high level; transistor 1 works in the cut-off region and outputs 3.3 V, while transistor 2 works in the amplification region and outputs 1.75 V, so the 850 nm LED does not emit and the 760 nm LED emits red light.
As shown in fig. 1 and fig. 3, the complementary PWM waves not only drive the dual-wavelength LED light sources but also serve to separate the two optical signals from the mixed signal: the 760 nm and 850 nm light is emitted alternately while the photoelectric sensors remain continuously photosensitive, so each sensor outputs a mixed signal. The mixed signal is input synchronously into the biological signal acquisition chip, yielding a two-dimensional vector table of the mixed signal in which rows represent data collected from different channels and columns represent data at different sampling time points. The 700-900 nm near-infrared band is known in biomedical photonics as the "spectral window": in the red spectral region (700-805 nm) the absorption coefficient of deoxyhemoglobin exceeds that of oxyhemoglobin, while in the infrared region (805-900 nm) the absorption coefficient of oxyhemoglobin exceeds that of deoxyhemoglobin. The mixed light-intensity change signal output by the photoelectric sensor is therefore separated into a 760 nm light-intensity change signal and an 850 nm light-intensity change signal, using the PWM amplitude as the distinguishing basis:
$$\Delta U=\begin{cases}\Delta OD_{850\,\mathrm{nm}}, & U_{\mathrm{PWM}}=3.3\ \mathrm{V}\\ \Delta OD_{760\,\mathrm{nm}}, & U_{\mathrm{PWM}}=0\ \mathrm{V}\end{cases}\qquad(1)$$

In formula (1), $\Delta U$ is the filtered original data; $\Delta OD_{760\,\mathrm{nm}}$ is the voltage value corresponding to the 760 nm light-intensity change; $\Delta OD_{850\,\mathrm{nm}}$ is the voltage value corresponding to the 850 nm light-intensity change; and $U_{\mathrm{PWM}}$ is the PWM square-wave voltage value at each sampling point.
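The PWM-keyed separation described above can be sketched in Python as follows (a minimal illustration, assuming the mixed photosensor signal and the sampled PWM square wave 1 voltage arrive as equal-length sequences; the function name is hypothetical):

```python
def demux_by_pwm(mixed, u_pwm, high_level=3.3):
    """Split the mixed photosensor signal into 850 nm and 760 nm
    components, using the sampled PWM square-wave voltage as the key:
    samples taken while PWM 1 is high belong to the 850 nm LED,
    samples taken while it is low belong to the 760 nm LED."""
    od_850, od_760 = [], []
    for u, pwm in zip(mixed, u_pwm):
        if pwm >= high_level / 2:    # PWM 1 high -> 850 nm LED lit
            od_850.append(u)
        else:                        # PWM 1 low -> 760 nm LED lit
            od_760.append(u)
    return od_850, od_760

# Example: alternating samples from the two LEDs
mixed = [0.51, 0.32, 0.52, 0.33]
u_pwm = [3.3, 0.0, 3.3, 0.0]
od_850, od_760 = demux_by_pwm(mixed, u_pwm)
```

Because the PWM wave and the sensor output are sampled synchronously by the same ADC, the level of the PWM sample is an unambiguous tag for which LED was lit.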
The data acquisition module and the control module thus work in tandem: they not only complete data acquisition but also communicate interactively with the upper computer. The upper computer sends instructions to the control module through the WiFi wireless data transmission module, so the data sampling rate, the LED flashing frequency, and the start and stop of data acquisition can all be adjusted in real time.
III. The WiFi wireless data transmission module. The invention adopts WiFi communication, whose maximum transmission rate can reach 300 Mbps and which allows several devices on the same local area network to work synchronously. The WiFi wireless data transmission module adopts an ESP8266 module: the serial-port output of the STM32 microprocessor is continuously forwarded to the upper computer in transparent-transmission mode, and the ESP8266 module supports the hybrid working mode (Station + AP) with a peak data transmission rate of 4.068 Mbps. The system is portable and scene-universal, supports mobile measurement, real-time data acquisition, and real-time monitoring, and is well suited to monitoring a driver's fatigue state while driving. The steps for establishing the communication link are as follows:
(1) initialize the serial port, setting the interrupt priority and baud rate;
(2) set the working mode; wireless transmission has three working modes: server mode (AP), client mode (Station), and hybrid mode (Station + AP). In server mode the device acts as a hotspot to which other equipment connects; in client mode the device connects to a hotspot in the current environment; in hybrid mode the device has two IP addresses, one AP address and one STA address. In the software program, "1" denotes server mode, "2" client mode, and "3" hybrid mode;
(3) restart the connection; the working mode only takes effect after a restart, and only the ESP8266 module needs restarting, not the whole device;
(4) connect to a WiFi hotspot in the current environment;
(5) establish the TCP connection, for which the address and port number of the upper computer to be connected (addr and port) must be known;
(6) open the transparent-transmission mode; the data acquisition chip cannot establish communication with the upper computer directly, so the data are relayed to the ESP8266 module through the serial port of the STM32. Once transparent transmission is enabled, the whole data transmission process is completely transparent;
(7) send the data.
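The seven steps above can be expressed as the ordered sequence of ESP8266 AT commands a host program would issue (a sketch following the standard Espressif AT firmware; the SSID, password, address, and port below are placeholder values, and step (1), serial-port initialization, happens on the MCU side):

```python
def esp8266_link_commands(mode=3, ssid="MY_AP", pwd="secret",
                          addr="192.168.1.10", port=8080):
    """Return the ordered AT-command strings for steps (2)-(7) of the
    WiFi link setup. SSID/password/address/port are placeholders."""
    return [
        f"AT+CWMODE={mode}",                  # (2) 1=AP, 2=Station, 3=hybrid
        "AT+RST",                             # (3) restart so the mode takes effect
        f'AT+CWJAP="{ssid}","{pwd}"',         # (4) join the WiFi hotspot
        f'AT+CIPSTART="TCP","{addr}",{port}', # (5) open the TCP connection
        "AT+CIPMODE=1",                       # (6) enable transparent transmission
        "AT+CIPSEND",                         # (7) start streaming data
    ]

for cmd in esp8266_link_commands():
    print(cmd)
```

In practice each command is written to the ESP8266 over the serial port and the firmware's "OK" response is awaited before sending the next.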
IV. The upper computer. The upper computer stores the received data as files; after data preprocessing, it performs data analysis on the collected blood oxygen concentration change signals of the forehead using a convolutional neural network (CNN) framework based on a discriminant feature learning strategy and data enhancement, obtaining the brain fatigue state of the tested subject.
V. The power supply module. The power module is powered by a 3.7 V lithium battery and, through voltage conversion, provides the operating voltages of the different chips in the system: the control module (STM32 microprocessor), the dual-wavelength LED light sources and photoelectric sensors in the data acquisition module, and the ESP8266 module that realizes WiFi communication.
The data processing method of the real-time brain fatigue monitoring device based on deep learning mainly comprises data preprocessing and analysis and classification of the preprocessed data.
Firstly, data preprocessing: preprocessing comprises motion-artifact removal, moving-average and Butterworth filtering, and the modified Lambert-Beer law, and yields the oxygenated and deoxygenated hemoglobin concentration change data. As shown in fig. 4, because the blood oxygen signal changes slowly, the raw fNIRS data are filtered with a 6th-order zero-phase low-pass Butterworth filter with a 0.2 Hz cutoff frequency, which extracts the effective information and removes the 50 Hz power-line interference. The voltage signals corresponding to the 760 nm and 850 nm light-intensity changes are then separated according to the PWM amplitude and converted into oxyhemoglobin and deoxyhemoglobin concentration changes using the Lambert-Beer law. The specific steps are as follows:
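The zero-phase low-pass step can be sketched with SciPy (a minimal illustration, not the device's exact implementation; the 10 Hz sampling rate is an assumed example value, and `sosfiltfilt` runs the filter forward and backward so the phase distortion cancels):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_fnirs(raw, fs=10.0, cutoff=0.2, order=6):
    """Zero-phase 6th-order low-pass Butterworth filtering of raw fNIRS
    data; second-order sections keep the very low cutoff numerically
    stable. fs is an assumed sampling rate."""
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, raw)

# Example: slow hemodynamic component plus higher-frequency interference
fs = 10.0
t = np.arange(0, 60, 1 / fs)
slow = 0.5 * np.sin(2 * np.pi * 0.05 * t)   # slow blood-oxygen change
noise = 0.2 * np.sin(2 * np.pi * 1.0 * t)   # interference above the cutoff
filtered = lowpass_fnirs(slow + noise, fs=fs)
```

The 0.05 Hz component lies in the passband and survives almost unchanged, while anything well above 0.2 Hz (including any 50 Hz residue after sampling) is strongly attenuated.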
(1) Calculate the optical parameter matrix:

$$A=d\begin{pmatrix}\varepsilon_{HbO_2}(\lambda_1)\,\overline{DPF}(\lambda_1) & \varepsilon_{Hb}(\lambda_1)\,\overline{DPF}(\lambda_1)\\ \varepsilon_{HbO_2}(\lambda_2)\,\overline{DPF}(\lambda_2) & \varepsilon_{Hb}(\lambda_2)\,\overline{DPF}(\lambda_2)\end{pmatrix}\qquad(2)$$

In formula (2), $A$ is the optical parameter matrix; $d$ is the distance between the light source and the sensor; $\overline{DPF}(\lambda)$ is the average optical path difference factor at wavelength $\lambda$; $\varepsilon$ is the extinction coefficient of hemoglobin.

$$\varepsilon=\begin{pmatrix}\varepsilon_{HbO_2}(\lambda_1) & \varepsilon_{Hb}(\lambda_1)\\ \varepsilon_{HbO_2}(\lambda_2) & \varepsilon_{Hb}(\lambda_2)\end{pmatrix}\qquad(4)$$

In formula (4), $\varepsilon_{HbO_2}(\lambda_1)$ is the extinction coefficient of oxyhemoglobin for light of wavelength $\lambda_1$; $\varepsilon_{HbO_2}(\lambda_2)$ that of oxyhemoglobin at wavelength $\lambda_2$; $\varepsilon_{Hb}(\lambda_1)$ that of deoxyhemoglobin at wavelength $\lambda_1$; $\varepsilon_{Hb}(\lambda_2)$ that of deoxyhemoglobin at wavelength $\lambda_2$.

$$\overline{DPF}=\left(\overline{DPF}(\lambda_1),\ \overline{DPF}(\lambda_2)\right)\qquad(5)$$

In formula (5), $\overline{DPF}(\lambda_1)$ is the average optical path difference factor at wavelength $\lambda_1$ and $\overline{DPF}(\lambda_2)$ that at wavelength $\lambda_2$.
(3) The relationship between the input/output light-intensity change and the hemoglobin concentration change is

$$\Delta C = A^{-1}\,\Delta OD\qquad(6)$$

In formula (6), $\Delta C=(\Delta C_{HbO_2},\ \Delta C_{Hb})^{T}$ is the hemoglobin concentration change matrix; $A^{-1}$ is the inverse of the optical parameter matrix; $\Delta OD=(\Delta OD_{\lambda_1},\ \Delta OD_{\lambda_2})^{T}$ is the light-intensity change matrix, whose values come from the output of the photoelectric sensor.
(4) The device adopts dual-wavelength LEDs with wavelengths of 760 nm and 850 nm respectively; the distance between the light source and the photoelectric sensor is $d = 30$ mm, so the oxyhemoglobin concentration change $\Delta C_{HbO_2}$ and the deoxyhemoglobin concentration change $\Delta C_{Hb}$ are obtained directly from the photoelectric sensor output voltage values.
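The modified Lambert-Beer conversion can be sketched in Python as a two-wavelength linear solve (a minimal illustration; the extinction coefficients and path-length factors below are placeholder values, not the calibrated constants of the device):

```python
def mbll_concentration_change(d_od, d, eps, dpf):
    """Solve dC = A^-1 * dOD for a two-wavelength system.
    d_od : (dOD_l1, dOD_l2) measured optical-density changes
    d    : source-detector distance
    eps  : 2x2 extinction coefficients [[e_HbO2(l1), e_Hb(l1)],
                                        [e_HbO2(l2), e_Hb(l2)]]
    dpf  : (DPF(l1), DPF(l2)) mean optical-path factors"""
    # Optical parameter matrix A: each row scaled by d * DPF(lambda)
    a11 = d * dpf[0] * eps[0][0]; a12 = d * dpf[0] * eps[0][1]
    a21 = d * dpf[1] * eps[1][0]; a22 = d * dpf[1] * eps[1][1]
    det = a11 * a22 - a12 * a21          # 2x2 inverse written out by hand
    d_hbo2 = ( a22 * d_od[0] - a12 * d_od[1]) / det
    d_hb   = (-a21 * d_od[0] + a11 * d_od[1]) / det
    return d_hbo2, d_hb

# Placeholder coefficients for lambda1 = 760 nm, lambda2 = 850 nm, d = 3 cm
eps = [[1.4, 3.8],   # hypothetical epsilon at 760 nm (HbO2, Hb)
       [2.5, 1.8]]   # hypothetical epsilon at 850 nm (HbO2, Hb)
dC = mbll_concentration_change((0.01, 0.02), 3.0, eps, (6.0, 5.5))
```

At 760 nm deoxyhemoglobin absorbs more strongly and at 850 nm oxyhemoglobin does, which is what makes the 2×2 system well-conditioned.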
In the invention, software filtering replaces hardware filtering: a 6th-order zero-phase low-pass Butterworth filter with a 0.2 Hz cutoff frequency is designed to filter the fNIRS data set. This not only removes the 50 Hz power-line interference and high-frequency noise in the collected data, but also eliminates the need for an additional filter circuit, reducing the circuit-board size and making the data acquisition device more portable.
Secondly, data analysis and classification: the preprocessed data are analyzed with the CNN framework based on the discriminant feature learning strategy and data enhancement, and the oxygenated and deoxygenated hemoglobin concentration change data are classified into four classes, the classification labels being the four mental states "light fatigue, severe fatigue, relaxation, and concentration".
The CNN framework based on the discriminant feature learning strategy and data enhancement mainly comprises a convolution module and a fully connected module. The convolution module consists of a temporal convolution module, a spatial convolution module, and a general convolution module: the temporal convolution module, with a convolution kernel of size 1 × m, extracts temporal features, and the spatial convolution module, with a convolution kernel of size n × 1, extracts spatial features. After these two modules, the general convolution module is introduced for feature integration and higher-level feature extraction. Finally, the fully connected module classifies the samples based on the features extracted by the convolutional part.
As shown in fig. 5, the detailed structure of the CNN based on discriminant feature learning strategy and data enhancement is as follows:
(1) The temporal convolution module comprises a convolution layer with kernel size 1 × 8 and stride (1, 1); its main function is to extract temporal features from the input near-infrared data samples.
(2) The spatial convolution module comprises a spatial convolution layer with kernel size C × 1 and stride (1, 1), where C is the number of input near-infrared data channels (8 here; if the device is expanded to more channels, C simply takes the corresponding channel count); it extracts the spatial features across channels.
(3) The general convolution module comprises two convolution layers and two maximum pooling layers and is used for improving the learning capacity of the framework and integrating the extracted time and space characteristics. Wherein:
3-1) the convolution kernel size of the first convolution layer is 1 x 17, the step length is (1, 1), and the convolution kernel size is used for extracting large-scale features;
3-2) the pooling kernel size of the first pooling layer is 1 × 6, step size is (1, 6);
3-3) the convolution kernel size of the second convolution layer is 1 x 7, the step length is (1, 1), and the convolution kernel size is used for extracting small-scale features;
3-4) the pooling kernel size of the second pooling layer is 1 × 6, step size is (1, 6);
(4) the full-connection module comprises a full-connection layer, selects ReLU as an activation function, and classifies the input samples according to the features extracted by the convolution part;
(5) and outputting a classification result, and performing four-classification on the oxygenated hemoglobin concentration variation data and the deoxygenated hemoglobin concentration variation data by using a convolutional neural network, wherein the classification labels are four mental states of 'light fatigue, severe fatigue, relaxation and concentration'.
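The layer stack above can be sanity-checked by propagating the sample shape through each stage (pure shape arithmetic under "valid" convolutions; the input length T = 256 is an assumed example value, and the patent does not state per-layer filter counts, so only the spatial (channels, time) dimensions are tracked):

```python
def conv_out(size, kernel, stride):
    """Output length of a 'valid' convolution or pooling along one axis."""
    return (size - kernel) // stride + 1

def cnn_shape_trace(C=8, T=256):
    """Trace the (channels, time) shape through the described layer stack."""
    shapes = {"input": (C, T)}
    T = conv_out(T, 8, 1);  shapes["temporal conv 1x8"] = (C, T)
    C = conv_out(C, C, 1);  shapes["spatial conv Cx1"]  = (C, T)  # C -> 1
    T = conv_out(T, 17, 1); shapes["conv 1x17"]         = (C, T)
    T = conv_out(T, 6, 6);  shapes["maxpool 1x6"]       = (C, T)
    T = conv_out(T, 7, 1);  shapes["conv 1x7"]          = (C, T)
    T = conv_out(T, 6, 6);  shapes["maxpool 1x6 (2)"]   = (C, T)
    shapes["flatten -> FC"] = C * T
    return shapes

for name, shape in cnn_shape_trace().items():
    print(f"{name}: {shape}")
```

With these assumptions a 8 × 256 sample shrinks to 8 × 249 after the temporal convolution, collapses to one spatial row after the C × 1 convolution, and reaches a 5-element vector per feature map at the fully connected layer.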
In the invention, the CNN framework enhances the oxygenated and deoxygenated hemoglobin concentration change data with a cyclic-translation-based method. The data obtained after preprocessing form a C × T matrix, where rows represent data collected from different channels and columns represent data at different sampling time points. The data enhancement steps using the cyclic translation method are as follows:
(1) in the time dimension, the initial sample spans 0~T; the samples are cyclically shifted along the time dimension with step length D, while the arrangement relation among the channels is kept unchanged;
(2) the first cyclic translation sample is spliced from D~T and 0~D, the second from 2D~T and 0~2D, … , and the K-th from KD~T and 0~KD;
(3) the new samples obtained by cyclic translation have the same size as the original samples, and new samples generated from the same sample differ only by a stagger along the time dimension, so the obtained samples retain the temporal and spatial characteristics of the original samples while differing somewhat from them.
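The steps above can be sketched in pure Python (a minimal illustration; every channel row is shifted identically so the channel arrangement is preserved):

```python
def cyclic_translate(sample, k, step):
    """K-th cyclic-translation sample: each channel row becomes the
    concatenation of x[k*step : T] and x[0 : k*step]; the channel
    order is unchanged, so spatial structure is preserved."""
    shift = k * step
    return [row[shift:] + row[:shift] for row in sample]

# Example: C = 2 channels, T = 6 time points, step D = 2
sample = [[0, 1, 2, 3, 4, 5],
          [10, 11, 12, 13, 14, 15]]
aug1 = cyclic_translate(sample, 1, 2)   # first translated sample
```

Each translated copy is the same size as the original, so it can be fed to the network with the original label, multiplying the training set without new recordings.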
In the invention, the CNN framework uses a discriminant learning strategy to extract the feature distribution from samples of the same class and thereby improve classification accuracy. An $L_{cen}$ constraint gradually reduces the distance between the feature vectors of same-class samples, so that each class of samples finally clusters near its corresponding center vector. The specific processing steps are as follows:
(1) Initialize the center vectors: the average of the initial feature vectors extracted from each class of samples in the collected data is taken as the initial value of the corresponding center vector,

$$c_j^0=\frac{\sum_{i=1}^{N}\delta_i f_i^0}{\sum_{i=1}^{N}\delta_i}\qquad(8)$$

In formula (8), $c_j^0$ is the initial center vector corresponding to the samples labeled $j$; $N$ represents the total number of samples in the training set; $f_i^0$ is the initial feature vector of the $i$-th sample in the training set; $\delta_i$ is the class selection factor.
(2) Calculate the selection factor $\delta_i$:

$$\delta_i=\begin{cases}1, & y_i=j\\ 0, & y_i\neq j\end{cases}\qquad(9)$$

In formula (9), $y_i$ represents the class of the $i$-th sample in the training set, and $j$ represents the sample label.
(3) $L_{cen}$ is the average distance between the sample feature vectors and their corresponding center vectors, and serves as the discrimination basis:

$$L_{cen}=\frac{1}{n}\sum_{i=1}^{n}\left\|f_i^t-c_{y_i}^t\right\|_2^2\qquad(10)$$

In formula (10), $f_i^t$ is the feature vector of the $i$-th sample at the $t$-th iteration; $c_{y_i}^t$ is the center vector of that sample's class at the $t$-th iteration; $n$ denotes the number of samples.
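The center initialization and average-distance constraint can be sketched in pure Python over toy 2-D features (a minimal illustration; a real implementation would re-estimate the centers every training iteration):

```python
def init_centers(features, labels, num_classes):
    """Per-class mean of the initial feature vectors: the selection
    factor simply picks out the samples belonging to each class."""
    dim = len(features[0])
    centers = []
    for j in range(num_classes):
        members = [f for f, y in zip(features, labels) if y == j]
        centers.append([sum(f[d] for f in members) / len(members)
                        for d in range(dim)])
    return centers

def center_loss(features, labels, centers):
    """Mean squared distance between each feature and its class center."""
    total = 0.0
    for f, y in zip(features, labels):
        total += sum((a - b) ** 2 for a, b in zip(f, centers[y]))
    return total / len(features)

# Toy example: two classes in a 2-D feature space
feats = [[0.0, 0.0], [2.0, 0.0], [10.0, 10.0], [12.0, 10.0]]
labels = [0, 0, 1, 1]
centers = init_centers(feats, labels, 2)
loss = center_loss(feats, labels, centers)
```

Minimizing this quantity alongside the classification loss pulls same-class features toward their center, which is the clustering behavior the strategy relies on.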
Embodiment:
In the invention, to reduce the number of sensors and keep the design of the whole device concise and attractive, each photoelectric sensor senses two signal paths and outputs a mixture of the data corresponding to red light and the data corresponding to infrared light. Because the Lambert-Beer calculation of blood oxygen concentration requires the red-light data and infrared-light data separately, the complementary PWM wave signal and the photoelectric sensor output signal are input synchronously into the biological signal acquisition chip to complete data acquisition. During data preprocessing, the red-light data and infrared-light data output by the photoelectric sensor are separated from the mixed signal according to the high and low levels of the complementary PWM wave.
As shown in fig. 2, the deep-learning-based real-time brain fatigue monitoring device of the invention uses flexible-circuit-board technology to integrate the photoelectric sensors and dual-wavelength LED light sources on one flexible circuit board with 8 data acquisition channels in total. The board carries 6 photoelectric sensors and 2 dual-wavelength LED light sources arranged as in fig. 2, and measures 10 cm × 5 cm overall. The dual-wavelength light sources are medical-analysis LEDs with wavelengths of 760 nm and 850 nm. In fig. 2 a circle represents a light source and a square a photoelectric sensor; adjacent photoelectric sensors are equidistant, the left 4 and right 4 photoelectric sensors form two squares, and the two light sources sit at the centers of these squares, 30 mm from each adjacent sensor, the source-sensor edges forming 8 transmit-receive channels in sequence. The light sources and photoelectric sensors fixed on the flexible circuit board are embedded in high-density medical foam, which first keeps the sharp corners of the photoelectric sensors from contacting and scratching the subject's scalp, and also blocks ambient light that might otherwise saturate the photoelectric sensor output. In use, the flexible backing-board patch is fixed to the head with a length-adjustable headband, allowing real-time data acquisition during outdoor movement.
The flexible backing board with 8 transmit-receive channels is attached to the forehead; light of 760 nm and 850 nm wavelengths is emitted into the forehead brain tissue at the same position and, after absorption and scattering by the prefrontal cortex, is captured by the photoelectric sensors, so blood oxygen signals across the whole forehead can be acquired.
As shown in fig. 6, because of individual differences, each subject should run a training experiment before using the device. The experimental paradigm is divided into four parts, "relaxation, concentration, light fatigue, and heavy fatigue", each lasting 120 s. The subject judges his or her current mental state, starts the experiment when the brain is in a relaxed state, continues for two minutes, and then performs the concentration, light-fatigue, and heavy-fatigue experiments in turn. The acquired data serve as the training set for the CNN based on the discriminant feature learning strategy and data enhancement; the trained CNN then performs normal real-time state detection.
While the present invention has been described with reference to the accompanying drawings, it is not limited to the above embodiments, which are illustrative rather than restrictive; those skilled in the art may make various modifications without departing from the spirit of the invention, and such modifications fall within the protection of the claims.
Claims (10)
1. A real-time brain fatigue monitoring device based on deep learning comprises a data acquisition module and a control module which are respectively connected with a power module, wherein the control module is connected with an upper computer through a WiFi wireless data transmission module; the method is characterized in that:
the data acquisition module comprises an 8-channel detector and a biological signal acquisition chip; the 8-channel detector is composed of a dual-wavelength LED light source circuit and 6 photoelectric sensors and is used for collecting blood oxygen signals of the frontal lobe of the brain of a measured object; the dual-wavelength LED light source circuit comprises 2 dual-wavelength LED light sources and a triode amplifying circuit, the dual-wavelength LED light sources being of a common-anode structure and emitting light complementarily; the biological signal acquisition chip is used for amplifying and carrying out analog-to-digital conversion on the 8 channels of data of the 8-channel detector and then sending the data to the control module;
the control module sends complementary PWM waves to the dual-wavelength LED light source circuit to drive the dual-wavelength LED light source to emit light, and controls the WiFi wireless data transmission module to perform data transmission with an upper computer;
and the upper computer stores the received data in file form; after data preprocessing, the collected blood oxygen signals of the forehead are subjected to data analysis using a convolutional neural network (CNN) framework based on a discriminant feature learning strategy and data enhancement to obtain the brain fatigue state of the tested object.
2. The deep learning based real-time brain fatigue monitoring device of claim 1, wherein the dual wavelength LED light source has wavelengths of 760nm and 850nm, respectively; the photoelectric sensors are arranged in a square matrix of two rows and three columns to form two squares, and the two double-wavelength LED light sources are respectively positioned at the central points of the two squares; the distance between each double-wavelength LED light source and the adjacent 4 photoelectric sensors is 30mm, and 8 transmitting-receiving channels are formed in sequence along the row direction.
3. The deep learning-based real-time brain fatigue monitoring device according to claim 1, wherein the biological signal acquisition chip is a biological signal acquisition chip integrating a high common mode rejection ratio analog input module for receiving a photosensor voltage signal and for receiving a complementary PWM signal, a low noise Programmable Gain Amplifier (PGA) for voltage signal amplification, and a high resolution synchronous sampling analog-to-digital converter (ADC) for converting an analog signal into a digital signal.
4. The deep learning-based real-time brain fatigue monitoring device according to claim 1, wherein the control module employs an STM32 microprocessor, the STM32 microprocessor generates complementary PWM square waves with a duty cycle of 50% for controlling two triodes in the dual wavelength LED light source circuit to respectively operate in an amplification region and a cut-off region, and outputs of the two triodes cooperate with a resistance drop to generate two different voltages for controlling whether LEDs with wavelengths of 760nm and 850nm in the dual wavelength LED light source emit light or not.
5. The device for monitoring brain fatigue in real time based on deep learning as claimed in claim 4, wherein the two triodes are respectively marked as triode 1 and triode 2, the triode 1 being connected with the 850nm wavelength LED of the dual-wavelength LED light source and the triode 2 with the 760nm wavelength LED; the STM32 microprocessor generates PWM square wave 1 and PWM square wave 2;
when the PWM square wave 1 is at a high level of 3.3V, the PWM square wave 2 is at a low level of 0V, and the triode 1 works in an amplification area and outputs a voltage of 1.45V; the triode 2 works in a cut-off region and outputs 3.3V voltage; at the moment, the LED with the wavelength of 850nm emits infrared light, and the LED with the wavelength of 760nm does not emit light;
when the PWM square wave 1 is at a low level of 0V, the PWM square wave 2 is at a high level of 3.3V, and the triode 1 works in a cut-off region and outputs 3.3V voltage; the triode 2 works in an amplification area and outputs 1.75V voltage; in this case, the 850nm wavelength LED did not emit light, and the 760nm wavelength LED emitted red light.
6. The real-time brain fatigue monitoring device based on deep learning of claim 4, characterized in that the WiFi wireless data transmission module adopts an ESP8266 module, the serial port output of the STM32 microprocessor is continuously sent to the upper computer through a transparent transmission mode, the ESP8266 module has a working mode of a hybrid mode (Station + AP) and a highest data transmission rate of 4.068 Mbps.
7. The data processing method of the real-time brain fatigue monitoring device based on deep learning of claim 1, wherein the data preprocessing comprises removing motion artifacts, moving average and Butterworth band-pass filtering processing and modified Lambert-beer law, and the oxygenated hemoglobin concentration variation data and the deoxygenated hemoglobin concentration variation data are obtained through the data preprocessing; performing data analysis and data classification on the preprocessed data by using a CNN frame based on a discriminant feature learning strategy and data enhancement, wherein the preprocessed data is subjected to a four-classification brain fatigue recognition task, and classification labels are respectively as follows: mild fatigue, severe fatigue, relaxation and concentration, and finally obtaining the brain fatigue state of the tested object.
8. The data processing method of claim 7, wherein the CNN framework comprises a convolution module and a full-connection module;
the convolution module consists of a time convolution module, a space convolution module and a general convolution module;
the time convolution module comprises a convolution layer with kernel size 1 × 8 and step length (1, 1), used for extracting time characteristics from the input near-infrared data samples;
the spatial convolution module comprises a spatial convolution layer, the size of the spatial convolution layer kernel is 8 multiplied by 1, the step length is (1, 1), and the spatial convolution layer kernel is used for extracting spatial features of different channels;
the general convolution module comprises two convolution layers and two maximum pooling layers and is used for improving the learning capacity of the framework and integrating the extracted time and space characteristics; wherein: the convolution kernel size of the first convolution layer is 1 multiplied by 17, the step length is (1, 1), and the method is used for extracting large-scale features; the pooling kernel size of the first pooling layer is 1 × 6, step size is (1, 6); the convolution kernel size of the second convolution layer is 1 multiplied by 7, the step length is (1, 1), and the convolution kernel size is used for extracting small-scale features; the pooling kernel size of the second pooling layer is 1 × 6, the step size is (1, 6);
the full-connection module comprises a full-connection layer, the ReLU is selected as an activation function, and the input samples are classified according to the features extracted by the convolution module.
9. The data processing method of claim 8, wherein the oxygenated hemoglobin concentration variation data and the deoxygenated hemoglobin concentration variation data are obtained using the Lambert-Beer law, and the oxygenated hemoglobin concentration variation data and the deoxygenated hemoglobin concentration variation data are subjected to data enhancement by the CNN framework using a cyclic-shift-based method;
the oxyhemoglobin concentration variation data and the deoxyhemoglobin concentration variation data form a C × T matrix, in which rows represent data collected from different channels and columns represent data at different sampling time points;
the data enhancement step using the circular translation method is as follows:
1) in the time dimension, the initial sample is 0-T, the samples are circularly moved in the time dimension by taking D as a step length, and meanwhile, the arrangement relation among the channels is kept unchanged;
2) splicing a first circular translation sample from D~T and 0~D, a second circular translation sample from 2D~T and 0~2D, … , and a K-th circular translation sample from KD~T and 0~KD;
3) the new samples obtained by cyclic translation have the same size as the original samples, and the new samples generated by the same sample have only some staggered positions in the time dimension, so that the obtained samples retain the time and space characteristics of the original samples and have certain difference with the corresponding original samples.
10. The data processing method of claim 8, wherein the CNN framework uses a discriminant learning strategy to extract the feature distribution from samples of the same class so as to improve classification accuracy; an $L_{cen}$ constraint gradually reduces the distance between the feature vectors of same-class samples, so that each class of samples finally clusters near its corresponding center vector; the specific processing steps are as follows:
1) initializing the center vectors: the initial value of the center vector of class j is the average of the initial feature vectors extracted from the samples of that class in the acquired data,
c_j^0 = (sum_{i=1}^{N} delta_i * f_i^0) / (sum_{i=1}^{N} delta_i)
wherein c_j^0 is the initial center vector corresponding to the samples labeled j; N is the total number of samples in the training set; y_i is the class of the i-th sample in the training set; f_i^0 is the initial feature vector of the i-th sample in the training set; and delta_i is the class selection factor;
2) calculating the selection factor delta_i:
delta_i = 1 if y_i = j, and delta_i = 0 otherwise,
wherein y_i is the class of the i-th sample in the training set and j is the class label;
3) L_cen is the average distance between the feature vectors of the samples and the center vectors of their respective classes,
L_cen = (1/N) * sum_{i=1}^{N} || f_i - c_{y_i} ||^2,
and L_cen serves as the basis for distinguishing the features.
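Steps 1)-3) of claim 10 can be sketched in NumPy as below. This is a sketch under the reconstruction above (center vectors as per-class means, L_cen as the mean squared feature-to-center distance); the function names and the toy data are illustrative assumptions.

```python
import numpy as np

def init_centers(features, labels, num_classes):
    """Steps 1)-2): the initial center vector of class j is the mean of
    the initial feature vectors whose label y_i equals j (the boolean
    mask plays the role of the selection factor delta_i)."""
    return np.stack([features[labels == j].mean(axis=0)
                     for j in range(num_classes)])

def center_loss(features, labels, centers):
    """Step 3): L_cen, the average squared distance between each
    feature vector and the center vector of its own class."""
    diffs = features - centers[labels]
    return np.mean(np.sum(diffs ** 2, axis=1))

feats = np.array([[0., 0.], [0., 0.], [1., 1.], [1., 1.]])
labs = np.array([0, 0, 1, 1])
centers = init_centers(feats, labs, num_classes=2)
loss = center_loss(feats, labs, centers)
```

Minimizing this term during training pulls same-class feature vectors toward their center, which is the clustering behavior the claim describes.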
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210114279.0A CN114652326A (en) | 2022-01-30 | 2022-01-30 | Real-time brain fatigue monitoring device based on deep learning and data processing method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114652326A true CN114652326A (en) | 2022-06-24 |
Family
ID=82025861
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210114279.0A Pending CN114652326A (en) | 2022-01-30 | 2022-01-30 | Real-time brain fatigue monitoring device based on deep learning and data processing method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114652326A (en) |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130144140A1 (en) * | 2010-06-04 | 2013-06-06 | Mclean Hospital Corporation | Multi-modal imaging of blood flow |
CN104545951A (en) * | 2015-01-09 | 2015-04-29 | 天津大学 | Body state monitoring platform based on functional near-infrared spectroscopy and motion detection |
CN107080543A (en) * | 2017-04-27 | 2017-08-22 | 北京师范大学 | A kind of real-time Cerebral cortex blood oxygen signal harvester of new near-infrared |
CN107137097A (en) * | 2017-06-28 | 2017-09-08 | 李婷 | Packaged type fatigue driving monitor |
CN108491858A (en) * | 2018-02-11 | 2018-09-04 | 南京邮电大学 | Method for detecting fatigue driving based on convolutional neural networks and system |
CN108720851A (en) * | 2018-05-23 | 2018-11-02 | 释码融和(上海)信息科技有限公司 | A kind of driving condition detection method, mobile terminal and storage medium |
CN109171764A (en) * | 2018-10-24 | 2019-01-11 | 重庆科技学院 | A kind of SCM Based blood oxygen detection method |
CN109977793A (en) * | 2019-03-04 | 2019-07-05 | 东南大学 | Trackside image pedestrian's dividing method based on mutative scale multiple features fusion convolutional network |
CN110069958A (en) * | 2018-01-22 | 2019-07-30 | 北京航空航天大学 | A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks |
CN110490174A (en) * | 2019-08-27 | 2019-11-22 | 电子科技大学 | Multiple dimensioned pedestrian detection method based on Fusion Features |
CN111329497A (en) * | 2020-02-21 | 2020-06-26 | 华南理工大学 | Wearable fatigue driving monitoring system and method based on forehead electroencephalogram signals |
CN111450380A (en) * | 2020-03-30 | 2020-07-28 | 重庆大学 | Driving fatigue intelligent detection warning awakening system based on near-infrared brain function imaging |
CN111568412A (en) * | 2020-04-03 | 2020-08-25 | 中山大学 | Method and device for reconstructing visual image by utilizing electroencephalogram signal |
CN111832416A (en) * | 2020-06-16 | 2020-10-27 | 杭州电子科技大学 | Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network |
CN112130663A (en) * | 2020-08-31 | 2020-12-25 | 上海大学 | Object recognition training system and method based on EEG-NIRS |
CN112668473A (en) * | 2020-12-28 | 2021-04-16 | 东南大学 | Vehicle state accurate sensing method based on multi-feature deep fusion neural network |
CN112766199A (en) * | 2021-01-26 | 2021-05-07 | 武汉大学 | Hyperspectral image classification method based on self-adaptive multi-scale feature extraction model |
CN113171087A (en) * | 2021-04-26 | 2021-07-27 | 重庆大学 | Noninvasive cerebral blood oxygen monitoring device |
WO2021196528A1 (en) * | 2020-03-30 | 2021-10-07 | 东南大学 | Global ionospheric total electron content prediction method based on deep recurrent neural network |
US20220016423A1 (en) * | 2018-12-14 | 2022-01-20 | Brainpatch Ltd | Brain interfacing apparatus and method |
Non-Patent Citations (1)
Title |
---|
WU Jialing, GAO Zhongke: "Brain-computer interface technology and its applications in neuroscience", Chinese Journal of Contemporary Neurology and Neurosurgery, vol. 21, no. 1, 31 January 2021 (2021-01-31), pages 3 - 8 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Magdalena Nowara et al. | SparsePPG: Towards driver monitoring using camera-based vital signs estimation in near-infrared | |
US11793406B2 (en) | Image processing method and corresponding system | |
CN108701357B (en) | Device, system and method for skin detection | |
US10022082B2 (en) | Apparatus and method for detecting a state of a driver based on biometric signals of the driver | |
US10696305B2 (en) | Apparatus and method for measuring physiological information of living subject in vehicle | |
US20160012656A1 (en) | Individualized control system utilizing biometric characteristic and operating method thereof | |
US9818245B2 (en) | Individualized control system utilizing biometric characteristic | |
CN110151203B (en) | Fatigue driving identification method based on multistage avalanche convolution recursive network EEG analysis | |
Ko et al. | Deep recurrent spatio-temporal neural network for motor imagery based BCI | |
Sivasangari et al. | Emotion recognition system for autism disordered people | |
CN111248890A (en) | Non-contact newborn heart rate monitoring method and system based on facial video | |
CN110710978A (en) | Multi-mode immersive synchronous acquisition system based on eye movement tracking-brain function activity detection | |
CN110584597A (en) | Multi-channel electroencephalogram signal monitoring method based on time-space convolutional neural network and application | |
CN106974660A (en) | The method that blood oxygen feature in being detected based on cerebration realizes sex determination | |
US9984222B2 (en) | Individualized control system utilizing biometric characteristic | |
CN111513731B (en) | Flexible intelligent detection device based on sufficient state monitoring attention | |
Angrisani et al. | Brain-computer interfaces for daily-life applications: a five-year experience report | |
CN212353957U (en) | Health regulating system | |
CN114652326A (en) | Real-time brain fatigue monitoring device based on deep learning and data processing method | |
US20230309930A1 (en) | Physiological detection device with white light source | |
CN111671421A (en) | Electroencephalogram-based children demand sensing method | |
CN212546423U (en) | Intelligent mirror based on facial image analysis | |
CN110363242B (en) | Brain consciousness multi-classification method and system based on support vector machine | |
JP6014869B2 (en) | Operation identification system, information processing apparatus, information processing program, and information processing system | |
Cano-Izquierdo et al. | Applying deep learning in brain computer interface to classify motor imagery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||