CN115736920A - Depression state identification method and system based on bimodal fusion - Google Patents


Info

Publication number
CN115736920A
Authority
CN
China
Prior art keywords
electroencephalogram
layer
electrocardiosignal
fusion
signal
Prior art date
Legal status
Pending
Application number
CN202211370867.7A
Other languages
Chinese (zh)
Inventor
隋金雁
张继洲
王聪聪
刘得成
马佳霖
翟立彬
王明晗
孙保林
邢奥林
陶笑笑
刁志强
刘祖明
王树
曹艳坤
陶可猛
Current Assignee
Haorui Zhiyuan Shandong Artificial Intelligence Co ltd
Original Assignee
Haorui Zhiyuan Shandong Artificial Intelligence Co ltd
Priority date
Application filed by Haorui Zhiyuan Shandong Artificial Intelligence Co ltd
Priority application: CN202211370867.7A
Publication: CN115736920A


Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02A — Technologies for adaptation to climate change
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

An intelligent method for depression state identification based on bimodal fusion, comprising: acquiring electroencephalogram (EEG) and electrocardiogram (ECG) signals of a target person in a depression-state-inducing scene; establishing an identification model by performing feature extraction and fusion on the EEG preprocessed data and the ECG preprocessed data with a multi-modal deep neural network to obtain fusion features, the network comprising an ECG feature extraction network and an EEG feature extraction network; training the identification model; and identifying the depression state of the person under test with the optimal model, finally classifying and grading the depression state. By identifying patients in a depression state with a deep learning algorithm, the method reduces the heavy workload of psychologists in large-scale screening, provides a judgment reference for medical staff, and improves the efficiency of identifying patients in a depression state.

Description

Depression state identification method and system based on bimodal fusion
Technical Field
The invention discloses a method and a system for identifying a depression state based on bimodal fusion, and belongs to the field of intelligent medical treatment based on deep learning.
Background
With the rapid development of society, the number of patients with depressive states worldwide is increasing year by year. Depressive states severely affect an individual's quality of life, ability to learn and work, and social activity, and can even lead to self-injury and suicidal behavior. Because the symptoms damage the individual to an increasing degree over time, early detection and diagnosis of a depressive state is critical for rehabilitation.
Traditional depression state identification methods, such as clinical evaluation and self-rating scales, are highly subjective and involve many uncertain factors. Existing technologies that identify depressive states using data analysis and artificial intelligence have greatly improved speed and efficiency. However, existing algorithmic identification mainly collects multi-modal physiological data, such as electroencephalogram, facial expression, voice and eye movement, of a subject in a resting or task state, and uses the subject's scale scores as labels to construct a depression state identification algorithm.
In addition, research has found that unconscious physiological responses such as the electroencephalogram and electrocardiogram are more reliable than conscious responses, such as expression and eye movement, that are easily influenced from outside. Behavioral signals such as facial expression, voice and eye movement are easily affected by the external environment, so depression state identification based on them is inaccurate. The high temporal resolution, non-invasiveness, convenient recording and low cost of electroencephalography (EEG) have gradually made it a research direction for depressive states, and the development of portable EEG acquisition equipment has reduced acquisition cost while keeping accuracy high. The electrocardiogram (ECG), which records the cardiac electrical activity produced by depolarization and repolarization of cardiac tissue, can reflect changes in the autonomic nervous system associated with emotion and stress, but has been studied less for depression state identification.
In conclusion, traditional methods for identifying depressive states are subjective, resource-intensive, and complex and time-consuming, while identification of depressive states from behavioral signals is not accurate enough.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses a depression state identification method based on bimodal fusion.
The invention further discloses a system loaded with the identification method.
Summary of the invention:
In a depression state identification method based on bimodal fusion, high-dimensional features of two modalities are extracted from the EEG and ECG signals of a subject by a designed neural network model and fused, so that depression state identification is realized more accurately.
The detailed technical scheme of the invention is as follows:
an intelligent method for depression state identification based on bimodal fusion, comprising:
1) Acquire electroencephalogram (EEG) and electrocardiogram (ECG) signals of a target person in a depression-state-inducing scene as a sample data set: an internationally used depression state stimulation scene serves as the inducing scene, and the subject's EEG and ECG signals are collected by sensors as the sample data set;
2) Preprocess the acquired EEG and ECG signals to obtain EEG preprocessed data and ECG preprocessed data, adopting a preprocessing method suited to the inherent frequency and amplitude characteristics of each acquired physiological signal;
3) Establish an identification model: perform feature extraction and fusion on the EEG preprocessed data and the ECG preprocessed data with a multi-modal deep neural network to obtain fusion features; the multi-modal deep neural network comprises an ECG feature extraction network and an EEG feature extraction network;
4) Train the identification model:
input the fusion features into the constructed LSTM classifier, iterate, and save the optimal model; identify the depression state of the person under test with the optimal model, and finally classify and grade the depression state.
Preferably, according to the present invention, the preprocessing method of step 2) comprises:
Preprocessing of the ECG signal: down-sample the ECG signal; remove the noise that most affects the ECG signal by wavelet transform; segment the ECG signal, removing the influence of the filter at the signal edges and unifying the signal lengths; further standardize the deviation and mean of the original ECG signal; and finally obtain the sequence output of the ECG signal, i.e. the ECG sequence output, as the input of the ECG feature extraction network;
Preprocessing of the EEG signal: because the distribution of EEG channels is irregular, the acquired EEG data is not regularly structured and cannot be used directly as the input of a neural network. A band-pass filter is used to remove noise around the EEG signal and power-line interference; the EEG signal is segmented to remove the influence of the filter at the signal edges and to unify the signal lengths; the deviation and mean of the original EEG signal are standardized; the standardized EEG data is mapped to a two-dimensional matrix through the Nasion 10-20 system; and the matrix is finally converted into an EEG two-dimensional picture as the input of the EEG feature extraction network. The Z-score method is adopted for the standardization of mean and deviation in EEG processing; these are common methods in EEG processing.
According to the invention, the specific method for preprocessing the ECG signal comprises:
The acquisition frequency of the ECG signal is 256 Hz. To reduce errors in the fusion process caused by differing sampling frequencies, the ECG signal is down-sampled: the collected original surface ECG signal is sampled at an interval of L, via

y[n] = x[nL],   Y(e^{jω}) = (1/L) Σ_{k=0}^{L−1} X(e^{j(ω−2πk)/L})   (I)

In formula (I), ω is the digital angular frequency of the ECG signal in the frequency domain; the spectrum of the original ECG signal X is stretched to L times its original width in the frequency domain, giving the down-sampled ECG signal Y at the lower sampling rate

f′ = 256 / L (Hz)

This reduces the length of the ECG signal without affecting its cross-correlation;
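The down-sampling step above can be sketched on a synthetic signal. This is a minimal illustration: naive slicing (unlike, e.g., scipy.signal.decimate) applies no anti-aliasing filter, so in practice a low-pass filter before decimation is advisable; the interval L = 2 is an illustrative choice, not a value fixed by the patent.

```python
import numpy as np

FS = 256          # original ECG sampling rate (Hz), as stated in the text
L = 2             # down-sampling interval (illustrative choice)

def downsample(x: np.ndarray, L: int) -> np.ndarray:
    """Keep every L-th sample; the spectrum widens L-fold (formula (I))."""
    return x[::L]

t = np.arange(0, 2, 1 / FS)          # 2 s of synthetic "ECG"
x = np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz dominant component
y = downsample(x, L)

print(len(x), len(y), FS // L)       # 512 256 128
```

The new rate FS / L should match the 128 Hz EEG rate so the two modalities align during fusion.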
the wavelet transform is used for removing high-frequency noise, power frequency interference, electromyogram signals and the like: the original electrocardiosignal has a larger wavelet coefficient than noise after wavelet decomposition, and the interference in the original electrocardiosignal can be filtered by setting a proper threshold, firstly wavelet decomposition of 8 layers is carried out on the electrocardiosignal through a wavedec function in python.
Figure BDA0003925445110000033
In the formula (II), lambda is a selected denoising threshold, P is the length of the electrocardiosignal after the downsampling, and finally wavelet inverse transformation is carried out through a waverec function in a python. Pywt library to obtain the denoised electrocardiosignal; segmenting the electrocardiosignal to finish ERP segmentation, only keeping signals of a specific time interval when a testee receives a certain stimulus and reacts, removing the influence of the filter at the edge offset by each 1s segment before and after each segment of the signals, and carrying out Z-Score standardization on the electrocardiosignal:
z = (x̄ − μ) / σ,   σ = √((1/n) Σ_{i=1}^{n} (x_i − μ)²)   (III)

In formula (III), z is the standardized score of the individual's measured physiological signal, used as the input of the feature extraction network; x̄ is the segment mean of each subject's physiological signal; μ is the overall mean of the physiological signals of all subjects; σ is the corresponding standard deviation computed from μ, n and x_i; n is the number of subjects; and x_i, i = 1, 2, 3, …, is the physiological signal of each subject.
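The wavelet denoising and Z-score steps can be sketched with the PyWavelets (pywt) library named in the text. The db4 mother wavelet is an assumption, since the patent specifies the decomposition level (8) and the sqtwolog threshold but not the wavelet family.

```python
import numpy as np
import pywt

def denoise_ecg(sig: np.ndarray, wavelet: str = "db4", level: int = 8) -> np.ndarray:
    """wavedec -> soft-threshold the detail coefficients with the
    sqtwolog (universal) threshold lam = sqrt(2 ln P) -> waverec (formula (II))."""
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    P = len(sig)
    lam = np.sqrt(2 * np.log(P))
    coeffs[1:] = [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:P]

def zscore(x: np.ndarray, mu: float, sigma: float) -> np.ndarray:
    """Z-score against the population mean/std over all subjects (formula (III))."""
    return (x - mu) / sigma

x = np.sin(2 * np.pi * 2 * np.linspace(0, 1, 2048))          # clean 2 Hz component
den = denoise_ecg(x + 0.3 * np.random.default_rng(0).standard_normal(2048))
print(den.shape)   # (2048,)
```

Thresholding is applied only to the detail coefficients, so the low-frequency morphology carried by the approximation coefficients survives.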
According to a preferred aspect of the invention, the specific method for preprocessing the EEG signal comprises:
The mean of all electrodes is subtracted from each selected channel by a Common Average Reference (CAR) to remove EEG noise:

χ̂_i(t) = χ_i(t) − (1/g) Σ_{j=1}^{g} χ_j(t)   (IV)

In formula (IV), χ̂_i(t) is the potential of the ith electrode at time t after filtering; χ_i(t) is the ith electrode potential at time t; g is the number of selected channels; and χ_j(t) is the electrode potential of the jth selected channel at time t;
Artifact signals such as eye movement and power-line frequency are removed with a 0–45 Hz band-pass filter;
The EEG signal is segmented to complete ERP segmentation, and 1 s before and after each segment is removed to eliminate edge filter effects. Finally, the EEG signal is Z-score standardized, processed as in formula (III). Z-score standardization is a common data processing method and formula (III) is its standard form; here, standardizing the data of the two modalities converts data of different magnitudes to a common scale, measured uniformly by the computed z value, ensuring comparability of the data;
For each segmented EEG sample containing 16 electrodes (single-channel sampling frequency 128 Hz), the mapping from the channel set to the index set is completed with the international 10-20 system: in Standard-10-20-Cap81, the 16 electrodes Fp1, Fp2, F7, F3, F4, F8, T7, C3, C4, T8, P7, P5, P4, P8, O1 and O2 are mapped to a one-dimensional topological vector at numerical indices 1, 3, 13, 15, 19, 21, 35, 37, 41, 43, 57, 58, 63, 65, 74 and 76, with the remaining entries set to 0, giving an 81×128 feature matrix for a single EEG segment of a given subject; this matrix is finally converted into an EEG two-dimensional picture as the input of the EEG feature extraction network.
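The CAR filtering and the 10-20 index mapping can be sketched together, assuming a 1 s segment of 16 channels at 128 Hz; the electrode names and index list are the ones given in the text.

```python
import numpy as np

# 16 electrodes and their positions in the 81-slot Standard-10-20-Cap81
# topological vector, as listed in the text
CHANNELS = ["Fp1", "Fp2", "F7", "F3", "F4", "F8", "T7", "C3",
            "C4", "T8", "P7", "P5", "P4", "P8", "O1", "O2"]
INDICES = [1, 3, 13, 15, 19, 21, 35, 37, 41, 43, 57, 58, 63, 65, 74, 76]

def car(eeg: np.ndarray) -> np.ndarray:
    """Common Average Reference: subtract the mean over all g channels
    from every channel at each time step (formula (IV))."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def to_topo_matrix(eeg_seg: np.ndarray) -> np.ndarray:
    """Scatter a (16, 128) segment (1 s at 128 Hz) into the sparse
    81x128 feature matrix; unmapped rows stay 0."""
    out = np.zeros((81, eeg_seg.shape[1]))
    out[INDICES, :] = eeg_seg
    return out

seg = np.random.default_rng(1).standard_normal((16, 128))
m = to_topo_matrix(car(seg))
print(m.shape)   # (81, 128)
```

The resulting 81×128 matrix is what the text calls the "EEG two-dimensional picture" fed to the 2D-CNN branch.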
Preferably, in step 3), the ECG feature extraction network comprises a CNN-LSTM network for extracting high-level features of the ECG sequence output: temporal features are extracted from the ECG sequence by the convolution and pooling layers of the CNN, and the sequential dependencies among the temporal features are extracted by the LSTM layer, completing extraction of the high-level features of the ECG sequence. Using deep learning as the feature extractor avoids the tedious steps of manually extracting linear and nonlinear features of physiological signals, removes redundant features in the data, reduces system performance requirements, and improves identification efficiency.
Preferably, in step 3), the electrocardiographic feature extraction network is a 1D-CNN and LSTM series network, and includes:
A sequence with a fixed 1 s time window is taken as input; the spatial features of the ECG sequence are extracted by the CNN; the result is fed through a Flatten layer into a two-layer LSTM network to extract the sequential dependencies among the temporal features; and the extracted high-dimensional ECG feature matrix F_1 is output through a fully connected layer;
The CNN network comprises two layers of networks, wherein the first layer of network comprises a one-dimensional convolutional layer, a Batch Normalization layer, a ReLU layer and a maximum pooling layer, and the second layer of network comprises a one-dimensional convolutional layer, a ReLU layer and a maximum pooling layer.
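The 1D-CNN + LSTM branch described above can be sketched in PyTorch. Filter counts, kernel sizes, the LSTM hidden size, and the 128-dimensional output F_1 are illustrative assumptions: the patent fixes the layer types (conv → BN → ReLU → max-pool, then conv → ReLU → max-pool, a two-layer LSTM, and a fully connected output) but not the hyperparameters.

```python
import torch
import torch.nn as nn

class ECGFeatureNet(nn.Module):
    """Sketch of the 1D-CNN + two-layer LSTM ECG branch (hyperparameters assumed)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),   # layer 1: conv + BN
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),  # layer 2: no BN, per text
            nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, 64, num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):                  # x: (batch, 1, 128) = 1 s window
        z = self.cnn(x)                    # (batch, 32, 32)
        z = z.permute(0, 2, 1)             # reshape to (batch, time, features)
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])              # F_1: (batch, feat_dim)

f1 = ECGFeatureNet()(torch.randn(4, 1, 128))
print(f1.shape)   # torch.Size([4, 128])
```

The permute plays the role of the Flatten layer in the text, turning the conv feature maps into an LSTM-ready sequence.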
Preferably, in step 3), the electroencephalogram feature extraction network comprises a 2D-CNN network that extracts the high-level EEG features by training the convolution kernels in its filters.
Preferably, in step 3), the structure of the electroencephalogram feature extraction network includes:
Four groups of two-dimensional convolution networks perform feature extraction on the preprocessed EEG signal; each group comprises a first two-dimensional convolution layer, ReLU, max-pooling layer, second two-dimensional convolution layer, Batch Normalization, ReLU and max-pooling layer. An 81×128 picture is input; feature maps are generated by the convolution kernels and nonlinearly activated by ReLU; the second two-dimensional convolution layer consolidates the feature maps generated by the first; and the high-dimensional EEG feature matrix F_2 is finally output through a fully connected layer. Batch Normalization processes the features within the network and prevents the feature outputs near the fully connected layer from changing violently as the network deepens.
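The four-group 2D-CNN branch can be sketched in PyTorch as well; channel widths and the 128-dimensional output F_2 are illustrative assumptions, and ceil-mode pooling is used so the odd 81-pixel height survives eight pooling steps.

```python
import torch
import torch.nn as nn

def conv_group(cin: int, cout: int) -> nn.Sequential:
    """One of the four groups: conv -> ReLU -> pool -> conv -> BN -> ReLU -> pool."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2, ceil_mode=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.MaxPool2d(2, ceil_mode=True),
    )

class EEGFeatureNet(nn.Module):
    """Sketch of the four-group 2D-CNN EEG branch (hyperparameters assumed)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.groups = nn.Sequential(conv_group(1, 8), conv_group(8, 16),
                                    conv_group(16, 32), conv_group(32, 64))
        self.fc = nn.LazyLinear(feat_dim)     # infers the flattened size

    def forward(self, x):                     # x: (batch, 1, 81, 128) EEG picture
        return self.fc(self.groups(x).flatten(1))

f2 = EEGFeatureNet()(torch.randn(4, 1, 81, 128))
print(f2.shape)   # torch.Size([4, 128])
```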
According to the invention, the specific method for fusing the features in the step 3) comprises the following steps:
The modal cross matrices N_1 and N_2 of ECG and EEG are computed separately:

N_1 = F_1 F_2^T,   N_2 = F_2 F_1^T   (V)

In formula (V), T denotes the transpose. The probability distribution values α and β of the ECG and EEG bimodal cross matrices are then computed by a Softmax layer:

α_{ij} = e^{N_1(i,j)} / Σ_{i,j} e^{N_1(i,j)},   β_{ij} = e^{N_2(i,j)} / Σ_{i,j} e^{N_2(i,j)}   (VI)

In formula (VI), i and j index the dimensions of the two modal feature matrices. The weighted fusion matrix M is finally obtained:

M = concat[αF_1, βF_2]   (VII)

In formula (VII), concat is the concatenate function, used to splice the two modal features into a deep bimodal feature. The feature fusion method performs feature-level fusion of the extracted high-level ECG features and high-level EEG features to remove redundant information, further fusing the two kinds of high-level features into a unified deep bimodal feature.
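The fusion of formulas (V)–(VII) can be sketched in NumPy. Treating F_1 and F_2 as single d-dimensional feature vectors is an assumption made for illustration; the patent describes feature matrices without fixing their shapes.

```python
import numpy as np

def softmax(n: np.ndarray) -> np.ndarray:
    """Softmax over all entries of a cross matrix (formula (VI))."""
    e = np.exp(n - n.max())
    return e / e.sum()

def fuse(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Cross matrices N1 = F1 F2^T and N2 = F2 F1^T (formula (V)),
    attention weights a, b via softmax, then weighted concatenation
    M = concat[a F1, b F2] (formula (VII))."""
    n1 = np.outer(f1, f2)          # N1 = F1 F2^T
    n2 = np.outer(f2, f1)          # N2 = F2 F1^T
    a, b = softmax(n1), softmax(n2)
    return np.concatenate([a @ f1, b @ f2])

m = fuse(np.ones(4), np.ones(4))
print(m.shape)   # (8,)
```

Each modality's feature vector is reweighted by the other modality's attention before concatenation, which is what removes the cross-modal redundancy described in the text.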
Preferably, in the step 4) of training the identification model, the high-level features of different EEG two-dimensional pictures are extracted from the data presented to the CNN by updating the convolution kernels of each layer during training. Because the spatial relations within an image are tight, the CNN perceives the generated EEG two-dimensional picture locally through its neurons and then integrates, at higher layers, the local information obtained at lower layers into global information, extracting the high-level features of the EEG two-dimensional picture.
Preferably, according to the invention, in step 4), the deep bimodal features are classified with the LSTM classifier, obtaining a classification of the grade of the depressive state as an aid in evaluating patients in depressive states.
Preferably, in step 4), the training data set constructed through the depression state stimulation session is processed and its features extracted, then input to the constructed LSTM classifier for training. The basic structure of the LSTM classifier comprises an input gate, a forget gate, an output gate and a memory cell:

Input gate: i_t = σ(W_i[h_{t−1}, M_t] + b_i)   (VIII)
Forget gate: f_t = σ(W_f[h_{t−1}, M_t] + b_f)   (IX)
Output gate: o_t = σ(W_o[h_{t−1}, M_t] + b_o)   (X)
Memory cell state update: C_t = f_t ⊙ C_{t−1} + i_t ⊙ tanh(W_c[h_{t−1}, M_t] + b_c)   (XI)
Output: h_t = o_t ⊙ tanh(C_t)   (XII)

In formulas (VIII)–(XII), M_t is the fused EEG and ECG feature input to the LSTM classifier at time t; σ is the Sigmoid activation function; ⊙ denotes element-wise multiplication; W_i and b_i are the weight matrix and bias of the input gate; W_f and b_f are those of the forget gate; W_o and b_o are those of the output gate; h_{t−1} is the hidden state at time t−1; i_t, f_t and o_t are the input, forget and output gates at time t; C_t is the cell state at time t; and W_c and b_c are the weight matrix and bias of the memory cell. After the parameters are updated, the optimal model is saved; the test samples are input to the trained LSTM classifier, and the depression state grade of the subject is output through Softmax.
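One step of formulas (VIII)–(XII) can be written out directly in NumPy; the gate dimensions below are illustrative, and the four weight matrices act on the concatenation [h_{t−1}, M_t] exactly as in the formulas.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(M_t, h_prev, C_prev, W, b):
    """One LSTM step, formulas (VIII)-(XII); W, b hold the four gate parameters
    and M_t is the fused EEG/ECG feature at time t."""
    x = np.concatenate([h_prev, M_t])                 # [h_{t-1}, M_t]
    i_t = sigmoid(W["i"] @ x + b["i"])                # input gate   (VIII)
    f_t = sigmoid(W["f"] @ x + b["f"])                # forget gate  (IX)
    o_t = sigmoid(W["o"] @ x + b["o"])                # output gate  (X)
    C_t = f_t * C_prev + i_t * np.tanh(W["c"] @ x + b["c"])   # cell  (XI)
    h_t = o_t * np.tanh(C_t)                                  # output (XII)
    return h_t, C_t

d_h, d_m = 3, 4                                       # hidden / fused-feature dims
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((d_h, d_h + d_m)) for k in "ifoc"}
b = {k: np.zeros(d_h) for k in "ifoc"}
h, C = lstm_step(rng.standard_normal(d_m), np.zeros(d_h), np.zeros(d_h), W, b)
print(h.shape, C.shape)   # (3,) (3,)
```

The final hidden state would then feed a Softmax layer to produce the depression grade category, as the text describes.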
A system loaded with the intelligent method for depression state identification based on bimodal fusion, comprising:
An EEG and ECG acquisition module: used for acquisition according to step 1). Based on a pre-designed depression state stimulation experiment, original EEG and ECG signals of clinically diagnosed depression-state patients and of normal persons are acquired as a training sample set, and the samples are manually labeled; original EEG and ECG signals of the person under test are acquired under the same task configuration as a test sample set;
A physiological signal processing module: used for preprocessing the EEG and ECG signals according to step 2). Given the channel, amplitude and frequency characteristics of the two acquired physiological signals, the optimal preprocessing method configured in the system for each signal type is adopted, converting the original physiological signals into input data types better suited to the feature extraction module;
A physiological signal feature extraction and fusion module: used for feature extraction and fusion of the EEG and ECG signals according to step 3). A suitable feature extraction network is selected according to the type of the processed data to extract the high-dimensional EEG and ECG features, and the extracted features of the two physiological signals are fused to obtain multi-modal fusion features;
A neural network identification module: the pre-constructed identification model and LSTM classifier are trained with the collected training sample set, and the trained identification model finally identifies and classifies the depression grade of the subject.
The invention has the beneficial effects that:
1. Through internationally recognized depression state induction experiments, the invention explores the regularities of the physiological activity of depression-state patients to the greatest extent, effectively mitigating the high subjectivity and poor accuracy of traditional self-rating scales. Because the physiological signals of a depression-state patient fluctuate with changes of subjective emotion in certain specific scenes, acquiring the EEG and ECG signals in the induced scene with high-precision portable sensors reduces the errors of behavioral information, such as video and voice, that is easily affected by external noise, and does not involve the personal privacy of the subject.
2. Acquiring the subject's physiological signals with high-precision non-invasive portable sensors relieves the physical injury and psychological pressure on the subject and reduces a depression-state patient's aversion. Identifying depression-state patients with a deep learning algorithm reduces the heavy workload of psychological workers facing large-scale screening, provides a judgment reference for medical workers, and improves the efficiency of identifying depression-state patients.
3. According to the characteristics of the ECG and EEG signals, different feature extraction networks extract the high-level features of the subject's physiological signals to the greatest extent, while feature-level fusion reduces the impact on identification efficiency and accuracy of the differing dimensions, redundancy, and insufficient key features of the two original physiological signals. Given the frequency and amplitude characteristics of EEG signals, the designed preprocessing tool converts them into 2D pictures whose features are extracted by the 2D-CNN; this meets the data volume requirements of deep learning, fully extracts the physiological signal features of depression-state patients with data acquisition completed in a short time, and reduces the interference caused by a subject's subjective resistance in a prolonged inducing scene.
Drawings
FIG. 1 is a schematic flow chart of a method for identifying a depression state based on bimodal fusion according to the present invention;
FIG. 2 is a schematic diagram of the identification system of the present invention;
FIG. 3 is a schematic illustration of data acquisition as described in the present invention;
FIG. 4 is a schematic flow chart of the data preprocessing of the present invention;
fig. 5 is a schematic diagram of an ECG and EEG feature extraction network structure in the present invention.
Detailed Description
The invention is further described, but not limited, by the following description and the accompanying drawings.
Example 1
An intelligent method for depression state identification based on bimodal fusion, comprising:
1) Acquire electroencephalogram (EEG) and electrocardiogram (ECG) signals of a target person in a depression-state-inducing scene as a sample data set: an internationally used depression state stimulation scene serves as the inducing scene, and the subject's EEG and ECG signals are collected by sensors as the sample data set;
2) Preprocess the acquired EEG and ECG signals to obtain EEG preprocessed data and ECG preprocessed data, adopting a preprocessing method suited to the inherent frequency and amplitude characteristics of each acquired physiological signal;
3) Establish an identification model: perform feature extraction and fusion on the EEG preprocessed data and the ECG preprocessed data with a multi-modal deep neural network to obtain fusion features; the multi-modal deep neural network comprises an ECG feature extraction network and an EEG feature extraction network;
4) Train the identification model:
input the fusion features into the constructed LSTM classifier, iterate, and save the optimal model; identify the depression state of the person under test with the optimal model, and finally classify and grade the depression state.
The preprocessing method of step 2) comprises:
Preprocessing of the ECG signal: down-sample the ECG signal; remove the noise that most affects the ECG signal by wavelet transform; segment the ECG signal, removing the influence of the filter at the signal edges and unifying the signal lengths; further standardize the deviation and mean of the original ECG signal; and finally obtain the sequence output of the ECG signal, i.e. the ECG sequence output, as the input of the ECG feature extraction network;
Preprocessing of the EEG signal: because the distribution of EEG channels is irregular, the acquired EEG data is not regularly structured and cannot be used directly as the input of a neural network. A band-pass filter is used to remove noise around the EEG signal and power-line interference; the EEG signal is segmented to remove the influence of the filter at the signal edges and to unify the signal lengths; the deviation and mean of the original EEG signal are standardized; the standardized EEG data is mapped to a two-dimensional matrix through the Nasion 10-20 system; and the matrix is finally converted into an EEG two-dimensional picture as the input of the EEG feature extraction network. The Z-score method is adopted for the standardization of mean and deviation in EEG processing; these are common methods in EEG processing.
The specific method for preprocessing the ECG signal comprises:
The acquisition frequency of the ECG signal is 256 Hz. To reduce errors in the fusion process caused by differing sampling frequencies, the ECG signal is down-sampled: the collected original surface ECG signal is sampled at an interval of L, via

y[n] = x[nL],   Y(e^{jω}) = (1/L) Σ_{k=0}^{L−1} X(e^{j(ω−2πk)/L})   (I)

In formula (I), ω is the digital angular frequency of the ECG signal in the frequency domain; the spectrum of the original ECG signal X is stretched to L times its original width in the frequency domain, giving the down-sampled ECG signal Y at the lower sampling rate

f′ = 256 / L (Hz)

This reduces the length of the ECG signal without affecting its cross-correlation;
the wavelet transform is used for removing high-frequency noise, power frequency interference, electromyogram signals and the like: the method comprises the following steps that inevitable interference can occur when electrocardiosignals of a human body are measured, because original electrocardiosignals have a wavelet coefficient larger than noise after wavelet decomposition, the interference in the original electrocardiosignals can be filtered by setting a proper threshold, firstly, wavelet decomposition of 8 layers is carried out on the electrocardiosignals through a wavedec function in python. Pywt library, and then, noise reduction is carried out through an Sqtwoolol threshold:
λ = √(2·ln P)   (II)
in formula (II), λ is the selected denoising threshold and P is the length of the down-sampled electrocardiosignal; finally, an inverse wavelet transform is carried out through the waverec function in the python pywt library to obtain the denoised electrocardiosignal. The electrocardiosignal is then segmented to complete ERP segmentation: only the signal in the specific time interval during which the subject receives a given stimulus and reacts is kept, and 1 s at the beginning and end of each segment is removed to eliminate the edge effect of the filter. Finally, Z-Score standardization is applied to the electrocardiosignal:
z = (x̄ − μ)/σ,  with σ = √((1/n) · Σ_{i=1}^{n} (x_i − μ)²)   (III)

In formula (III), z is the normalized score of the measured physiological signal of the individual, which serves as the input to the feature extraction network; x̄ is the mean of each subject's segmented physiological signal; μ is the overall mean of the physiological signals of all subjects; n is the number of subjects; x_i, i = 1, 2, 3, …, n, is the physiological signal of each subject.
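A minimal NumPy sketch of the thresholding and standardization steps above. The decomposition and reconstruction themselves are done with pywt.wavedec/pywt.waverec as the text states; only the sqtwolog threshold of formula (II), a soft-threshold shrinkage (an assumption, since the shrinkage rule is not specified), and the Z-score of formula (III) are shown here:

```python
import numpy as np

def sqtwolog_threshold(p):
    # universal ("sqtwolog") threshold of formula (II): lambda = sqrt(2 * ln P)
    return np.sqrt(2.0 * np.log(p))

def soft_threshold(coeffs, lam):
    # shrink wavelet detail coefficients toward zero by lam
    # (soft thresholding assumed; the patent only names the threshold)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

def zscore(x_bar, mu, sigma):
    # formula (III): normalized score of one subject's segment mean
    return (x_bar - mu) / sigma
```

In practice the threshold would be applied to each level of detail coefficients returned by pywt.wavedec before calling pywt.waverec.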
The specific method for preprocessing the electroencephalogram signals comprises the following steps:
the mean of all electrodes is subtracted from the selected channel by a Common Average Reference (CAR) to remove the electroencephalogram noise:

χ̂_i(t) = χ_i(t) − (1/g) · Σ_{j=1}^{g} χ_j(t)   (IV)

In formula (IV), χ̂_i(t) is the potential of the i-th channel at time t after the electroencephalogram signal is filtered; χ_i(t) is the i-th electrode potential at time t; g is the number of selected channels; χ_j(t) is the electrode potential of the j-th selected channel at time t;
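Formula (IV) amounts to one vectorized subtraction per time sample; a minimal sketch (array layout is an assumption):

```python
import numpy as np

def common_average_reference(eeg):
    """Formula (IV): at every time t, subtract the mean over the g
    selected channels from each channel.

    eeg has shape (g, T): g channels by T time samples.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

# toy 3-channel, 4-sample segment
segment = np.arange(12.0).reshape(3, 4)
rereferenced = common_average_reference(segment)
```

After re-referencing, the mean across channels is zero at every time sample, which is the defining property of CAR.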
a 0-45 Hz band-pass filter is used to remove artifact signals such as eye movement and power-frequency interference;
the electroencephalogram signal is segmented to complete ERP segmentation, and 1 s at the beginning and end of each segment is removed to eliminate the edge effect of the filter. Finally, Z-Score standardization is applied to the electroencephalogram signal in the manner of formula (III). Z-Score standardization is a common data-processing method and formula (III) is its standard form; it converts data of different magnitudes in the two modalities into the same magnitude, so that the calculated Z-Score values provide a uniform measure and ensure the comparability of the data;
for each segmented EEG signal sample containing 16 electrodes, with a single-channel sampling frequency of 128 Hz, the mapping of a matching sequence from the channel set to the index set is completed using the international 10-20 system: the 16 electrodes Fp1, Fp2, F7, F3, F4, F8, T7, C3, C4, T8, P7, P5, P4, P8, O1 and O2 are mapped, according to their digital indices in Standard-10-20-Cap81, onto a one-dimensional topological vector of length 81, with the remaining positions set to 0. This yields an 81×128-dimensional feature matrix for a single EEG segment of a given subject, which is finally converted into an EEG two-dimensional picture as the input of the electroencephalogram feature extraction network.
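The channel-to-index mapping can be sketched as below. The actual digital indices from the Standard-10-20-Cap81 montage are garbled in the source text, so the INDEX positions here are placeholders, not the real montage:

```python
import numpy as np

CHANNELS = ["Fp1", "Fp2", "F7", "F3", "F4", "F8", "T7", "C3",
            "C4", "T8", "P7", "P5", "P4", "P8", "O1", "O2"]

# Placeholder index assignment into the length-81 topological vector;
# the true Standard-10-20-Cap81 indices are not recoverable from the text.
INDEX = {ch: k * 5 for k, ch in enumerate(CHANNELS)}

def to_feature_matrix(segment):
    """segment: dict mapping channel name -> 128-sample array
    (1 s at 128 Hz). Returns the 81 x 128 feature matrix; rows with
    no electrode assigned remain zero, as described in the text."""
    m = np.zeros((81, 128))
    for ch, samples in segment.items():
        m[INDEX[ch]] = samples
    return m
```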
In the step 3), the electrocardio feature extraction network comprises: a CNN-LSTM network used to extract the high-level features of the ECG sequence output. Temporal features are extracted from the ECG sequence through the convolution and pooling layers of the CNN, and the sequential dependencies among these temporal features are extracted through the LSTM layer, completing the extraction of the high-level features of the ECG sequence. Using deep learning as the feature extractor removes the tedious steps of manually extracting linear and nonlinear features of physiological signals, removes redundant features from the data, reduces system performance requirements and improves identification efficiency.
In the step 3), the electrocardiographic feature extraction network is a 1D-CNN and LSTM series network, and includes:
a sequence with a fixed time window of 1 s is taken as input; the spatial features of the electrocardiosignal sequence are extracted through the CNN network, input through a Flatten layer into a two-layer LSTM network to extract the sequential dependencies among temporal features, and the extracted high-dimensional electrocardiosignal feature matrix F1 is output through a fully connected layer.
The CNN network comprises two layers of networks, wherein the first layer of network comprises a one-dimensional convolutional layer, a Batch Normalization layer, a ReLU layer and a maximum pooling layer, and the second layer of network comprises a one-dimensional convolutional layer, a ReLU layer and a maximum pooling layer.
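A PyTorch sketch of the 1D-CNN and LSTM series network described above. Channel counts, kernel sizes, hidden sizes and the output dimension are not specified in the text and are assumptions here; the Flatten step is realized by reshaping the CNN output into a sequence for the two-layer LSTM:

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Sketch of the 1D-CNN + LSTM series network for 1 s of ECG,
    input shape (batch, 1, 128). All sizes are illustrative."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            # first layer group: conv, Batch Normalization, ReLU, max pooling
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            # second layer group: conv, ReLU, max pooling
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(), nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, 128)   # emits the feature matrix F1

    def forward(self, x):
        x = self.cnn(x)                # (batch, 32, 32)
        x = x.permute(0, 2, 1)         # time steps become the sequence axis
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])     # F1: (batch, 128)
```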
In the step 3), the electroencephalogram feature extraction network comprises: a 2D-CNN network that extracts high-level electroencephalogram features by training the convolution kernels in its filters.
In the step 3), the structure of the electroencephalogram feature extraction network includes:
four groups of two-dimensional convolution networks perform feature extraction on the preprocessed electroencephalogram signal, each group comprising a first two-dimensional convolution layer, ReLU, max pooling layer, second two-dimensional convolution layer, Batch Normalization, ReLU and max pooling layer. An 81×128-dimensional picture is input; a feature map is generated through the convolution kernels and nonlinearly activated with ReLU, the feature map generated by the first two-dimensional convolution layer is consolidated by the second two-dimensional convolution layer, and finally the high-dimensional electroencephalogram feature matrix F2 is output through a fully connected layer. Batch Normalization processes the features inside the network and prevents the feature output values near the fully connected layer from changing violently as the network deepens.
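A PyTorch sketch of the four-group 2D-CNN for the 81×128 EEG picture. Channel counts and kernel sizes are assumptions; the second max-pool listed for each group is omitted here, since eight halvings would pool the 81-row input away entirely and the exact pooling sizes are not given in the text:

```python
import torch
import torch.nn as nn

class EEGNet2D(nn.Module):
    """Illustrative four-group 2D-CNN; input (batch, 1, 81, 128)."""
    def __init__(self):
        super().__init__()
        chans = [1, 8, 16, 32, 64]          # assumed channel progression
        layers = []
        for cin, cout in zip(chans, chans[1:]):
            layers += [
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1),
                nn.BatchNorm2d(cout), nn.ReLU(),
                nn.MaxPool2d(2),            # one pool per group (see note)
            ]
        self.features = nn.Sequential(*layers)
        self.fc = nn.Linear(64 * 5 * 8, 128)  # emits the feature matrix F2

    def forward(self, x):
        x = self.features(x)            # (batch, 64, 5, 8): 81->5, 128->8
        return self.fc(x.flatten(1))    # F2: (batch, 128)
```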
The specific method for fusing the features in the step 3) comprises the following steps:
a modal cross matrix N1, N2 is computed separately for ECG and EEG:

N1 = F1·F2^T,  N2 = F2·F1^T   (V)
In equation (V), T stands for transpose, and the probability distribution values α, β of ECG and EEG bimodal cross matrices are calculated by Softmax layer:
α_{ij} = e^{N1_{ij}} / Σ_{i,j} e^{N1_{ij}},  β_{ij} = e^{N2_{ij}} / Σ_{i,j} e^{N2_{ij}}   (VI)
in formula (VI), i, j is the dimension of the two-modal feature matrix; finally obtaining a weighted fusion matrix M:
M = concate[αF1, βF2]   (VII)
in formula (VII), concate is the concatenate function, used to splice the two modal features into a deep bimodal feature. The method for fusing features comprises: performing feature-level fusion of the extracted high-level electrocardiosignal features and high-level electroencephalogram features to remove redundant information, and further fusing the two kinds of high-level features into one unified deep bimodal feature.
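A NumPy sketch of the fusion in formulas (V)-(VII). The exact form of the cross matrices is not shown in the source, so this assumes the features F1, F2 are vectors of equal length and that N1 = F1·F2^T, N2 = F2·F1^T:

```python
import numpy as np

def softmax(m):
    # numerically stable softmax over all entries of a matrix (formula VI)
    e = np.exp(m - m.max())
    return e / e.sum()

def fuse(F1, F2):
    """Weighted bimodal fusion after formulas (V)-(VII); F1, F2 are the
    1-D ECG and EEG feature vectors (assumed shapes)."""
    N1 = np.outer(F1, F2)                  # modal cross matrices (V)
    N2 = np.outer(F2, F1)
    alpha = softmax(N1)                    # probability distributions (VI)
    beta = softmax(N2)
    # (VII): weight each modality by its distribution, then concatenate
    return np.concatenate([alpha @ F1, beta @ F2])
```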
In the step 4) of training the recognition model, the high-level features of different EEG two-dimensional pictures are extracted from the data presented to the CNN by updating each layer's convolution kernels during model training. Because the spatial relations within an image are tight, the CNN performs local perception on the generated EEG two-dimensional picture through its neurons, then integrates the local information obtained in the lower layers at the higher layers to obtain global information, extracting the high-level features of the EEG two-dimensional picture.
In step 4), classifying deep bimodal features using an LSTM classifier: a classification of the rank of the depressive state is obtained as an aid to the evaluation of patients with depressive states.
In step 4), the training data set constructed through the depression-state excitation link is processed, its features are extracted, and it is input into the constructed LSTM classifier for training; the basic structure of the LSTM classifier comprises: an input gate, a forget gate, an output gate and a memory cell;
the input gate: i_t = σ(W_i·[h_{t−1}, M_t] + b_i)   (VIII)

the forget gate: f_t = σ(W_f·[h_{t−1}, M_t] + b_f)   (IX)

the output gate: o_t = σ(W_o·[h_{t−1}, M_t] + b_o)   (X)

the memory cell state update: C̃_t = tanh(W_c·[h_{t−1}, M_t] + b_c),  C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t   (XI)

the output: h_t = o_t ⊙ tanh(C_t)   (XII)

in formulas (VIII)-(XII), M_t is the fused feature of the EEG and ECG input to the LSTM classifier at time t; σ is the Sigmoid activation function; W_i, b_i are the weight matrix and bias of the input gate; W_f, b_f are the weight matrix and bias of the forget gate; W_o, b_o are the weight matrix and bias of the output gate; h_{t−1} is the hidden state at time t−1; i_t, f_t and o_t are the input gate, forget gate and output gate at time t; C_t is the cell state at time t; W_c, b_c are the weight matrix and bias of the memory cell. After the parameters are updated, the optimal model is saved, the test samples are input into the trained LSTM classifier, and the depression state grade category of the subject is output through Softmax.
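One time step of formulas (VIII)-(XII) can be written directly in NumPy; dimensions and the dictionary layout of the weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(M_t, h_prev, C_prev, W, b):
    """One LSTM step per formulas (VIII)-(XII). W and b map a gate name
    ('i', 'f', 'o', 'c') to the weight matrix / bias applied to the
    concatenation [h_{t-1}, M_t]."""
    z = np.concatenate([h_prev, M_t])
    i_t = sigmoid(W['i'] @ z + b['i'])      # input gate    (VIII)
    f_t = sigmoid(W['f'] @ z + b['f'])      # forget gate   (IX)
    o_t = sigmoid(W['o'] @ z + b['o'])      # output gate   (X)
    c_hat = np.tanh(W['c'] @ z + b['c'])    # candidate memory
    C_t = f_t * C_prev + i_t * c_hat        # cell update   (XI)
    h_t = o_t * np.tanh(C_t)                # output        (XII)
    return h_t, C_t
```

In the classifier, M_t would be the fused bimodal feature of formula (VII) and the final hidden state would feed the Softmax grade output.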
Example 2
A system loaded with an intelligent method for bi-modal fusion based depression state identification, comprising:
the electroencephalogram and electrocardio acquisition module comprises: the method is used for collecting according to the step 1), acquiring original electroencephalogram signals and electrocardiosignals of patients and normal persons which are clinically diagnosed to be in a depression state based on a depression state stimulation experiment designed in advance, using the acquired original electroencephalogram signals and electrocardiosignals as a training sample set, and manually marking a test sample; acquiring original electroencephalogram signals and electrocardiosignals of a person to be tested as a test sample set according to the same task configuration;
the physiological signal processing module: the method is used for preprocessing the electroencephalogram signal and the electrocardiosignal according to the step 2), aiming at the characteristics of the two acquired physiological signals in the aspects of channel, amplitude and frequency, adopting a method which is configured in the system and is used for preprocessing the optimal data aiming at the physiological signals according to the types of the acquired physiological signals, and converting the original physiological signals into an input data type which is more suitable for a feature extraction module;
the physiological signal feature extraction and fusion module: the method is used for respectively carrying out feature extraction and fusion on the electroencephalogram signals and the electrocardiosignals according to the step 3), selecting a proper feature extraction network according to the type of the processed data to extract EEG and ECG high-dimensional features, and fusing the data of the two extracted physiological signals to obtain multi-modal fusion features;
the neural network identification module: training a pre-constructed recognition model and an LSTM classifier by using the collected training sample set, and finally recognizing and classifying the depression grade of the tested person by using the trained recognition model.
In practical operation, the steps of the specific application system include:
(1-1) constructing a depression state induction experiment, setting two types of depression state identification testing tasks, starting a bimodal physiological recorder to collect electroencephalogram and electrocardiosignal of a user in the whole process of completing the testing task, and constructing a training and testing sample set;
two depression state inducing scenes are designed. The first is the Rorschach inkblot test, widely used in clinical psychology: following the system voice prompts, the subject describes ink figures that appear on the screen in a specified order. The second is a virtual interview: a virtual character appears on the screen in front of the subject and conducts an intelligent interview based on the result of the Rorschach inkblot test. The interview content is drawn from the international standard Diagnostic and Statistical Manual of Mental Disorders-V (DSM-V) and the standard depression examination scale, the Hamilton Depression Scale (HAMD). While completing the test task, the subject only needs to answer according to the system's voice prompts.
(1-2) constructing a training data set: patients in a depression state, diagnosed by a clinical psychologist as meeting the depression diagnostic criteria of the Diagnostic and Statistical Manual of Mental Disorders-V, together with a normal control group, carry out the test task according to the standard flow; the physiological recorder is started to record the subjects' electroencephalogram and electrocardiosignals, and after the test the subjects complete the Patient Health Questionnaire-9 (PHQ-9) and the Beck Depression Inventory (BDI) provided in the system.
(1-3) constructing a test sample set: and (3) completing the test task of the depression state inducing scene by the testee according to the same standard as the standard in the step (1-2), starting the electrocardio-electroencephalograph recorder to complete data acquisition in the process of completing the test task without completing the PHQ-9 and BDI scale, and forming a test sample set.
A depression state scene inducing module is constructed in the system, with test tasks for the inkblot test part and the virtual interview part. The electroencephalograph recorder uses a 16-channel dry-electrode electroencephalogram cap with an acquisition frequency of 128 Hz; the 16 channels comprise Fp1, Fp2, F7, F3, F4, F8, T7, C3, C4, T8, P7, P5, P4, P8, O1 and O2, with the reference electrode Cz. The electrode distribution follows brain-science research on the brain regions related to depression states. Electrocardio acquisition uses 2 channels with an acquisition frequency of 256 Hz, placed on the left and right arms respectively, with the reference electrode on the left foot.
The data acquisition module, using the intelligent voice prompts, starts the bimodal physiological signal recorder and records the subject's electroencephalogram and electrocardio data while the test task is completed, forming the test sample set. The electrocardio and electroencephalogram signals of patients clinically diagnosed to be in a depression state, collected under the same environment, serve as the training data set of the depression state classifier.

Claims (9)

1. An intelligent method for depression state identification based on bimodal fusion, comprising:
1) Acquiring electroencephalogram signals and electrocardiosignals of a target person in a depression state inducing scene;
2) Preprocessing the acquired electroencephalogram signals and the acquired electrocardiosignals to obtain electroencephalogram preprocessing data and electrocardio preprocessing data;
3) Establishing an identification model: performing feature extraction and fusion on the electroencephalogram preprocessed data and the electrocardio preprocessed data by utilizing a multi-modal deep neural network to obtain fusion features; the multi-mode deep neural network comprises an electrocardio characteristic extraction network and an electroencephalogram characteristic extraction network;
4) Training a recognition model:
inputting the fusion characteristics into a constructed LSTM classifier to iterate and store an optimal model, identifying the depression state of the person to be tested by using the optimal model, and finally classifying and grading the depression state.
2. The intelligent method for identifying a depressive state based on bimodal fusion according to claim 1, wherein the preprocessing method of the step 2) includes:
the preprocessing of the electrocardiosignal comprises the following steps: down-sampling the electrocardiosignal, removing noise from the electrocardiosignal through wavelet transform, and segmenting the electrocardiosignal to remove the influence of the filter at the signal edges and to unify the signal lengths; further, the deviation and mean of the original electrocardiosignal are standardized, finally obtaining the sequence output of the electrocardiosignal, namely the ECG sequence output, as the input of the electrocardio feature extraction network;
the preprocessing of the brain electrical signals comprises the following steps: removing noise around the electroencephalogram signal and interference of power line frequency by adopting a band-pass filter; the EEG signal is divided to remove the influence of a filter at the edge of the signal and complete the unification of the signal lengths; standardizing the deviation and the mean value of the original electroencephalogram signals, carrying out two-dimensional matrix mapping on the standardized electroencephalogram data through a Nasion 10-20 system, and finally converting the matrix into an EEG two-dimensional picture as the input of an electroencephalogram feature extraction network.
3. An intelligent method for depression state identification based on bimodal fusion as claimed in claim 1, wherein the specific method for preprocessing the electrocardiosignal comprises:
down-sampling the cardiac electrical signal: the collected original surface electrocardiosignal is sampled with an interval of L, via

Y(e^{jω}) = (1/L) · Σ_{k=0}^{L−1} X(e^{j(ω−2πk)/L})   (I)

In formula (I), ω is the digital angular frequency of the electrocardiosignal in the frequency domain; the frequency spectrum of the original electrocardiosignal X in the frequency domain is expanded to L times the original, giving the down-sampled electrocardiosignal Y, with the lower sampling rate f_s/L, where f_s is the original sampling rate;
the wavelet transform comprises: firstly carrying out an 8-layer wavelet decomposition of the electrocardiosignal through the wavedec function in the python pywt library, and then carrying out noise reduction through the sqtwolog threshold:
λ = √(2·ln P)   (II)
In formula (II), λ is the selected denoising threshold and P is the length of the down-sampled electrocardiosignal; finally, an inverse wavelet transform is carried out through the waverec function in the python pywt library to obtain the denoised electrocardiosignal. The electrocardiosignal is segmented to complete ERP segmentation: only the signal in the specific time interval during which the subject receives a given stimulus and reacts is kept, and 1 s at the beginning and end of each segment is removed to eliminate the edge effect of the filter; Z-Score standardization is then applied to the electrocardiosignal:
z = (x̄ − μ)/σ,  with σ = √((1/n) · Σ_{i=1}^{n} (x_i − μ)²)   (III)

In formula (III), z is the normalized score of the measured physiological signal of the individual, used as the input of the feature extraction network; x̄ is the mean of each subject's segmented physiological signal; μ is the overall mean of the physiological signals of all subjects; n is the number of subjects; x_i, i = 1, 2, 3, …, n, is the physiological signal of each subject.
4. The intelligent method for depression state identification based on bimodal fusion as claimed in claim 1, wherein the specific method for preprocessing the electroencephalogram signal comprises:
the electroencephalogram noise is removed by subtracting the average of all electrodes from the selected channel through the common average reference:

χ̂_i(t) = χ_i(t) − (1/g) · Σ_{j=1}^{g} χ_j(t)   (IV)

In formula (IV), χ̂_i(t) is the potential of the i-th channel at time t after the electroencephalogram signal is filtered; χ_i(t) is the i-th electrode potential at time t; g is the number of selected channels; χ_j(t) is the electrode potential of the j-th selected channel at time t;
removing artifact signals by using a band-pass filter;
segmenting the electroencephalogram signal to complete ERP segmentation, removing 1 s at the beginning and end of each segment to eliminate the edge effect of the filter; finally, carrying out Z-Score standardization on the electroencephalogram signal;
for each segmented EEG signal sample containing 16 electrodes, completing the mapping of a matching sequence from the channel set to the index set using the international 10-20 system, thereby obtaining an 81×128-dimensional feature matrix for a single EEG segment of a given subject, and finally converting the feature matrix into an EEG two-dimensional picture as the input of the electroencephalogram feature extraction network.
5. The intelligent method for identifying depression state based on bimodal fusion as claimed in claim 1, wherein in step 3) the electrocardio feature extraction network comprises: a CNN-LSTM network used to extract the high-level features of the ECG sequence output, extracting temporal features from the ECG sequence through the convolution and pooling layers of the CNN and extracting the sequential dependencies among the temporal features through the LSTM layer, completing the extraction of the high-level features of the ECG sequence.
6. The intelligent method for identifying a depressive state based on bimodal fusion as claimed in claim 1, wherein in step 3), the ECG feature extraction network is a series network of 1D-CNN and LSTM, including:
taking a sequence with a fixed time window of 1 s as input, extracting the spatial features of the electrocardiosignal sequence through the CNN network, inputting them through a Flatten layer into a two-layer LSTM network to extract the sequential dependencies among temporal features, and outputting the extracted high-dimensional electrocardiosignal feature matrix F1 through a fully connected layer;
The CNN network comprises two layers of networks, wherein the first layer of network comprises a one-dimensional convolutional layer, a Batch Normalization layer, a ReLU layer and a maximum pooling layer, and the second layer of network comprises a one-dimensional convolutional layer, a ReLU layer and a maximum pooling layer.
7. The intelligent method for depression state identification based on bimodal fusion as claimed in claim 6, wherein in the step 3) the electroencephalogram feature extraction network comprises: a 2D-CNN network that extracts high-level electroencephalogram features by training the convolution kernels in its filters;
preferably, in the step 3), the structure of the electroencephalogram feature extraction network includes:
performing feature extraction on the preprocessed electroencephalogram signal with four groups of two-dimensional convolution networks, each group comprising a first two-dimensional convolution layer, ReLU, max pooling layer, second two-dimensional convolution layer, Batch Normalization, ReLU and max pooling layer; a feature map is generated from the input picture through the convolution kernels and nonlinearly activated using ReLU, the feature map generated by the first two-dimensional convolution layer is consolidated by the second two-dimensional convolution layer, and finally the high-dimensional electroencephalogram feature matrix F2 is output through the fully connected layer;
Preferably, the specific method for fusing characteristics in step 3) includes:
a modal cross matrix N1, N2 is computed separately for ECG and EEG:

N1 = F1·F2^T,  N2 = F2·F1^T   (V)
In equation (V), T stands for transpose, and the probability distribution values α, β of ECG and EEG bimodal cross matrices are calculated by Softmax layer:
α_{ij} = e^{N1_{ij}} / Σ_{i,j} e^{N1_{ij}},  β_{ij} = e^{N2_{ij}} / Σ_{i,j} e^{N2_{ij}}   (VI)
in formula (VI), i, j is the dimension of the two-modal feature matrix; finally, obtaining a weighted fusion matrix M:
M = concate[αF1, βF2]   (VII)
in formula (VII), concate is the concatenate function, used to splice the two modal features into a deep bimodal feature.
8. An intelligent method for bi-modal fusion based depression state recognition according to claim 1, wherein in the step 4) training the recognition model, the high-level features of different EEG two-dimensional pictures are extracted from the data presented to CNN by updating each layer of convolution kernel in the model training;
preferably, in step 4), the deep bimodal features are classified using an LSTM classifier: obtaining a depression status grade classification;
preferably, in step 4), the basic structure of the LSTM classifier includes: an Input gate, a forgetting gate, an Output gate and a memory unit;
the input gate: i_t = σ(W_i·[h_{t−1}, M_t] + b_i)   (VIII)

the forget gate: f_t = σ(W_f·[h_{t−1}, M_t] + b_f)   (IX)

the output gate: o_t = σ(W_o·[h_{t−1}, M_t] + b_o)   (X)

the memory cell state update: C̃_t = tanh(W_c·[h_{t−1}, M_t] + b_c),  C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t   (XI)

the output: h_t = o_t ⊙ tanh(C_t)   (XII)

in formulas (VIII)-(XII), M_t is the fused feature of the EEG and ECG input to the LSTM classifier at time t; σ is the Sigmoid activation function; W_i, b_i are the weight matrix and bias of the input gate; W_f, b_f are the weight matrix and bias of the forget gate; W_o, b_o are the weight matrix and bias of the output gate; h_{t−1} is the hidden state at time t−1; i_t, f_t and o_t are the input gate, forget gate and output gate at time t; C_t is the cell state at time t; W_c, b_c are the weight matrix and bias of the memory cell.
9. A system implementing the intelligent method for bimodal-fusion-based depression state recognition according to any one of claims 1-8, comprising:
the electroencephalogram and electrocardio acquisition module comprises: for collecting according to step 1);
the physiological signal processing module: used for preprocessing the electroencephalogram signals and the electrocardiosignals according to the step 2);
the physiological signal feature extraction and fusion module: the characteristic extraction and fusion are respectively carried out on the electroencephalogram signal and the electrocardiosignal according to the step 3);
the neural network identification module: training a pre-constructed recognition model and an LSTM classifier by using the collected training sample set, and finally recognizing and classifying the depression grade of the tested person by using the trained recognition model.
CN202211370867.7A 2022-11-03 2022-11-03 Depression state identification method and system based on bimodal fusion Pending CN115736920A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211370867.7A CN115736920A (en) 2022-11-03 2022-11-03 Depression state identification method and system based on bimodal fusion


Publications (1)

Publication Number Publication Date
CN115736920A true CN115736920A (en) 2023-03-07

Family

ID=85357693

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211370867.7A Pending CN115736920A (en) 2022-11-03 2022-11-03 Depression state identification method and system based on bimodal fusion

Country Status (1)

Country Link
CN (1) CN115736920A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115995116A (en) * 2023-03-23 2023-04-21 苏州复变医疗科技有限公司 Depression state evaluation method, device, terminal and medium based on computer vision


Similar Documents

Publication Publication Date Title
CN109157231B (en) Portable multichannel depression tendency evaluation system based on emotional stimulation task
Khare et al. PDCNNet: An automatic framework for the detection of Parkinson’s disease using EEG signals
CN110070105B (en) Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN111714118B (en) Brain cognition model fusion method based on ensemble learning
CN114224342B (en) Multichannel electroencephalogram signal emotion recognition method based on space-time fusion feature network
CN110013250B (en) Multi-mode characteristic information fusion prediction method for suicidal behavior of depression
CN111568446A (en) Portable electroencephalogram depression detection system combined with demographic attention mechanism
CN113662545B (en) Personality assessment method based on emotion electroencephalogram signals and multitask learning
CN111920420B (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN114533086A (en) Motor imagery electroencephalogram decoding method based on spatial domain characteristic time-frequency transformation
CN112185493A (en) Personality preference diagnosis device and project recommendation system based on same
CN114781442A (en) Fatigue classification method based on four-dimensional attention convolution cyclic neural network
CN115299947A (en) Psychological scale confidence evaluation method and system based on multi-modal physiological data
CN113509186B (en) ECG classification system and method based on deep convolutional neural network
Zhang et al. DWT-Net: Seizure detection system with structured EEG montage and multiple feature extractor in convolution neural network
CN115736920A (en) Depression state identification method and system based on bimodal fusion
Saini et al. Light-weight 1-D convolutional neural network architecture for mental task identification and classification based on single-channel EEG
CN110693510A (en) Attention deficit hyperactivity disorder auxiliary diagnosis device and using method thereof
CN108962379B (en) Mobile phone auxiliary detection system for cranial nerve system diseases
CN110569968B (en) Method and system for evaluating entrepreneurship failure resilience based on electrophysiological signals
Yun-Mei et al. The abnormal detection of electroencephalogram with three-dimensional deep convolutional neural networks
Mitra et al. Analyzing Clinical 12-Lead ECG Images Using Deep Learning Algorithms for Objective Detection of Cardiac Diseases
CN114699093A (en) Electroencephalogram seizure signal detection method based on convolutional neural network and long-term and short-term memory
CN114569116A (en) Three-channel image and transfer learning-based ballistocardiogram ventricular fibrillation auxiliary diagnosis system
CN111671421A (en) Electroencephalogram-based children demand sensing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination