CN113974557A - Deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis - Google Patents


Info

Publication number
CN113974557A
Authority
CN
China
Prior art keywords
data
anesthesia
neural network
electroencephalogram
patient
Prior art date
Legal status
Pending
Application number
CN202111264602.4A
Other languages
Chinese (zh)
Inventor
李洪 (Li Hong)
Current Assignee
Second Affiliated Hospital Army Medical University
Original Assignee
Second Affiliated Hospital Army Medical University
Priority date
Filing date
Publication date
Application filed by Second Affiliated Hospital, Army Medical University

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; identification of persons
    • A61B 5/4821: Determining level or depth of anaesthesia
    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316: Modalities, i.e. specific diagnostic methods
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/377: Electroencephalography [EEG] using evoked responses
    • A61B 5/378: Visual stimuli
    • A61B 5/38: Acoustic or auditory stimuli
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203: Signal processing for noise prevention, reduction or removal
    • A61B 5/7221: Determining signal validity, reliability or quality
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Abstract

The invention relates to the technical field of medical anesthesia, and in particular to a deep neural network anesthesia depth analysis method based on electroencephalogram (EEG) singular spectrum analysis, comprising the following steps. Anesthesia event marking step: mark anesthesia events according to brain waveform data, facial image data, and a brainstem auditory evoked method. Data processing step: convert the brain waveform data into a matrix by singular spectrum analysis, perform singular value decomposition, and reconstruct the original one-dimensional EEG information by matrix transformation. Data analysis step: apply filtering, Fourier transform, wavelet transform, power spectrum analysis, and sample entropy analysis to the original one-dimensional EEG information to obtain its characteristic information in the time and frequency domains. Anesthesia depth prediction step: construct a deep learning neural network model, input the marking results and the characteristic information into the model, and output the anesthesia depth prediction for the patient. The method can improve the accuracy of anesthesia depth prediction.

Description

Deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis
Technical Field
The invention relates to the technical field of medical anesthesia, in particular to a deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis.
Background
According to clinical statistics, only about 60% of patients receive fully adequate anesthesia; roughly 14% are anesthetized too deeply, 16% too shallowly, and 10% are under-anesthetized. When anesthesia is too deep, the excess drug slows respiration and deprives the brain of oxygen, which can lead to cardiac arrest and death. When anesthesia is too shallow, the patient may retain memory of the operation or even feel pain during it; in severe cases this causes mental or sleep disorders, and traumatic intraoperative memories can become a lifelong burden after surgery. Intraoperative recovery of awareness is known to occur even under general anesthesia. To reduce the incidence of over-shallow or over-deep anesthesia, the anesthesia state needs to be monitored in real time, which reduces the workload of medical staff and the suffering of patients while ensuring both safety and the anesthetic effect.
Traditionally, anesthesia depth has been judged by observing the patient's vital signs, such as changes in body temperature, pulse, respiration, blood pressure, and pupils, which mainly reflect autonomic nervous function during surgery. With deepening research on brain waves, anesthesia depth monitors based on EEG signals have been developed, such as the BIS index, the Narcotrend index, and auditory evoked potentials, but each approach has advantages and disadvantages. The former (vital-sign observation) is simple and easy to perform but cannot currently be quantified; the latter (EEG-based indices) can be measured quantitatively but remain affected by physiological factors such as age, race, sex, hypothermia, hypoglycemia, and cerebral hypoxia. The prediction accuracy of current anesthesia depth estimation methods therefore needs improvement.
Disclosure of Invention
The invention aims to provide a deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis that can improve the accuracy of anesthesia depth prediction.
To achieve this aim, the proposed method comprises the following steps:
Data acquisition step: acquire brain waveform data and facial image data of a patient during a clinical operation.
Anesthesia event marking step: mark each anesthesia event of the patient in the anesthetized state according to the brain waveform data, the facial image data, and a brainstem auditory evoked method, and obtain marking results.
Data processing step: convert the brain waveform data into a matrix by singular spectrum analysis, perform singular value decomposition, group the decomposed singular values, remove artifacts and noise, and reconstruct the original one-dimensional EEG information by matrix transformation.
Data analysis step: filter the original one-dimensional EEG information to obtain the EEG components within the normal activity frequency range of the human brain, then combine Fourier transform, wavelet transform, power spectrum analysis, and sample entropy analysis to obtain the characteristic information of the EEG in the time and frequency domains.
Anesthesia depth prediction step: construct a deep learning neural network model, input the marking results and the characteristic information into the model, and output the anesthesia depth prediction for the patient.
The principle and advantages are as follows:
1. The anesthesia event marking step uses the brainstem auditory evoked method together with brain waveform data and facial image data, which objectively and sensitively reflect central nervous system function. This makes it easy to determine the patient's anesthesia state at each moment and enables reverse (retrospective) marking of anesthesia events. Reverse marking facilitates analysis of the patient's current anesthesia state, so that the anesthesia depth can be analyzed from that state, providing data support for later anesthesia depth prediction.
2. In the data processing step, the brain waveform data are converted into a matrix by singular spectrum analysis, singular value decomposition is performed, and the original one-dimensional EEG information is reconstructed by matrix transformation. This reduces the high-dimensional brain waveform data to a low dimension, greatly reducing the amount of data to be processed and thus improving analysis efficiency. Filtering the reconstructed one-dimensional EEG information and combining Fourier transform, wavelet transform, power spectrum analysis, and sample entropy analysis thoroughly removes invalid data, yielding EEG information deeply related to anesthesia depth. Finally, by inputting the marking results and characteristic information into the constructed deep learning neural network model, the patient's anesthesia depth prediction can be obtained quickly and accurately, greatly reducing the physician's workload and the pressure of performing anesthesia.
Further, the singular spectrum analysis in the data processing step comprises the following substeps:
Trajectory matrix construction substep: convert the original EEG signal in the brain waveform data into a multi-dimensional trajectory matrix X.
Singular value decomposition substep: construct a covariance matrix to convert the asymmetric multi-dimensional trajectory matrix into a symmetric square matrix, then perform eigenvalue decomposition to obtain the eigenvalues in descending order and their corresponding eigenvectors.
For a symmetric square matrix the eigenvalues and singular values coincide, so the eigenvalues and eigenvectors of the trajectory matrix X can be obtained quickly through singular value decomposition, which facilitates dimensionality reduction, reduces the data processing load, and improves processing efficiency.
Further, the original EEG signal is a single-channel time series s = (s1, s2, …, sN)ᵀ, and the multi-dimensional trajectory matrix X is

    X = [ s1   s2   …   sK
          s2   s3   …   sK+1
          …
          sL   sL+1 …   sN ]

where N is the length of the original EEG signal, L is the embedding time window of the trajectory matrix with L < N, and K = N − L + 1.
Further, the data processing step includes the following substep:
Grouping and pseudo-noise removal substep: group the eigenvalues and eigenvectors obtained from the singular value decomposition according to the eigenvalue change rate formula; remove artifacts in the original EEG signal according to the artifact removal formula and the fixed threshold set in it; remove noise in the original EEG signal according to the magnitude of the singular values, obtaining a grouping result with characteristic significance.
Further, artifacts are removed from the original EEG signal as follows (the formula itself is rendered as an image in the original publication and is not reproduced here): when the maximum amplitude in the original EEG signal exceeds a fixed threshold V0, both RC1 and RC2 are classified as artifacts; otherwise only RC1 is classified as an artifact. RC1 and RC2 are each composed of the eigenvectors corresponding to the grouped eigenvalues.
Further, the data processing step includes the following substep:
Matrix reconstruction substep: select an effective group of eigenvectors according to the eigenvalue change rate formula, and reconstruct the trajectory matrix according to the reconstruction formula.
Further, the eigenvalue change rate formula (rendered as an image in the original; the form below is reconstructed from the surrounding description) is

    Δλ(i, j) = (λi − λj) / λi

where λi is an eigenvalue and i < j < L.
Further, the reconstruction formula recovers a one-dimensional series from each component trajectory matrix Xi, which is recombined from the selected eigenvectors. The original formulas are rendered as images; the standard SSA diagonal-averaging form reconstructed from context is

    s̃(n) = (1 / w(n)) · Σ Xi[p, q]  over all (p, q) with p + q − 1 = n,

where w(n) is the number of index pairs (p, q) satisfying p + q − 1 = n, and 1 ≤ n ≤ N.
Further, the anesthesia event marking step specifically comprises:
Reverse marking substep: extract feature data from the brain wave data and reverse-mark the anesthesia events according to the feature data to obtain a first marking result; perform image recognition on the collected facial image data to obtain a second marking result for each anesthesia event.
Neural network model marking substep: store the EEG data, the feature data, and the first marking result in association as first accumulated data, and the facial image data with the second marking result as second accumulated data; construct, train, and optimize a neural network model from the first and second accumulated data; input the facial image data of a patient under test into the model to automatically judge and mark anesthesia events, obtaining a third marking result.
Extracting feature data from the brain wave data objectively and sensitively reflects central nervous system function, making it easy to determine the patient's anesthesia state at each moment and thus to reverse-mark anesthesia events. While a patient is anesthetized, the physician performs operations near the patient's face; the camera of the intelligent terminal captures facial image data during the operation, recording both the physician's actions and their reflection on the patient's face, which can then be identified from the images. This makes it convenient to analyze the patient's anesthesia state and complete the marking of anesthesia events. Because the marking does not require real-time attention from medical staff, it is less distracting and more convenient. Finally, a neural network model is constructed, trained, and optimized from the first and second accumulated data, and the facial image data of a patient under test are input into the model for automatic judgment and marking of anesthesia events, yielding a third marking result. Facial data are very easy to acquire, requiring only the installation of an inexpensive camera, whereas EEG equipment is costly and cumbersome to use: head-worn sensors must be fitted and the equipment adjusted, which is very inconvenient.
In this scheme, the neural network model associates the facial data with the brain wave data, that is, it finds the correlation between them: the facial data enable rapid association, the corresponding brain wave data are analyzed, and the analyzed brain wave data further confirm the anesthesia depth. This keeps cost low and efficiency high while guaranteeing measurement precision, letting the strengths of facial data and brain wave data compensate for each other's weaknesses. The accuracy of anesthesia state analysis is thereby further improved, and since medical staff need not observe and record in real time, labor cost is saved and efficiency is higher. The patient's anesthesia state can be recorded and marked automatically, avoiding errors such as misread times or miswritten entries by a human recorder, which would otherwise compromise subsequent clinical anesthesia research and application.
Further, the data acquisition step also acquires the patient's static data, vital sign data, and operation data before the operation; the static data include height, weight, sex, age, and medical history.
The method further comprises:
Data storage and management step: screen and classify patients according to their static data, exclude patients with brain or nerve abnormalities, and delimit the range of clinical cases.
Screening and classification eliminate accidental errors caused by special cases, thereby ensuring the generality of the data.
Drawings
FIG. 1 is a flow chart of a deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis in an embodiment of the present invention;
FIG. 2 is a block flow diagram of the data processing step of FIG. 1.
Detailed Description
The following is further detailed by way of specific embodiments:
example one
A deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis, using equipment that includes an intelligent terminal, a Bluetooth headset, and an auditory evoked potential monitor. The intelligent terminal runs an APP that maintains a communication connection with the Bluetooth headset and sends short-sound stimulation information. In this embodiment the intelligent terminal is a tablet computer with a camera, fixed to the operating table by a bracket. The Bluetooth headset is paired with the tablet via Bluetooth, establishing a wireless communication connection. As shown in figures 1 and 2, the method comprises the following steps:
Data acquisition step: acquire brain waveform data and facial image data of the patient during the clinical operation; acquire the patient's static data, vital sign data, and operation data before the operation. The static data include height, weight, sex, age, and medical history.
Patient screening and management step: screen and classify patients according to their static data, exclude patients with brain or nerve abnormalities, and delimit the range of clinical cases.
Anesthesia event marking step: mark each anesthesia event of the patient in the anesthetized state according to the brain waveform data, the facial image data, and a brainstem auditory evoked method, and obtain marking results.
the anesthesia event marking step specifically comprises the following steps:
S001. Before the operation begins, fit the patient with the Bluetooth headset and the auditory evoked potential monitor.
S002. Initialize the equipment and check that the Bluetooth headset, the auditory evoked potential monitor, and the camera are connected normally.
S101. Before the operation begins, enter the patient's identity information and the configuration of the short-sound stimulation information into the intelligent terminal; the configuration includes the stimulation content, its volume, and the playback interval. The stimulation content is the patient's name, and the patient is asked to blink on hearing the call.
S102. According to the patient's identity information, retrieve the patient's hearing data from the hospital's medical record database and optimize the volume setting in the configuration accordingly, adjusting the volume to a level suitable for subsequent analysis of the anesthesia state.
S1. After the operation begins, aim the camera of the intelligent terminal at the patient's face and acquire the patient's facial image data (i.e., facial video) during the operation; after the operation is completed, save the facial image data.
S2. When the operation begins, short-sound stimulation information is sent through the intelligent terminal to the Bluetooth headset worn by the patient, and the auditory evoked potential monitor worn by the patient records the patient's brain wave data. When the patient hears the stimulation content under anesthesia, a response is evoked, so the EEG data contain waveform information (a characteristic wave) with a corresponding cycle period, offset by a phase difference from the playback interval of the stimulation content. The characteristic wave differs between the anesthetized and non-anesthetized states, specifically in its frequency, amplitude, and so on.
S3, extracting feature data according to the brain wave data, and carrying out reverse annotation on the anesthesia event according to the feature data to obtain a first annotation result; carrying out image recognition on the collected facial image data to obtain a second labeling result of each anesthesia event;
the step S3 further includes the steps of:
s301, extracting feature data according to brain wave data, wherein the feature data comprises amplitude, phase and frequency of the brain wave;
s302, reversely labeling the anesthesia events by combining the characteristic data with the analysis of the characteristics of the frequency domain and the power spectrum, and obtaining a first labeling result, wherein the first labeling result comprises time nodes of each anesthesia event; when the patient is in a non-anesthesia state and hears the stimulation information content, the response is stronger, the power is higher, the amplitude is larger, and the frequency can be basically overlapped with the playing frequency of the stimulation information content. In the anesthesia state, the central nerve of the patient is shielded, when the patient hears the stimulation information content, the response is weak, the power is small, the amplitude is small, the frequency is reduced, and the phase of the characteristic wave is changed to a certain extent. And the strength of the shielding changes, which causes the power, amplitude, frequency and phase to change correspondingly. A reverse annotation of the anesthetic event can be made.
S303. Perform image recognition on the collected facial image data and obtain a second marking result for each anesthesia event. The second marking result contains the image data corresponding to each anesthesia event, namely the image frames from the event's start time to its end time; each frame carries corresponding time information. When the physician performs a surgical operation related to an anesthesia event, the patient's feedback appears on the face, so image recognition of the facial image data can indirectly mark the patient's anesthesia events.
S304. According to the time node of each anesthesia event in the first marking result, screen the image data of each anesthesia event in the second marking result for key image frames whose times match. If matching key frames exist, mark the first and second marking results as normal data; if no matching key frames exist, or the match is incomplete, mark them as abnormal data.
S4. Store the EEG data, the feature data, and the first marking result in association as first accumulated data, and the facial image data with the second marking result as second accumulated data; construct, train, and optimize a neural network model from the first and second accumulated data; input the facial image data of the patient under test into the model for automatic judgment and marking of anesthesia events, obtaining a third marking result.
S401. When the first and second marking results are marked as abnormal data, send the first and second accumulated data to a physician, obtain the physician's re-marking of the anesthesia events in the first and second marking results, and replace the anesthesia events marked in the first and/or second marking result according to that marking data.
S402. Randomly sample first and second accumulated data whose marking results are marked as normal, send them to a physician, and obtain the physician's re-marking of the anesthesia events in the first and second marking results.
and S403, accumulating the set number of the labeled data, the first accumulated data and the second accumulated data, and constructing, training and optimizing a neural network model according to the labeled data, the first accumulated data and the second accumulated data. In this embodiment, the neural network model is modeled by using a deep neural network algorithm, and the structure of multiple hidden layers and multiple neural network units is adopted, and the problems of over-fitting and under-fitting are considered. The deep neural network algorithm belongs to a mature technology, and is not described in detail in this embodiment.
S5. Output the second marking result and record it in association with the patient.
Data processing step: convert the brain waveform data into a matrix by singular spectrum analysis, perform singular value decomposition, group the decomposed singular values, remove artifacts and noise, and reconstruct the original one-dimensional EEG information by matrix transformation. Compared with traditional filtering, this provides time-domain EEG information with more representational significance than the raw EEG waveform, allowing the patient's anesthesia state to be analyzed more accurately.
The singular spectrum analysis in the data processing step comprises the following substeps:
Trajectory matrix construction substep: convert the original EEG signal in the brain waveform data into a multi-dimensional trajectory matrix X.
the original EEG signal is a single channel time series signal s ═(s)1,s2,……,sN)TThe formula of the multi-dimensional trajectory matrix X is as follows:
Figure BDA0003326446680000081
n is the length of the original EEG signal and L is the embedded time window of the trajectory matrix, where L < N, K is N-L + 1.
Singular value decomposition substep: construct the covariance matrix C = XXᵀ, converting the asymmetric multi-dimensional trajectory matrix into a symmetric square matrix, and perform eigenvalue decomposition to obtain the eigenvalues in descending order, λ1 ≥ λ2 ≥ … ≥ λL ≥ 0, and the corresponding eigenvectors Vi. The computation of singular value decomposition (SVD) is fully disclosed in the prior art and is not repeated in this embodiment.
Grouping and pseudo-noise removal substep: analyze and extract the waveform components from the eigenvalues and eigenvectors obtained by singular value decomposition. The waveform is a composite wave whose main components fall into three parts: artifact, oscillation, and noise. The oscillation component must be preserved, while the artifact and noise must be removed.
Remove artifacts in the original EEG signal according to the artifact removal formula and the fixed threshold set in it; remove noise in the original waveform according to the magnitude of the singular values; and group according to the eigenvalue change rate formula to obtain eigenvalue groups with characteristic significance. If the change rate of the eigenvalues corresponding to consecutive waveform segments within the same time window is less than 5%, the segments change little and can be regarded as the same waveform; if the change rate is 5% or more, the two segments are different waveforms, and a new group is started.
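The 5% grouping rule can be sketched as follows; the exact change-rate formula is an assumption reconstructed from the description above:

```python
import numpy as np

def group_eigenvalues(lams, tol=0.05):
    """Group consecutive descending eigenvalues whenever the relative
    change (lam_i - lam_j) / lam_i between neighbours stays below tol
    (the 5% rule); a change of tol or more starts a new group."""
    groups = [[0]]
    for j in range(1, len(lams)):
        i = groups[-1][-1]
        rate = (lams[i] - lams[j]) / lams[i]
        if rate < tol:
            groups[-1].append(j)     # small change: same waveform group
        else:
            groups.append([j])       # change >= 5%: a different waveform
    return groups

lams = np.array([10.0, 9.8, 9.7, 4.0, 3.95, 0.5])
groups = group_eigenvalues(lams)
```

With the sample eigenvalues above, the first three values group together, the next two form a second group, and the last value stands alone.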
The artifact removal rule for the original EEG signal is as follows (the formula itself is rendered as an image in the original publication): when the maximum amplitude in the original EEG signal exceeds a fixed threshold V0, RC1 and RC2 are both classified as artifacts; otherwise only RC1 is. Each RC is a reconstructed component whose grouping criterion is similarity of eigenvalues, i.e., eigenvalues with little change between neighbours are assigned to the same RC, and each RC is composed of the eigenvectors corresponding to those eigenvalues. A typical EEG signal is smaller than 100 μV; in this embodiment the fixed threshold is V0 = 200 μV.
Matrix reconstruction substep: an effective group of eigenvectors is selected according to the eigenvalue change-rate formula, and the trajectory matrix is reconstructed according to a reconstruction formula.
The characteristic value change rate formula is as follows:
η = |λi − λj| / λi

where λi and λj are eigenvalues and i < j < L; components whose change rate η falls below the 5% threshold are grouped together.
The reconstruction formula is as follows:

X̂ = Σ Xi (summed over the selected group of eigenvectors), with Xi = √λi · Ui · Vi^T

where Xi is the trajectory matrix recombined from the specific eigenvector, Ui and Vi being the left and right singular vectors corresponding to the eigenvalue λi; the reconstructed one-dimensional series is then recovered from X̂ by diagonal averaging.
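The singular-spectrum pipeline described above (embedding into a trajectory matrix, singular value decomposition, keeping a grouped set of components, reconstruction by diagonal averaging) can be sketched as follows; the function name, window length and kept-component group are illustrative assumptions:

```python
import numpy as np

def ssa_reconstruct(s, L, keep):
    """Minimal singular-spectrum-analysis sketch: embed a 1-D series into a
    trajectory matrix, take the SVD, keep a chosen group of components
    (assumed to come from the eigenvalue change-rate grouping), and
    reconstruct by diagonal averaging."""
    N = len(s)
    K = N - L + 1
    X = np.column_stack([s[i:i + L] for i in range(K)])  # L x K trajectory matrix
    U, sing, Vt = np.linalg.svd(X, full_matrices=False)
    # Elementary matrices X_i = sigma_i * u_i v_i^T for the kept group
    Xhat = sum(sing[i] * np.outer(U[:, i], Vt[i]) for i in keep)
    # Anti-diagonal averaging: entry (r, c) of X maps back to sample r + c
    out = np.zeros(N)
    cnt = np.zeros(N)
    for r in range(L):
        for c in range(K):
            out[r + c] += Xhat[r, c]
            cnt[r + c] += 1
    return out / cnt

t = np.linspace(0, 1, 200)
s = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).normal(size=200)
# A pure sine occupies a pair of leading components, so keeping [0, 1] denoises it
denoised = ssa_reconstruct(s, L=40, keep=[0, 1])
```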
Data analysis step: the original one-dimensional electroencephalogram information is filtered to obtain EEG information within the normal activity frequency range of the human brain, comprising several sinusoidal waveform components such as the α, β, θ and δ bands. A Fourier transform converts the time-domain EEG information into the frequency domain so that frequency-domain information can be extracted and analyzed. Because the EEG waveform is non-stationary, a wavelet transform is further adopted to obtain frequency-domain information while retaining the time-domain information. Combined with power spectrum analysis and sample entropy, feature information of the EEG in both the time and frequency domains is obtained, for example the ratio of high to low frequencies in the EEG and the energy values of the α, β, θ and δ band components.
The Fourier transform converts a signal between the time domain and the frequency domain, yielding the frequency-domain representation of the time-domain information; it applies only to stationary waveforms and uses a long window time. The discrete Fourier transform is adopted for the frequency-domain conversion; for a sequence {x[n]}, 0 ≤ n ≤ N−1, the discrete Fourier transform is as follows:

X[k] = Σ(n=0 to N−1) x[n] · e^(−j2πkn/N), k = 0, 1, …, N−1
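The discrete Fourier transform above can be written directly and checked against NumPy's FFT (which computes the same sum):

```python
import numpy as np

# Direct evaluation of X[k] = sum_{n=0}^{N-1} x[n] e^{-j 2 pi k n / N}
x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
n = np.arange(N)
X_direct = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# NumPy's FFT implements the same transform efficiently
X_fft = np.fft.fft(x)
print(np.allclose(X_direct, X_fft))  # True
```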
Wavelet transform is an analysis method that identifies features of the original signal in both the time and frequency domains using the oscillating waveform of a mother wavelet of finite length or fast decay. It presents the transformed frequency-domain information while retaining the original time-domain information. The wavelet transform applies to non-stationary waveforms, uses a short window time, and provides a variable-duration analysis method for non-stationary signals. The discrete wavelet transform is adopted to extract time-frequency features from the original signal, as follows:

W(j, k) = Σn x[n] · 2^(−j/2) · ψ(2^(−j)·n − k)

where ψ is the mother wavelet, j the scale (frequency) index and k the shift (time) index.
The wavelet transform can be regarded as computing, at multiple scales (frequencies), the convolution of the mother wavelet with the original signal in the time domain, yielding similarity coefficients between the wavelet and the signal at each time point and scale; the frequency composition of the original signal at each time point can thus be analyzed.
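A minimal one-level discrete wavelet transform, using the Haar wavelet as an assumed stand-in for the unspecified mother wavelet; the approximation coefficients carry the low-frequency content and the detail coefficients the high-frequency content, and the transform is perfectly invertible:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: pairwise sums (low-pass) and differences
    (high-pass), each downsampled by two."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-frequency / approximation
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-frequency / detail
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT: interleave the reconstructed pairs."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(sig)
print(np.allclose(haar_idwt(a, d), sig))  # True: perfect reconstruction
```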
The power spectrum analysis is to calculate power spectrums corresponding to alpha, beta, theta and delta frequency bands respectively according to signal frequency domain classification, divide original one-dimensional electroencephalogram information with the length of N into L segments, wherein the data length of each segment is M, namely N is LM, a window function w is used on each segment of signal to calculate the power spectrum of each segment of signal, and finally, the power spectrum of each segment of signal is averaged to obtain the power spectrum of the whole segment of signal, wherein the power spectrum calculation formula is as follows:
Figure BDA0003326446680000101
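The segment-averaging procedure above can be sketched as follows; the sampling rate, segment count, Hann window and band edges are illustrative assumptions (the band edges follow the usual EEG conventions, not values stated in the text):

```python
import numpy as np

fs = 128.0                     # assumed sampling rate, Hz
M, n_seg = 256, 4              # segment length M, L = 4 segments, N = L*M
t = np.arange(n_seg * M) / fs
sig = np.sin(2 * np.pi * 10 * t)   # 10 Hz test tone (alpha band)
w = np.hanning(M)                  # window function applied to each segment

# Average the windowed per-segment periodograms (Welch-style)
psd = np.zeros(M // 2 + 1)
for i in range(n_seg):
    seg = sig[i * M:(i + 1) * M] * w
    psd += np.abs(np.fft.rfft(seg)) ** 2 / (np.sum(w ** 2) * fs)
psd /= n_seg
freqs = np.fft.rfftfreq(M, 1 / fs)

# Sum power inside the classic EEG bands
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {b: psd[(freqs >= lo) & (freqs < hi)].sum() for b, (lo, hi) in bands.items()}
print(max(power, key=power.get))  # alpha
```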
Sample entropy is an algorithm for analyzing the complexity of a time series: the larger the sample entropy, the greater the complexity, and vice versa. The sample entropy is calculated as follows.

Assume a time series of N data points, X = {x1, x2, x3, …, xN}.
(1) Divide X into subsets of length m: Xm(i) = {xi, xi+1, xi+2, …, xi+m−1}.
(2) Compute the Chebyshev distance between each pair of subsets, d[Xm(i), Xm(j)] (i ≠ j); the Chebyshev distance is the maximum absolute difference between values at corresponding positions of the two sequences:

d[Xm(i), Xm(j)] = max(k=0,…,m−1) |x(i+k) − x(j+k)|
(3) Compute the sample entropy:

SampEn(m, r, N) = −ln(A/B)

wherein A is the total number, over all i, of pairs satisfying d[Xm+1(i), Xm+1(j)] < r, and B is the total number, over all i, of pairs satisfying d[Xm(i), Xm(j)] < r. Fourier transform, wavelet transform, power spectrum analysis and sample entropy are relatively mature data processing techniques; this embodiment involves no specific improvement to them, so redundant details are omitted.
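The three steps above translate directly into code; the parameters m = 2 and r = 0.2 are common defaults assumed for illustration:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy as described above: B counts template pairs of length m
    whose Chebyshev distance is below r, A counts the same for length m + 1,
    and SampEn = -ln(A / B)."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def count(mm):
        # All length-mm templates X_mm(i)
        templates = np.array([x[i:i + mm] for i in range(N - mm + 1)])
        c = 0
        for i in range(len(templates)):
            for j in range(len(templates)):
                # Chebyshev distance = max absolute elementwise difference
                if i != j and np.max(np.abs(templates[i] - templates[j])) < r:
                    c += 1
        return c

    A, B = count(m + 1), count(m)
    return -np.log(A / B)

rng = np.random.default_rng(0)
regular = np.tile([0.0, 1.0], 100)   # perfectly repetitive, low complexity
noisy = rng.normal(size=200)         # irregular, high complexity
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```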
Anesthesia depth prediction step: a deep learning neural network model is constructed, the labeling results and the feature information are input into the model, and the anesthesia depth prediction for the patient is output. This embodiment adopts a Deep Neural Network (DNN) with multiple hidden layers (specifically five), whose numbers of units are 2048, 1024, 512, 256 and 128. Model hyperparameters such as the dropout and learning rate are set: the number of epochs is 100000, the initial learning rate is 0.001, and the dropout proportion is 0.3 (that is, 30% of neurons are randomly masked during training, acting as a model penalty term to prevent overfitting). The loss function is a log-likelihood loss function.
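The layer widths and dropout behaviour of this network can be sketched in plain NumPy; the input width, class count, initialization and softmax head are illustrative assumptions (a real implementation would use a deep learning framework and train with the log-likelihood, i.e. cross-entropy, loss):

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed input width 64 and 5 output classes; hidden widths from the text
sizes = [64, 2048, 1024, 512, 256, 128, 5]
weights = [rng.normal(0, np.sqrt(2 / m), (m, n))   # He-style init for ReLU
           for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, train=True, p_drop=0.3):
    """Forward pass: five ReLU hidden layers with inverted dropout
    (30% of units masked during training), softmax output."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)                 # ReLU hidden layer
        if train:
            mask = rng.random(h.shape) >= p_drop   # randomly mask 30% of units
            h = h * mask / (1.0 - p_drop)          # rescale to keep expectation
    logits = h @ weights[-1]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)       # class probabilities

probs = forward(rng.normal(size=(8, 64)), train=False)
print(probs.shape)  # (8, 5)
```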
Example two
At present, before a patient undergoes surgery, a family member must sign the surgical notice or informed-consent form. First, some patients come to the hospital and undergo surgery on their own, without their family's knowledge; in such cases no one looks after the patient, and if a problem arises during or after the operation, no family member is present to provide care. Second, family members have the right to be informed: medical staff are obligated to communicate the patient's condition to the family in order to obtain their support and trust. Third, once a patient under general anesthesia loses consciousness, unforeseeable risks may arise; the medical staff must then communicate with the patient's family, and the next diagnosis and treatment activities proceed only after a family member's signed confirmation.
The difference between the second embodiment and the first embodiment is that the method further comprises the following steps:
First family data are retrieved from the hospital's medical record database according to the patient's identity information; the first family data include the family member's name, relationship to the patient, and contact details.
Video surveillance data are retrieved from the hospital according to the patient's facial data, and face recognition is performed on the surveillance data to obtain data on the persons who accompanied the patient to the hospital; the person data include name and contact details.
Second family data of the patient are acquired from the surgical notice or informed-consent form; the second family data include the family member's name, relationship to the patient, and contact details.
Different weights are set for the first family data, the person data and the second family data, and relationship-closeness reference scores are set, from which a total closeness score is computed for each person. The persons are then ranked in descending order of total closeness score, and the top-ranked person is defined as the closest person. When the first family data or the person data are empty, that source enters the calculation with a value of zero.
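A hypothetical sketch of this weighted ranking; the weight values, score table and names are invented for illustration, and an absent data source simply contributes zero:

```python
# Assumed per-source weights for the three data sources described above
WEIGHTS = {"first_family": 0.5, "personnel": 0.2, "second_family": 0.3}

def closeness(person, reference):
    """Weighted total of per-source reference scores; a source with no
    record for this person (e.g. empty first-family data) counts as 0."""
    return sum(WEIGHTS[src] * reference.get((person, src), 0.0)
               for src in WEIGHTS)

# Illustrative relationship-closeness reference scores per (person, source)
reference = {
    ("spouse", "first_family"): 100, ("spouse", "second_family"): 100,
    ("friend", "personnel"): 80,
    ("sibling", "first_family"): 60, ("sibling", "personnel"): 80,
}
people = ["spouse", "friend", "sibling"]
ranked = sorted(people, key=lambda p: closeness(p, reference), reverse=True)
print(ranked[0])  # spouse: highest total closeness score
```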
Voice communication is established with the closest person via his or her contact details, and set voice information is collected through the call. The content of the voice information is the patient's name together with an instruction, for example asking the patient to blink upon hearing the call; the voice information is then edited, and a playback interval and cycle are set to obtain the short sound stimulation information.
In this embodiment, the closest person is identified by analyzing the patient's family data, related person data and the like, and that person's voice is collected and made into the short sound stimulation of the auditory-evoked-potential technique. The anesthetized patient perceives the voice as more familiar, responds more readily, and produces a larger response-signal amplitude in the brain waves, which facilitates the analysis and prediction of the anesthesia depth.
The foregoing is merely an embodiment of the present invention. Common general knowledge, such as well-known specific structures and characteristics, is not described herein in detail; a person of ordinary skill in the art, aware of the common general knowledge in the field before the application date or priority date, is capable of applying routine experimentation and, in light of the teaching provided herein, of completing and implementing the present invention, and certain typical known structures or methods should not become an obstacle to its implementation. It should be noted that those skilled in the art may make several changes and modifications without departing from the structure of the present invention; these shall also fall within the protection scope of the present invention and shall not affect the effect of its implementation or the utility of the patent. The scope of protection of this application shall be determined by the contents of the claims, and the specific embodiments in the specification may be used to interpret the contents of the claims.

Claims (10)

1. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis is characterized by comprising the following steps of:
a data acquisition step: acquiring brain waveform data and facial image data of a patient in a clinical operation;
an anesthesia event marking step: marking each anesthesia event of a patient in an anesthesia state according to the brain waveform data, the facial image data and a brainstem auditory evoked method, and obtaining a labeling result;
a data processing step: converting the brain waveform data into a matrix by a singular spectrum analysis method, performing singular value decomposition, grouping the decomposed singular values, removing pseudo noise, and reconstructing original one-dimensional electroencephalogram information by matrix transformation;
and (3) data analysis step: filtering the original one-dimensional electroencephalogram information to obtain electroencephalogram information in a normal activity frequency range of the human brain, and then combining Fourier transform, wavelet transform, power spectrum analysis and sample entropy analysis to obtain characteristic information of the electroencephalogram information on a time domain and a frequency domain;
and (3) anesthesia depth prediction step: and constructing a deep learning neural network model, inputting the labeling result and the characteristic information into the neural network model, and outputting the anesthesia depth prediction result of the patient.
2. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 1, characterized in that: the singular spectrum analysis method in the data processing step comprises the following substeps:
and a track matrix construction sub-step: converting an original EEG signal in brain waveform data into a multi-dimensional track matrix X;
singular value decomposition substep: constructing a covariance matrix, converting the asymmetric multidimensional trajectory matrix into a symmetric square matrix, and performing eigenvalue decomposition to obtain eigenvalues λi arranged in descending order and the corresponding eigenvectors Vi.
3. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 2, characterized in that: the original EEG signal is a single-channel time-series signal s = (s1, s2, …, sN)^T, and the multidimensional trajectory matrix X is as follows:

X =
[ s1   s2    s3   …  sK
  s2   s3    s4   …  sK+1
  s3   s4    s5   …  sK+2
  …
  sL   sL+1  sL+2 …  sN ]

N is the length of the original EEG signal and L is the embedding time window of the trajectory matrix, where L < N and K = N − L + 1.
4. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 3, characterized in that: the data processing step specifically comprises the following substeps:
grouping pseudo noise reduction substep: grouping the eigenvalues and eigenvectors obtained by decomposing the singular values according to an eigenvalue change rate formula; removing artifacts in the original EEG signal according to an artifact removal formula and a fixed threshold set in the artifact removal formula; noise in the original EEG signal is removed according to the magnitude of the singular values and a clustering result with a characteristic meaning is obtained.
5. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 4, characterized in that:
the artifact removal formula in the original EEG signal is as follows:
artifact components = {RC1, RC2} if max|s(t)| > V0; {RC1} otherwise

when the maximum amplitude of the original EEG signal exceeds the fixed threshold V0, RC1 and RC2 are classified as artifacts, otherwise RC1 is; RC1 and RC2 are each composed of the eigenvectors corresponding to the grouped eigenvalues.
6. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 5, characterized in that: the data processing step specifically comprises the following substeps:
matrix reconstruction substep: and selecting an effective characteristic vector group according to a characteristic value change rate formula, and reconstructing a track matrix according to a reconstruction formula.
7. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 6, characterized in that: the characteristic value change rate formula is as follows:
η = |λi − λj| / λi

where λi and λj are eigenvalues and i < j < L.
8. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 6, characterized in that: the reconstruction formula is as follows:
X̂ = Σ Xi (summed over the selected group of eigenvectors), wherein

Xi = √λi · Ui · Vi^T

Xi is the trajectory matrix recombined from the specific eigenvector, Ui and Vi being the left and right singular vectors corresponding to the eigenvalue λi.
9. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 1, characterized in that: the anesthesia event marking step specifically comprises the following steps:
a reverse labeling substep: extracting characteristic data according to the brain wave data, and carrying out reverse annotation on the anesthesia event according to the characteristic data to obtain a first annotation result; carrying out image recognition on the collected facial image data to obtain a second labeling result of each anesthesia event;
a neural network model labeling substep: storing the electroencephalogram data, the feature data and the first labeling result in association as first accumulated data, and the facial image data and the second labeling result as second accumulated data; building, training and optimizing a neural network model according to the first and second accumulated data; and inputting the facial image data of the patient under test into the neural network model to automatically judge and label the anesthesia event, obtaining a third labeling result.
10. The deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis as claimed in claim 9, characterized in that: the data acquisition step: acquiring static data of a patient before an operation; the static data comprises height, weight, sex, age and medical history;
the method further comprises the steps of:
patient screening and managing steps: and screening and classifying the patients according to the static data of the patients, removing the patients with brain or nerve abnormality, and dividing the clinical case range.
CN202111264602.4A 2021-10-28 2021-10-28 Deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis Pending CN113974557A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111264602.4A CN113974557A (en) 2021-10-28 2021-10-28 Deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis


Publications (1)

Publication Number Publication Date
CN113974557A true CN113974557A (en) 2022-01-28

Family

ID=79743608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111264602.4A Pending CN113974557A (en) 2021-10-28 2021-10-28 Deep neural network anesthesia depth analysis method based on electroencephalogram singular spectrum analysis

Country Status (1)

Country Link
CN (1) CN113974557A (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201521676A (en) * 2013-12-13 2015-06-16 Nat Inst Chung Shan Science & Technology Method of generating index for determining anesthesia consciousness alertness level using artificial neural network
US20150164413A1 (en) * 2013-12-13 2015-06-18 National Chung Shan Institute Of Science And Technology Method of creating anesthetic consciousness index with artificial neural network
CN106821335A (en) * 2017-04-01 2017-06-13 新乡医学院第附属医院 One kind anesthesia and depth of consciousness monitoring modular
CN108836301A (en) * 2018-05-04 2018-11-20 江苏师范大学 A kind of Single Visual-evoked Potential method based on singular spectrum analysis and rarefaction representation
CN108717535A (en) * 2018-05-25 2018-10-30 山东大学 A kind of depth of anesthesia method of estimation based on composite character and long memory network in short-term
CN110201287A (en) * 2019-06-10 2019-09-06 科大讯飞股份有限公司 A kind of postoperative awakening method of patient with general anesthesia and device
CN110680315A (en) * 2019-10-21 2020-01-14 西安交通大学 Electroencephalogram and electromyogram signal monitoring method based on asymmetric multi-fractal detrending correlation analysis
CN110811557A (en) * 2019-11-15 2020-02-21 西安交通大学 Anesthesia depth monitoring system and method based on micro-state power spectrum analysis
CN110974222A (en) * 2019-12-24 2020-04-10 付晓 Device for monitoring anesthesia depth based on auditory evoked signal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Song Jiajia: "Research on EEG feature extraction and classification for autism based on the singular spectrum analysis algorithm", China Master's Theses Full-text Database (Electronic Journal), Medicine & Health Sciences, 15 August 2020 (2020-08-15), pages 069-123 *
Zhang Lieping et al.: "Application of wavelet transform to the classification of evoked EEG signals in anesthesia monitoring", Computer Engineering and Applications, 11 July 2003 (2003-07-11), pages 198-200 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115251909A (en) * 2022-07-15 2022-11-01 山东大学 Electroencephalogram signal hearing assessment method and device based on space-time convolutional neural network
CN115251909B (en) * 2022-07-15 2024-04-30 山东大学 Method and device for evaluating hearing by electroencephalogram signals based on space-time convolutional neural network
CN115018018A (en) * 2022-08-05 2022-09-06 北京航空航天大学杭州创新研究院 Double spatial filtering method for inhibiting background noise
CN115018018B (en) * 2022-08-05 2022-11-11 北京航空航天大学杭州创新研究院 Double spatial filtering method for inhibiting background noise


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination