CN102708288A - Brain-computer interface based doctor-patient interaction method - Google Patents


Info

Publication number
CN102708288A
Authority
CN
China
Prior art keywords
electroencephalogram
brain
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012101318690A
Other languages
Chinese (zh)
Inventor
刘纪红
孙宇舸
许静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Original Assignee
Northeastern University China
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN2012101318690A priority Critical patent/CN102708288A/en
Publication of CN102708288A publication Critical patent/CN102708288A/en
Pending legal-status Critical Current

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a brain-computer interface based doctor-patient interaction method, which adopts a brain-computer interface based doctor-patient interaction system comprising an electroencephalogram collection module and an electroencephalogram analysis and doctor-patient interaction module. The method comprises the following steps: setting visual stimulation signals with different frequencies and displaying them on a display, the signals being mapped to four commands through the different two-bit codes 00, 01, 10 and 11; collecting the electroencephalogram signal of a subject in real time through electrodes and sending it to a computer after amplification; analyzing the electroencephalogram signal; and broadcasting a voice prompt corresponding to the identification result through a speaker while displaying the result on the display, so that medical staff can carry out the corresponding rescue according to the display and the voice. Brain-computer interface technology can identify the brain state and drive external equipment accurately and in time, so that communication and control can be realized. Visual stimulation at different frequencies effectively excites the brain to generate steady-state visual evoked potentials; through the signal calculation and processing functions of the computer, the electroencephalogram signal under stimulation at different frequencies can be accurately identified, and real doctor-patient interaction is realized through result display and voice prompts.

Description

Doctor-patient interaction method based on brain-computer interface
Technical Field
The invention relates to the technical field of biomedical signal processing, in particular to a doctor-patient interaction method based on a brain-computer interface.
Background
Progressive freezing disease is a motor neuron disease in which the patient's motor ability is gradually lost until respiratory failure. All of this happens while the mind remains awake and thinking remains clear: patients consciously witness the whole process of their gradual decline, and existing science and technology can do little to help them. A doctor-patient interaction system designed on Brain-Computer Interface (BCI) technology can help the "gradually frozen people" effectively solve the communication problems in their lives, improve their quality of life and reduce the possibility of accidents.
On the other hand, the ICU construction and management guide points out that an intensive care medicine department should be established in third-level and qualified second-level hospitals in China as an independent clinical department led directly by the hospital's functional departments, with the ICU as the clinical base of the critical care discipline. Traditionally, medical personnel have placed all their effort on rescuing the patient's life and monitoring the condition of the disease, neglecting the patient's own needs. Meanwhile, the staffing ratio required in intensive care units (2.5-3 : 1) is difficult to achieve in most hospitals. With insufficient staffing it is difficult to help the patient in time when help is needed, especially at night; even when staffing is sufficient in the daytime, medical staff cannot know the thoughts of the patient in time, and sometimes the opposite result is obtained. At present, an intensive care center mainly consists of emergency equipment and instruments for monitoring vital signs. A doctor-patient interaction system applied in a large hospital with an intensive care unit (ICU) can well solve the communication problem between doctors, nurses, family caregivers and non-cerebral-palsy severe patients whose limbs are paralyzed or whose movement is impaired and who find speaking inconvenient (for example, when using a ventilator).
Brain electrical activity is electrical activity originating in the nervous tissue of the brain. Clinically, the potential changes of the cortex are observed by bipolar or unipolar recording on the scalp, and the recorded brain waves are called electroencephalograms. Electroencephalogram signals include spontaneous and evoked signals. Spontaneous electroencephalogram signals are the continuous rhythmic potential changes of the cerebral cortex; evoked electroencephalogram signals, also known as Evoked Potentials (EPs), are specific electrical activities generated when the nervous system (peripheral or central, sensory or motor) receives internal or external stimuli.
Evoked potentials can be divided into two broad categories, one category is exogenous stimulus-related evoked potentials, including visual evoked potentials, auditory evoked potentials, somatosensory evoked potentials, motor evoked potentials, and the like; another class is the evoked potentials associated with endogenous events, which are involved in the cognitive functioning of the brain. The evoked potential can be applied to the research of nervous system lesions and has important clinical diagnosis value, and in addition, the research of the evoked potential is helpful to know the nature of the human self advanced nerve activity.
Compared with the spontaneous electroencephalogram, the evoked potential has lower amplitude and is usually submerged in the spontaneous electroencephalogram and various other artifacts. Evoked potentials have certain spatial, temporal and phase characteristics, and are detected only at specific sites. Furthermore, there is a relatively strict time-locked relationship between the latency of the evoked potential and the stimulus: the potential appears almost immediately, within a fixed time window, after a stimulus is administered.
Spontaneous electroencephalogram and evoked potentials reflect different states of the brain from different aspects, and both can be applied to brain-computer interfaces. Not every electroencephalogram signal, however, is usable for a brain-computer interface. In certain situations, through training, a person can produce a stable electroencephalogram signal or a reliable response to a particular stimulus. Only such easily interpreted electroencephalogram signals can be applied to a brain-computer interface.
When a person receives a visual stimulus, corresponding electrical activity occurs in the visual cortex; this signal is called a Visual Evoked Potential (VEP). Visual evoked potentials reflect, on the scalp, the specific electrical activity of cortical neurons evoked by the "focused" activity of the human eye. VEP-based BCI systems rely on the user's ability to control the direction of eye gaze. When the brain is stimulated by an external stimulus with a constant frequency, it generates a response at the same frequency as the external stimulus or its harmonics; the strength of this response can be represented by a voltage signal measured on the scalp or cortex, namely the Steady-State Evoked Potential (SSEP), which can be observed in experiments with repeated visual stimulation. This apparent electroencephalogram signal modulated by the stimulation frequency is called the Steady-State Visual Evoked Potential (SSVEP). For many severely paralyzed patients visual function is often intact, so the visual evoked potential is an electroencephalogram signal well suited to a brain-computer interface.
Disclosure of Invention
The invention aims to provide a doctor-patient interaction method based on a brain-computer interface, which induces a brain to generate corresponding steady-state visual evoked potentials through visual stimulation with different frequencies, collects the steady-state visual evoked potentials through electrodes, amplifies the collected weak electroencephalogram signals by using an electroencephalogram amplifier, transmits the data to a computer through a data transmission line between the electroencephalogram amplifier and the computer for analysis, processing and identification, and finally displays and outputs an identification result through the computer.
The hardware system adopted for realizing the brain-computer interface-based doctor-patient interaction method comprises an electroencephalogram acquisition module and an electroencephalogram analysis and doctor-patient interaction module.
The electroencephalogram acquisition module comprises electrodes and an electroencephalogram amplifier. The electrodes are arranged on the surface of the subject's scalp and fixed by an electrode cap; the EEG amplifier is an existing device comprising a pre-amplification circuit and a post-amplification circuit, with the electrodes connected to the input end of the pre-amplification circuit, the output end of the pre-amplification circuit connected to the input end of the post-amplification circuit, and the output end of the post-amplification circuit connected to a computer. The module collects the electroencephalogram signals induced by visual stimulation to obtain the data required for analysis.
The electroencephalogram analysis and doctor-patient interaction module comprises a computer, a display and a loudspeaker, and realizes the electroencephalogram signal processing function and the doctor-patient interaction function of the system. The electroencephalogram signal processing comprises preprocessing, feature extraction, and recognition and classification of the electroencephalogram signals; the electroencephalogram signals induced by the stimulation module are recognized to control the corresponding commands, realizing the doctor-patient interaction function. The display shows the visual stimulation signals and the system recognition results. Voice files corresponding to the set patient requests are recorded in advance; the voice file corresponding to the recognition result is selected and broadcast through the loudspeaker, completing effective communication between patients and medical personnel.
The invention discloses a doctor-patient interaction method based on a brain-computer interface, which comprises the following specific steps:
step 1: setting visual stimulation signals with different frequencies and displaying the visual stimulation signals on a display, wherein the visual stimulation signals comprise: alarming, determining, deleting and two basic command signals 0 and 1, wherein 4 different doctor-patient interaction commands are formed by two bit codes of 0 and 1; a plurality of different frequency flash stimuli may be presented simultaneously in the visual field of the subject, and each flash may evoke a Steady State Visual Evoked Potential (SSVEP) of the corresponding frequency. If the subject selects the key corresponding to a frequency flash, the key can be focused on, so that the frequency-induced SSVEP is most prominent, which can be called frequency coding. By analyzing the spectrum of the SSVEP, the most significant frequency of induction can be found, and the identification of the key is completed, which is called frequency decoding.
Step 2: The electrodes collect the EEG signal of the subject in real time; the signal is amplified and sent to a computer.
Step 3: Carry out electroencephalogram signal analysis and processing, comprising signal preprocessing, feature extraction, and recognition and classification.
Step 3.1: Signal preprocessing.
Perform cumulative averaging preprocessing on the acquired real-time electroencephalogram signals to extract the evoked electroencephalogram signals.
The extracted electroencephalogram signal is assumed to satisfy two conditions:
(1) the evoked potential waveform obtained by each stimulus is substantially constant.
(2) Evoked potential and noise are independent of each other, and the mean of the noise is zero.
The model defining the visual evoked potential signals is:
X_i(t) = S_i(t) + n_i(t),  i = 1, 2, ..., N    (1)
where X_i(t) is the signal observed after the i-th stimulus, S_i(t) is the visual evoked potential signal to be extracted at the i-th stimulus, and n_i(t) is the noise signal recorded at the i-th stimulus. The signal after N cumulative averages is:
Avg(t) = (1/N) ( Σ_{i=1}^{N} S_i(t) + Σ_{i=1}^{N} n_i(t) )    (2)
Suppose the variance of the noise signal is σ_n². After N cumulative averages, the mean and variance of the noise signal are given by equations (3) and (4) respectively:
E( (1/N) Σ_{i=1}^{N} n_i(t) ) = 0    (3)

Var( (1/N) Σ_{i=1}^{N} n_i(t) ) = σ_n² / N    (4)
After N cumulative averages, the power signal-to-noise ratio (i.e. the ratio of signal power to noise power) of the visual evoked potential signal is:
SNR_avg = ⟨S_i(t)²⟩ / (σ_n²/N) = N · ⟨S_i(t)²⟩ / σ_n² = N · SNR_ini    (5)
where SNR_ini is the original power signal-to-noise ratio, σ_n²/N is the noise variance after N cumulative averages, and S_i(t) is the visual evoked potential signal to be extracted.
After N superpositions and averaging, the visual evoked potential, which has a good time-locked relationship with the stimulus, adds up in direct proportion to N, while the noise, being randomly positive or negative in each record, partially cancels. The power signal-to-noise ratio of the averaged response is therefore N times that of a single response, and the amplitude signal-to-noise ratio improves by a factor of √N. In practice this ideal is not reached, and the actual signal-to-noise improvement is lower than the theoretical value. Nevertheless, the superposed evoked potential can be extracted easily, and the original visual evoked potential, i.e. the N-fold cumulative average given by equation (2), is obtained by dividing the superposed potential by the number of superpositions.
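The signal-to-noise gain of equations (2)-(5) can be checked numerically. The following sketch (all waveform parameters are illustrative assumptions) averages N noisy repetitions of a fixed evoked waveform and confirms that the noise variance shrinks by roughly a factor of N:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 250, 0.5                       # assumed sampling rate and epoch length
t = np.arange(0, dur, 1.0 / fs)
s = np.sin(2 * np.pi * 10 * t)           # fixed evoked waveform S_i(t), condition (1)

N = 100                                  # number of stimuli / averages
sigma_n = 2.0                            # noise standard deviation
# X_i(t) = S_i(t) + n_i(t), equation (1); noise independent, zero mean (condition (2))
trials = s + rng.normal(0.0, sigma_n, (N, t.size))
avg = trials.mean(axis=0)                # Avg(t), equation (2)

var_before = (trials[0] - s).var()       # single-trial noise variance, ~sigma_n^2
var_after = (avg - s).var()              # averaged noise variance, ~sigma_n^2 / N
print(var_before / var_after)            # close to N, as equations (4)-(5) predict
```

The printed ratio fluctuates around N because both variances are finite-sample estimates, which is exactly the "lower than the theoretical value" effect the text mentions.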
Step 3.2: extracting the characteristics of the preprocessed electroencephalogram signals, which comprises the following specific steps:
step 3.2.1: wavelet decomposition is carried out on the electroencephalogram signals by utilizing wavelet transformation, and wavelet decomposition coefficients are extracted to serve as initial characteristic parameters;
Step 3.2.2: Perform AR power spectrum estimation on the wavelet decomposition coefficients extracted after the wavelet transform and extract the electroencephalogram power characteristic parameters. The electroencephalogram power characteristic parameter matrix is expressed as X = (x_1, x_2, ..., x_n)′, where x_t = (x_{1t}, x_{2t}, ..., x_{pt})′, t = 1, 2, ..., n, and n is the number of electroencephalogram samples;
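Steps 3.2.1 and 3.2.2 can be sketched as follows. The patent does not specify the wavelet family or the AR model order, so the Haar wavelet, the 3 decomposition layers (cf. FIG. 6) and the order-8 Yule-Walker estimate below are assumptions chosen only to keep the example self-contained:

```python
import numpy as np

def haar_dwt(x, levels=3):
    """3-level Haar wavelet decomposition; returns [cA3, cD3, cD2, cD1]."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]                 # truncate to even length
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(detail)
        a = approx
    return [a] + coeffs[::-1]

def ar_power_spectrum(x, order=8, nfreq=128):
    """Yule-Walker AR power spectrum estimate of x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)  # autocorrelation
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # AR coefficients (Yule-Walker)
    sigma2 = r[0] - a @ r[1:order + 1]           # driving-noise variance
    w = np.linspace(0, np.pi, nfreq)             # normalized angular frequencies
    denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a) ** 2
    return sigma2 / denom                        # AR power at each frequency

# Feature vector: AR spectrum of the level-3 approximation coefficients
fs = 250
t = np.arange(0, 2, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).normal(size=t.size)
cA3 = haar_dwt(eeg)[0]
features = ar_power_spectrum(cA3)
print(features.shape)
```

In the method itself these AR power values would form one column x_t of the feature parameter matrix X, one column per EEG sample.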
Step 3.2.3: Select the main features by using Principal Component Analysis (PCA);
principal Component Analysis (PCA) is a common method for estimating parameters of a linear model, and an original related independent variable is transformed into another set of independent variables, namely a so-called principal Component, by using an orthogonal Principle, then a part of important components are selected as independent variables (a part of unimportant independent variables are discarded at this time), and finally, model parameters after the principal components are selected are estimated by using a least square method. PCA is a linear transformation that transforms input data from sample space to a new coordinate system whose coordinates are not related, and the maximum variance of the sample space is concentrated on only a few coordinate axes of the new coordinate system, which can also be used as a tool for data compression and noise removal, and is also a way to extract signal features.
The selection of the main features by principal component analysis comprises the following steps:
Step 3.2.3.1: Compute the covariance matrix Σ of the vectors obtained from AR parameter estimation of the training samples (i.e. the electroencephalogram power characteristic parameters):

Σ = (s_ij)_{p×p}

where

s_ij = (1/(n−1)) Σ_{t=1}^{n} (x_{it} − x̄_i)(x_{jt} − x̄_j)

and s_ij is the covariance value in row i, column j of the covariance matrix, i, j = 1, 2, ..., p.
Step 3.2.3.2: solving an eigenvalue and an eigenvector of the covariance matrix sigma;
determining an eigenvalue lambda of a covariance matrix sigma1、λ2、...λpAnd corresponding orthogonalized unit feature vector, λ1≥λ2≥...λp>0,λ1、λ2、...λpCorresponding orthogonalized unit feature vector a1a2...ap
a 1 = a 11 a 21 . . . a p 1 , a 2 = a 12 a 22 . . . a p 2 , ,..., a p = a 1 p a 2 p . . . a pp
The ith main component of the EEG signal power characteristic parameter matrix X is Fi=ai1, 2, p, a combination coefficient of each principal component
ai′=(a1i,a2i,...,api)
Step 3.2.3.3: arranging the characteristic values from large to small, and drawing a characteristic value curve;
step 3.2.3.4: selecting a characteristic vector corresponding to the characteristic value reaching the preset accumulative contribution rate according to the drawn characteristic value curve to form a new characteristic space;
From the p principal components so determined, m principal components are selected for the final evaluation and analysis. In general, the variance contribution rate

α_i = λ_i / Σ_{k=1}^{p} λ_k

describes the amount of information reflected by the principal component F_i: the larger α_i, the stronger the ability of the corresponding principal component to reflect the combined information.

m is determined so that the cumulative contribution rate

G(m) = Σ_{i=1}^{m} λ_i / Σ_{k=1}^{p} λ_k

is large enough (generally over 85%). How many principal components are retained depends on the percentage of the total variance accounted for by the retained part (i.e. the cumulative contribution rate), which marks how much of the summary information the first m principal components carry. In practice a target percentage is specified in advance to decide how many principal components to retain; keeping one more principal component increases the cumulative variance only slightly. The magnitude of an eigenvalue represents the amount of energy in its principal component: the larger the eigenvalue, the more information it contains.
According to the principle of PCA, the number of principal components selected affects the complexity of the algorithm, so choosing this number to reduce the complexity while retaining sufficient information is a problem that must be solved.
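Steps 3.2.3.1 to 3.2.3.4 amount to an eigendecomposition of the sample covariance matrix with components retained until the cumulative contribution rate G(m) reaches the threshold. A minimal sketch follows; the 85% threshold is taken from the text, while the data are synthetic:

```python
import numpy as np

def pca_select(X, threshold=0.85):
    """Select principal components whose cumulative contribution rate G(m)
    first reaches `threshold`.

    X -- (n, p) matrix, one EEG feature sample per row.
    Returns (scores, eigvecs, m).
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)              # covariance matrix Sigma
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]           # lambda_1 >= ... >= lambda_p
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    contrib = eigvals / eigvals.sum()           # alpha_i, contribution rates
    G = np.cumsum(contrib)                      # cumulative contribution G(m)
    m = int(np.searchsorted(G, threshold) + 1)  # smallest m with G(m) >= threshold
    return Xc @ eigvecs[:, :m], eigvecs[:, :m], m

# Synthetic data: 2 strong directions plus small noise in a 10-D feature space
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) * 5
X += rng.normal(scale=0.1, size=(200, 10))
scores, components, m = pca_select(X)
print(m)   # small, since almost all variance lies in 2 directions
```

The eigenvalue curve of step 3.2.3.3 is simply a plot of the sorted `eigvals`; here the threshold test replaces the visual inspection.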
Step 3.3: identifying and classifying the selected feature vectors of the main features by using a k-nearest neighbor classification algorithm;
the kNN algorithm is generally divided into four steps: 1) selecting a suitable distance metric; 2) finding out k neighbors of the samples to be classified according to the selected distance measurement standard; 3) finding out the category with the majority of categories in the k neighbors; 4) and judging the sample to be classified as the class.
Prediction with the nearest neighbor method rests on the assumption that neighboring objects have similar predicted values. The basic idea of the nearest neighbor algorithm is to find, in a multidimensional space R^t, the k points nearest to the unknown sample and to judge the class of the unknown sample from the classes of those k points. These k points are the k nearest neighbors of the unknown sample. The algorithm assumes that all instances correspond to points in the t-dimensional space. The nearest neighbors of an instance are defined by the standard Euclidean distance. Let instance x_i have the feature vector:

⟨a_1(x_i), a_2(x_i), ..., a_r(x_i), ..., a_t(x_i)⟩

where a_r(x_i) denotes the r-th attribute value of instance x_i. The distance between two instances x_i and x_j is defined as d(x_i, x_j), where:

d(x_i, x_j) = sqrt( Σ_{r=1}^{t} (a_r(x_i) − a_r(x_j))² )
In nearest neighbor learning, the discrete target classification function is f: R^t → V, where V is the finite set {v_1, v_2, ..., v_q}, i.e. the samples fall into q different classes. The value of k is selected according to the number and dispersion of the samples in each class, and different applications may call for different k values.
If the number of sample points around the unknown sample x_i is small, the area covered by its k nearest neighbors is large, and vice versa. The nearest neighbor algorithm is susceptible to noisy data, especially isolated points in the sample space. The root cause is that in the basic k-nearest neighbor algorithm the k nearest neighbors of the sample to be predicted are treated with equal weight.
Identifying and classifying the selected feature vectors of the main features by using a k-nearest neighbor classification algorithm, wherein the method comprises the following steps:
(1) Selecting the electroencephalogram signal training data set
The principle for selecting the training data set is to keep the numbers of samples of the various classes roughly equal and to choose representative historical data. A common method is to group the historical data by class and then select representative samples within each group to form the training set; this reduces the size of the training set while maintaining high accuracy.
(2) Determining a distance function
The distance function determines which samples are the k nearest neighbors of the sample to be classified, and its choice depends on the actual data and decision problem. The present invention uses the Euclidean distance to determine the similarity of samples. The Euclidean distance is calculated as

d(x_i, x_j) = sqrt( Σ_{r=1}^{t} (a_r(x_i) − a_r(x_j))² )    (6)

where x_i and x_j denote two electroencephalogram feature samples and t is the number of attributes in each feature sample.
(3) Determining the value of k
The number k of neighbors has a certain influence on the classification result, and generally an initial value is determined and then adjusted until a proper value is found.
(4) Synthesizing classes of k neighbors
Majority voting is the simplest synthesis method: the class occurring most frequently among the k neighbors is selected as the final result; if more than one class ties for the highest frequency, the class of the nearest neighbor is chosen. The weighting method is more elaborate: weights are assigned to the k nearest neighbors, with larger distances receiving smaller weights. When counting the classes, the sum of the weights for each class is computed, and the class with the largest sum is assigned to the new sample.
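The four kNN steps above (training set selection, Euclidean distance per equation (6), choice of k, and majority voting with a nearest-neighbor tiebreak) can be sketched as follows; the synthetic two-class data and k = 5 are illustrative assumptions standing in for real EEG feature vectors:

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=5):
    """k-nearest-neighbor classification with Euclidean distance,
    majority voting, and nearest-neighbor tiebreaking (step (4))."""
    d = np.sqrt(((train_X - x) ** 2).sum(axis=1))   # equation (6)
    nn = np.argsort(d)[:k]                          # indices of k nearest samples
    votes = Counter(train_y[i] for i in nn)
    best = max(votes.values())
    tied = {c for c, v in votes.items() if v == best}
    if len(tied) == 1:
        return tied.pop()
    # Tie: take the class of the single nearest neighbor among tied classes
    for i in nn:                                    # nn is sorted by distance
        if train_y[i] in tied:
            return train_y[i]

# Two synthetic classes of "EEG feature vectors"
rng = np.random.default_rng(4)
class0 = rng.normal(loc=0.0, scale=0.5, size=(20, 4))
class1 = rng.normal(loc=3.0, scale=0.5, size=(20, 4))
train_X = np.vstack([class0, class1])
train_y = np.array([0] * 20 + [1] * 20)

print(knn_classify(train_X, train_y, np.full(4, 3.0)))  # near class 1 -> 1
print(knn_classify(train_X, train_y, np.zeros(4)))      # near class 0 -> 0
```

Replacing the equal votes with distance-based weights, as the text describes, would only change the `Counter` accumulation to sum weights instead of counts.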
Step 4: Broadcast the voice prompt corresponding to the recognition result of step 3 through the loudspeaker and display the result on the display; medical personnel carry out the corresponding rescue according to the display and the voice.
Advantageous effects:
Brain-computer interface technology can identify the brain state and thereby drive external equipment accurately and in time, realizing communication and control. Visual stimuli with different frequencies effectively excite the brain to generate steady-state visual evoked potentials; accurate identification of the electroencephalogram signals under stimulation at different frequencies is realized through the powerful signal calculation and processing functions of the computer, the real-time requirements of the algorithm are met, and real doctor-patient interaction is realized through result display and voice prompts. The system is easy to use and operate, achieves an identification accuracy of 90%, is suitable for the field of medical monitoring, and builds a communication bridge between patients and medical staff. Combining the k-nearest neighbor (kNN) algorithm with wavelet decomposition, AR power spectrum estimation and PCA principal component analysis removes most redundant information and leaves few feature attributes, which exactly matches the conditions under which the k-nearest neighbor classification algorithm works well. A satisfactory recognition rate of 90% was therefore obtained in experiments, effectively achieving classification and recognition of the steady-state visually evoked electroencephalogram under the different stimulation frequency modules; the recognition result represents the subject's wish and is prompted and displayed by voice, realizing doctor-patient interaction.
Drawings
FIG. 1 is a schematic diagram of a hardware system used in a doctor-patient interaction method based on a brain-computer interface according to an embodiment of the present invention;
FIG. 2 is a schematic position diagram of the electrode placement method of the International 10/20 System;
FIG. 3 is a schematic diagram of the position of electroencephalogram signal acquisition according to an embodiment of the present invention;
FIG. 4 is an original electroencephalogram signal with a collection time of 3s and a spectrogram thereof according to the embodiment of the present invention, wherein (a) is the original electroencephalogram signal with the collection time of 3s, and (b) is the spectrogram of the electroencephalogram signal with the collection time of 3 s;
FIG. 5 is a flowchart of a doctor-patient interaction method based on a brain-computer interface according to an embodiment of the present invention;
FIG. 6 is a structural diagram of a 3-layer wavelet decomposition in accordance with an embodiment of the present invention;
FIG. 7 is a principal component analysis flow chart according to an embodiment of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The hardware system adopted by the brain-computer interface-based doctor-patient interaction method comprises an electroencephalogram acquisition module and an electroencephalogram analysis and doctor-patient interaction module, and is shown in figure 1.
The electroencephalogram acquisition module comprises electrodes and an electroencephalogram amplifier. The invention adopts the international 10/20 system electrode placement method commonly used worldwide, in which each electrode is placed at 10% or 20% of the distance from the adjacent electrode, as shown in FIG. 2. The letters F, T, C, P and O are the first letters of the English words Frontal, Temporal, Central, Parietal and Occipital respectively. Odd numbers 1, 3, 5, ... refer to the left hemisphere, even numbers 2, 4, 6, ... refer to the right hemisphere, and Z refers to the midline.
The steady-state visual evoked potential is most pronounced at positions O1 and O2 over the occiput, so only O1 and O2 are recorded during EEG acquisition, with the left and right earlobes used as reference electrodes. The electrode positions for EEG acquisition are shown in fig. 3.
The electroencephalogram amplifier is a UE-16B with USB data transmission. Its O1, O2, A1 and A2 channels are connected by leads and electrodes to the positions O1, O2, A1 and A2 of fig. 2 and fixed by an electrode cap, and the acquired EEG signals are transmitted to the computer over a USB cable. The EEG acquisition interface is built on the software development kit (SDK) supplied with the UE-16B amplifier. Buttons such as start monitoring, stop monitoring and start acquisition in the acquisition interface are improved, and six EEG acquisition time settings are provided: 2 s, 3 s, 5 s, 6 s, 8 s and 10 s. Acquisition is started once the EEG signal is monitored to be good, so that EEG acquisition is completed conveniently and quickly.
The electroencephalogram analysis and doctor-patient interaction module comprises a computer, a display and a loudspeaker, and the computer is configured as follows:
Intel(R) Core(TM)2 Duo processor T6400
(2.0 GHz, 800 MHz FSB, 2 MB L2 cache)
256MB VRAM
1GB DDR2
160GB HDD
The display is a 14.1" WXGA Acer CrystalBrite(TM) LCD.
The loudspeaker is the notebook's built-in Acer PureZone speaker.
The invention discloses a doctor-patient interaction method based on a brain-computer interface, which comprises the following specific steps:
Step 1: setting visual stimulation signals with different frequencies and displaying them on the display, the visual stimulation signals comprising alarm, determine and delete keys and the two basic command signals 0 and 1, wherein 4 different doctor-patient interaction commands are formed from two-bit codes of 0 and 1;
A software human-machine interface displays the visual stimulation signals on the computer display. The visual stimulation function keys comprise two command-code keys, 0 and 1 (representing two different stimulation frequencies), plus keys for the alarm, determine and delete functions. The four commands are controlled through the different combined codes of 0 and 1 (00, 01, 10 and 11): a selection of 0 or 1 is made twice to obtain one command, and each command represents a patient need (modifiable according to specific requirements). In this embodiment, 00 = hunger, 01 = thirst, 10 = urination and 11 = abdominal pain. Each visual stimulation key is realized as a rectangular block flickering alternately black and white at its own frequency on the display. When the subject gazes at one of the stimulation blocks, i.e. a block flickering at a given frequency, the brain is evoked to produce the corresponding steady-state visual evoked potential; the visual evoked potential detected over the occiput is therefore mainly due to the target stimulus being fixated. Using this locked relationship between the visual stimulus and the evoked potential, the target the subject is watching can be determined from the detected visual evoked potential.
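As a minimal illustrative sketch (not part of the patent; the function and dictionary names are assumptions), the two-selection decoding into the four commands of this embodiment could look like:

```python
# Hypothetical sketch of the two-bit command decoding described above.
# The subject selects key 0 or key 1 twice; the two-bit string maps to
# one of the four patient requests used in this embodiment.
COMMANDS = {
    "00": "hunger",
    "01": "thirst",
    "10": "urination",
    "11": "abdominal pain",
}

def decode_command(first_key: str, second_key: str) -> str:
    """Combine two successive 0/1 selections into one patient command."""
    code = first_key + second_key
    if code not in COMMANDS:
        raise ValueError(f"invalid command code: {code}")
    return COMMANDS[code]
```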
The visual stimulation signals of the invention are stimulation modules at five different frequencies (8 Hz, 9 Hz, 10 Hz, 11 Hz and 12 Hz) realized by alternating black and white states; the meanings of the stimulation modules are shown in Table 1. Taking the implementation of the 8 Hz stimulation frequency as an example, the cycle time is 1/8 s, with the black state occupying 1/16 s and the white state occupying 1/16 s, which realizes the 8 Hz visual stimulation. Similarly, the black and white states of the other stimulation modules each occupy 1/18 s, 1/20 s, 1/22 s and 1/24 s, respectively.
TABLE 1 meanings of stimulation modules
(Table 1, giving the meaning of each stimulation module, appears as an image in the original publication and is not reproduced here.)
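The flicker timing described above, in which each black or white phase lasts half of one stimulation cycle, can be checked with a short sketch (illustrative only; the function name is an assumption):

```python
from fractions import Fraction

def half_period(freq_hz: int) -> Fraction:
    """Duration of one black (or white) phase for a given flicker
    frequency: a full cycle lasts 1/freq s, each phase half of that."""
    return Fraction(1, 2 * freq_hz)

# The five stimulation modules give 1/16, 1/18, 1/20, 1/22 and 1/24 s.
phases = {f: half_period(f) for f in (8, 9, 10, 11, 12)}
```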
Step 2: the electrodes collect the EEG signal of the subject in real time, and the EEG signal is amplified and sent to the computer;
The EEG sampling frequency is 100 Hz and the sampling time is 3 s. Taking an 8 Hz evoked EEG as an example for comparative analysis, the original signal and its spectrum are shown in fig. 4(a) and (b). A relatively strong component at 7.451 Hz can be seen in the spectrogram, showing that the recording contains an EEG component close to 8 Hz that is buried in a much larger noise background; a preprocessing step is therefore required.
And step 3: carrying out electroencephalogram signal analysis processing, comprising: signal preprocessing, feature extraction and recognition classification.
Step 3.1: signal preprocessing;
The acquired real-time EEG signals are preprocessed by cumulative averaging, and the evoked EEG signal is extracted.
The extracted electroencephalogram signal is assumed to satisfy two conditions:
(1) the evoked potential waveform obtained by each stimulus is substantially constant.
(2) Evoked potential and noise are independent of each other, and the mean of the noise is zero.
The model defining the visual evoked potential signals is:
X_i(t) = S_i(t) + n_i(t),   i = 1, 2, ..., N    (1)
where X_i(t) is the signal observed after the i-th stimulus, S_i(t) is the visual evoked potential signal to be extracted on the i-th trial, and n_i(t) is the noise recorded on the i-th trial. The signal after N cumulative averages is:
Avg(t) = (1/N) ( Σ_{i=1}^{N} S_i(t) + Σ_{i=1}^{N} n_i(t) )    (2)
Suppose the variance of the noise is σ_n². After N cumulative averages, the mean and variance of the averaged noise are given by equations (3) and (4), respectively:
E( (1/N) Σ_{i=1}^{N} n_i(t) ) = 0    (3)
Var( (1/N) Σ_{i=1}^{N} n_i(t) ) = σ_n² / N    (4)
where E(·) denotes the expectation.
After N cumulative averages, the power signal-to-noise ratio (i.e. the ratio of signal power to noise power) is:
SNR_avg = ⟨S_i(t)²⟩ / (σ_n² / N) = N ⟨S_i(t)²⟩ / σ_n² = N · SNR_ini    (5)
where SNR_ini is the original power signal-to-noise ratio, σ_n²/N is the noise variance after N cumulative averages, and S_i(t) is the visual evoked potential signal to be extracted on the i-th trial.
After N superpositions the evoked potential, which has a good time-locked relationship with the stimulus, adds in direct proportion to N, while the noise, being positive or negative from record to record, partially cancels. The power signal-to-noise ratio of the averaged response is therefore N times that of a single response, and the amplitude signal-to-noise ratio improves by a factor of √N. In practice this ideal is not reached, and the actual improvement is somewhat below the theoretical value. The superimposed evoked potential can nevertheless be extracted easily; dividing by the number of superpositions finally yields the original evoked potential, i.e. the signal after N cumulative averages given by equation (2).
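A small numerical sketch (synthetic data, not real EEG; all names are illustrative) shows the variance shrinkage of equations (2)-(4), assuming NumPy is available:

```python
import numpy as np

def cumulative_average(trials: np.ndarray) -> np.ndarray:
    """Average N single-trial recordings X_i(t) = S_i(t) + n_i(t); the
    evoked component is preserved while zero-mean noise shrinks as 1/N."""
    return trials.mean(axis=0)

rng = np.random.default_rng(0)
fs, dur, n_trials = 100, 3.0, 50               # 100 Hz sampling, 3 s epochs
t = np.arange(0, dur, 1 / fs)
s = np.sin(2 * np.pi * 8 * t)                  # ideal 8 Hz evoked waveform
noise_sd = 3.0
trials = s + rng.normal(0.0, noise_sd, (n_trials, t.size))
avg = cumulative_average(trials)
# residual noise power is roughly noise_sd**2 / n_trials (eq. 4)
residual = np.var(avg - s)
```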
Step 3.2: extracting the characteristics of the preprocessed electroencephalogram signals, which comprises the following specific steps:
step 3.2.1: performing a 3-level wavelet decomposition of the EEG signal with the db5 wavelet, as shown in FIG. 6, and extracting the wavelet decomposition coefficients of the 8-12 Hz band as initial characteristic parameters;
Because the band of the original EEG signal is 0-50 Hz, one level of wavelet decomposition splits the 0-50 Hz EEG signal into a 0-25 Hz low-frequency part a1 and a 25-50 Hz high-frequency part d1. The second level decomposes only the 0-25 Hz low-frequency part a1, into a 0-12.5 Hz low-frequency part a2 and a 12.5-25 Hz high-frequency part d2. The third level decomposes the 0-12.5 Hz low-frequency part a2 into a 0-6.25 Hz low-frequency part a3 and a 6.25-12.5 Hz high-frequency part d3. Since the visual stimulation responses in the EEG are concentrated in the 8-12 Hz band, the wavelet coefficients of the d3 and a2 parts contain the 8-12 Hz band information and carry the most information; the wavelet coefficients of d3 and a2 are therefore reconstructed and used as the initial characteristic parameters.
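The dyadic sub-band bookkeeping above can be verified with a small helper; this is an illustrative sketch computing the band edges only, not the wavelet coefficients themselves:

```python
def wavelet_bands(fs: float, levels: int) -> dict:
    """Frequency ranges (Hz) of the detail and approximation sub-bands
    of a dyadic wavelet decomposition of a signal occupying 0..fs/2."""
    hi = fs / 2
    bands = {}
    for j in range(1, levels + 1):
        bands[f"d{j}"] = (hi / 2, hi)   # detail keeps the upper half
        hi /= 2                         # approximation keeps the lower half
    bands[f"a{levels}"] = (0.0, hi)
    return bands

# For a 0-50 Hz EEG signal and 3 levels this reproduces the text:
# d1: 25-50 Hz, d2: 12.5-25 Hz, d3: 6.25-12.5 Hz, a3: 0-6.25 Hz.
bands = wavelet_bands(100.0, 3)
```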
Step 3.2.2: performing AR power spectrum estimation on wavelet coefficients extracted after wavelet transformation, extracting electroencephalogram signal power characteristic parameters by selecting a method of solving power spectrum values by using a 10-order AR model based on a Curie-Walker (AR) model method, and expressing an electroencephalogram signal power characteristic parameter matrix by using X, wherein X is (X-X)1,x2,...,xn)′,xt=(x1t,x2t,...,xpt) ', t is 1, 2, n, n is the number of electroencephalogram samples;
step 3.2.3: selecting main characteristics by using a Principal Component Analysis (PCA);
The synthetic index vectors F_1, F_2, ..., F_p are the p principal components; the amount of information each extracts from the total provided by the original indices decreases in turn. The amount of information extracted by each principal component is measured by its variance, and the variance contribution of a principal component equals the corresponding eigenvalue λ_i of the correlation coefficient matrix of the original indices. The combination coefficients of each principal component, a_i′ = (a_{1i}, a_{2i}, ..., a_{pi}), i = 1, 2, ..., p, form the unit eigenvector corresponding to the eigenvalue λ_i.
The variance contribution rate is
α_i = λ_i / Σ_{k=1}^{p} λ_k;
the larger α_i is, the stronger the ability of the corresponding principal component to reflect the combined information.
The method comprises the following steps of selecting main characteristics by using a principal component analysis method:
step 3.2.3.1: solving the covariance matrix Σ of the vectors obtained after AR parameter estimation of the training samples (i.e. the EEG power characteristic parameters);
Σ = (s_ij)_{p×p}
where s_ij = (1/(n-1)) Σ_{m=1}^{n} (x_{im} - x̄_i)(x_{jm} - x̄_j).
step 3.2.3.2: solving the eigenvalues and eigenvectors of the covariance matrix Σ;
determine the eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_p > 0 of the covariance matrix Σ and the corresponding orthogonalized unit eigenvectors a_1, a_2, ..., a_p, where a_i = (a_{1i}, a_{2i}, ..., a_{pi})′, i = 1, 2, ..., p.
The i-th principal component of X is F_i = a_i′X, i = 1, 2, ..., p.
Step 3.2.3.3: arranging the characteristic values from large to small, and drawing a characteristic value curve;
step 3.2.3.4: selecting a characteristic vector corresponding to the characteristic value reaching the preset accumulative contribution rate according to the drawn characteristic value curve to form a new characteristic space;
From the p principal components determined, m principal components are selected reasonably to realize the final evaluation analysis. In general the variance contribution rate
α_i = λ_i / Σ_{k=1}^{p} λ_k
describes the amount of information reflected by principal component F_i, and the cumulative contribution rate
G(m) = Σ_{i=1}^{m} λ_i / Σ_{k=1}^{p} λ_k
determines m, on the basis that the cumulative contribution rate is sufficiently large (generally above 85%). How many principal components are retained depends on the percentage of the total variance captured by the retained components (i.e. the cumulative contribution rate), which marks how much of the original information the first few principal components summarize. In practice, specifying a percentage roughly decides how many principal components to retain; retaining one more principal component only increases the cumulative variance a little. The magnitude of an eigenvalue represents the energy of the corresponding principal component: the larger the eigenvalue, the more information it contains. Because the EEG signal is weak, the cumulative contribution rate selected here is 99%.
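The eigenvalue ranking and 99% cumulative-contribution selection of steps 3.2.3.1-3.2.3.4 might be sketched as follows (illustrative NumPy code; names are assumptions):

```python
import numpy as np

def pca_select(X, contribution=0.99):
    """Keep the smallest number m of principal components whose
    cumulative variance contribution G(m) reaches `contribution`."""
    Xc = X - X.mean(axis=0)                 # center the features
    S = np.cov(Xc, rowvar=False)            # covariance matrix Sigma
    vals, vecs = np.linalg.eigh(S)          # eigen-decomposition
    order = np.argsort(vals)[::-1]          # lambda_1 >= ... >= lambda_p
    vals, vecs = vals[order], vecs[:, order]
    g = np.cumsum(vals) / vals.sum()        # cumulative contribution G(m)
    m = int(np.searchsorted(g, contribution)) + 1
    return vecs[:, :m], g, m
```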
Step 3.3: identifying and classifying the selected feature vectors by using a k-nearest neighbor classification algorithm;
the kNN algorithm is generally divided into four steps: 1) selecting a suitable distance metric; 2) finding out k neighbors of the samples to be classified according to the selected distance measurement standard; 3) finding out the category with the majority of categories in the k neighbors; 4) and judging the sample to be classified as the class.
To find the k nearest neighbors of a sample to be classified, the database must be searched and distances computed; the similarity of samples is determined by the Euclidean distance, calculated as
d(x_i, x_j) = sqrt( Σ_{r=1}^{t} ( a_r(x_i) - a_r(x_j) )² )    (6)
where x_i and x_j are two EEG feature samples and t is the number of attributes in each feature sample.
The k-nearest neighbor rule classifies a test data point x into the class that occurs most often among its k nearest neighbors. The algorithm grows a region around the test sample point x, expanding it until it contains k training sample points, and classifies x into the class occurring most frequently among those k points.
Identifying and classifying the selected feature vectors of the main features by using a k-nearest neighbor classification algorithm, wherein the method comprises the following steps:
(1) searching electroencephalogram signal training data set
The training data set should be chosen so that the number of samples of each class is roughly equal and the selected historical data are representative. A common method is to group the historical data by class and then select representative samples from each group to form the training set, which reduces the size of the training set while maintaining high accuracy.
(2) Determining a distance function
The distance function determines which samples are the k nearest neighbors of the sample to be classified; its choice depends on the actual data and the decision problem. The present invention uses the Euclidean distance of equation (6) to determine the similarity of samples.
(3) Determining the value of k
The number k of neighbors has a certain influence on the classification result, and generally an initial value is determined and then adjusted until a proper value is found. The value of k used in the present invention is 2.
(4) Synthesizing classes of k neighbors
The majority method is the simplest combination method: the class that occurs most frequently among the neighbors is selected as the final result, and if more than one class ties for the highest frequency, the class of the nearest neighbor is selected. The weighting method is more elaborate: each of the k nearest neighbors is assigned a weight that decreases as its distance increases; when counting the classes, the weights of each class are summed, and the class with the largest sum is the class of the new sample.
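The k-nearest neighbor rule with Euclidean distance, majority vote and nearest-neighbor tie-breaking, as described, can be sketched as follows (illustrative, assuming NumPy; not the patent's code):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=2):
    """Classify x by majority vote among its k nearest training samples,
    using the Euclidean distance of equation (6); a tie is broken in
    favor of the single nearest neighbor, as described in the text."""
    d = np.sqrt(((train_X - x) ** 2).sum(axis=1))   # distances to all samples
    nn = np.argsort(d)[:k]                          # indices of the k nearest
    votes = Counter(train_y[i] for i in nn)
    ranked = votes.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return train_y[nn[0]]                       # tie: class of the nearest
    return ranked[0][0]
```

With k = 2, as used in the invention, a tie between the two neighbors falls back to the nearest one.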
In the recognition results of the k-nearest neighbor classification algorithm, 2 of 20 test samples were misclassified, giving a recognition rate of 90%. For the identification of steady-state visually evoked EEG this classification performance meets the requirement, demonstrating the effectiveness of the k-nearest neighbor classification algorithm.
Step 4: the voice prompt corresponding to the recognition result of step 3 is announced through the loudspeaker and the result is displayed on the display, so that medical staff can carry out the corresponding rescue according to the display and the voice.

Claims (3)

1. A doctor-patient interaction method based on brain-computer interface, the doctor-patient interaction system based on brain-computer interface adopted in the method includes brain electrical acquisition module and brain electrical analysis and doctor-patient interaction module;
the electroencephalogram acquisition module comprises an electrode and an electroencephalogram amplifier, wherein the electrode is arranged on the surface of the scalp of a subject and is fixed through an electrode cap; the electroencephalogram amplifier adopts the existing devices, is connected with a computer and acquires electroencephalogram signals induced by visual stimulation to obtain data required by analysis;
the electroencephalogram analysis and doctor-patient interaction module comprises a computer, a display and a loudspeaker, and realizes the electroencephalogram signal processing function and doctor-patient interaction function of the system;
the method is characterized by comprising the following steps:
step 1: setting visual stimulation signals with different frequencies and displaying them on a display, the visual stimulation signals comprising alarm, determine and delete keys and two basic command signals 0 and 1, wherein 4 different doctor-patient interaction commands are formed from two-bit codes of 0 and 1;
step 2: the electrodes collect the EEG signal of the subject in real time, and the EEG signal is amplified and sent to a computer;
and step 3: carrying out electroencephalogram signal analysis processing, comprising: signal preprocessing, feature extraction and identification classification;
step 4: reporting the voice prompt corresponding to the recognition result of step 3 through a loudspeaker, displaying the result on a display, and medical personnel performing the corresponding rescue according to the display and the voice.
2. The brain-computer interface-based doctor-patient interaction method according to claim 1, characterized in that: and 3, analyzing and processing the electroencephalogram signals, and comprising the following steps:
step 3.1: signal preprocessing;
carrying out accumulated average preprocessing on the acquired real-time electroencephalogram signals, and extracting induced electroencephalogram signals;
step 3.2: extracting the characteristics of the preprocessed electroencephalogram signals;
step 3.3: identifying and classifying the selected feature vectors of the main features by using a k-nearest neighbor classification algorithm.
3. The brain-computer interface-based doctor-patient interaction method according to claim 2, characterized in that: and 3.2, extracting the characteristics of the preprocessed electroencephalogram signals, which comprises the following steps:
step 3.2.1: wavelet decomposition is carried out on the electroencephalogram signals by utilizing wavelet transformation, and wavelet decomposition coefficients are extracted to serve as initial characteristic parameters;
step 3.2.2: performing AR power spectrum estimation on the wavelet decomposition coefficient extracted after wavelet transformation, and extracting electroencephalogram signal power characteristic parameters;
step 3.2.3: and (4) selecting main characteristics by using a principal component analysis method.
CN2012101318690A 2012-04-28 2012-04-28 Brain-computer interface based doctor-patient interaction method Pending CN102708288A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012101318690A CN102708288A (en) 2012-04-28 2012-04-28 Brain-computer interface based doctor-patient interaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012101318690A CN102708288A (en) 2012-04-28 2012-04-28 Brain-computer interface based doctor-patient interaction method

Publications (1)

Publication Number Publication Date
CN102708288A true CN102708288A (en) 2012-10-03

Family

ID=46901044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012101318690A Pending CN102708288A (en) 2012-04-28 2012-04-28 Brain-computer interface based doctor-patient interaction method

Country Status (1)

Country Link
CN (1) CN102708288A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637357A (en) * 2012-03-27 2012-08-15 山东大学 Regional traffic state assessment method
CN103164026A (en) * 2013-03-22 2013-06-19 山东大学 Method and device of brain-computer interface based on characteristics of box dimension and fractal intercept
CN104503580A (en) * 2014-12-25 2015-04-08 天津大学 Identification method of steady-state visual evoked potential brain-computer interface target
CN104679249A (en) * 2015-03-06 2015-06-03 南京邮电大学 Method for implementing Chinese BCI (brain and computer interface) based on a DIVA (directional into velocities of articulators) model
CN105942975A (en) * 2016-04-20 2016-09-21 西安电子科技大学 Stable state visual sense induced EEG signal processing method
CN107016345A (en) * 2017-03-08 2017-08-04 浙江大学 A kind of demand model construction method applied to Product Conceptual Design
CN107296586A (en) * 2017-06-20 2017-10-27 黄涌 Collimation error detection device/method and writing system/method based on the equipment
CN107635476A (en) * 2015-05-27 2018-01-26 宫井郎 Cerebration reponse system
CN109044350A (en) * 2018-09-15 2018-12-21 哈尔滨理工大学 A kind of eeg signal acquisition device and detection method
CN109445580A (en) * 2018-10-17 2019-03-08 福州大学 Trust Game Experiments system based on brain-computer interface
CN110096149A (en) * 2019-04-24 2019-08-06 西安交通大学 Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding
CN111743538A (en) * 2020-07-06 2020-10-09 江苏集萃脑机融合智能技术研究所有限公司 Brain-computer interface alarm method and system
CN112559308A (en) * 2020-12-11 2021-03-26 广东电力通信科技有限公司 Statistical model-based root alarm analysis method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2888524Y (en) * 2005-12-08 2007-04-11 清华大学 Brain-machine interface device based on high-frequency stable vision evoked potentials
CN101576772A (en) * 2009-05-14 2009-11-11 天津工程师范学院 Brain-computer interface system based on virtual instrument steady-state visual evoked potentials and control method thereof
CN102200833A (en) * 2011-05-13 2011-09-28 天津大学 Speller brain-computer interface (SCI) system and control method thereof


Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102637357A (en) * 2012-03-27 2012-08-15 山东大学 Regional traffic state assessment method
CN103164026A (en) * 2013-03-22 2013-06-19 山东大学 Method and device of brain-computer interface based on characteristics of box dimension and fractal intercept
CN103164026B (en) * 2013-03-22 2015-09-09 山东大学 Based on brain-machine interface method and the device of the fractal intercept feature of box peacekeeping
CN104503580A (en) * 2014-12-25 2015-04-08 天津大学 Identification method of steady-state visual evoked potential brain-computer interface target
CN104503580B (en) * 2014-12-25 2018-04-13 天津大学 A kind of recognition methods to Steady State Visual Evoked Potential brain-computer interface target
CN104679249A (en) * 2015-03-06 2015-06-03 南京邮电大学 Method for implementing Chinese BCI (brain and computer interface) based on a DIVA (directional into velocities of articulators) model
CN104679249B (en) * 2015-03-06 2017-07-07 南京邮电大学 A kind of Chinese brain-computer interface implementation method based on DIVA models
CN107635476B (en) * 2015-05-27 2021-04-06 宫井一郎 Brain activity feedback system
CN107635476A (en) * 2015-05-27 2018-01-26 宫井郎 Cerebration reponse system
CN105942975A (en) * 2016-04-20 2016-09-21 西安电子科技大学 Stable state visual sense induced EEG signal processing method
CN107016345A (en) * 2017-03-08 2017-08-04 浙江大学 A kind of demand model construction method applied to Product Conceptual Design
CN107296586A (en) * 2017-06-20 2017-10-27 黄涌 Collimation error detection device/method and writing system/method based on the equipment
CN109044350A (en) * 2018-09-15 2018-12-21 哈尔滨理工大学 A kind of eeg signal acquisition device and detection method
CN109445580A (en) * 2018-10-17 2019-03-08 福州大学 Trust Game Experiments system based on brain-computer interface
CN110096149A (en) * 2019-04-24 2019-08-06 西安交通大学 Steady-state evoked potential brain-computer interface method based on multi-frequency sequential coding
CN110096149B (en) * 2019-04-24 2020-03-31 西安交通大学 Steady-state auditory evoked potential brain-computer interface method based on multi-frequency time sequence coding
CN111743538A (en) * 2020-07-06 2020-10-09 江苏集萃脑机融合智能技术研究所有限公司 Brain-computer interface alarm method and system
CN112559308A (en) * 2020-12-11 2021-03-26 广东电力通信科技有限公司 Statistical model-based root alarm analysis method
CN112559308B (en) * 2020-12-11 2023-02-28 广东电力通信科技有限公司 Statistical model-based root alarm analysis method

Similar Documents

Publication Publication Date Title
CN102708288A (en) Brain-computer interface based doctor-patient interaction method
Anusha et al. Electrodermal activity based pre-surgery stress detection using a wrist wearable
US11172835B2 (en) Method and system for monitoring sleep
Sansone et al. Electrocardiogram pattern recognition and analysis based on artificial neural networks and support vector machines: a review
Mei et al. Automatic atrial fibrillation detection based on heart rate variability and spectral features
CA2750643C (en) Method and device for probabilistic objective assessment of brain function
US20110087125A1 (en) System and method for pain monitoring at the point-of-care
CN105147248A (en) Physiological information-based depressive disorder evaluation system and evaluation method thereof
CN204931634U (en) Based on the depression evaluating system of physiologic information
WO2013056094A1 (en) Seizure detection methods, apparatus, and systems using a wavelet transform maximum modulus or autoregression algorithm
Supratak et al. Survey on feature extraction and applications of biosignals
Obayya et al. Automatic classification of sleep stages using EEG records based on Fuzzy c-means (FCM) algorithm
Zhao et al. An IoT-based wearable system using accelerometers and machine learning for fetal movement monitoring
Damaševičius et al. Visualization of physiologic signals based on Hjorth parameters and Gramian Angular Fields
US20220160296A1 (en) Pain assessment method and apparatus for patients unable to self-report pain
Ahmadi et al. Brain-computer interface signal processing algorithms: A computational cost vs. accuracy analysis for wearable computers
CN112426162A (en) Fatigue detection method based on electroencephalogram signal rhythm entropy
CN114648040A (en) Method for extracting and fusing multiple physiological signals of vital signs
Fan et al. An electrocardiogram acquisition and analysis system for detection of human stress
Gupta et al. Mobile ECG-based drowsiness detection
Kulkarni et al. Driver state analysis for ADAS using EEG signals
Paul et al. Mental stress detection using multimodal characterization of PPG signal for personal healthcare applications
Moeynoi et al. Canonical correlation analysis for dimensionality reduction of sleep apnea features based on ECG single lead
An et al. Recognizing the consciousness states of DOC patients by classifying EEG signal
Ren et al. A review of automatic detection of epilepsy based on EEG signals

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121003