CN109002798A - Single-lead visual evoked potential extraction method based on convolutional neural networks - Google Patents

Single-lead visual evoked potential extraction method based on convolutional neural networks

Info

Publication number
CN109002798A
CN109002798A
Authority
CN
China
Prior art keywords
signal
evoked potential
visual evoked
neural networks
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810795005.6A
Other languages
Chinese (zh)
Other versions
CN109002798B (en)
Inventor
邱天爽
丑远婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201810795005.6A priority Critical patent/CN109002798B/en
Publication of CN109002798A publication Critical patent/CN109002798A/en
Application granted granted Critical
Publication of CN109002798B publication Critical patent/CN109002798B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • G06F2218/04Denoising
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A single-lead visual evoked potential extraction method based on convolutional neural networks, belonging to the technical field of medical and biological signal acquisition, processing and analysis. First, artifacts and power-line interference are removed from the EEG signals acquired under visual stimulation. The observation signals are superposed and averaged to obtain the visual evoked potential. The observation signals and the visual evoked potential are then processed to simulate observation signals, and the corresponding evoked potential signals, under stimulation at different moments. The data are further expanded using the model of the evoked potential extraction problem, simulating observation signals and evoked potential signals at different signal-to-noise ratios. The data are divided into a training set and a validation set, a convolutional neural network is constructed and trained, and the network is tested with a test set; it is used to extract the evoked potential and recover the evoked potential waveform. The present invention does not depend on other prior knowledge, can dynamically track the variation of the evoked potential under stimulation at different moments, achieves few-trial extraction of the evoked potential, and facilitates clinical medical analysis and research of evoked potentials.

Description

Single-lead visual evoked potential extraction method based on convolutional neural networks
Technical field
The invention belongs to the technical field of medical and biological signal acquisition, processing and analysis, and relates to a method for extracting single-lead visual evoked potentials from EEG signals; it relates in particular to a method for extracting single-lead visual evoked potential waveforms using convolutional neural networks under the premise of individualized medical diagnosis.
Background art
Evoked potentials play an important role in clinical medicine: by extracting the evoked potential, its important parameters can be analysed and then used for clinical diagnosis, intraoperative monitoring and functional evaluation of the nervous system. Because the evoked potential is usually buried in spontaneous EEG, single-lead evoked potential extraction has always been a focus of research. Extracting single-lead visual evoked potentials with the traditional superposed-averaging method requires stimulating the subject many times, which easily causes neural fatigue, large measurement errors and loss of signal detail. If single-lead visual evoked potentials could be extracted rapidly and their variation tracked dynamically, this would lay a foundation for the development of clinical medicine. Early methods applied adaptive filtering to single-lead evoked potential extraction, but such methods do not fully consider the non-stationarity of the evoked potential and require a reference signal when the signal-to-noise ratio is low. Wavelet-transform-based extraction methods perform threshold denoising at different resolutions via the time-frequency transform of the signal, which usually requires more prior knowledge and manual intervention. Evoked potential extraction based on sparse representation, in turn, must pay close attention to dictionary selection and training. At present, some researchers have studied neural-network-based single-lead evoked potential waveform extraction; although such methods can track the evoked potential signal well, they have certain limitations in modelling the background noise.
Summary of the invention
The main object of the present invention is to solve the problems in the prior art by providing a single-lead visual evoked potential extraction method based on convolutional neural networks.
The technical solution adopted by the present invention is as follows:
A single-lead visual evoked potential extraction method based on convolutional neural networks. Under the condition of individualized medical diagnosis, the EEG signals obtained under visual stimulation, i.e. the observation signals, are first passed through a band-pass filter to remove artifacts and power-line interference. Meanwhile, the observation signals are superposed and averaged according to the stimulation moment to obtain the visual evoked potential. The observation signals and the visual evoked potential are then processed with the Fourier transform and the inverse Fourier transform to simulate observation signals under stimulation at different moments together with the corresponding evoked potential signals. The data are expanded using the model of the evoked potential extraction problem, simulating observation signals and evoked potential signals at different signal-to-noise ratios. After data preprocessing is completed, the data are divided into a training set and a validation set, and a convolutional neural network is constructed for extracting the evoked potential. Taking the observation signals as input and the visual evoked potential as the supervisory signal, the network is trained. After the network parameters are established, the network is tested with the test set, and the final network outputs the visual evoked potential extracted from the strongly noisy signal. The method specifically includes the following steps:
Step 1: obtain the observation signals and the visual evoked potential signal under visual stimulation
1.1) Detect the EEG signals, i.e. the observation signals, at the scalp under visual stimulation, and filter the signals to remove artifacts.
1.2) Obtain the visual evoked potential using the superposed-averaging method.
Step 2: preprocess the observation data and the visual evoked potential
2.1) Apply data augmentation to the observation signals and the visual evoked potential using the Fourier transform and its inverse, so as to simulate the dynamic variation of the evoked potential under each stimulation and obtain more observation signals and visual evoked potential signals.
2.2) Establish the model of the visual evoked potential extraction problem, and apply zero-mean processing to the observation signals obtained in 2.1).
Step 3: construct a deep learning network and use it for extracting the evoked potential signal
3.1) Take the preprocessed observation signals and visual evoked potential signals as the input signals and supervisory signals, and divide them proportionally into a training set and a test set.
3.2) Train a convolutional neural network in the deep learning framework and test it with the test set to recover the evoked potential waveform.
The data augmentation method for the EEG data in step 2.1): because the EEG data actually acquired in the experiment are limited and cannot train the model well, the amount of data needs to be increased during preprocessing. The specific steps are as follows:
2.1.1) Interpolate the observation signals and the visually evoked EEG signals so that each scan contains 801 sampling points, increasing the frequency resolution. Then subtract the two kinds of signals for the corresponding scans to obtain the spontaneous EEG signal, transform the spontaneous EEG signal to the frequency domain with the Fourier transform, randomly set the phase of some of its points to 0, and obtain the time-domain waveform of the spontaneous EEG signal v_i with the inverse Fourier transform, so as to simulate the spontaneous EEG signal at different stimulation moments.
2.1.2) First apply time-domain shifts of different scales to the visual evoked potential after superposed averaging, to simulate changes in the evoked potential latency; then apply the Fourier transform, randomly set the amplitude of some of its points to 0, and obtain the time-domain waveform of the visual evoked potential s_i with the inverse Fourier transform, so as to simulate the visual evoked potential under stimulation at different moments.
2.1.3) Finally, linearly superpose the two kinds of signals according to different signal-to-noise ratios to obtain the observation signals.
The model of the visual evoked potential extraction problem established in step 2.2) is shown in formula (2):
x_i = s_i + α·v_i   (2)
where s_i, v_i and x_i represent the visual evoked potential signal, the spontaneous EEG signal (noise) and the observation signal corresponding to the i-th scan; var(·) denotes the variance, SNR denotes the signal-to-noise ratio, and α is the proportionality coefficient.
A deep learning network is constructed in step 3.2) and used for extracting the evoked potential signal. A convolutional neural network is trained within the deep learning framework; the structure of the convolutional neural network is shown in Fig. 2. The network consists of ten layers, labelled L1-L10. L1 is the input layer, which loads the input data. L2 and L3 are convolutional layers, which find the correlations in the input data and remove redundant information. L4 and L5 are deconvolution layers, which perform waveform restoration according to the features extracted by L2 and L3. To enrich the information after deconvolution, the information from layers L3 and L4 is superposed to obtain L6. L7-L10 are fully connected layers, which combine the signal features and output the visual evoked potential after nonlinear fitting. The network is trained with the training data, and techniques such as batch normalization are used to accelerate training. Test analysis is carried out with the test set; the output of the convolutional neural network is the extracted visual evoked potential waveform. Fig. 3 compares the evoked potential signal extracted by the present invention using the convolutional neural network with the original evoked potential signal.
The beneficial effects of the invention are as follows: on the basis of preprocessing the EEG signals acquired under visual stimulation, a deep convolutional neural network is used to extract the individualized evoked potential from the strongly noisy observation signal. Once an appropriate network has been established, the present invention can dynamically track the variation of the evoked potential under each stimulation, i.e. it achieves few-trial extraction of the single-lead visual evoked potential, which facilitates clinical medical analysis and research of evoked potentials.
Brief description of the drawings
Fig. 1 is the system block diagram of the single-lead visual evoked potential extraction method based on deep learning according to the present invention.
Fig. 2 is the structure chart of the convolutional neural network of the present invention.
Fig. 3 compares the evoked potential signal extracted by the present invention using the convolutional neural network with the original evoked potential signal.
Specific embodiment
To make the purposes, technical solutions and advantages of implementing the present invention clearer, the present invention is described in further detail below with reference to its technical solution and the accompanying drawings:
The overall system framework of the single-lead visual evoked potential extraction method based on convolutional neural networks is shown in Fig. 1. The method can be divided into three parts: signal acquisition, signal preprocessing, and constructing the network to extract the evoked potential. The specific steps are as follows:
Step A. Signal acquisition, which mainly comprises the following steps:
A1. Under checkerboard pattern-reversal visual stimulation, acquire the EEG signals detected at the human scalp, i.e. the observation signals. The sampling frequency of the signals is 1000 Hz. Pass the signals through a band-pass filter with a passband of 0.05-450 Hz and remove signals seriously contaminated by electro-oculogram and other artifacts, to obtain one group of visual evoked potential recordings. This group of visual evoked potential recordings contains N scans; the recording time of each scan is 401 milliseconds, starting 100 milliseconds before the stimulus.
A2. Suppose the observation signal corresponding to the i-th scan is x_i, where x_i is a vector of length 401 points, i = 1, 2, ..., N. The signal s obtained by superposed averaging according to formula (1) is the visual evoked potential; N denotes the number of scans, 243 in total:
s = (1/N)·Σ x_i, i = 1, 2, ..., N   (1)
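By way of illustration only (this sketch is not part of the patent; the zero-phase Butterworth design, the filter order and the helper names are assumptions), steps A1-A2 could be implemented along the following lines:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000.0           # sampling rate in Hz (step A1)
BAND = (0.05, 450.0)  # pass-band in Hz (step A1)

def bandpass(eeg, fs=FS, band=BAND, order=4):
    """Zero-phase Butterworth band-pass filter applied to the raw scalp EEG."""
    sos = butter(order, band, btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg)

def superposed_average(scans):
    """Formula (1): average the N single-scan observation signals x_i
    (each 401 samples, starting 100 ms before the stimulus) to obtain s."""
    x = np.asarray(scans)   # shape (N, 401)
    return x.mean(axis=0)   # visual evoked potential s

# e.g.: vep = superposed_average([bandpass(scan) for scan in raw_scans])
```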
Step B. Preprocess the observation data acquired under visual stimulation
B1. Apply data augmentation to the observation signals and the visual evoked potential using the Fourier transform and its inverse.
Step B1 mainly comprises the following steps:
B11. Interpolate the observation signals and the visually evoked EEG signals so that each scan contains 801 sampling points, increasing the frequency resolution. Then subtract the two kinds of signals for the corresponding scans to obtain the spontaneous EEG signal, transform the spontaneous EEG signal to the frequency domain with the Fourier transform, randomly set the phase of some of its points to 0, and obtain the time-domain waveform of the spontaneous EEG signal v_i with the inverse Fourier transform, so as to simulate the spontaneous EEG signal at different stimulation moments.
B12. First apply time-domain shifts of different scales to the visual evoked potential after superposed averaging, to simulate changes in the evoked potential latency; then apply the Fourier transform, randomly set the amplitude of some of its points to 0, and obtain the time-domain waveform of the visual evoked potential s_i with the inverse Fourier transform, so as to simulate the visual evoked potential under stimulation at different moments.
B13. Finally, linearly superpose the two kinds of signals according to different signal-to-noise ratios to obtain the observation signals.
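A minimal sketch of the augmentation in B11-B12 is given below; it is an illustration only, and the interpolation routine, the fraction of frequency points set to 0 and the latency-shift range are assumptions that the patent text does not specify (the signal-to-noise-ratio mixing of B13 is sketched after step B2):

```python
import numpy as np
from numpy.fft import rfft, irfft

rng = np.random.default_rng(0)

def interpolate_to(x, n_out=801):
    """B11: resample a 401-point scan to 801 sampling points."""
    return np.interp(np.linspace(0, 1, n_out), np.linspace(0, 1, len(x)), x)

def simulate_spontaneous(x_scan, vep):
    """B11: spontaneous EEG = observation minus averaged VEP; zero the phase of
    a random subset of frequency points and return the time-domain waveform v_i."""
    v = interpolate_to(x_scan) - interpolate_to(vep)
    spec = rfft(v)
    idx = rng.choice(len(spec), size=len(spec) // 10, replace=False)
    spec[idx] = np.abs(spec[idx])        # keep magnitude, set phase to 0
    return irfft(spec, n=len(v))

def simulate_vep(vep, max_shift=20):
    """B12: shift the averaged VEP in time (latency change), zero the amplitude
    of a random subset of frequency points, and return the waveform s_i."""
    s = np.roll(interpolate_to(vep), rng.integers(-max_shift, max_shift + 1))
    spec = rfft(s)
    idx = rng.choice(len(spec), size=len(spec) // 10, replace=False)
    spec[idx] = 0.0                      # amplitude set to 0
    return irfft(spec, n=len(s))
```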
B2. Establish the model of the visual evoked potential extraction problem, as shown in formula (2); simulate the observation signals at 0 dB, -3 dB and -5 dB respectively, and apply zero-mean processing to the simulated observation signals x_i.
x_i = s_i + α·v_i   (2)
where s_i, v_i and x_i represent the visual evoked potential signal, the spontaneous EEG signal (noise) and the observation signal corresponding to the i-th scan; var(·) denotes the variance, SNR denotes the signal-to-noise ratio, and α is the proportionality coefficient.
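Formula (2) is reproduced above only as x_i = s_i + α·v_i; the full expression for α, which the text says involves var(·) and the SNR, is not reproduced here. A common choice consistent with those symbols — stated purely as an assumption — scales the noise so that 10·log10(var(s_i)/var(α·v_i)) equals the target SNR, as in the sketch below:

```python
import numpy as np

def mix_at_snr(s_i, v_i, snr_db):
    """Sketch of formula (2): x_i = s_i + alpha * v_i, with alpha chosen
    (assumed form) so that var(s_i) / var(alpha * v_i) matches the target SNR."""
    alpha = np.sqrt(np.var(s_i) / (np.var(v_i) * 10.0 ** (snr_db / 10.0)))
    x_i = s_i + alpha * v_i
    return x_i - x_i.mean()   # zero-mean processing from step B2

# e.g. observations at the SNRs named in B2:
# x_0, x_m3, x_m5 = (mix_at_snr(s_i, v_i, snr) for snr in (0.0, -3.0, -5.0))
```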
Step C. Construct a deep learning network and use it for extracting the evoked potential signal. Step C specifically comprises the following steps:
C1. After the above preprocessing, the simulated data are expanded to 29646 sample pairs; each sample consists of an observation signal and its corresponding visual evoked potential. The observation signal serves as the input to the network and the visual evoked potential as the supervisory signal. 2/3 of the samples are used as training data and fed to the deep learning network for training; the remaining 1/3 of the samples are used as test data.
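The 2/3 : 1/3 split of the 29646 sample pairs described in C1 could look like the following sketch (the shuffling step and the helper name are assumptions):

```python
import numpy as np

def split_pairs(observations, veps, train_frac=2 / 3, seed=0):
    """C1: shuffle the sample pairs, keep 2/3 for training and 1/3 for testing."""
    observations = np.asarray(observations)
    veps = np.asarray(veps)
    idx = np.random.default_rng(seed).permutation(len(observations))
    n_train = int(train_frac * len(idx))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (observations[train_idx], veps[train_idx]), \
           (observations[test_idx], veps[test_idx])
```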
C2. A convolutional neural network is trained within the deep learning framework; the structure of the convolutional neural network is shown in Fig. 2. The network consists of ten layers, labelled L1-L10. L1 is the input layer, which loads the input data. L2 and L3 are convolutional layers, which find the correlations in the input data and remove redundant information. L4 and L5 are deconvolution layers, which perform waveform restoration according to the features extracted by L2 and L3. To enrich the information after deconvolution, the information from layers L3 and L4 is superposed to obtain L6. L7-L10 are fully connected layers, which combine the signal features and output the visual evoked potential after nonlinear fitting. The network is trained with the training data, and techniques such as batch normalization are used to accelerate training. Test analysis is carried out with the test set; the output of the convolutional neural network is the extracted visual evoked potential waveform. Fig. 3 compares the evoked potential signal extracted by the present invention using the convolutional neural network with the original evoked potential signal.
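For readers who want to experiment, the following PyTorch sketch mirrors the L1-L10 layout described in C2. It is not the patented network: the channel counts, kernel sizes, the use of element-wise addition for the L3/L4 superposition, the placement of L5 after L6 and the 801-sample signal length are all assumptions, since Fig. 2 and the exact settings are not reproduced in the text.

```python
import torch
import torch.nn as nn

class VEPNet(nn.Module):
    """Sketch of the ten-layer network L1-L10 (hyper-parameters assumed)."""
    def __init__(self, n_samples=801):
        super().__init__()
        # L2, L3: convolutional layers that find correlations and remove redundancy
        self.conv2 = nn.Sequential(nn.Conv1d(1, 8, 11, padding=5),
                                   nn.BatchNorm1d(8), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv1d(8, 16, 11, padding=5),
                                   nn.BatchNorm1d(16), nn.ReLU())
        # L4, L5: deconvolution (transposed convolution) layers for waveform restoration
        self.deconv4 = nn.Sequential(nn.ConvTranspose1d(16, 16, 11, padding=5),
                                     nn.BatchNorm1d(16), nn.ReLU())
        self.deconv5 = nn.Sequential(nn.ConvTranspose1d(16, 8, 11, padding=5),
                                     nn.BatchNorm1d(8), nn.ReLU())
        # L7-L10: fully connected layers that combine features and output the VEP
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * n_samples, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, n_samples))

    def forward(self, x):           # x: (batch, 1, n_samples) -- layer L1 (input)
        f3 = self.conv3(self.conv2(x))
        f4 = self.deconv4(f3)
        f6 = f3 + f4                # L6: superposition of L3 and L4 information
        return self.fc(self.deconv5(f6))

# e.g.: net = VEPNet(); vep_hat = net(torch.randn(4, 1, 801))
```

Training would then minimise, for example, the mean-squared error between the network output and the superposed-average visual evoked potential used as the supervisory signal.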
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art, within the technical scope disclosed by the present invention and according to the technical solution of the present invention and its inventive concept, shall be covered by the protection scope of the present invention.

Claims (5)

1. A single-lead visual evoked potential extraction method based on convolutional neural networks, characterised by the following steps:
Step 1: obtain the observation signals and the visual evoked potential signal under visual stimulation
1.1) detect the EEG signals, i.e. the observation signals, at the scalp under visual stimulation, and filter the signals to remove artifacts;
1.2) superpose and average the observation signals according to the stimulation moment, using the superposed-averaging method, to obtain the visual evoked potential;
Step 2: preprocess the observation data and the visual evoked potential
2.1) apply data augmentation to the observation signals and the visual evoked potential using the Fourier transform and its inverse, so as to simulate the dynamic variation of the evoked potential under each stimulation and obtain more observation signals and visual evoked potential signals;
2.2) establish the model of the visual evoked potential extraction problem, and apply zero-mean processing to the observation signals obtained in 2.1);
Step 3: construct a deep learning network and use it for extracting the evoked potential signal
3.1) take the preprocessed observation signals and visual evoked potential signals as the input signals and supervisory signals, and divide them proportionally into a training set and a test set;
3.2) train a convolutional neural network in the deep learning framework and test it with the test set to recover the evoked potential waveform.
2. The single-lead visual evoked potential extraction method based on convolutional neural networks according to claim 1, characterised in that the data augmentation method for the EEG data in step 2.1) comprises the following sub-steps:
2.1.1) interpolate the observation signals and the visually evoked EEG signals so that each scan contains 801 sampling points, increasing the frequency resolution; then subtract the two kinds of signals for the corresponding scans to obtain the spontaneous EEG signal, transform the spontaneous EEG signal to the frequency domain with the Fourier transform, randomly set the phase of some of its points to 0, and obtain the time-domain waveform of the spontaneous EEG signal v_i with the inverse Fourier transform, so as to simulate the spontaneous EEG signal at different stimulation moments;
2.1.2) first apply time-domain shifts of different scales to the visual evoked potential after superposed averaging, to simulate changes in the evoked potential latency; then apply the Fourier transform, randomly set the amplitude of some of its points to 0, and obtain the time-domain waveform of the visual evoked potential s_i with the inverse Fourier transform, so as to simulate the visual evoked potential under stimulation at different moments;
2.1.3) finally, linearly superpose the two kinds of signals according to different signal-to-noise ratios to obtain the observation signals.
3. The single-lead visual evoked potential extraction method based on convolutional neural networks according to claim 1 or 2, characterised in that a deep learning network is constructed in step 3.2) and used for extracting the evoked potential signal; a convolutional neural network is trained within the deep learning framework, and the convolutional neural network consists of ten layers, labelled L1-L10: L1 is the input layer, which loads the input data; L2 and L3 are convolutional layers, which find the correlations in the input data and remove redundant information; L4 and L5 are deconvolution layers, which perform waveform restoration according to the features extracted by L2 and L3; the information from layers L3 and L4 is superposed to obtain L6; L7-L10 are fully connected layers, which combine the signal features and output the visual evoked potential after nonlinear fitting; the network is trained with the training data, test analysis is carried out with the test set, and the output of the convolutional neural network is the extracted visual evoked potential waveform.
4. The single-lead visual evoked potential extraction method based on convolutional neural networks according to claim 1 or 2, characterised in that the model of the visual evoked potential extraction problem established in step 2.2) is as shown in formula (2):
x_i = s_i + α·v_i   (2)
where s_i, v_i and x_i represent the visual evoked potential signal, the spontaneous EEG signal (noise) and the observation signal corresponding to the i-th scan; var(·) denotes the variance, SNR denotes the signal-to-noise ratio, and α is the proportionality coefficient.
5. The single-lead visual evoked potential extraction method based on convolutional neural networks according to claim 3, characterised in that the model of the visual evoked potential extraction problem established in step 2.2) is as shown in formula (2):
x_i = s_i + α·v_i   (2)
where s_i, v_i and x_i represent the visual evoked potential signal, the spontaneous EEG signal (noise) and the observation signal corresponding to the i-th scan; var(·) denotes the variance, SNR denotes the signal-to-noise ratio, and α is the proportionality coefficient.
CN201810795005.6A 2018-07-19 2018-07-19 Single-lead visual evoked potential extraction method based on convolutional neural network Active CN109002798B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810795005.6A CN109002798B (en) 2018-07-19 2018-07-19 Single-lead visual evoked potential extraction method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810795005.6A CN109002798B (en) 2018-07-19 2018-07-19 Single-lead visual evoked potential extraction method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN109002798A true CN109002798A (en) 2018-12-14
CN109002798B CN109002798B (en) 2021-07-16

Family

ID=64600411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810795005.6A Active CN109002798B (en) 2018-07-19 2018-07-19 Single-lead visual evoked potential extraction method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109002798B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111990991A (en) * 2020-08-22 2020-11-27 陇东学院 Electroencephalogram signal analysis method based on complex network and application
CN113598794A (en) * 2021-08-12 2021-11-05 中南民族大学 Training method and system for detection model of ice drug addict

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6898582B2 (en) * 1998-12-30 2005-05-24 Algodyne, Ltd. Method and apparatus for extracting low SNR transient signals from noise
CN1556450A (en) * 2003-12-31 2004-12-22 中国人民解放军第三军医大学野战外科 Method of extracting brain machine interface control signa based on instantaneous vision sense induced electric potential
CN101049236A (en) * 2007-05-09 2007-10-10 西安电子科技大学 Instant detection system and detection method for state of attention based on interaction between brain and computer
CN101828921A (en) * 2010-06-13 2010-09-15 天津大学 Identity identification method based on visual evoked potential (VEP)
CN103019382A (en) * 2012-12-17 2013-04-03 北京大学 Brain-computer interaction method for reflecting subjective motive signals of brain through induced potentials
CN104700119A (en) * 2015-03-24 2015-06-10 北京机械设备研究所 Brain electrical signal independent component extraction method based on convolution blind source separation
US20180188807A1 (en) * 2016-12-31 2018-07-05 Daqri, Llc User input validation and verification for augmented and mixed reality experiences
CN107529687A (en) * 2017-09-20 2018-01-02 大连理工大学 A kind of time delay estimation method based on circulation joint entropy
CN107844755A (en) * 2017-10-23 2018-03-27 重庆邮电大学 A kind of combination DAE and CNN EEG feature extraction and sorting technique

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
MEHDI HAJINOROOZI ET AL.: "EEG-based prediction of driver's cognitive performance by deep convolutional neural network", Signal Processing: Image Communication *
P. JASKOWSKI ET AL.: "Amplitudes and Latencies of Single-Trial ERP's", IEEE Transactions on Biomedical Engineering *
LIU DACHENG: "Extraction and Application of Visual Evoked Potentials", China Master's Theses Full-text Database (Medicine & Health Sciences) *
LIANG JINGKUN: "Brain-Computer Interface Control Based on Imagined Driving Behaviour", 31 December 2015, National Defense Industry Press *
DONG XIANGUANG: "EEG Signal Detection and Brain-Computer Interface Implementation Based on Convolutional Neural Networks", China Master's Theses Full-text Database (Medicine & Health Sciences) *


Also Published As

Publication number Publication date
CN109002798B (en) 2021-07-16

Similar Documents

Publication Publication Date Title
Guo et al. A review of wavelet analysis and its applications: Challenges and opportunities
CN100571617C (en) The signals collecting and the feature extracting method of the imagination that stands action brain electricity
CN102697493B (en) Method for rapidly and automatically identifying and removing ocular artifacts in electroencephalogram signal
CN109299751B (en) EMD data enhancement-based SSVEP electroencephalogram classification method of convolutional neural model
Sobahi Denoising of EMG signals based on wavelet transform
CN109784242A (en) EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks
CN105956624B (en) Mental imagery brain electricity classification method based on empty time-frequency optimization feature rarefaction representation
CN111783942B (en) Brain cognitive process simulation method based on convolutional recurrent neural network
CN111096745B (en) Steady-state evoked response brain source positioning method based on sparse Bayesian learning
CN106328150A (en) Bowel sound detection method, device and system under noisy environment
CN108960182A (en) A kind of P300 event related potential classifying identification method based on deep learning
He et al. Single channel blind source separation on the instantaneous mixed signal of multiple dynamic sources
CN103584872A (en) Psychological stress assessment method based on multi-physiological-parameter integration
CN109589114A (en) Myoelectricity noise-eliminating method based on CEEMD and interval threshold
Wang et al. An MVMD-CCA recognition algorithm in SSVEP-based BCI and its application in robot control
CN104688220A (en) Method for removing ocular artifacts in EEG signals
CN108403108A (en) Array Decomposition Surface EMG method based on waveform optimization
CN104473660A (en) Abnormal heart sound recognition method based on sub-band energy envelope autocorrelation characteristics
CN107550491A (en) A kind of multi-class Mental imagery classifying identification method
CN109002798A (en) It is a kind of singly to lead visual evoked potential extracting method based on convolutional neural networks
CN105748067B (en) A kind of evoked brain potential extracting method based on stochastic gradient adaptive-filtering
CN101433460B (en) Spatial filtering method of lower limb imaginary action potential
CN105931281A (en) Method for quantitatively describing cerebral function network based on network characteristic entropy
CN117281479A (en) Human lower limb chronic pain distinguishing method, storage medium and device based on surface electromyographic signal multi-dimensional feature fusion
He et al. HMT: An EEG Signal Classification Method Based on CNN Architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant