CN113887374A - Brain-controlled drinking water system based on dynamic convergence differential neural network - Google Patents


Info

Publication number
CN113887374A
Authority
CN
China
Prior art keywords
neural network
neuron
signal
brain
differential neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111137386.7A
Other languages
Chinese (zh)
Other versions
CN113887374B (en)
Inventor
张智军
孙健声
黄灿辉
黄展峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202111137386.7A priority Critical patent/CN113887374B/en
Publication of CN113887374A publication Critical patent/CN113887374A/en
Application granted granted Critical
Publication of CN113887374B publication Critical patent/CN113887374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A20/00Water conservation; Efficient water supply; Efficient water use
    • Y02A20/152Water filtration


Abstract

The invention provides a brain-controlled drinking system based on a dynamic convergence differential neural network, comprising the following steps: 1) performing data-channel screening, band-pass filtering and independent component analysis on electroencephalogram data acquired by Emotiv electroencephalogram equipment; 2) training a dynamic convergence differential neural network on the data set obtained in step 1) and using it for recognition and classification; 3) combining the P300 recognition and classification result obtained in step 2) with the user interface to obtain the final target object number; 4) sending the target object number obtained in step 3) to the vision mechanical arm system, which grabs the cup and brings it to the user's mouth, completing the drinking task.

Description

Brain-controlled drinking water system based on dynamic convergence differential neural network
Technical Field
The invention belongs to the field of electroencephalogram signal identification control, and particularly relates to a brain-controlled drinking system based on a dynamic convergence differential neural network.
Background
A Brain-Computer Interface (BCI) is an information-transmission channel that communicates directly with a computer without relying on the peripheral nerves and other organs of the human body. It provides a new way for the severely disabled to communicate with the outside world. Because movement is difficult, severely disabled patients often sit in a wheelchair or lie in bed for long periods, and it is hard for others to know their movement intentions accurately; their means of communicating with the outside world are very limited.
At present, some existing studies have attempted to combine brain-computer interface technology with mechanical arm technology. The Chinese patent application CN102198660A, entitled "Mechanical arm control system and action command control scheme based on a brain-computer interface", uses a motor-imagery-based brain-computer interface to realize eight commands, such as grabbing and releasing, for the mechanical arm. The Chinese patent application CN111880656A, entitled "Intelligent brain-control system and rehabilitation equipment based on P300 signals", develops a P300-based intelligent brain-control system that uses the P300 signal to control a mechanical arm. These inventions complete only simple, even preset, mechanical arm action-control tasks through electroencephalogram signals alone; the tasks completed by the mechanical arms are not concrete, so the advantages of the mechanical arm are not brought out. Moreover, most of the classification networks use simple machine-learning methods, and the computation takes a long time.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a brain-controlled drinking system based on a dynamic convergence differential neural network, so that a user can complete the task of grabbing a designated cup for drinking using electroencephalogram signals. The user selects among different cups by gazing at characters on the screen, and the mechanical arm finally completes the task of grabbing the designated cup; the system can greatly facilitate the daily life of severely disabled patients.
The purpose of the invention is realized by at least one of the following technical solutions.
A brain-controlled drinking water system based on a dynamic convergence differential neural network comprises the following steps:
s1, preprocessing electroencephalogram data acquired by electroencephalogram equipment to obtain a sample set;
s2, training and identifying and classifying the sample set obtained in the step S1 through a dynamic convergence differential neural network to obtain a P300 identification and classification result;
s3, combining the P300 recognition and classification result obtained in the step S2 with a user interface to obtain a final target object number;
and S4, sending the target object number obtained in the step S3 to a vision mechanical arm system, grabbing the cup and sending the cup to the mouth of a user, and finishing a water drinking task.
Further, step S1 includes the steps of:
s1.1, channel screening is carried out, and channel signals with weak signals are removed;
s1.2, performing band-pass filtering on the acquired channel signals;
and S1.3, performing independent component analysis to remove the electro-oculogram signal and the electro-cardiographic signal, thereby obtaining a sample set.
Further, the performing independent component analysis to remove the electro-ocular signal and the electro-cardiac signal as described in S1.3 includes:
respectively selecting appointed channels of the electro-oculogram signals and the electrocardio signals;
calculating the Pearson correlation between the appointed channel of the electro-oculogram signal and other channels and calculating the Pearson correlation between the appointed channel of the electro-cardiogram signal and other channels;
and setting a threshold value of Pearson correlation, and if the correlation between a certain component and the specified channel is greater than the threshold value of the correlation coefficient or less than the threshold value of the negative correlation coefficient, determining that the component which is strongly correlated with the electrocardiosignal or the electro-oculogram signal is found, and excluding the component.
Further, the threshold value of pearson correlation is 0.3.
Further, in step S2, the dynamic convergent-differential neural network is trained using the sample set, and the network parameters are obtained for recognition and classification.
Further, in step S3, the flash that evoked the P300 signal is identified in the user interface according to the P300 recognition and classification result, so as to determine the number corresponding to that flash; the vision mechanical arm system then grabs the corresponding cup according to the object number.
Furthermore, a plurality of areas representing different cups are arranged on the user interface and marked with numbers, and each area flashes randomly. The user gazes at the flashing area, and the time at which each flash occurs in the raw electroencephalogram signal is recorded together with the number of the corresponding area. After the flashing ends, a plurality of different groups are obtained and input separately into the dynamic convergence differential neural network, giving the predicted probability that a P300 signal is present for each group. The signals in each group correspond to one area, so a group contains several prediction probability values for whether a P300 signal is present for that area; the prediction probability values of each group are summed, yielding one sum per area, and the group with the maximum sum is selected. The area corresponding to that group is the area the user attended to.
Further, the vision mechanical arm system comprises a camera and a mechanical arm; the camera is used for detecting the depth information and RGB color information of the cups, and the mechanical arm is used for grabbing the cup according to the coordinates of the target cup.
Further, the dynamic convergence differential neural network comprises an input layer, a hidden layer and an output layer, and the value of the neuron in the next layer is obtained through forward propagation.
Further, the calculation process of the dynamic convergence differential neural network is as follows:
input layer neurons are obtained directly from input layer feature dimensions, i.e.:
h1i=xi,i=1,2,...,m#(1)
In forward propagation, the value of each next-layer neuron is calculated from the weight matrix, the previous-layer neurons and the bias. The hidden-layer neuron h2j is calculated from h1i and the weight matrix w1 between the input layer and the hidden layer:
L2j = Σi wji·h1i + b1j, j=1,2,...,n#(2)
h2j=f(L2j),j=1,2,...,n#(3)
The output-layer neuron yk is calculated from h2j and the weight matrix w2:
L3k = Σj wjk·h2j + b2k, k=1,2,...,p#(4)
yk=f(L3k),k=1,2,...,p#(5)
In the formulas, xi (i=1,2,...,m) is the feature dimension of the input layer, h1i (i=1,2,...,m) is a neuron of the input layer, h2j (j=1,2,...,n) is a neuron of the hidden layer, and yk (k=1,2,...,p) is a neuron of the output layer; m, n and p are the numbers of input-layer, hidden-layer and output-layer neurons respectively, and Y is the label value of the sample. L2j and L3k are the intermediate variables obtained by multiplying the weight matrix by the input layer, respectively the hidden layer, and adding the bias; b1j and b2k are the biases of the weight matrices; wji denotes the weight between the i-th input-layer neuron and the j-th hidden-layer neuron, and wjk the weight between the j-th hidden-layer neuron and the k-th output-layer neuron; f(x) is the activation function;
and solving to obtain an optimal value of an objective function E (k) in the dynamic convergence differential neural network, namely outputting a final classification result, wherein the expression of the objective function E (k) is as follows:
E(k)=f(w2h2j+b2k)-Y=yk-Y#(7)
by continuously modifying two weight matrices w1,w2Finding the optimum value of E (k), and in order to minimize convergence of E (k), applying a neuro-kinetic formula:
Figure BDA0003282569740000043
where ΔE(k) is the derivative of E(k), λ is the learning rate, a constant greater than 0, and g(E(k)) is a monotonically increasing odd function serving as the activation function;
substituting formula (7) into formula (8) gives:
Δyk = Δf·(Δw2·h2j + w2·Δh2j) = -λ·g(yk - Y)#(10)
where Δyk, Δw2 and Δh2j are respectively the derivative of the output-layer neuron, the derivative of the hidden-to-output weight matrix w2, and the derivative of the hidden-layer neuron h2j; to solve for Δyk, Δw2 and Δh2j must be determined separately, and the solving method is as follows:
assume the weight matrix w1 is constant within an iteration, i.e. the derivatives of w1 and h2j are both 0; then:
Δf·Δw2·h2j = -λ·g(yk - Y)#(11)
multiplying both sides on the right by the pseudo-inverse matrix h2j+ of h2j gives the derivative of w2:
Δw2 = -λ·(Δf)^(-1)·g(yk - Y)·h2j+#(12)
assume the weight matrix w2 is constant within an iteration, i.e. its derivative is 0; and since:
Δh2j = Δf·Δw1·h1i#(13)
the derivative of w1 is obtained in the same way:
Δw1 = -λ·(Δf)^(-1)·w2+·(Δf)^(-1)·g(yk - Y)·h1i+#(14)
where w2+ is the pseudo-inverse matrix of w2, h1i+ is the pseudo-inverse matrix of the input layer, and Δf is the derivative of the softsign activation function;
at this point the DCDNN solving process is complete; the iteration is repeated, updating w1 and w2 in each iteration, namely w2 = w2 + Δw2 and w1 = w1 + Δw1, until E(k) converges.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention uses a brain-computer interface recognition technology based on the dynamic convergence differential neural network; recognition is rapid and accurate, which helps address the problem that severely disabled patients cannot take care of their own daily lives. Meanwhile, the Emotiv equipment used is simple and convenient to operate and low in price.
(2) The invention combines the advantages of the P300 signal and the dynamic convergence differential neural network, can concretely complete the task of helping the disabled to drink water, and makes better use of the brain-computer interface to improve the quality of life and independent-living ability of paralyzed patients.
Drawings
FIG. 1 is a user interface of the present invention;
FIG. 2 is a flow chart of data preprocessing of the present invention;
FIG. 3 is a flow chart of a dynamically converging differential neural network of the present invention;
fig. 4 is an overall flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a brain-controlled drinking system based on a dynamic convergence differential neural network, which comprises the following steps:
step S1: preprocessing electroencephalogram data collected by the emotv electroencephalogram equipment to obtain a sample set.
In one embodiment of the present invention, FIG. 1 shows the user interface, in which four white squares correspond to four numbers, representing the selections cup 1, cup 2, cup 3 and cancel. The user keeps visual attention on the corresponding selection while the interface flashes randomly among the four choices; the electroencephalogram data generated by a group of flashes is recorded and subjected to P300 signal analysis, recognition and classification to obtain the number of the corresponding selection.
In the present invention, referring to fig. 2, the present step specifically includes the following sub-steps:
s1.1, channel screening is carried out, and channel signals with weak signals are removed according to an emptv signal display result;
in one embodiment of the invention, channel selection is firstly carried out on electroencephalogram signals, and because of individual differences of signals of different users and under different conditions, channel signals with weak signals are removed according to the strength of the signals detected by emotv software, and the channel signals which are beneficial to identification and classification are screened out.
S1.2, performing band-pass filtering on the acquired channel signals.
In one embodiment of the invention, the channel signals screened out in step S1.1 are band-pass filtered to extract the frequency band of the electroencephalogram signal.
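As an illustration, the band-pass step can be sketched with a simple FFT mask; the 1-20 Hz pass band below is an assumption chosen for P300-range activity, since the text does not state the cut-off frequencies, and `bandpass_fft` is an illustrative name.

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Brick-wall band-pass: zero every FFT bin outside [low, high] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    keep = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * keep, n=len(signal))
```

A dedicated filter-design routine (e.g. a Butterworth band-pass) would normally replace the brick-wall mask in practice.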
And S1.3, performing independent component analysis to remove the electro-oculogram signal and the electro-cardiographic signal, thereby obtaining a sample set.
In the invention, ICA (independent component analysis) is performed on the signals filtered in step S1.2. Only the two most prominent artifact signals are handled in the ICA step, namely the electro-oculogram (EOG) signal and the electrocardiogram (ECG) signal, both of which have obvious characteristics in the time and spatial domains. In one embodiment of the present invention, the EOG and ECG signals are handled by computing the Pearson correlation between a designated channel and the other channels, i.e. the quotient of the covariance and the product of the standard deviations of the two variables. Specifically, the device adopted in this embodiment has no dedicated ECG or EOG channel, so designated channels must be selected from the original 32 channels. To exclude the EOG signal, channels F7 and F8 of the 10-20 standard nomenclature system were chosen as the designated channels for computing Pearson correlations; to exclude the ECG signal, the P8 region, where the electroencephalogram signal is weaker, was selected as the designated channel. After the channels are selected, a threshold is set for the correlation coefficient (0.3 in this embodiment). If the correlation between a component and a designated channel is greater than the positive threshold or less than the negative threshold, a component strongly correlated with the ECG or EOG signal is considered found, and that component is excluded.
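The Pearson-correlation exclusion rule above can be sketched as follows; only the 0.3 threshold is taken from the text, while `artifact_components` and the synthetic reference channel in the usage are illustrative.

```python
import numpy as np

THRESHOLD = 0.3  # correlation-coefficient threshold chosen in the embodiment

def artifact_components(components, ref_channel, threshold=THRESHOLD):
    """Return indices of ICA components whose Pearson correlation with a
    designated artifact channel (e.g. F7/F8 for EOG, P8 for ECG) is above
    the positive threshold or below the negative threshold."""
    flagged = []
    for i, comp in enumerate(components):
        r = np.corrcoef(comp, ref_channel)[0, 1]
        if r > threshold or r < -threshold:
            flagged.append(i)
    return flagged
```

The flagged components would then be zeroed before the inverse ICA transform reconstructs the cleaned channels.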
And step S2, training and identifying and classifying the sample set obtained in the step S1 through a dynamic convergent differential neural network to obtain a P300 identification and classification result.
The dynamic convergence differential neural network is trained on the sample set, yielding the trained network parameters.
The signals of the electroencephalogram data to be detected, preprocessed as in step S1, are input into the dynamic convergence differential neural network for P300 / non-P300 classification, yielding a predicted value for each signal. After the region the subject gazes at flashes, a P300 signal appears at the corresponding moment in the electroencephalogram signal, and no P300 signal appears at other moments. Because the EEG signal is at the microvolt level, all channel signals are amplified by a factor of 100000 to facilitate the neural-network computation. With N channels of good signal quality, and 86 sampling points collected in each epoch from time 0 to 667 milliseconds at the sampling rate (128 Hz in one embodiment of the invention), the N×86 data points are used as the input-layer feature dimension to build a three-layer dynamic convergence differential neural network, as shown in FIG. 3.
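A minimal sketch of the epoch extraction just described, using the stated 128 Hz rate, 86-sample window and 100000× amplification; the array layout and function name are assumptions.

```python
import numpy as np

FS = 128        # sampling rate (Hz) in the embodiment
EPOCH_LEN = 86  # samples covering 0-667 ms at 128 Hz
SCALE = 100000  # amplification of the microvolt-level EEG

def extract_epochs(raw, flash_onsets):
    """raw: (n_channels, n_samples) EEG; flash_onsets: sample indices of
    flashes. Returns an (n_epochs, n_channels * EPOCH_LEN) feature matrix,
    one flattened, amplified epoch per flash."""
    feats = [raw[:, t0:t0 + EPOCH_LEN].ravel() * SCALE for t0 in flash_onsets]
    return np.asarray(feats)
```

Each row of the returned matrix is one N×86 input-layer feature vector for the network.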
Here xi (i=1,2,...,m) is the feature dimension of the input layer, h1i (i=1,2,...,m) are the neurons of the input layer, h2j (j=1,2,...,n) are the neurons of the hidden layer, and yk (k=1,2,...,p) are the neurons of the output layer; m, n and p are the numbers of input-layer, hidden-layer and output-layer neurons respectively. Y is the label value of the sample: in the P300 detection task, an epoch containing a P300 signal is labeled 1, and otherwise -1. w1 and w2 are the weight matrices, which multiply the neurons before the activation functions of the different layers are applied. Error is the error function, calculated from the output-layer neurons and the sample labels.
The calculation process of the dynamic convergence differential neural network (DCDNN) is as follows:
1. and (4) forward propagation.
In this step, the value of the next layer of neurons is derived and calculated according to the weight matrix, the neurons and the bias values:
input layer neurons are obtained directly from input layer feature dimensions, i.e.:
h1i=xi,i=1,2,...,m#(1)
The hidden-layer neuron h2j is calculated from the input-layer neurons h1i and the weight matrix w1 between the input layer and the hidden layer, namely:
L2j = Σi wji·h1i + b1j, j=1,2,...,n#(2)
h2j=f(L2j),j=1,2,...,n#(3)
The output-layer neuron yk is calculated from h2j and the weight matrix w2:
L3k = Σj wjk·h2j + b2k, k=1,2,...,p#(4)
yk=f(L3k),k=1,2,...,p#(5)
In the formulas, L2j and L3k are the intermediate variables obtained by multiplying the weight matrix by the input layer, respectively the hidden layer, and adding the bias. b1j and b2k are the biases of the weight matrices, whose function is to make the fitting of the neural network more accurate. wji denotes the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer, and wjk denotes the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer. f(x) is the activation function; the activation function used both from the input layer to the hidden layer and from the hidden layer to the output layer is the softsign function, namely:
f(x) = x / (1 + |x|)#(6)
where x stands for the intermediate variables L2j and L3k in formulas (3) and (5).
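The forward pass of equations (1)-(5) with the softsign activation (6) can be sketched in a few lines of numpy; the weight shapes and function names are assumptions.

```python
import numpy as np

def softsign(x):
    """Equation (6): f(x) = x / (1 + |x|)."""
    return x / (1.0 + np.abs(x))

def forward(x, w1, b1, w2, b2):
    """Equations (1)-(5): input -> hidden -> output with softsign at both
    layers. x: (m,) features; w1: (n, m); w2: (p, n)."""
    h1 = x                          # eq (1): input neurons are the features
    h2 = softsign(w1 @ h1 + b1)     # eqs (2)-(3): hidden layer
    y = softsign(w2 @ h2 + b2)      # eqs (4)-(5): output layer
    return h2, y
```

The hidden activations h2 are returned alongside y because the weight-update equations below reuse them.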
2. And (6) solving the neurodynamics.
Solving is essentially the process of finding the optimal solution of the objective function E(k) in the dynamic convergence differential neural network. Unlike the scalar error function used in the gradient-descent method, the following vector error function is defined:
E(k)=f(w2h2j+b2k)-Y=yk-Y#(7)
The minimum of E(k) is sought by continuously modifying the two weight matrices w1 and w2. To make E(k) converge to a minimum, the neurodynamic formula is applied:
dE(k)/dt = ΔE(k) = -λ·g(E(k))#(8)
where ΔE(k) is the derivative of the vector error function, t is time, and λ is the learning rate, a constant greater than 0. g(E(k)) is a monotonically increasing odd function acting on the vector error function; one embodiment of the invention uses the power-sigmoid function as this activation function, namely:
g(x) = x^a, if |x| ≥ 1; g(x) = ((1 + e^(-b)) / (1 - e^(-b))) · ((1 - e^(-b·x)) / (1 + e^(-b·x))), if |x| < 1#(9)
where the parameters a and b are both greater than or equal to 2 and x = E(k). In one embodiment of the present invention, a = 2 and b = 4.
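A sketch of the power-sigmoid activation (9) with a = 2, b = 4. Because the text requires g to be an odd function, the power branch is written sign-preserving here, which is an assumption about the intended form when a is even.

```python
import numpy as np

def power_sigmoid(x, a=2, b=4):
    """Piecewise power-sigmoid g(.): power branch for |x| >= 1 and a scaled
    sigmoid branch for |x| < 1 (eq (9)). The power branch is written
    sign(x)*|x|**a -- an assumption -- so that g stays odd when a is even."""
    x = np.asarray(x, dtype=float)
    power = np.sign(x) * np.abs(x) ** a
    scale = (1.0 + np.exp(-b)) / (1.0 - np.exp(-b))
    sig = scale * (1.0 - np.exp(-b * x)) / (1.0 + np.exp(-b * x))
    return np.where(np.abs(x) >= 1.0, power, sig)
```

The two branches meet at |x| = 1, so g is continuous, odd and monotonically increasing as the text requires.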
Substituting (7) into (8) gives:
Δyk = Δf·(Δw2·h2j + w2·Δh2j) = -λ·g(yk - Y)#(10)
where Δyk, Δw2 and Δh2j are respectively the derivative of the output-layer neurons, the derivative of the hidden-to-output weight matrix w2, and the derivative of the hidden-layer neurons h2j. To solve for Δyk, Δw2 and Δh2j must therefore be determined separately; the solving method is as follows:
Assume the weight matrix w1 is constant within an iteration, i.e. the derivatives of w1 and h2j are both 0; then:
Δf·Δw2·h2j = -λ·g(yk - Y)#(11)
Multiplying both sides on the right by the pseudo-inverse matrix h2j+ of h2j gives the derivative of w2:
Δw2 = -λ·(Δf)^(-1)·g(yk - Y)·h2j+#(12)
Assume the weight matrix w2 is constant within an iteration, i.e. its derivative is 0; and since:
Δh2j = Δf·Δw1·h1i#(13)
the derivative of w1 is obtained in the same way:
Δw1 = -λ·(Δf)^(-1)·w2+·(Δf)^(-1)·g(yk - Y)·h1i+#(14)
where w2+ is the pseudo-inverse matrix of w2, h1i+ is the pseudo-inverse matrix of the input layer, and Δf is the derivative of the softsign activation function.
At this point the DCDNN solving process is complete. Equations (1)-(14) are iterated repeatedly, updating w1 and w2 in each iteration, namely w2 = w2 + Δw2 and w1 = w1 + Δw1, until E(k) converges.
And step S3, combining the P300 recognition and classification result obtained in the step S2 with a user interface to obtain a final target object number.
In some embodiments of the present invention, combined with the user-interface flashing rule, the four white squares in FIG. 1 correspond to the four control signals cup 1, cup 2, cup 3 and cancel. During the experiment the four white squares flash randomly. The user gazes at one area while the squares flash, and the method records the time at which each flash occurs in the raw electroencephalogram signal together with the number of the flashed square. After the flashing ends, four groups are formed according to the square numbers and input separately into the trained dynamic convergence differential neural network, which outputs, for each group, the predicted probability that a P300 signal is present. Within each group, the prediction probabilities corresponding to the flashes of that square are summed according to the record; the sums represented by the four squares are compared, and the square with the maximum sum is considered to contain the P300 signal. The white square where the user's visual attention rests, i.e. the object number, is thereby obtained.
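The probability-summing selection rule above can be sketched as follows; `select_target` is an illustrative name.

```python
import numpy as np

def select_target(flash_labels, p300_probs, n_regions=4):
    """Sum the network's P300 prediction probabilities per flashed region
    (cup 1, cup 2, cup 3, cancel) and return the region with the largest
    sum, i.e. the region the user was attending to."""
    sums = np.zeros(n_regions)
    for region, prob in zip(flash_labels, p300_probs):
        sums[region] += prob
    return int(np.argmax(sums))
```

Summing over repeated flashes averages out single-trial classification noise before the argmax decision.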
And S4, sending the target object number obtained in the step S3 to a vision mechanical arm system, grabbing the cup and sending the cup to the mouth of the user, and finishing the water drinking task.
In some embodiments of the invention, as shown in FIG. 4, cup 1, cup 2 and cup 3 are the three object numbers. The electroencephalogram-processing part obtains the corresponding object number and sends it to the vision mechanical arm system, which controls the mechanical arm to grab the corresponding object and bring it to the user's mouth to complete the task.
In some embodiments of the present invention, the vision mechanical arm system comprises a Kinect camera and a Kinova mechanical arm. The Kinect camera is responsible for detecting the depth information and RGB color information of the cups, and the Kinova mechanical arm is responsible for grabbing the object according to the object coordinates. The specific steps are as follows:
1) Collect image information using the Kinect camera. In this step, an RGB-D image, i.e. an image carrying both color information and depth information, is acquired by the depth sensor module of the Kinect camera.
2) Compute a normal vector for each point by taking the cross product with its surrounding points, according to the normal vector calculation method for three-dimensional point cloud images.
3) Plane extraction: apply a region growing algorithm to the normal vector set obtained in step 2) to extract the background plane on which the objects are placed. Specifically, points whose normal vectors are vertical are added to a potential plane point set; if the number of points in the potential plane point set exceeds a set threshold, the potential plane is considered a plane.
4) Object segmentation. Using the three-dimensional point cloud and the plane point set obtained above, find the points that belong to the point cloud but not to the plane and add them to a potential object point set. Similarly, set another threshold: if the number of points in the potential object point set exceeds this threshold, an object to be grabbed is considered found and the points are added to the object point cloud set.
5) From the generated object point cloud set, compute the feature vectors for object recognition and matching using the OpenCV API, and finally perform feature vector matching with the nearest-neighbor open source library FLANN to find the object, thereby obtaining the cup recognition result.
6) Coordinate conversion. To control the mechanical arm to grab the target object as seen through the camera, the coordinates of the target object in the mechanical arm coordinate system must first be obtained. Because the camera and the mechanical arm cannot physically occupy the same position, a coordinate conversion is required. In computer vision, the conversion between two rectangular coordinate systems is in fact the process of solving for a rotation matrix R and a translation matrix T. The coordinates of the object in the camera coordinate system are denoted (X_i, Y_i, Z_i), and its coordinates in the mechanical arm coordinate system are denoted (X_K, Y_K, Z_K).
[X_K, Y_K, Z_K]^T = R · [X_i, Y_i, Z_i]^T + T
Solving the above formula establishes the one-to-one correspondence between the point sets of the two coordinate systems, giving the coordinates of the object in the mechanical arm coordinate system. The target cup is then grabbed by the Kinova mechanical arm and delivered to the user's mouth, completing the drinking task.
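The rotation matrix R and translation matrix T above can be estimated from matched point pairs between the two coordinate systems; a standard way to do this is the SVD-based (Kabsch) solution. The sketch below is illustrative, not the patented implementation, and the names are assumptions:

```python
import numpy as np

def solve_rt(cam_pts, arm_pts):
    """Find R, T such that arm_pts ≈ R @ cam_pts + T (points as Nx3 arrays)."""
    cc, ca = cam_pts.mean(axis=0), arm_pts.mean(axis=0)   # centroids
    H = (cam_pts - cc).T @ (arm_pts - ca)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = ca - R @ cc
    return R, T

# Usage: recover a known transform from five noiseless correspondences.
rng = np.random.default_rng(0)
cam = rng.random((5, 3))                                   # points in camera frame
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
arm = cam @ R_true.T + np.array([0.1, 0.2, 0.3])           # same points in arm frame
R, T = solve_rt(cam, arm)
print(np.allclose(R, R_true))  # -> True
```

In practice the correspondences come from a calibration step (e.g. a marker visible to the camera while the arm touches known poses), and noisy measurements make the least-squares nature of the SVD solution important.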
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A brain-controlled drinking water system based on a dynamic convergence differential neural network is characterized by comprising the following steps:
s1, preprocessing electroencephalogram data acquired by electroencephalogram equipment to obtain a sample set;
s2, training and identifying and classifying the sample set obtained in the step S1 through a dynamic convergence differential neural network to obtain a P300 identification and classification result;
s3, combining the P300 recognition and classification result obtained in the step S2 with a user interface to obtain a final target object number;
and S4, sending the target object number obtained in the step S3 to a vision mechanical arm system, grabbing the cup and sending the cup to the mouth of a user, and finishing a water drinking task.
2. The brain-controlled drinking system based on the dynamic convergent-differential neural network as claimed in claim 1, wherein the step S1 includes the following steps:
s1.1, channel screening is carried out, and channel signals with weak signals are removed;
s1.2, performing band-pass filtering on the acquired channel signals;
and S1.3, performing independent component analysis to remove the electro-oculogram signal and the electro-cardiographic signal, thereby obtaining a sample set.
3. The brain-controlled drinking water system based on the dynamic convergent-differential neural network as claimed in claim 2, wherein performing independent component analysis to remove the electro-oculographic signal and the electrocardiographic signal in S1.3 comprises:
selecting designated channels for the electro-oculographic signal and the electrocardiographic signal respectively;
calculating the Pearson correlation between the designated channel of the electro-oculographic signal and the other channels, and the Pearson correlation between the designated channel of the electrocardiographic signal and the other channels;
and setting a threshold for the Pearson correlation: if the correlation between a component and a designated channel is greater than the positive threshold or less than the negative threshold, the component is deemed strongly correlated with the electrocardiographic or electro-oculographic signal and is excluded.
4. The brain-controlled drinking system based on the dynamic convergent-differential neural network as claimed in claim 3, wherein the threshold of the Pearson correlation is 0.3.
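The correlation-threshold exclusion rule of claims 3 and 4 can be sketched as follows; the data and names here are invented for illustration, with the threshold set to 0.3 as in claim 4:

```python
import numpy as np

def find_artifact_components(components, reference, threshold=0.3):
    """Return indices of components whose Pearson correlation with the
    designated reference channel exceeds +threshold or falls below -threshold."""
    bad = []
    for i, comp in enumerate(components):
        r = np.corrcoef(comp, reference)[0, 1]   # Pearson correlation coefficient
        if r > threshold or r < -threshold:
            bad.append(i)                        # strongly correlated -> exclude
    return bad

# Toy example: one component dominated by the EOG reference, one clean one.
t = np.linspace(0, 1, 256)
eog = np.sin(2 * np.pi * 2 * t)                  # designated EOG channel
comps = np.vstack([eog + 0.1 * np.cos(7 * t),    # artifact-laden component
                   np.cos(2 * np.pi * 30 * t)])  # clean high-frequency component
print(find_artifact_components(comps, eog))  # -> [0]
```

In a full pipeline the `components` would be the independent components produced by ICA, and the flagged ones would be zeroed out before reconstructing the EEG channels.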
5. The brain-controlled drinking system based on the dynamically convergent differential neural network as claimed in claim 1, wherein the dynamically convergent differential neural network is trained using the sample set in step S2 to obtain network parameters for identification and classification.
6. The brain-controlled drinking system based on the dynamic convergent-differential neural network as claimed in claim 1, wherein in step S3, the flicker that elicited the P300 signal in the user interface is determined according to the P300 identification and classification result, so as to obtain the number corresponding to that flicker, and the vision mechanical arm system grabs the corresponding cup according to the object number.
7. The brain-controlled water drinking system based on the dynamic convergence differential neural network as claimed in claim 6, wherein a plurality of regions representing different cups, each marked with a number, are arranged on the user interface and flash randomly; the user gazes at a flashing region, and the time at which each flash appears in the raw electroencephalogram signal is recorded together with the number of the corresponding region; after the flashing ends, a plurality of groups are obtained and input separately into the dynamic convergence differential neural network, which yields the prediction probability that a P300 signal is present for each group; the signals in each group correspond to one region, so each group yields a plurality of prediction probability values for whether the P300 signal is present for that region; these values are summed, giving one prediction probability sum per region; the group with the maximum value among the sums is selected, and the region corresponding to that group is the region the user attended to.
8. The brain-controlled drinking system based on the dynamic convergent differential neural network as claimed in claim 1, wherein the visual mechanical arm system comprises a camera and a mechanical arm, the camera is used for measuring cup depth information and RGB color information, and the mechanical arm is used for grabbing a cup according to the coordinates of a target cup.
9. The brain-controlled drinking system based on the dynamically convergent differential neural network as claimed in any one of claims 1 to 8, wherein the dynamically convergent differential neural network comprises an input layer, a hidden layer and an output layer, and the value of the neuron in the next layer is obtained by forward propagation.
10. The brain-controlled drinking system based on the dynamic convergence differential neural network as claimed in claim 9, wherein the calculation process of the dynamic convergence differential neural network is as follows:
the input layer neurons are obtained directly from the input layer feature dimensions, i.e.:
h_{1i} = x_i, i = 1, 2, ..., m    (1)
in forward propagation, the value of each neuron in the next layer is computed from the weight matrix, the previous layer's neurons and the bias; the hidden layer neuron h_{2j} is computed from h_{1i} and the weight matrix w_1 between the input layer and the hidden layer:
L_{2j} = Σ_{i=1}^{m} w_{ji} h_{1i} + b_{1j}    (2)
h_{2j} = f(L_{2j}), j = 1, 2, ..., n    (3)
the output layer neuron y_k is computed from h_{2j} and the weight matrix w_2:
L_{3k} = Σ_{j=1}^{n} w_{2,kj} h_{2j} + b_{2k}    (4)
y_k = f(L_{3k}), k = 1, 2, ..., p    (5)
in the formulas, x_i is the input layer feature dimension, i = 1, 2, ..., m; h_{1i} is a neuron of the input layer; h_{2j} is a neuron of the hidden layer, j = 1, 2, ..., n; y_k is a neuron of the output layer, k = 1, 2, ..., p; m, n and p denote the numbers of input, hidden and output layer neurons respectively; Y is the label value of the sample; L_{2j} and L_{3k} are the intermediate variables obtained by multiplying the weight matrix by the input layer (respectively the hidden layer) and adding the bias; b_{1j} and b_{2k} are the biases; w_{ji} denotes the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer; f(x) is the activation function;
the optimal value of the objective function E(k) of the dynamically convergent differential neural network is solved to output the final classification result, where the expression of E(k) is:
E(k) = f(w_2 h_{2j} + b_{2k}) − Y = y_k − Y    (7)
the optimal value of E(k) is found by continuously modifying the two weight matrices w_1 and w_2; to make E(k) converge to its minimum, the neural dynamics formula is applied:
ΔE(k) = −λ g(E(k))    (8)
where ΔE(k) is the derivative of E(k), λ is the learning rate, a constant greater than 0, and g(E(k)) is a monotonically increasing odd function, which serves as the activation function;
substituting formula (7) into formula (8) gives:
Δy_k = Δw_2 h_{2j} + w_2 Δh_{2j} = −λ g(y_k − Y)    (10)
where Δy_k, Δw_2 and Δh_{2j} are respectively the derivative of the output layer neuron, the derivative of the hidden-to-output weight matrix w_2, and the derivative of the hidden layer neuron h_{2j}; therefore, to solve for Δy_k, Δw_2 and Δh_{2j} must each be determined. The solving method is as follows:
assume the weight matrix w_1 is a constant vector within one iteration, i.e. the derivatives of w_1 and h_{2j} are both 0; then:
Δw_2 h_{2j} = −λ g(y_k − Y)    (11)
multiplying both sides by the pseudo-inverse matrix h_{2j}^+ of h_{2j} gives the derivative of w_2:
Δw_2 = −λ g(y_k − Y) h_{2j}^+    (12)
assume the weight matrix w_2 is a constant vector within one iteration, i.e. its derivative is 0; again, because:
Δh_{2j} = Δf(h_{2j}) Δw_1 h_{1i}    (13)
the derivative of w_1 is obtained in the same way:
Δw_1 = −λ (Δf(h_{2j}))^{−1} w_2^+ g(y_k − Y) h_{1i}^+    (14)
where w_2^+ is the pseudo-inverse matrix of w_2, h_{1i}^+ is the pseudo-inverse matrix of the input layer, and Δf is the derivative of the softsign activation function;
at this point the solving process of the DCDNN neural network is complete; the iteration is repeated, with w_1 and w_2 modified in each iteration, namely w_2 = w_2 + Δw_2 and w_1 = w_1 + Δw_1, until E(k) converges.
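A condensed numerical sketch of the forward pass (equations (1)-(5)) and the pseudo-inverse update of equation (12) is given below. This is an interpretation of the claim text, not the patented code: g is taken as the identity (a valid monotonically increasing odd function), only w_2 is updated (the full method also updates w_1 via equation (14)), and all shapes and initial values are illustrative assumptions.

```python
import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def forward(x, w1, b1, w2, b2):
    h2 = softsign(w1 @ x + b1)   # hidden layer, eqs. (2)-(3)
    y = softsign(w2 @ h2 + b2)   # output layer, eqs. (4)-(5)
    return h2, y

rng = np.random.default_rng(1)
m, n, p = 4, 8, 2                         # layer sizes (illustrative)
x, Y = rng.random(m), rng.random(p) * 0.5 # one sample and its label vector
w1, b1 = rng.random((n, m)) * 0.1, np.zeros(n)
w2, b2 = rng.random((p, n)) * 0.1, np.zeros(p)
lam = 0.5                                  # learning rate λ > 0

for _ in range(200):
    h2, y = forward(x, w1, b1, w2, b2)
    e = y - Y                              # E(k), eq. (7)
    # eq. (12): Δw2 = -λ g(E(k)) h2^+, with g = identity
    h2_pinv = np.linalg.pinv(h2[:, None]).ravel()
    w2 = w2 + np.outer(-lam * e, h2_pinv)  # w2 = w2 + Δw2

h2, y = forward(x, w1, b1, w2, b2)
print(np.abs(y - Y).max() < 1e-2)  # -> True: E(k) has converged toward zero
```

Each update shifts the output pre-activation by exactly −λE(k) (since h2^+ h2 = 1 for a vector), so the error shrinks geometrically toward zero, which is the convergence behavior the neural dynamics formula (8) is designed to produce.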
CN202111137386.7A 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network Active CN113887374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137386.7A CN113887374B (en) 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111137386.7A CN113887374B (en) 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network

Publications (2)

Publication Number Publication Date
CN113887374A true CN113887374A (en) 2022-01-04
CN113887374B CN113887374B (en) 2024-04-16

Family

ID=79007223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137386.7A Active CN113887374B (en) 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network

Country Status (1)

Country Link
CN (1) CN113887374B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115105095A (en) * 2022-08-29 2022-09-27 成都体育学院 Electroencephalogram signal-based movement intention identification method, system and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233785A (en) * 2020-07-08 2021-01-15 华南理工大学 Intelligent identification method for Parkinson's disease
CN112381124A (en) * 2020-10-30 2021-02-19 华南理工大学 Method for improving brain-computer interface performance based on dynamic inverse learning network
CN112446289A (en) * 2020-09-25 2021-03-05 华南理工大学 Method for improving performance of P300 spelling device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233785A (en) * 2020-07-08 2021-01-15 华南理工大学 Intelligent identification method for Parkinson's disease
CN112446289A (en) * 2020-09-25 2021-03-05 华南理工大学 Method for improving performance of P300 spelling device
CN112381124A (en) * 2020-10-30 2021-02-19 华南理工大学 Method for improving brain-computer interface performance based on dynamic inverse learning network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Hongtao et al., "Event-related potential EEG signal analysis method based on denoising autoencoder neural network", Control Theory & Applications, no. 04, 15 April 2019 (2019-04-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115105095A (en) * 2022-08-29 2022-09-27 成都体育学院 Electroencephalogram signal-based movement intention identification method, system and equipment
CN115105095B (en) * 2022-08-29 2022-11-18 成都体育学院 Electroencephalogram signal-based movement intention identification method, system and equipment

Also Published As

Publication number Publication date
CN113887374B (en) 2024-04-16

Similar Documents

Publication Publication Date Title
US11602300B2 (en) Brain-computer interface based robotic arm self-assisting system and method
Gerson et al. Cortically coupled computer vision for rapid image search
CN112120694B (en) Motor imagery electroencephalogram signal classification method based on neural network
CN107862249B (en) Method and device for identifying split palm prints
de San Roman et al. Saliency Driven Object recognition in egocentric videos with deep CNN: toward application in assistance to Neuroprostheses
CN112043473B (en) Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb
CN108646915B (en) Method and system for controlling mechanical arm to grab object by combining three-dimensional sight tracking and brain-computer interface
CN109325408A (en) A kind of gesture judging method and storage medium
CN111399652A (en) Multi-robot hybrid system based on layered SSVEP and visual assistance
CN113887374B (en) Brain control water drinking system based on dynamic convergence differential neural network
Niu et al. Automatic localization of optic disc based on deep learning in fundus images
Ghosh et al. VEA: vessel extraction algorithm by active contour model and a novel wavelet analyzer for diabetic retinopathy detection
CN110673721B (en) Robot nursing system based on vision and idea signal cooperative control
Ouerhani Visual attention: from bio-inspired modeling to real-time implementation
CN113064490B (en) Eye movement track-based virtual enhancement equipment identification method
Mohamadzamani et al. Detection of cucumber fruit on plant image using artificial neural network.
Ceylan et al. Blood vessel extraction from retinal images using complex wavelet transform and complex-valued artificial neural network
CN116704553B (en) Human body characteristic identification auxiliary system based on computer vision technology
CN115374831B (en) Dynamic and static combination velocity imagery classification method for multi-modal registration and space-time feature attention
Mohammadzamani et al. Detection of cucumber fruit on plant image using artificial neural network
CN112936259B (en) Man-machine cooperation method suitable for underwater robot
CN113947815A (en) Man-machine gesture cooperative control method based on myoelectricity sensing and visual sensing
Du et al. Vision-Based Robotic Manipulation of Intelligent Wheelchair with Human-Computer Shared Control
Alami et al. Exploring a deeper convolutional neural network architecture with high dropout for motor imagery BCI decoding
Mukul et al. Relative spectral power (RSP) and temporal RSP as features for movement imagery EEG classification with linear discriminant analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant