CN113887374B - Brain control water drinking system based on dynamic convergence differential neural network - Google Patents

Brain control water drinking system based on dynamic convergence differential neural network

Info

Publication number
CN113887374B
CN113887374B (application CN202111137386.7A)
Authority
CN
China
Prior art keywords
neural network
dynamic convergence
layer
brain
differential neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111137386.7A
Other languages
Chinese (zh)
Other versions
CN113887374A (en)
Inventor
张智军
孙健声
黄灿辉
黄展峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202111137386.7A priority Critical patent/CN113887374B/en
Publication of CN113887374A publication Critical patent/CN113887374A/en
Application granted granted Critical
Publication of CN113887374B publication Critical patent/CN113887374B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/011Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A20/00Water conservation; Efficient water supply; Efficient water use
    • Y02A20/152Water filtration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a brain-controlled water drinking system based on a dynamic convergence differential neural network, which comprises the following steps: 1) performing data channel screening, band-pass filtering and independent component analysis on the electroencephalogram data acquired by Emotiv electroencephalogram equipment; 2) training the data set obtained in step 1) with a dynamic convergence differential neural network and performing identification and classification; 3) combining the P300 identification and classification result obtained in step 2) with the user interface to obtain the final target object number; 4) sending the target object number obtained in step 3) to the vision mechanical arm system, which grabs the cup and delivers it to the user's mouth to complete the water drinking task.

Description

Brain control water drinking system based on dynamic convergence differential neural network
Technical Field
The invention belongs to the field of electroencephalogram (EEG) signal recognition and control, and particularly relates to a brain-controlled water drinking system based on a dynamic convergence differential neural network.
Background
A brain-computer interface (Brain-Computer Interface, BCI) is an information transmission channel that communicates directly with a computer, independent of the peripheral nerves and other organs of the human body. It offers severely disabled patients a new way of communicating with the outside world. Because of limited mobility, such patients often sit in wheelchairs or lie in bed for long periods, and it is difficult for others to learn their intentions accurately; their means of communication with the outside world are very limited.
At present, some research has attempted to combine brain-computer interface technology with mechanical arm technology. The Chinese patent application with publication number CN102198660A, entitled "Mechanical arm control system and action command control scheme based on brain-computer interface", uses a motor-imagery-based brain-computer interface to realize eight control instructions for the mechanical arm: up, down, left, right, forward, backward, and the grasping and releasing of the fingers. The Chinese patent application with publication number CN111880656A, entitled "Intelligent brain control system and rehabilitation equipment based on P300 signals", develops an intelligent brain-controlled system based on the P300 signal and realizes the task of controlling a mechanical arm with it. These applications complete only simple, even preset, mechanical arm action control tasks from electroencephalogram signals alone; the completed tasks are not concrete and do not exploit the advantages of the mechanical arm. Moreover, most of the classification networks are built with simple machine-learning methods, whose computation takes a long time.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a brain-controlled water drinking system based on a dynamic convergence differential neural network, with which a user can complete the task of grabbing a specified cup and drinking water using electroencephalogram signals alone. The user selects among different cups by gazing at characters on a screen, and a mechanical arm finally grabs the designated cup; the system can greatly facilitate the daily life of severely disabled patients.
The object of the invention is achieved by at least one of the following technical solutions.
A brain-controlled water drinking system based on a dynamic convergence differential neural network comprises the following steps:
s1, preprocessing electroencephalogram data acquired by electroencephalogram equipment to obtain a sample set;
s2, training the sample set obtained in the step S1 through a dynamic convergence differential neural network, and identifying and classifying to obtain a P300 identification and classification result;
s3, combining the P300 identification classification result obtained in the step S2 with a user interface to obtain a final target object number;
s4, sending the target object number obtained in the step S3 to a visual mechanical arm system, grabbing a cup, and sending the cup to the mouth of a user to finish a water drinking task.
Further, step S1 includes the steps of:
s1.1, carrying out channel screening to remove channel signals with weak signals;
s1.2, carrying out band-pass filtering on the acquired channel signals;
s1.3, performing independent component analysis to remove the electro-oculogram signal and the electrocardiosignal, thereby obtaining a sample set.
Further, performing independent component analysis to remove the electro-oculogram signal and the electrocardio signal in S1.3 comprises:
selecting designated channels for the electro-oculogram signal and the electrocardio signal, respectively;
computing the Pearson correlation between the designated electro-oculogram channel and the other channels, and between the designated electrocardio channel and the other channels;
setting a threshold on the Pearson correlation: if the correlation between a component and a designated channel is greater than the positive threshold or smaller than the negative threshold, the component is considered strongly correlated with the electrocardio or electro-oculogram signal and is eliminated.
Further, the Pearson correlation threshold is 0.3.
Further, in step S2, the sample set is used to train the dynamic convergence differential neural network, so as to obtain network parameters for identification and classification.
Further, in step S3, according to the P300 identification and classification result, the flash in the user interface that elicited the P300 signal is determined, thereby obtaining the corresponding number; the vision mechanical arm system grabs the corresponding cup according to the object number.
Further, a plurality of areas representing different cups are set on the user interface and marked with numbers, and each area flashes randomly. While the user gazes at an area, the time of each flash and the number of the corresponding area are recorded in the original electroencephalogram signal. After the flashing ends, a plurality of groups are obtained, one per area, and each group is input into the dynamic convergence differential neural network to obtain the predicted probability of a P300 signal for each epoch in the group. The predicted probability values within each group are summed, giving one sum per area; the area whose group has the largest sum is taken as the area the user was attending to.
Further, the vision mechanical arm system comprises a camera and a mechanical arm, wherein the camera is used for measuring cup depth information and RGB color information, and the mechanical arm is used for grabbing the cup according to the coordinates of the target cup.
Further, the dynamic convergence differential neural network comprises an input layer, a hidden layer and an output layer, and the value of the next layer of neurons is obtained through forward propagation.
Further, the calculation process of the dynamic convergence differential neural network is as follows:
the input layer neurons are obtained directly from the input-layer feature dimensions, namely:

$$h_{1i} = x_i,\quad i = 1,2,\dots,m \tag{1}$$

in the forward propagation, the value of each next-layer neuron is computed from the weight matrix, the previous-layer neurons and the bias; the hidden-layer neuron $h_{2j}$ is obtained from $h_{1i}$ through the weight matrix $w_1$ between the input layer and the hidden layer:

$$L_{2j} = \sum_{i=1}^{m} w_{ji} h_{1i} + b_{1j} \tag{2}$$

$$h_{2j} = f(L_{2j}),\quad j = 1,2,\dots,n \tag{3}$$

the output-layer neuron $y_k$ is obtained from $h_{2j}$ and the weight matrix $w_2$:

$$L_{3k} = \sum_{j=1}^{n} w_{jk} h_{2j} + b_{2k} \tag{4}$$

$$y_k = f(L_{3k}),\quad k = 1,2,\dots,p \tag{5}$$

wherein $x_i$ is the feature dimension of the input layer, $i = 1,2,\dots,m$; $h_{1i}$ are the input-layer neurons; $h_{2j}$ are the hidden-layer neurons, $j = 1,2,\dots,n$; $y_k$ are the output-layer neurons, $k = 1,2,\dots,p$; $m$, $n$ and $p$ are the numbers of input-, hidden- and output-layer neurons; $Y$ is the sample label value; $L_{2j}$ and $L_{3k}$ are the intermediate variables obtained by multiplying the weight matrix with the input layer and with the hidden layer, respectively, and adding the bias; $b_{1j}$, $b_{2k}$ are the biases of the weight matrices; $w_{ji}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer; $f(x)$ is the activation function;
and solving for the optimal value of the objective function $E(k)$ in the dynamic convergence differential neural network and outputting the final classification result, wherein the expression of $E(k)$ is:

$$E(k) = f(w_2 h_{2j} + b_{2k}) - Y = y_k - Y \tag{7}$$

by continually modifying the two weight matrices $w_1$, $w_2$, the optimal value of $E(k)$ is found; in order to make $E(k)$ converge to a minimum, the neuro-dynamic formula is applied:

$$\Delta E(k) = \frac{dE(k)}{dt} = -\lambda\, g(E(k)) \tag{8}$$

where $\Delta E(k)$ is the derivative of $E(k)$, $\lambda$ is the learning rate, a constant greater than 0, and $g(E(k))$ is a monotonically increasing odd function serving as the activation function;

substituting formula (7) into formula (8) yields:

$$\Delta y_k = \Delta w_2 h_{2j} + w_2 \Delta h_{2j} = -\lambda\, g(y_k - Y) \tag{10}$$

where $\Delta y_k$, $\Delta w_2$ and $\Delta h_{2j}$ are the derivative of the output-layer neurons, the derivative of the hidden-to-output weight matrix $w_2$, and the derivative of the hidden-layer neurons $h_{2j}$, respectively; therefore, to solve for $\Delta y_k$, $\Delta w_2$ and $\Delta h_{2j}$ must be obtained separately, as follows:

let the weight matrix $w_1$ be a constant vector during the iteration, i.e. the derivatives of $w_1$ and $h_{2j}$ are both 0; then:

$$\Delta w_2 h_{2j} = -\lambda\, g(y_k - Y) \tag{11}$$

multiplying both sides by the pseudo-inverse matrix $h_{2j}^{+}$ gives the derivative of $w_2$:

$$\Delta w_2 = -\lambda\, g(y_k - Y)\, h_{2j}^{+} \tag{12}$$

let the weight matrix $w_2$ be a constant vector during the iteration, i.e. the derivative of $w_2$ is 0; since moreover

$$\Delta h_{2j} = \Delta f(L_{2j})\, \Delta w_1 h_{1i} \tag{13}$$

the derivative of $w_1$ is similarly obtained:

$$\Delta w_1 = \Delta f^{-1}\!\left(-\lambda\, w_2^{+}\, g(y_k - Y)\right) h_{1i}^{+} \tag{14}$$

where $w_2^{+}$ is the pseudo-inverse matrix of $w_2$, $h_{1i}^{+}$ is the pseudo-inverse of the input layer, and $\Delta f^{-1}$ is the inverse of the softsign activation function's derivative;

this completes the DCDNN neural network solving process; the iteration is repeated, updating $w_1$ and $w_2$ in each iteration as $w_2 = w_2 + \Delta w_2$, $w_1 = w_1 + \Delta w_1$, until $E(k)$ converges.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention uses a brain-computer interface recognition technology based on a dynamic convergence differential neural network, which can recognize quickly and accurately and thus address the problem that disabled patients cannot take care of themselves in daily life. Meanwhile, the Emotiv equipment is simple and convenient to use and inexpensive.
(2) The invention integrates the advantages of the P300 signal and the dynamic convergence differential neural network, can specifically complete the task of helping disabled people drink water, can better utilize the brain-computer interface to improve the life quality of paralyzed patients and improve the autonomous life capacity of the paralyzed patients.
Drawings
FIG. 1 is a user interface of the present invention;
FIG. 2 is a flow chart of data preprocessing according to the present invention;
FIG. 3 is a flow chart of a dynamically converging differential neural network of the present invention;
fig. 4 is a general flow chart of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a brain-controlled water drinking system based on a dynamic convergence differential neural network, which comprises the following steps:
step S1: preprocessing the electroencephalogram data acquired by the emoiv electroencephalogram equipment to obtain a sample set.
In one embodiment of the present invention, the user interface is shown in FIG. 1, with four white squares corresponding to four numbers, indicating selection of cup number one, cup number two, cup number three, and cancellation. The user keeps visual attention on the corresponding selection, the interface starts to flash randomly among the four selections, the electroencephalogram data generated by each group of flashes is recorded, and P300 signal analysis and identification and classification are performed on it to obtain the corresponding selection number.
In the present invention, referring to fig. 2, the steps specifically include the following sub-steps:
s1.1, carrying out channel screening, and removing channel signals with weaker signals according to an empiv signal display result;
in one embodiment of the invention, firstly, the electroencephalogram signals are subjected to channel selection, and channel signals with weaker signals are removed according to the intensity of the signals detected by the emuiv software because of individual differences of the signals under different users and different conditions, so that the channel signals which are favorable for identification and classification are screened out.
S1.2, carrying out band-pass filtering on the acquired channel signals.
In one embodiment of the invention, the channel signals screened in step S1.1 are band-pass filtered, extracting the frequency band of the electroencephalogram signal.
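The band-pass filtering step (S1.2) can be sketched with a simple FFT mask; the 1–20 Hz pass band and 128 Hz sampling rate are illustrative choices for P300 work, not values fixed by the patent text, and a Butterworth filter (e.g. from scipy.signal) would be an equally common alternative.

```python
import numpy as np

def bandpass(eeg, fs=128.0, lo=1.0, hi=20.0):
    """Keep only the lo..hi Hz band; eeg has shape (n_channels, n_samples)."""
    n = eeg.shape[-1]
    spectrum = np.fft.rfft(eeg, axis=-1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    spectrum[..., ~mask] = 0.0            # zero every bin outside the band
    return np.fft.irfft(spectrum, n=n, axis=-1)
```

For EEG at microvolt level, a zero-phase time-domain filter is usually preferred in practice; the FFT mask above is only the shortest self-contained illustration of the idea.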
S1.3, performing independent component analysis to remove the electro-oculogram signal and the electrocardiosignal, thereby obtaining a sample set.
In the invention, ICA (independent component analysis) is performed on the signals filtered in step S1.2. In the ICA step only the two most salient artifact signals, the electro-oculogram signal and the electrocardio signal, are processed; both have obvious characteristics in the time and spatial domains. In one embodiment of the invention, both are handled by computing the Pearson correlation between a designated channel and the other channels, i.e., the quotient of the covariance and the standard deviations of the two variables. Specifically, since the apparatus adopted in the embodiment of the present invention has no dedicated electrocardio or electro-oculogram channels, the designated channels must be selected from the original 32 channels. To exclude the electro-oculogram signal, the F7 and F8 channels of the 10-20 standard naming system are selected as the designated channels for computing the Pearson correlation; to exclude the electrocardio signal, the P8 region, where the electroencephalogram signal is weaker, is selected as the designated channel. After the channels are selected, a threshold is set on the correlation coefficient (the final threshold selected in this embodiment is 0.3); if the correlation between a component and a designated channel is greater than the positive threshold or smaller than the negative threshold, the component is considered strongly correlated with the electrocardio or electro-oculogram signal and is eliminated.
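The rejection rule above can be sketched as follows; the function and its names are illustrative, and the ±0.3 threshold is the value stated in the embodiment.

```python
import numpy as np

def reject_components(components, reference, threshold=0.3):
    """Return indices of ICA components to discard.

    components: (n_components, n_samples) ICA source signals.
    reference:  (n_samples,) designated artifact channel (e.g. F7/F8 or P8).
    A component is rejected when its Pearson correlation with the
    reference exceeds +threshold or falls below -threshold.
    """
    rejected = []
    for i, comp in enumerate(components):
        # Pearson r = covariance / product of standard deviations
        r = np.corrcoef(comp, reference)[0, 1]
        if r > threshold or r < -threshold:
            rejected.append(i)
    return rejected
```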
Step S2: training the dynamic convergence differential neural network on the sample set obtained in step S1 and performing identification and classification to obtain the P300 identification and classification result.
The dynamic convergence differential neural network is trained on the sample set to obtain the trained network parameters.
The signals of the electroencephalogram data to be detected, preprocessed in step S1, are input into the dynamic convergence differential neural network for P300 / non-P300 classification, obtaining a predicted value for each signal. After the region the subject gazes at flashes, the corresponding electroencephalogram contains a P300 signal; at other times no P300 signal is generated. Because electroencephalogram signals are at the microvolt level, all channel signals are simultaneously amplified 100000 times for the convenience of the neural network calculation. N channels with good signal quality are kept, and each epoch collects 86 sampling points in the period from time 0 to 667 milliseconds at the sampling rate (128 Hz in one embodiment of the invention). The N*86 data points are taken as the feature dimensions of the neural network input layer, and a three-layer dynamic convergence differential neural network is constructed, as shown in FIG. 3.
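The epoch and feature construction described above can be sketched as below; the function name and the zero-padding-free slicing are illustrative, while the 0–667 ms window, 128 Hz rate (giving 86 samples per channel) and the 1e5 amplification come from the text.

```python
import numpy as np

def extract_epoch(eeg, onset, fs=128, t_max=0.667, gain=1e5):
    """Cut one stimulus-locked epoch and flatten it into a feature vector.

    eeg:   (n_channels, n_samples) continuous recording.
    onset: sample index at which the flash occurred.
    """
    n_points = int(t_max * fs) + 1           # 85 + 1 = 86 samples per channel
    epoch = eeg[:, onset:onset + n_points] * gain   # microvolt-level scaling
    return epoch.reshape(-1)                  # (n_channels * 86,) input vector
```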
Wherein $x_i$ ($i = 1,2,\dots,m$) is the feature dimension of the input layer, $h_{1i}$ ($i = 1,2,\dots,m$) are the neurons of the input layer, $h_{2j}$ ($j = 1,2,\dots,n$) are the neurons of the hidden layer, $y_k$ ($k = 1,2,\dots,p$) are the neurons of the output layer, $m$ is the number of input-layer neurons, $n$ the number of hidden-layer neurons and $p$ the number of output-layer neurons, and $Y$ is the label value of the sample; in the P300 detection task, an epoch containing the P300 signal is labeled 1 and otherwise -1. $w_1$, $w_2$ are the weight matrices between the layers, which multiply the neurons according to the activation functions of the different layers. Error is the error function, computed from the output-layer neurons and the sample labels.
The calculation process of the dynamic convergence differential neural network (DCDNN) is as follows:
1. Forward propagation.
In this step, the value of each next-layer neuron is computed from the weight matrix, the previous-layer neurons and the bias:
the input layer neurons are obtained directly from the input-layer feature dimensions, namely:

$$h_{1i} = x_i,\quad i = 1,2,\dots,m \tag{1}$$

the hidden-layer neuron $h_{2j}$ is computed from the input-layer neuron $h_{1i}$ through the weight matrix $w_1$ between the input layer and the hidden layer, namely:

$$L_{2j} = \sum_{i=1}^{m} w_{ji} h_{1i} + b_{1j} \tag{2}$$

$$h_{2j} = f(L_{2j}),\quad j = 1,2,\dots,n \tag{3}$$

the output-layer neuron $y_k$ is computed from $h_{2j}$ and the weight matrix $w_2$:

$$L_{3k} = \sum_{j=1}^{n} w_{jk} h_{2j} + b_{2k} \tag{4}$$

$$y_k = f(L_{3k}),\quad k = 1,2,\dots,p \tag{5}$$

wherein $L_{2j}$, $L_{3k}$ are the intermediate variables obtained by multiplying the weight matrix with the input layer and with the hidden layer, respectively, and adding the bias. $b_{1j}$, $b_{2k}$ are the biases of the weight matrices, whose role is to make the fitting of the neural network more accurate. $w_{ji}$ denotes the weight between the $i$-th neuron in the input layer and the $j$-th neuron in the hidden layer, and $w_{jk}$ the weight between the $j$-th neuron in the hidden layer and the $k$-th neuron in the output layer. From the input layer to the hidden layer and from the hidden layer to the output layer, the activation function $f(x)$ used is the softsign function, namely:

$$f(x) = \frac{x}{1 + |x|} \tag{6}$$

where $x$ stands for the intermediate variables $L_{2j}$, $L_{3k}$ in formulas (3) and (5).
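The forward pass of equations (1)–(6) can be sketched in a few lines; layer sizes, the column-vector convention and any initialization are illustrative choices, not values from the patent.

```python
import numpy as np

def softsign(x):
    """Softsign activation f(x) = x / (1 + |x|), used on both layers."""
    return x / (1.0 + np.abs(x))

def forward(x, w1, b1, w2, b2):
    """One forward pass: x (m,1) -> hidden h2 (n,1) -> output y (p,1)."""
    h1 = x                    # (1): input neurons equal the feature vector
    L2 = w1 @ h1 + b1         # (2): weighted sum plus bias
    h2 = softsign(L2)         # (3): hidden-layer neurons
    L3 = w2 @ h2 + b2         # (4)
    y = softsign(L3)          # (5): output-layer neurons
    return y, h2
```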
2. Neuro-dynamic solving.
The solution is essentially the process of finding the optimal solution of the objective function $E(k)$ of the dynamically converging differential neural network. Unlike the scalar error function used in the gradient descent method, the following vector error function is defined:

$$E(k) = f(w_2 h_{2j} + b_{2k}) - Y = y_k - Y \tag{7}$$

the minimum of $E(k)$ is found by continually modifying the two weight matrices $w_1$, $w_2$. To make $E(k)$ converge to the minimum, the neuro-dynamic formula is applied:

$$\Delta E(k) = \frac{dE(k)}{dt} = -\lambda\, g(E(k)) \tag{8}$$

where $\Delta E(k)$ is the derivative of the vector error function, $t$ is time, and $\lambda$ is the learning rate, a constant greater than 0. $g(E(k))$ is a monotonically increasing odd function acting on the vector error function; in one embodiment of the invention a power-sigmoid function is used as the activation function, namely:

$$g(x) = \begin{cases} x^{a}, & |x| \ge 1 \\[4pt] \dfrac{1 + e^{-b}}{1 - e^{-b}} \cdot \dfrac{1 - e^{-bx}}{1 + e^{-bx}}, & |x| < 1 \end{cases} \tag{9}$$

wherein the parameters $a$, $b$ are both greater than or equal to 2, and $x = E(k)$. In one embodiment of the invention, $a = 2$ and $b = 4$.
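A power-sigmoid activation in the standard form used in neuro-dynamic solvers can be sketched as follows. The patent text omits the branch formulas, so the piecewise definition here is an assumption from the common form in that literature; writing the power branch as sign(x)·|x|^a keeps g an odd function for any exponent a, including the a = 2 of the embodiment.

```python
import numpy as np

def power_sigmoid(x, a=2, b=4):
    """Power-sigmoid: sigmoid-shaped inside |x| < 1, power law outside.

    Both branches are odd and meet continuously at |x| = 1, so g is a
    monotonically increasing odd function as required by formula (8).
    """
    x = np.asarray(x, dtype=float)
    sig = ((1 + np.exp(-b)) / (1 - np.exp(-b))) * \
          ((1 - np.exp(-b * x)) / (1 + np.exp(-b * x)))
    pow_branch = np.sign(x) * np.abs(x) ** a     # odd-extended power branch
    return np.where(np.abs(x) >= 1.0, pow_branch, sig)
```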
Substituting (7) into (8) yields:

$$\Delta y_k = \Delta w_2 h_{2j} + w_2 \Delta h_{2j} = -\lambda\, g(y_k - Y) \tag{10}$$

where $\Delta y_k$, $\Delta w_2$ and $\Delta h_{2j}$ are the derivative of the output-layer neurons, the derivative of the hidden-to-output weight matrix $w_2$, and the derivative of the hidden-layer neurons $h_{2j}$, respectively. To solve for $\Delta y_k$, $\Delta w_2$ and $\Delta h_{2j}$ must be obtained separately, as follows:

let the weight matrix $w_1$ be a constant vector during the iteration, i.e. the derivatives of $w_1$ and $h_{2j}$ are both 0; then:

$$\Delta w_2 h_{2j} = -\lambda\, g(y_k - Y) \tag{11}$$

multiplying both sides by the pseudo-inverse matrix $h_{2j}^{+}$ gives the derivative of $w_2$:

$$\Delta w_2 = -\lambda\, g(y_k - Y)\, h_{2j}^{+} \tag{12}$$

let the weight matrix $w_2$ be a constant vector during the iteration, i.e. the derivative of $w_2$ is 0; since moreover

$$\Delta h_{2j} = \Delta f(L_{2j})\, \Delta w_1 h_{1i} \tag{13}$$

the derivative of $w_1$ is similarly obtained:

$$\Delta w_1 = \Delta f^{-1}\!\left(-\lambda\, w_2^{+}\, g(y_k - Y)\right) h_{1i}^{+} \tag{14}$$

where $w_2^{+}$ is the pseudo-inverse matrix of $w_2$, $h_{1i}^{+}$ is the pseudo-inverse of the input layer, and $\Delta f^{-1}$ is the inverse of the softsign activation function's derivative.

This completes the DCDNN neural network solving process: formulas (1) to (14) are iterated repeatedly, with $w_1$ and $w_2$ updated in each iteration as $w_2 = w_2 + \Delta w_2$, $w_1 = w_1 + \Delta w_1$, until $E(k)$ converges.
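A runnable miniature of the neuro-dynamic weight update can be sketched as follows. To stay short it holds the hidden layer fixed (w1 constant) and iterates only update (12); g is taken as the identity, which is also a valid monotonically increasing odd function, and all sizes, targets and the learning rate are illustrative, not from the patent.

```python
import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

rng = np.random.default_rng(0)
n, p = 5, 2                            # hidden and output neuron counts
h2 = rng.uniform(0.5, 1.0, (n, 1))     # fixed hidden activations, one sample
Y = np.array([[0.5], [-0.3]])          # targets with |Y| < 1 so softsign can reach them
w2 = np.zeros((p, n))
b2 = np.zeros((p, 1))
lam = 0.1                              # learning rate lambda

for _ in range(600):
    E = softsign(w2 @ h2 + b2) - Y            # vector error, eq. (7)
    dw2 = -lam * E @ np.linalg.pinv(h2)       # eq. (12) with g = identity
    w2 = w2 + dw2                             # w2 <- w2 + delta w2

final_err = np.linalg.norm(softsign(w2 @ h2 + b2) - Y)
```

Because pinv(h2) @ h2 equals 1 for a nonzero column vector, each step shifts the pre-activation by exactly -λ·E, so the error shrinks monotonically toward zero; the full method would interleave the analogous w1 update from (14).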
And step S3, combining the P300 identification classification result obtained in the step S2 with a user interface to obtain a final target object number.
In some embodiments of the present invention, in conjunction with the user interface flashing rules, the four white squares in FIG. 1 are associated with the four control signals cup 1, cup 2, cup 3 and cancel. During the experiment the four white squares flash randomly while the user gazes at one area, and the invention records the time of each flash in the original electroencephalogram signal together with the white square number. After the flashing ends, four groups are obtained according to the white square numbers and are input into the trained dynamic convergence differential neural network, which yields, for each signal in the four groups, the predicted probability that it contains a P300 signal. Within each group, the prediction probabilities corresponding to the flashes of the associated white square are summed; the sums for the selections represented by the four white squares are compared, and the maximum is taken to indicate that the P300 signal appeared for that square's area. The white square at which the user's visual attention was directed, i.e., the object number, is thereby obtained.
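The selection rule above can be sketched as a small aggregation; the function name and the example numbers are made up for illustration.

```python
# Sum per-flash P300 probabilities by the square that flashed and pick
# the square with the largest sum, as described above.
def select_target(flash_regions, p300_probs):
    """flash_regions: region id of each flash, in order.
    p300_probs: network-predicted P300 probability of each epoch."""
    sums = {}
    for region, p in zip(flash_regions, p300_probs):
        sums[region] = sums.get(region, 0.0) + p
    return max(sums, key=sums.get)
```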
And S4, sending the target object number obtained in the step S3 to a visual mechanical arm system, grabbing a cup, and sending the cup to the mouth of a user to finish the water drinking task.
In some embodiments of the present invention, as shown in FIG. 4, cups 1, 2 and 3 are the three object numbers; the electroencephalogram processing part obtains the corresponding object number and sends it to the vision mechanical arm system, which controls the mechanical arm to grasp the corresponding object and deliver it to the user's mouth, completing the task.
In some embodiments of the present invention, the vision mechanical arm system includes a Kinect camera and a Kinova mechanical arm; the Kinect camera is responsible for detecting cup depth information and RGB color information, and the Kinova arm is responsible for grasping the object according to the object coordinates. The specific steps are as follows:
1) Acquire image information with the Kinect camera. In this step, an RGB-D image, i.e. an image carrying both color information and depth information, is acquired by the depth sensor module of the Kinect camera.
2) According to the normal vector calculation method for three-dimensional point cloud images, the normal vector at each point is obtained by a cross-product operation between the point and its neighboring points.
3) Plane extraction: the background plane on which the object is placed is extracted from the normal vector set obtained in step 2) using a region growing algorithm. Specifically, whenever a point with a vertical normal vector is encountered, it is added to the set of potential plane points. If the number of points in the set of potential plane points is greater than a set threshold, the potential plane is considered to be a real plane.
4) Object segmentation. Points that belong to the three-dimensional point cloud but not to the plane are computed from the point cloud and the plane point set obtained above, and are placed in the set of potential object points. Likewise, another threshold is set; if the number of points in the set of potential object points is greater than this threshold, the object to be grabbed is considered found and the points are added to the object point cloud set.
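Steps 3) and 4) reduce to thresholding on normal directions plus point counting. The sketch below illustrates only that thresholding logic; it replaces the actual region-growing connectivity check with a simple vertical-normal test, and the function name, `up` axis and threshold values are all illustrative assumptions.

```python
import numpy as np

def split_plane_and_objects(points, normals, up=(0.0, 0.0, 1.0),
                            angle_thresh=0.95, plane_min=500, obj_min=50):
    """Separate a supporting plane from object candidates.

    points, normals : (N, 3) arrays for the point cloud
    A point whose normal is nearly parallel to 'up' is a potential plane
    point; the remaining points are potential object points.  Each set
    counts as found only if it exceeds its size threshold.
    """
    up = np.asarray(up)
    vertical = np.abs(normals @ up) > angle_thresh  # nearly vertical normal
    plane_pts = points[vertical]
    object_pts = points[~vertical]
    plane_found = len(plane_pts) > plane_min
    objects_found = len(object_pts) > obj_min
    return plane_found, plane_pts, objects_found, object_pts
```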
5) Using the generated object point cloud set, the object recognition matching feature vectors are calculated with the OpenCV API; finally, feature vector matching is performed with the FLANN nearest-neighbor open-source library to find the object and obtain the cup recognition result.
6) Coordinate conversion. In order to control the mechanical arm to grasp the target object via the camera, the coordinates of the target object in the mechanical arm coordinate system must first be obtained. Since the camera and the mechanical arm cannot physically occupy the same position, coordinate conversion is required. In computer vision, the conversion between two rectangular coordinate systems amounts to solving for a rotation matrix R and a translation matrix T. Denoting the coordinates of the object in the camera coordinate system as (Xi, Yi, Zi) and in the mechanical arm coordinate system as (XK, YK, ZK), the two are related by (XK, YK, ZK)^T = R (Xi, Yi, Zi)^T + T.
By solving the above equation, the one-to-one correspondence between the point sets of the two coordinate systems can be established, and the coordinates of the object in the mechanical arm coordinate system are obtained. The target cup is then grabbed by the Kinova mechanical arm and delivered to the user's mouth, completing the water drinking task.
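Solving for R and T from matched point pairs in the two coordinate systems can be done in closed form with the SVD-based Kabsch method. The patent does not name a particular solver, so the following is one standard approach, with illustrative names:

```python
import numpy as np

def solve_rigid_transform(cam_pts, arm_pts):
    """Least-squares R, T such that arm_pts ~= R @ cam_pts + T (Kabsch).

    cam_pts, arm_pts : (N, 3) matched points in camera / arm coordinates.
    """
    c_cam, c_arm = cam_pts.mean(axis=0), arm_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (cam_pts - c_cam).T @ (arm_pts - c_arm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = c_arm - R @ c_cam
    return R, T
```

At least three non-collinear calibration points are needed; in practice more pairs are used and the SVD gives the least-squares fit.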
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A brain-controlled water drinking system based on a dynamic convergence differential neural network is characterized by comprising the following steps:
s1, preprocessing electroencephalogram data acquired by electroencephalogram equipment to obtain a sample set;
s2, training the sample set obtained in the step S1 through a dynamic convergence differential neural network, and identifying and classifying to obtain a P300 identification and classification result;
s3, combining the P300 identification classification result obtained in the step S2 with a user interface to obtain a final target object number;
s4, sending the target object number obtained in the step S3 to a vision mechanical arm system, grabbing a cup, and sending the cup to the mouth of a user to finish a water drinking task;
the dynamic convergence differential neural network comprises an input layer, a hidden layer and an output layer, and the value of a next layer of neurons is obtained through forward propagation;
the calculation process of the dynamic convergence differential neural network is as follows:
the input layer neurons are directly derived from the input layer feature dimensions, namely:
h_1i = x_i , i = 1,2,…,m (1)
in forward propagation, the value of each next-layer neuron is calculated from the weight matrix, the previous-layer neurons and the bias; the hidden layer neuron h_2j is calculated from h_1i and the weight matrix w_1 between the input layer and the hidden layer:

L_2j = Σ_i w_ji h_1i + b_1j , j = 1,2,…,n (2)

h_2j = f(L_2j), j = 1,2,…,n (3)
the output layer neuron y_k is calculated from h_2j and the weight matrix w_2:

L_3k = Σ_j w_jk h_2j + b_2k , k = 1,2,…,p (4)

y_k = f(L_3k), k = 1,2,…,p (5)
wherein x_i is the input feature dimension, i = 1,2,…,m; h_1i are the neurons of the input layer; h_2j are the neurons of the hidden layer, j = 1,2,…,n; y_k are the neurons of the output layer, k = 1,2,…,p; m, n and p denote the numbers of neurons in the input, hidden and output layers respectively; Y is the label value of the sample; L_2j and L_3k are respectively the intermediate variable obtained by multiplying the input layer by the weight matrix and adding the bias, and the intermediate variable obtained by multiplying the hidden layer by the weight matrix and adding the bias; b_1j and b_2k are the biases of the weight matrices; w_ji denotes the weight between the i-th neuron in the input layer and the j-th neuron in the hidden layer; w_jk denotes the weight between the j-th neuron in the hidden layer and the k-th neuron in the output layer; f(x) is the activation function;
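The forward pass of equations (1)-(5) can be written compactly as below. Softsign is assumed as the activation f, following the later reference to it in the claim; all names are illustrative.

```python
import numpy as np

def softsign(x):
    """Softsign activation, assumed here for f in equations (3) and (5)."""
    return x / (1.0 + np.abs(x))

def forward(x, w1, b1, w2, b2):
    """Forward propagation of equations (1)-(5): input -> hidden -> output."""
    h1 = x                   # (1): input neurons equal the input features
    L2 = w1 @ h1 + b1        # (2): weighted sum of input layer plus bias
    h2 = softsign(L2)        # (3): hidden layer neurons
    L3 = w2 @ h2 + b2        # (4): weighted sum of hidden layer plus bias
    y = softsign(L3)         # (5): output layer neurons
    return h1, h2, y
```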
the optimal value of the objective function E(k) in the dynamic convergence differential neural network is obtained by solving, and the final classification result is output, wherein the expression of the objective function E(k) is:

E(k) = f(w_2 h_2j + b_2k) − Y = y_k − Y (7)
the two weight matrices w_1 and w_2 are constantly modified to find the optimal value of E(k); in order to make E(k) converge to its minimum, the neuro-dynamic formula is applied:

ΔE(k) = −λ g(E(k)) (8)

where ΔE(k) is the derivative of E(k), λ is the learning rate, a constant greater than 0, and g(E(k)) is a monotonically increasing odd function serving as the activation function;
wherein the parameters a and b are both greater than or equal to 2, and x = E(k);
substituting formula (7) into formula (8) yields:

Δy_k = Δf (Δw_2 h_2j + w_2 Δh_2j) = −λ g(y_k − Y) (10)
wherein Δy_k, Δw_2 and Δh_2j are respectively the derivative of the output layer neurons, the derivative of the weight matrix w_2 between the hidden layer and the output layer, and the derivative of the hidden layer neurons h_2j; therefore, to solve Δy_k, Δw_2 and Δh_2j need to be calculated separately; the solving method is as follows:
first, let the weight matrix w_1 be a constant vector during the iteration, i.e. the derivatives of w_1 and h_2j are both 0; then:

Δy_k = Δf Δw_2 h_2j = −λ g(y_k − Y) (11)

multiplying both sides simultaneously by the pseudo-inverse matrix h_2j^+ of h_2j, the derivative Δw_2 is derived:

Δw_2 = Δf^(−1)(−λ g(y_k − Y)) h_2j^+ (12)
then, let the weight matrix w_2 be a constant vector during the iteration, i.e. the derivative of w_2 is 0; in addition, since:
Δh_2j = Δf(h_2j) Δw_1 h_1i (13)
similarly, the derivative of w_1 is obtained:

Δw_1 = Δf^(−1)(w_2^+ (−λ g(y_k − Y))) h_1i^+ (14)
wherein w_2^+ is the pseudo-inverse matrix of w_2, h_1i^+ is the pseudo-inverse matrix of the input layer, and Δf is the derivative of the softsign activation function;
so far, the solving process of the DCDNN neural network is complete; the iteration is repeated, w_1 and w_2 being updated at each iteration as w_2 = w_2 + Δw_2 and w_1 = w_1 + Δw_1, until E(k) converges.
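The alternating update scheme of claim 1 can be sketched as follows. This is a simplified, linearized illustration only: the output layer is treated as linear, the activation derivative Δf is dropped when inverting, g is taken as tanh, and all names and hyperparameters are assumptions rather than the patent's exact procedure.

```python
import numpy as np

def softsign(x):
    return x / (1.0 + np.abs(x))

def g(e):
    return np.tanh(e)   # a monotonically increasing odd function

def train(X, Y, n_hidden, lam=0.2, iters=200, seed=0):
    """Alternating pseudo-inverse updates in the spirit of (10)-(14):
    hold w1 and solve Δw2 from Δw2 h2 = -λ g(E); then hold w2 and
    back out a hidden-layer shift to update w1.  Linearized sketch."""
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((n_hidden, X.shape[0])) * 0.1
    w2 = rng.standard_normal((Y.shape[0], n_hidden)) * 0.1
    losses = []
    for _ in range(iters):
        h2 = softsign(w1 @ X)
        E = w2 @ h2 - Y
        losses.append(np.mean(np.abs(E)))
        w2 = w2 - lam * g(E) @ np.linalg.pinv(h2)     # Δw2 = -λ g(E) h2+
        h2 = softsign(w1 @ X)
        E = w2 @ h2 - Y
        dh2 = np.linalg.pinv(w2) @ (-lam * g(E))      # desired hidden shift
        w1 = w1 + dh2 @ np.linalg.pinv(X)             # Δw1 = Δh2 h1+
    return w1, w2, losses
```

On a small regression toy problem the recorded loss decreases over the iterations, which is the convergence behavior E(k) → minimum that the neuro-dynamics (8) is designed to enforce.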
2. The brain-controlled water drinking system based on a dynamic convergence differential neural network as set forth in claim 1, wherein step S1 comprises the steps of:
s1.1, carrying out channel screening to remove channel signals with weak signals;
s1.2, carrying out band-pass filtering on the acquired channel signals;
s1.3, performing independent component analysis to remove the electro-oculogram signal and the electrocardiosignal, thereby obtaining a sample set.
3. The brain-controlled drinking system based on a dynamic convergence differential neural network as set forth in claim 2, wherein said performing independent component analysis to remove electro-oculogram signals and electrocardiographic signals in S1.3 comprises:
respectively selecting appointed channels of the electro-ocular signal and the electrocardiosignal;
calculating the Pearson correlation between the designated channel of the electro-oculogram signal and the other channels, and the Pearson correlation between the designated channel of the electrocardiosignal and the other channels;
setting a threshold for the Pearson correlation; if the correlation between a component and a designated channel is larger than the positive correlation threshold or smaller than the negative correlation threshold, the component is considered strongly correlated with the electrocardiosignal or the electro-oculogram signal and is eliminated.
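The thresholded rejection in claim 3 can be sketched as below, using the threshold of 0.3 given in claim 4. The function name and data layout are illustrative assumptions; in practice the components come from an ICA decomposition of the EEG channels.

```python
import numpy as np

def reject_components(components, ref_channel, thresh=0.3):
    """Return indices of components to KEEP: those whose Pearson
    correlation with the reference (EOG or ECG) channel lies strictly
    between -thresh and +thresh.  Strongly correlated components,
    positive or negative, are rejected as artifacts."""
    keep = []
    for k, comp in enumerate(components):
        r = np.corrcoef(comp, ref_channel)[0, 1]   # Pearson correlation
        if -thresh < r < thresh:
            keep.append(k)
    return keep
```

For example, a component identical to the reference channel (correlation 1) is rejected, while an uncorrelated one is kept.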
4. A brain-controlled water drinking system based on a dynamic convergence differential neural network according to claim 3, wherein the threshold of the Pearson correlation is 0.3.
5. The brain-controlled drinking system based on a dynamic convergence differential neural network according to claim 1, wherein in step S2, the dynamic convergence differential neural network is trained using a sample set to obtain network parameters for identification classification.
6. The brain-controlled drinking system based on the dynamic convergence differential neural network according to claim 1, wherein in step S3, according to the P300 recognition classification result, the flash in the user interface at which the P300 signal appeared is detected, thereby determining the corresponding number, and the vision mechanical arm system grabs the corresponding cup according to the object number.
7. The brain-controlled drinking system based on the dynamic convergence differential neural network according to claim 6, wherein a plurality of areas representing different cups are arranged on the user interface and marked with numbers, and each area flashes randomly while the user gazes at the flashing area; the time at which each flash appears in the raw electroencephalogram signal and the number of the corresponding area are recorded; after the flashing ends, a plurality of different groups are obtained and are respectively input into the dynamic convergence differential neural network to obtain the prediction probability of the P300 signal for each group; the signals in each group correspond to one area, so the same group contains a plurality of P300 prediction probability values for that area; these values are summed so that a prediction probability sum is obtained for each area, the group with the maximum sum is selected, and the area corresponding to that group is the area the user attended to.
8. The brain-controlled drinking system based on a dynamic convergence differential neural network according to claim 1, wherein the vision robot system comprises a camera and a robot, the camera is used for measuring cup depth information and RGB color information, and the robot is used for grabbing a cup according to coordinates of a target cup.
CN202111137386.7A 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network Active CN113887374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111137386.7A CN113887374B (en) 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network


Publications (2)

Publication Number Publication Date
CN113887374A CN113887374A (en) 2022-01-04
CN113887374B true CN113887374B (en) 2024-04-16

Family

ID=79007223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111137386.7A Active CN113887374B (en) 2021-09-27 2021-09-27 Brain control water drinking system based on dynamic convergence differential neural network

Country Status (1)

Country Link
CN (1) CN113887374B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115105095B (en) * 2022-08-29 2022-11-18 成都体育学院 Electroencephalogram signal-based movement intention identification method, system and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233785A (en) * 2020-07-08 2021-01-15 华南理工大学 Intelligent identification method for Parkinson's disease
CN112381124A (en) * 2020-10-30 2021-02-19 华南理工大学 Method for improving brain-computer interface performance based on dynamic inverse learning network
CN112446289A (en) * 2020-09-25 2021-03-05 华南理工大学 Method for improving performance of P300 spelling device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Event-related potential EEG signal analysis method based on a denoising autoencoder neural network; Wang Hongtao et al.; Control Theory & Applications; 2019-04-15 (No. 04); full text *

Also Published As

Publication number Publication date
CN113887374A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
CN113693613B (en) Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium
CN105022486B (en) EEG signals discrimination method based on the driving of different expressions
CN106169081A (en) A kind of image classification based on different illumination and processing method
CN111584029B (en) Electroencephalogram self-adaptive model based on discriminant confrontation network and application of electroencephalogram self-adaptive model in rehabilitation
CN111265212A (en) Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
CN109993068A (en) A kind of contactless human emotion's recognition methods based on heart rate and facial characteristics
CN111428601B (en) P300 signal identification method, device and storage medium based on MS-CNN
CN112488002B (en) Emotion recognition method and system based on N170
CN111709267A (en) Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN111466878A (en) Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition
CN113887374B (en) Brain control water drinking system based on dynamic convergence differential neural network
CN116563887B (en) Sleeping posture monitoring method based on lightweight convolutional neural network
CN107480716A (en) Method and system for identifying saccade signal by combining EOG and video
CN111399652A (en) Multi-robot hybrid system based on layered SSVEP and visual assistance
Abibullaev et al. A brute-force CNN model selection for accurate classification of sensorimotor rhythms in BCIs
Kumar et al. Classification of SSVEP signals using neural networks for BCI applications
Ghosh et al. VEA: vessel extraction algorithm by active contour model and a novel wavelet analyzer for diabetic retinopathy detection
Shah et al. Real-time facial emotion recognition
CN115374831B (en) Dynamic and static combination velocity imagery classification method for multi-modal registration and space-time feature attention
CN116524380A (en) Target detection method based on brain-computer signal fusion
CN112936259B (en) Man-machine cooperation method suitable for underwater robot
Zhang et al. A pruned deep learning approach for classification of motor imagery electroencephalography signals
CN112381124B (en) Method for improving brain-computer interface performance based on dynamic inverse learning network
CN113947815A (en) Man-machine gesture cooperative control method based on myoelectricity sensing and visual sensing
Du et al. Vision-Based Robotic Manipulation of Intelligent Wheelchair with Human-Computer Shared Control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant