CN108415554B - Brain-controlled robot system based on P300 and implementation method thereof


Info

Publication number: CN108415554B (application CN201810048019.1A)
Authority: CN (China)
Prior art keywords: module, substep, stimulation, robot, brain
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN108415554A
Inventors: 刘蓉, 程俊, 梁雅彬, 马征, 王永轩
Current and original assignee: Dalian University of Technology
Priority and filing date: 2018-01-18 (application filed by Dalian University of Technology, priority to CN201810048019.1A)
Publication of CN108415554A: 2018-08-17
Application granted; publication of CN108415554B: 2020-11-10

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention belongs to the technical field of brain-computer interfaces and robot control and discloses a P300-based brain-controlled robot system and an implementation method thereof. The system comprises a visual stimulation module and, connected to it in sequence, a subject wearing an electrode cap module, a Neuroscan EEG acquisition module, a signal processing module, a control interface module and a Pioneer3-DX robot motion module; the Pioneer3-DX robot motion module is also connected back to the visual stimulation module, and the control interface module is further connected to the Pioneer3-DX robot environment detection module. The invention adopts graphic symbols as a new stimulus type and jointly improves the stimulus attributes of the brain-computer interface, evoking a P300 component of higher amplitude and raising the transmission rate of the system. By combining brain-computer interface technology with automatic control technology, an asynchronous control mode realizes interactive sharing between brain control and autonomous control of the robot, producing forward, backward, left-turn, right-turn and standstill movements, so that the system is more stable and rapid.

Description

Brain-controlled robot system based on P300 and implementation method thereof
Technical Field
The invention relates to a brain-controlled robot system based on P300 and an implementation method thereof, belonging to the technical field of brain-computer interfaces and robot control.
Background
A Brain-Computer Interface (BCI) is a non-muscular communication channel established between the brain and external devices, allowing the brain's intentions to act on the external environment. As a new mode of human-machine interaction, BCI offers a fresh approach to controlling equipment by thought alone, has become a research hotspot in the field of intelligent robots, and builds a bridge between the biological intelligence of the human brain and artificial intelligence. The fusion of BCI technology with automatic robot control has produced a new technology: the brain-controlled robot. In a brain-control system the BCI control signal is chosen according to the application scenario; among the options, brain-computer interfaces based on the event-related potential P300 have become an important focus of BCI research because of their short training time and strong adaptability. However, the P300 signal-to-noise ratio is low, so many repetitions must be averaged to obtain a reasonably clear and stable waveform, which limits the transmission rate of the system. In addition, the P300 potential is an evoked potential with a certain latency, so current P300-based BCI systems concentrate mainly on discrete, synchronous command-selection applications such as spelling tasks. The low signal-to-noise ratio is addressed mainly along two directions, improving the signal-processing algorithms and improving the experimental paradigm, i.e., optimizing feature extraction and classification or strengthening the P300 component. The present invention focuses primarily on improving the experimental paradigm. The classic P300 paradigm proposed by Farwell and Donchin groups all symbols in rows and columns that are intensified in random order; its results are prone to adjacent-neighbour interference, the double-flash problem and repetition blindness. Guan et al. proposed a single-flash paradigm that alleviates these phenomena to some extent, but the resulting long stimulation sequences easily fatigue the subject, making the approach better suited to small symbol matrices. Researchers subsequently used region-based flashes (similar to two single flashes) to improve accuracy. Besides the flashing scheme, the stimulus attributes also influence the experimental results. Studies show that the brain is aroused most strongly by yellow-green and green, and that blue-green colour combinations feel more comfortable to subjects; Takano changed the symbol background of the traditional paradigm to blue and the flash colour to green, improving experimental accuracy. Many scholars have also examined the influence of the stimulus type, substituting face stimuli for symbol stimuli to evoke event-related potentials beyond the P300 and open further possibilities for BCI systems. To address the latency of P300, recent years have seen much research on hybrid brain-computer interfaces, such as the P300 + SSVEP and P300 + Mu/Beta interfaces; although these achieve continuous asynchronous control, the technology is not yet mature and increases system complexity.
Disclosure of Invention
To overcome the above shortcomings of the prior art, the invention aims to provide a P300-based brain-controlled robot system and an implementation method thereof. Through a joint improvement of the stimulus attributes, a P300 experimental paradigm based on a 5-oddball design is constructed; the paradigm has a simple interface, is easy to learn and use, evokes a strong P300 component, effectively reduces the number of averaging repetitions and raises the transmission rate of the system. In addition, in keeping with the characteristics of a brain-controlled robot system, the system combines brain control with autonomous robot control and realizes interactively shared asynchronous control of the two kinds of commands; compared with a hybrid brain-computer interface, this asynchronous mode reduces system complexity and favours stable, rapid control of the robot.
To achieve the above purpose and solve the problems in the prior art, the invention adopts the following technical scheme. A P300-based brain-controlled robot system comprises a visual stimulation module and, connected to it in sequence, a subject wearing an electrode cap module, a Neuroscan EEG acquisition module, a signal processing module, a control interface module and a Pioneer3-DX robot motion module; the Pioneer3-DX robot motion module is further connected back to the visual stimulation module, and the control interface module is further connected to the Pioneer3-DX robot environment detection module. The visual stimulation module presents visual stimuli to the subject to evoke the P300 EEG component. The signal processing module converts the acquired EEG signals into control instructions and comprises a preprocessing module together with a feature extraction module and a feature classification module connected to it in sequence. The control interface module realizes the interactive sharing of brain control and autonomous robot control and comprises a brain-control command module and an environment information module, each connected to a shared control module.
An implementation method of the P300-based brain-controlled robot system comprises the following steps:
Step A: system initialization. The Neuroscan EEG acquisition module, the visual stimulation module and the Pioneer3-DX robot motion module are initialized in the following substeps:
Substep A1: initialize the Neuroscan EEG acquisition module. The P300 recording electrodes are set to Fz, C3, Cz, C4, Pz and Oz, referenced to the average of the left and right mastoid electrodes A1 and A2; the impedance of each electrode is kept below 5 kΩ and the sampling frequency is set to 250 Hz;
Substep A2: initialize the visual stimulation module. A 5-oddball experimental paradigm is adopted with graphic symbols as the stimulus type: a stop symbol sits at the centre, and the arrows ↑, ↓, ← and → are placed directly above, below, left and right of it. The interface is divided into three parts from top to bottom: the upper part displays the graphic symbol the experimenter asks the subject to attend to (shown empty when nothing is prescribed), the middle part shows the graphic symbol fed back at the current moment, and the lower part is the graphic-symbol stimulation interface. The overall background of the visual stimulus is white, the graphic symbols are black, the flash colour is green and the feedback prompt is yellow; the presentation mode is single flash with a flash duration of 100 ms and a stimulus interval of 125 ms, and the visual stimulation interface is placed in the upper-left corner of the screen;
Substep A3: initialize the Pioneer3-DX robot motion module. Open the MobileSim software, create a virtual robot, load the map drawn beforehand from its directory, set the robot's initial coordinates, forward linear velocity and steering angular velocity, and place the map interface in the upper-right corner of the screen;
Substep A4: check the subject's mental state, remind the subject to stay focused, start the system experiment and enter step B;
Step B: EEG signal acquisition. The Neuroscan EEG acquisition module records data from the electrodes at Fz, C3, Cz, C4, Pz and Oz; acquisition is divided into a training stage and a testing stage, in the following substeps:
Substep B1: acquire the EEG training data, in the following substeps:
Substep B11: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; the upper part of the interface displays the graphic symbols that the experimenter asks the subject to attend to in sequence;
Substep B12: after the experiment starts the subject has 2 s to settle, and stimulation then begins; the 5 stimuli each flash once in random order, which is called 1 trial, and the interval between trials is 500 ms;
Substep B13: the subject silently counts the flashes of the graphic symbol currently displayed in the upper part of the interface, moving on to the next symbol to be attended after a feedback signal appears in the middle of the interface; the training data contain 500 trials in total. Then enter substep C1;
Substep B2: acquire the EEG test data, in the following substeps:
Substep B21: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; no graphic symbol is displayed in the upper part of the interface, and the subject first observes the map and robot presented in the upper-right corner of the screen to decide the desired movement direction;
Substep B22: same as substep B12;
Substep B23: according to the movement direction decided from the map and the robot, the subject silently counts the flashes of the corresponding graphic symbol, where ← denotes a constant-speed 30° left turn, → a constant-speed 30° right turn, ↑ a constant-speed 0.5 m advance, ↓ a constant-speed 0.5 m retreat, and the central stop symbol standing still; the subject may adjust the attended symbol according to the feedback signal shown in the middle of the interface. The test data comprise 400 trials. Then enter substep C2;
Step C, signal processing, namely preprocessing the electroencephalogram signals, extracting and classifying features, and specifically comprises the following substeps:
substep C1Carrying out feature extraction and classification training through training data, and constructing a classifier model, wherein the method specifically comprises the following sub-stepsThe method comprises the following steps:
substep C11Preprocessing signals, selecting six electrode data of Fz, C3, Cz, C4, Pz and Oz in electroencephalogram signal acquisition, transmitting the electrode data to a signal processing module, wherein the data format is R multiplied by S dimension, wherein R is equal to 6 and represents the number of electrodes, S represents sampling points, and then sequentially preprocessing the electrode data through an IIR filter with the cutoff frequency of 0.1Hz and an FIR filter with the cutoff frequency of 10 Hz;
substep C12Carrying out feature extraction on signals, segmenting electroencephalogram signals from 0ms of the starting time of each single stimulus, wherein each segment is called an Epoch, the length of a time window is selected to be 600ms, the first 100ms is a base line, epochs of the same stimulus are overlapped for 5 times, then data are down-sampled to 25Hz, 15 points are counted, a new Epoch is obtained, the data format is 6 x 15 dimensions, then the obtained epochs are sequentially spliced according to the electrode sequence, and the feature vector x extracted by the single stimulus is 90 x 1 dimensions;
substep C13Training a classifier, wherein the selected classifier is a Fisher linear discriminant analysis classifier, a discriminant function g (x) is described by a formula (1),
g(x)=wT x (1)
where w is a weight vector, X is a feature vector, and the training sample is X ═ X1,x2,...,xN]The number N of samples is equal to 500, wherein the number of samples of target stimulation is 100, the number of corresponding classification labels is 1, the number of samples of non-target stimulation is 400, the number of corresponding classification labels is 0, and the optimal value w of w is obtained through a Fisher linear discriminant classifier;
the P300 induction is related to expectation and not to stimulation, so the recognition of P300 in the brain-computer interface, i.e. the classifier is divided into two types of results, namely 1 for the target stimulation type and 0 for the non-target stimulation type, and classified according to the formula (2),
label(x) = 1 if g(x) ≥ θ, otherwise 0    (2)
where label (x) is the classifier output function,
In an actual P300 brain-computer interface the data input is a set of n feature vectors xi (i = 1, …, n), where n denotes the number of stimulus types; here there are 5 stimulus types (the stop symbol, ↑, ↓, ← and →), i.e., n = 5, and the stimuli are classified by formula (3),
label(xi) = 1 if g(xi) ≥ θ, otherwise 0,  i = 1, …, n    (3)
If the classifier identifies exactly 1 target-stimulus class and n − 1 non-target classes, the corresponding instruction is output; if it identifies more than 1 target-stimulus class, no instruction is output. Then enter substep B2;
Substep C2The online detection of the P300 specifically comprises the following substeps:
substep C21The same substep C11
Substep C22The same substep C12
Substep C23Applying the constructed classifier to an online system, after 5 rounds of stimulation flicker, forming classifier input by the feature vectors of 5 types of stimulation, outputting control commands corresponding to the 5 types of stimulation by the classifier according to a formula (1) and a formula (3), and then entering a step D;
Step D: realize shared control. A Pioneer3-DX robot from ActivMedia Robotics (USA) is controlled, with data exchanged between the brain-control commands and the robot over TCP/IP, in the following substeps:
Substep D1: the system judges whether brain-control command information is present; if so, enter substep D2, otherwise enter substep D3;
Substep D2: first the brain-control command is checked against the environment information, i.e., when an obstacle is closer than 0.5 m the system detects whether the robot has still received a brain-control command that approaches the obstacle; if the command conflicts with the environment information, enter substep D3. Otherwise enter the brain-control command mode and execute the corresponding action for the duration of the command: when the command is ↑ or ↓ the robot advances or retreats 0.5 m at constant speed, when it is ← or → the robot turns 30° left or right at constant speed, and when it is the stop symbol the robot remains motionless. The system then judges whether the command has finished; when it has, enter substep D1, otherwise wait for the command to finish;
Substep D3: enter the autonomous robot control mode. Environment information is acquired by the robot's laser sensor, the linear velocity of motion and the angular velocity of steering are computed by the fuzzy discrete event system method, and the control command is output; the system judges whether execution has finished, entering substep D1 when it has, otherwise waiting for execution to finish.
The beneficial effects of the invention are as follows. The P300-based brain-controlled robot system and its implementation method adopt graphic symbols, different from the traditional letter symbols, to compose a new 5-choice experimental paradigm; the symbols' meaning is clearer for a brain-controlled robot system and highly practical. In addition, the stimulus colour combination of a white background with a widely preferred green flash is used to evoke a P300 component of larger amplitude. The stimuli are presented as single-flash intensifications, avoiding the influence of the double-flash effect and adjacent-neighbour interference. The jointly improved BCI attributes raise the transmission rate, and combining the asynchronous BCI with the robot's shared brain-control technique allows faster and more accurate control of the brain-controlled robot.
Drawings
FIG. 1 is a schematic block diagram of the system of the present invention.
Fig. 2 is a schematic block diagram of a visual stimulus module in the present invention.
Fig. 3 is a schematic block diagram of a signal processing module in the present invention.
Fig. 4 is a functional block diagram of a control interface module in the present invention.
FIG. 5 is a flow chart of the method steps of the present invention.
Fig. 6 is a visual stimulus module interface diagram in the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
As shown in figs. 1, 2, 3 and 4, a P300-based brain-controlled robot system comprises a visual stimulation module and, connected to it in sequence, a subject wearing an electrode cap module, a Neuroscan EEG acquisition module, a signal processing module, a control interface module and a Pioneer3-DX robot motion module; the Pioneer3-DX robot motion module is also connected back to the visual stimulation module, and the control interface module is also connected to the Pioneer3-DX robot environment detection module. The visual stimulation module presents visual stimuli to the subject to evoke the P300 EEG component and comprises a 5-oddball visual stimulation interface configuration module together with a parameter setting module, a stimulation presentation module and a stimulation design module, each connected to the interface configuration module. The signal processing module converts the acquired EEG signals into control commands and comprises a preprocessing module together with a feature extraction module and a feature classification module connected to it in sequence. The control interface module realizes the interactive sharing of brain control and autonomous robot control and comprises a brain-control command module and an environment information module, each connected to a shared control module.
As shown in fig. 5, an implementation method of the P300-based brain-controlled robot system comprises the following steps:
Step A: system initialization. The Neuroscan EEG acquisition module, the visual stimulation module and the Pioneer3-DX robot motion module are initialized in the following substeps:
Substep A1: initialize the Neuroscan EEG acquisition module. The P300 recording electrodes are set to Fz, C3, Cz, C4, Pz and Oz, referenced to the average of the left and right mastoid electrodes A1 and A2; the impedance of each electrode is kept below 5 kΩ and the sampling frequency is set to 250 Hz;
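As an illustration of this initialization, the minimal sketch below shows the mastoid re-referencing and an impedance check in Python; the array layout, the helper names and the strict rejection on reaching 5 kΩ are assumptions, since the patent does not describe Neuroscan's acquisition API.

    import numpy as np

    FS = 250                                   # sampling frequency, Hz (substep A1)
    CHANNELS = ["Fz", "C3", "Cz", "C4", "Pz", "Oz"]
    REFS = ["A1", "A2"]                        # left and right mastoid references
    IMPEDANCE_LIMIT_OHM = 5_000                # "below 5 kΩ" from substep A1

    def check_impedances(impedances):
        """Raise if any recording or reference electrode is at or above the limit."""
        for name in CHANNELS + REFS:
            if impedances[name] >= IMPEDANCE_LIMIT_OHM:
                raise RuntimeError("electrode %s impedance too high" % name)

    def rereference(data, a1, a2):
        """Re-reference the (6, S) channel matrix to the average of A1 and A2."""
        return data - (np.asarray(a1) + np.asarray(a2)) / 2.0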
Substep A2: initialize the visual stimulation module. A 5-oddball experimental paradigm is adopted, as shown in fig. 6, with graphic symbols as the stimulus type: a stop symbol sits at the centre, and the arrows ↑, ↓, ← and → are placed directly above, below, left and right of it. The interface is divided into three parts from top to bottom: the upper part displays the graphic symbol the experimenter asks the subject to attend to (shown empty when nothing is prescribed), the middle part shows the graphic symbol fed back at the current moment, and the lower part is the graphic-symbol stimulation interface. The overall background of the visual stimulus is white, the graphic symbols are black, the flash colour is green and the feedback prompt is yellow; the presentation mode is single flash with a flash duration of 100 ms and a stimulus interval of 125 ms, and the visual stimulation interface is then placed in the upper-left corner of the screen;
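A minimal sketch of this single-flash presentation loop follows, assuming hypothetical draw_symbol(), clear_symbol() and send_marker() callbacks (for example, thin wrappers around a Pygame window and the amplifier's trigger port); only the five symbols, the 100 ms flash and the 125 ms stimulus interval come from the text.

    import random
    import time

    SYMBOLS = ["stop", "up", "down", "left", "right"]   # centre symbol + 4 arrows
    FLASH_S = 0.100        # flash duration from substep A2
    ISI_S = 0.125          # stimulus interval from substep A2

    def run_trial(draw_symbol, clear_symbol, send_marker):
        """One trial: each of the 5 stimuli flashes once in random order."""
        order = random.sample(SYMBOLS, k=len(SYMBOLS))
        for sym in order:
            send_marker(sym)                   # timestamp flash onset for epoching
            draw_symbol(sym, color="green")    # black symbol flashes green
            time.sleep(FLASH_S)
            clear_symbol(sym)                  # back to black on the white background
            time.sleep(ISI_S)
        return order

In a real system the sleeps would be replaced by frame-locked timing, but the schedule itself matches the paradigm.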
Substep A3: initialize the Pioneer3-DX robot motion module. Open the MobileSim software, create a virtual robot, load the map drawn beforehand from its directory, set the robot's initial coordinates, forward linear velocity and steering angular velocity, and place the map interface in the upper-right corner of the screen;
Substep A4: check the subject's mental state, remind the subject to stay focused, start the system experiment and enter step B;
Step B: EEG signal acquisition. The Neuroscan EEG acquisition module records data from the electrodes at Fz, C3, Cz, C4, Pz and Oz; acquisition is divided into a training stage and a testing stage, in the following substeps:
Substep B1: acquire the EEG training data, in the following substeps:
Substep B11: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; the upper part of the interface displays the graphic symbols that the experimenter asks the subject to attend to in sequence;
Substep B12: after the experiment starts the subject has 2 s to settle, and stimulation then begins; the 5 stimuli each flash once in random order, which is called 1 trial, and the interval between trials is 500 ms;
Substep B13: the subject silently counts the flashes of the graphic symbol currently displayed in the upper part of the interface, moving on to the next symbol to be attended after a feedback signal appears in the middle of the interface; the training data contain 500 trials in total. Then enter substep C1;
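The training schedule of substeps B11–B13 can be sketched as a generator of (target, flash order) pairs; splitting the 500 trials evenly across the five symbols (100 each) is an assumption, since the patent fixes only the total.

    import random

    SYMBOLS = ("stop", "up", "down", "left", "right")

    def training_schedule(n_trials=500, symbols=SYMBOLS):
        """Yield (target, flash_order) for each trial of the training stage."""
        targets = [s for s in symbols for _ in range(n_trials // len(symbols))]
        random.shuffle(targets)                # attended symbol for each trial
        for target in targets:
            flash_order = random.sample(symbols, k=len(symbols))
            yield target, flash_order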
Substep B2: acquire the EEG test data, in the following substeps:
Substep B21: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; no graphic symbol is displayed in the upper part of the interface, and the subject first observes the map and robot presented in the upper-right corner of the screen to decide the desired movement direction;
substep B22The same substep B12
Substep B23: according to the movement direction decided from the map and the robot, the subject silently counts the flashes of the corresponding graphic symbol, where ← denotes a constant-speed 30° left turn, → a constant-speed 30° right turn, ↑ a constant-speed 0.5 m advance, ↓ a constant-speed 0.5 m retreat, and the central stop symbol standing still; the subject may adjust the attended symbol according to the feedback signal shown in the middle of the interface. The test data comprise 400 trials. Then enter substep C2;
Step C, signal processing, namely preprocessing the electroencephalogram signals, extracting and classifying features, and specifically comprises the following substeps:
Substep C1: perform feature extraction and classifier training on the training data and construct the classifier model, in the following substeps:
Substep C11: preprocess the signals. The data of the six electrodes Fz, C3, Cz, C4, Pz and Oz are selected from the acquisition and passed to the signal processing module in an R × S format, where R = 6 is the number of electrodes and S the number of sampling points; the data are then filtered in turn by an IIR filter with a 0.1 Hz cutoff and an FIR filter with a 10 Hz cutoff, i.e., limited to roughly the 0.1–10 Hz band;
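A sketch of this filter chain with SciPy follows; the filter orders and the zero-phase filtering are assumptions, as the text fixes only the two cutoff frequencies and the 250 Hz sampling rate.

    import numpy as np
    from scipy import signal

    FS = 250.0  # Hz, from substep A1

    def preprocess(data):
        """data: (6, S) raw EEG -> band-limited EEG (0.1 Hz IIR + 10 Hz FIR)."""
        # 4th-order Butterworth high-pass at 0.1 Hz (the IIR stage)
        sos = signal.butter(4, 0.1, btype="highpass", fs=FS, output="sos")
        hp = signal.sosfiltfilt(sos, data, axis=1)
        # 51-tap linear-phase FIR low-pass at 10 Hz (the FIR stage)
        taps = signal.firwin(51, 10.0, fs=FS)
        return signal.filtfilt(taps, [1.0], hp, axis=1)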
Substep C12: extract features from the signals. The EEG is segmented from 0 ms at each single-stimulus onset, each segment being called an epoch; the time-window length is 600 ms, of which the first 100 ms serve as baseline. The epochs of the same stimulus are averaged over 5 repetitions, the data are down-sampled to 25 Hz (15 points per channel) to obtain a new 6 × 15 epoch, and the channel rows are concatenated in electrode order, so the feature vector x extracted for a single stimulus is 90 × 1;
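The sketch below turns the five flash onsets of one stimulus into the 90 × 1 feature vector; explicit baseline subtraction is an assumption (the text only designates the first 100 ms as baseline), and decimation by 10 realizes the 250 Hz → 25 Hz down-sampling.

    import numpy as np

    FS = 250
    EPOCH = int(0.600 * FS)      # 150 samples = 600 ms window
    BASE = int(0.100 * FS)       # 25 samples = 100 ms baseline

    def extract_feature(eeg, onsets):
        """eeg: (6, S) preprocessed data; onsets: the 5 flash onsets of one stimulus."""
        epochs = np.stack([eeg[:, t:t + EPOCH] for t in onsets])
        avg = epochs.mean(axis=0)                          # average of the 5 epochs
        avg = avg - avg[:, :BASE].mean(axis=1, keepdims=True)
        small = avg[:, ::10]                               # down-sample to (6, 15)
        return small.reshape(-1, 1)                        # (90, 1) feature vector x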
Substep C13: train the classifier. The chosen classifier is a Fisher linear discriminant analysis classifier whose discriminant function g(x) is given by formula (1),
g(x) = wᵀx    (1)
where w is a weight vector and x is a feature vector. The training sample set is X = [x1, x2, ..., xN] with N = 500 samples, of which 100 are target-stimulus samples with class label 1 and 400 are non-target samples with class label 0; the optimal weight vector w* is obtained by Fisher linear discriminant training;
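A compact sketch of this training step: the Fisher weight vector is w* = Sw⁻¹(m1 − m0); taking the decision threshold θ as the midpoint of the projected class means is an assumption, as the patent does not state how the threshold is set.

    import numpy as np

    def train_fisher(X, y):
        """X: (N, 90) feature matrix; y: (N,) labels in {0, 1}."""
        m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
        w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)  # regularized
        theta = 0.5 * (w @ m0 + w @ m1)    # midpoint threshold (assumption)
        return w, theta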
the P300 induction is related to expectation and not to stimulation, so the recognition of P300 in the brain-computer interface, i.e. the classifier is divided into two types of results, namely 1 for the target stimulation type and 0 for the non-target stimulation type, and classified according to the formula (2),
label(x) = 1 if g(x) ≥ θ, otherwise 0    (2)
where label (x) is the classifier output function,
In an actual P300 brain-computer interface the data input is a set of n feature vectors xi (i = 1, …, n), where n denotes the number of stimulus types; here there are 5 stimulus types (the stop symbol, ↑, ↓, ← and →), i.e., n = 5, and the stimuli are classified by formula (3),
label(xi) = 1 if g(xi) ≥ θ, otherwise 0,  i = 1, …, n    (3)
If the classifier identifies exactly 1 target-stimulus class and n − 1 non-target classes, the corresponding instruction is output; if it identifies more than 1 target-stimulus class, no instruction is output. Then enter substep B2;
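The resulting per-selection decision rule can be sketched as follows; the command names are illustrative stand-ins for the five movement behaviours, and w, theta are the trained weight vector and assumed threshold from above.

    COMMANDS = {"up": "forward", "down": "backward",
                "left": "turn_left", "right": "turn_right",
                "stop": "stand_still"}

    def decide(w, theta, features):
        """features: symbol -> (90,) feature vector. Return a command or None."""
        labels = {s: int(w @ x >= theta) for s, x in features.items()}  # formulas (2)/(3)
        targets = [s for s, lab in labels.items() if lab == 1]
        if len(targets) == 1:       # exactly 1 target + n-1 non-targets: output
            return COMMANDS[targets[0]]
        return None                 # ambiguous result: no instruction output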
Substep C2: online detection of the P300, in the following substeps:
Substep C21: same as substep C11;
Substep C22: same as substep C12;
Substep C23: apply the constructed classifier in the online system. After 5 rounds of stimulus flashes, the feature vectors of the 5 stimulus types form the classifier input, and the classifier outputs the control command corresponding to the stimuli according to formulas (1) and (3). Then enter step D;
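Chaining the earlier sketches gives the online step, under the same assumptions: after each block of five flash rounds, one feature vector per stimulus is scored and any unambiguous winner is forwarded.

    def online_step(raw, onsets_by_symbol, w, theta, send_command):
        """raw: (6, S) buffer; onsets_by_symbol: symbol -> its 5 flash-onset samples."""
        eeg = preprocess(raw)                               # substep C21
        feats = {sym: extract_feature(eeg, onsets).ravel()  # substep C22
                 for sym, onsets in onsets_by_symbol.items()}
        cmd = decide(w, theta, feats)                       # substep C23
        if cmd is not None:
            send_command(cmd)        # hand the command to the shared-control layer
        return cmd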
Step D: realize shared control. A Pioneer3-DX robot from ActivMedia Robotics (USA) is controlled, with data exchanged between the brain-control commands and the robot over TCP/IP, in the following substeps:
Substep D1: the system judges whether brain-control command information is present; if so, enter substep D2, otherwise enter substep D3;
Substep D2: first the brain-control command is checked against the environment information, i.e., when an obstacle is closer than 0.5 m the system detects whether the robot has still received a brain-control command that approaches the obstacle; if the command conflicts with the environment information, enter substep D3. Otherwise enter the brain-control command mode and execute the corresponding action for the duration of the command: when the command is ↑ or ↓ the robot advances or retreats 0.5 m at constant speed, when it is ← or → the robot turns 30° left or right at constant speed, and when it is the stop symbol the robot remains motionless. The system then judges whether the command has finished; when it has, enter substep D1, otherwise wait for the command to finish;
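A sketch of this arbitration and of the TCP/IP hand-off is shown below; the obstacle check is simplified to veto any motion command near an obstacle, and the host, port and plain-text message format are assumptions (the patent specifies only that commands travel over TCP/IP).

    import socket

    SAFE_DISTANCE_M = 0.5           # veto threshold from substep D2

    def arbitrate(cmd, obstacle_distance_m):
        """Return the brain command if it is safe, else fall back to autonomy."""
        if cmd is None:
            return "autonomous"                     # no pending brain command (D1)
        if obstacle_distance_m < SAFE_DISTANCE_M and cmd != "stand_still":
            return "autonomous"                     # conflicts with environment info
        return cmd

    def send_to_robot(cmd, host="192.168.1.10", port=8101):
        """Ship one command as a newline-terminated ASCII string over TCP."""
        with socket.create_connection((host, port), timeout=2.0) as sock:
            sock.sendall((cmd + "\n").encode("ascii"))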
Substep D3: enter the autonomous robot control mode. Environment information is acquired by the robot's laser sensor, the linear velocity of motion and the angular velocity of steering are computed by the fuzzy discrete event system method, and the control command is output; the system judges whether execution has finished, entering substep D1 when it has, otherwise waiting for execution to finish.
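As a simplified stand-in for the fuzzy discrete event method (whose rule base the patent does not reproduce), the sketch below derives (v, ω) from the laser scan with two plain heuristics: slow down in proportion to the nearest frontal obstacle and steer toward the freer side; the speed limits are assumptions.

    import numpy as np

    V_MAX = 0.3   # m/s, assumed cruise speed
    W_MAX = 0.5   # rad/s, assumed maximum turn rate

    def autonomous_velocities(ranges):
        """ranges: laser distances (m) over the frontal arc, ordered left to right."""
        ranges = np.asarray(ranges, dtype=float)
        mid = len(ranges) // 2
        front = ranges[max(0, mid - 10):mid + 10].min()   # nearest frontal obstacle
        v = V_MAX * min(1.0, max(0.0, front - 0.5))       # stop inside 0.5 m
        left_free = ranges[:mid].mean()
        right_free = ranges[mid:].mean()
        w = W_MAX * np.tanh(left_free - right_free)       # turn toward open space
        return v, float(w)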

Claims (1)

1. An implementation method of a P300-based brain-controlled robot system, characterized in that: the brain-controlled robot system comprises a visual stimulation module and, connected to it in sequence, a subject wearing an electrode cap module, a Neuroscan EEG acquisition module, a signal processing module, a control interface module and a Pioneer3-DX robot motion module; the Pioneer3-DX robot motion module is further connected back to the visual stimulation module, and the control interface module is further connected to the Pioneer3-DX robot environment detection module; the visual stimulation module presents visual stimuli to the subject to evoke the P300 EEG component and comprises a 5-oddball visual stimulation interface configuration module together with a parameter setting module, a stimulation presentation module and a stimulation design module, each connected to the interface configuration module; the signal processing module converts the acquired EEG signals into control commands and comprises a preprocessing module together with a feature extraction module and a feature classification module connected to it in sequence; the control interface module realizes the interactive sharing of brain control and autonomous robot control and comprises a brain-control command module and an environment information module, each connected to a shared control module;
the implementation method comprises the following steps:
Step A: system initialization. The Neuroscan EEG acquisition module, the visual stimulation module and the Pioneer3-DX robot motion module are initialized in the following substeps:
Substep A1: initialize the Neuroscan EEG acquisition module. The P300 recording electrodes are set to Fz, C3, Cz, C4, Pz and Oz, referenced to the average of the left and right mastoid electrodes A1 and A2; the impedance of each electrode is kept below 5 kΩ and the sampling frequency is set to 250 Hz;
Substep A2: initialize the visual stimulation module. A 5-oddball experimental paradigm is adopted with graphic symbols as the stimulus type: a stop symbol sits at the centre, and the arrows ↑, ↓, ← and → are placed directly above, below, left and right of it. The interface is divided into three parts from top to bottom: the upper part displays the graphic symbol the experimenter asks the subject to attend to (shown empty when nothing is prescribed), the middle part shows the graphic symbol fed back at the current moment, and the lower part is the graphic-symbol stimulation interface. The overall background of the visual stimulus is white, the graphic symbols are black, the flash colour is green and the feedback prompt is yellow; the presentation mode is single flash with a flash duration of 100 ms and a stimulus interval of 125 ms, and the visual stimulation interface is then placed in the upper-left corner of the screen;
Substep A3: initialize the Pioneer3-DX robot motion module. Open the MobileSim software, create a virtual robot, load the map drawn beforehand from its directory, set the robot's initial coordinates, forward linear velocity and steering angular velocity, and place the map interface in the upper-right corner of the screen;
Substep A4: check the subject's mental state, remind the subject to stay focused, start the system experiment and enter step B;
Step B: EEG signal acquisition. The Neuroscan EEG acquisition module records data from the electrodes at Fz, C3, Cz, C4, Pz and Oz; acquisition is divided into a training stage and a testing stage, in the following substeps:
Substep B1: acquire the EEG training data, in the following substeps:
Substep B11: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; the upper part of the interface displays the graphic symbols that the experimenter asks the subject to attend to in sequence;
Substep B12: after the experiment starts the subject has 2 s to settle, and stimulation then begins; the 5 stimuli each flash once in random order, which is called 1 trial, and the interval between trials is 500 ms;
Substep B13: the subject silently counts the flashes of the graphic symbol currently displayed in the upper part of the interface, moving on to the next symbol to be attended after a feedback signal appears in the middle of the interface; the training data contain 500 trials in total. Then enter substep C1;
Substep B2: acquire the EEG test data, in the following substeps:
Substep B21: during acquisition the subject watches the visual stimulation interface presented in the upper-left corner of the screen; no graphic symbol is displayed in the upper part of the interface, and the subject first observes the map and robot presented in the upper-right corner of the screen to decide the desired movement direction;
Substep B22: same as substep B12;
Substep B23: according to the movement direction decided from the map and the robot, the subject silently counts the flashes of the corresponding graphic symbol, where ← denotes a constant-speed 30° left turn, → a constant-speed 30° right turn, ↑ a constant-speed 0.5 m advance, ↓ a constant-speed 0.5 m retreat, and the central stop symbol standing still; the subject may adjust the attended symbol according to the feedback signal shown in the middle of the interface. The test data comprise 400 trials. Then enter substep C2;
Step C, signal processing, namely preprocessing the electroencephalogram signals, extracting and classifying features, and specifically comprises the following substeps:
substep C1Carrying out feature extraction and classification training through training data, and constructing a classifier model, which specifically comprises the following substeps:
substep C11Preprocessing signals, selecting six electrode data of Fz, C3, Cz, C4, Pz and Oz in electroencephalogram signal acquisition, transmitting the electrode data to a signal processing module, wherein the data format is R multiplied by S dimension, wherein R is equal to 6 and represents the number of electrodes, S represents sampling points, and then sequentially preprocessing the electrode data through an IIR filter with the cutoff frequency of 0.1Hz and an FIR filter with the cutoff frequency of 10 Hz;
substep C12Carrying out feature extraction on signals, segmenting electroencephalogram signals from 0ms of the starting time of each single stimulus, wherein each segment is called an Epoch, the length of a time window is selected to be 600ms, the first 100ms is a base line, epochs of the same stimulus are overlapped for 5 times, then data are down-sampled to 25Hz, 15 points are counted, a new Epoch is obtained, the data format is 6 x 15 dimensions, then the obtained epochs are sequentially spliced according to the electrode sequence, and the feature vector x extracted by the single stimulus is 90 x 1 dimensions;
substep C13Training a classifier, wherein the selected classifier is a Fisher linear discriminant analysis classifier, a discriminant function g (x) is described by a formula (1),
g(x)=wTx (1)
where w is a weight vector, X is a feature vector, and the training sample is X ═ X1,x2,...,xN]The number N of samples is equal to 500, wherein the number of samples of target stimulation is 100, the number of corresponding classification labels is 1, the number of samples of non-target stimulation is 400, the number of corresponding classification labels is 0, and the optimal value w of w is obtained through a Fisher linear discriminant classifier;
the P300 induction is related to expectation and not to stimulation, so the recognition of P300 in the brain-computer interface, i.e. the classifier is divided into two types of results, namely 1 for the target stimulation type and 0 for the non-target stimulation type, and classified according to the formula (2),
label(x) = 1 if g(x) ≥ θ, otherwise 0    (2)
where label (x) is the classifier output function,
In an actual P300 brain-computer interface the data input is a set of n feature vectors xi (i = 1, …, n), where n denotes the number of stimulus types; here there are 5 stimulus types (the stop symbol, ↑, ↓, ← and →), i.e., n = 5, and the stimuli are classified by formula (3),
label(xi) = 1 if g(xi) ≥ θ, otherwise 0,  i = 1, …, n    (3)
If the classifier identifies exactly 1 target-stimulus class and n − 1 non-target classes, the corresponding instruction is output; if it identifies more than 1 target-stimulus class, no instruction is output. Then enter substep B2;
Substep C2: online detection of the P300, in the following substeps:
Substep C21: same as substep C11;
Substep C22: same as substep C12;
Substep C23: apply the constructed classifier in the online system. After 5 rounds of stimulus flashes, the feature vectors of the 5 stimulus types form the classifier input, and the classifier outputs the control command corresponding to the stimuli according to formulas (1) and (3). Then enter step D;
Step D: realize shared control. A Pioneer3-DX robot from ActivMedia Robotics (USA) is controlled, with data exchanged between the brain-control commands and the robot over TCP/IP, in the following substeps:
Substep D1: the system judges whether brain-control command information is present; if so, enter substep D2, otherwise enter substep D3;
Substep D2: first the brain-control command is checked against the environment information, i.e., when an obstacle is closer than 0.5 m the system detects whether the robot has still received a brain-control command that approaches the obstacle; if the command conflicts with the environment information, enter substep D3. Otherwise enter the brain-control command mode and execute the corresponding action for the duration of the command: when the command is ↑ or ↓ the robot advances or retreats 0.5 m at constant speed, when it is ← or → the robot turns 30° left or right at constant speed, and when it is the stop symbol the robot remains motionless. The system then judges whether the command has finished; when it has, enter substep D1, otherwise wait for the command to finish;
Substep D3: enter the autonomous robot control mode. Environment information is acquired by the robot's laser sensor, the linear velocity of motion and the angular velocity of steering are computed by the fuzzy discrete event system method, and the control command is output; the system judges whether execution has finished, entering substep D1 when it has, otherwise waiting for execution to finish.
Application CN201810048019.1A, filed 2018-01-18 (priority 2018-01-18): Brain-controlled robot system based on P300 and implementation method thereof — granted as CN108415554B (Active)

Priority Applications (1)

Application CN201810048019.1A, priority/filing date 2018-01-18: Brain-controlled robot system based on P300 and implementation method thereof

Applications Claiming Priority (1)

Application CN201810048019.1A, priority/filing date 2018-01-18: Brain-controlled robot system based on P300 and implementation method thereof

Publications (2)

Publication number — publication date
CN108415554A — 2018-08-17
CN108415554B (granted) — 2020-11-10

Family

ID=63125976

Family Applications (1)

CN201810048019.1A (Active) — priority/filing date 2018-01-18 — Brain-controlled robot system based on P300 and implementation method thereof

Country Status (1)

CN: CN108415554B

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108836327A (en) * 2018-09-06 2018-11-20 电子科技大学 Intelligent outlet terminal and EEG signal identification method based on brain-computer interface
CN109445580A (en) * 2018-10-17 2019-03-08 福州大学 Trust Game Experiments system based on brain-computer interface
CN110244854A (en) * 2019-07-16 2019-09-17 湖南大学 A kind of artificial intelligence approach of multi-class eeg data identification
CN111007725A (en) * 2019-12-23 2020-04-14 昆明理工大学 Method for controlling intelligent robot based on electroencephalogram neural feedback
CN111273578A (en) * 2020-01-09 2020-06-12 南京理工大学 Real-time brain-controlled robot system based on Alpha wave and SSVEP signal control and control method
CN111752392B (en) * 2020-07-03 2022-07-08 福州大学 Accurate visual stimulation control method in brain-computer interface
CN112207816B (en) * 2020-08-25 2022-08-26 天津大学 Brain control mechanical arm system based on view coding and decoding and control method
CN111956933B (en) * 2020-08-27 2022-05-03 北京理工大学 Alzheimer's disease nerve feedback rehabilitation system
CN114237385B (en) * 2021-11-22 2024-01-16 中国人民解放军军事科学院军事医学研究院 Man-machine brain control interaction system based on non-invasive brain electrical signals
CN116492597B (en) * 2023-06-28 2023-11-24 南昌大学第一附属医院 Peripheral-central nerve regulation and control device and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9211078B2 (en) * 2010-09-03 2015-12-15 Faculdades Católicas, a nonprofit association, maintainer of the Pontificia Universidade Católica of Rio de Janeiro Process and device for brain computer interface

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4440661B2 (en) * 2004-01-30 2010-03-24 学校法人 芝浦工業大学 EEG control device and program thereof
CN103116279A (en) * 2013-01-16 2013-05-22 大连理工大学 Vague discrete event shared control method of brain-controlled robotic system
CN103955270A (en) * 2014-04-14 2014-07-30 华南理工大学 Character high-speed input method of brain-computer interface system based on P300

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A P300 Brain-computer Interface for Controlling a Mobile Robot by Issuing a Motion Command; Xin'an Fan et al.; Proceedings of 2013 ICME International Conference on Complex Medical Engineering; 2013-06-27; pp. 707-710 *
Design and implementation of a P300 brain-computer interface controlled smart car system (P300脑机接口控制智能小车系统的设计与实现); 王金甲, 杨成杰, 胡备; Journal of Biomedical Engineering (生物医学工程学杂志); 2013-04-25; vol. 30, no. 2, pp. 223-228 *
Research progress on experimental paradigms for visual ERP brain-computer interfaces (视觉ERP脑机接口中实验范式的研究进展); 马征 et al.; Chinese Journal of Biomedical Engineering (中国生物医学工程学报); 2016-02-20; vol. 35, no. 1, pp. 96-102 *

Also Published As

CN108415554A — published 2018-08-17

Similar Documents

Publication Publication Date Title
CN108415554B (en) Brain-controlled robot system based on P300 and implementation method thereof
Li et al. An EEG-based BCI system for 2-D cursor control by combining Mu/Beta rhythm and P300 potential
CN104083258B (en) A kind of method for controlling intelligent wheelchair based on brain-computer interface and automatic Pilot technology
Carrino et al. A self-paced BCI system to control an electric wheelchair: Evaluation of a commercial, low-cost EEG device
CN103699226B (en) A kind of three mode serial brain-computer interface methods based on Multi-information acquisition
CN110169770A (en) The fine granularity visualization system and method for mood brain electricity
CN106569601A (en) Virtual driving system control method based on P300 electroencephalogram
Lehtonen et al. Online classification of single EEG trials during finger movements
CN106362287A (en) Novel MI-SSSEP mixed brain-computer interface method and system thereof
CN111930238B (en) Brain-computer interface system implementation method and device based on dynamic SSVEP (secure Shell-and-Play) paradigm
CN112465059A (en) Multi-person motor imagery identification method based on cross-brain fusion decision and brain-computer system
CN110534180A (en) The man-machine coadaptation Mental imagery brain machine interface system of deep learning and training method
Chae et al. Brain-actuated humanoid robot navigation control using asynchronous brain-computer interface
CN113208593A (en) Multi-modal physiological signal emotion classification method based on correlation dynamic fusion
CN110262658A (en) A kind of brain-computer interface character input system and implementation method based on reinforcing attention
CN112597967A (en) Emotion recognition method and device for immersive virtual environment and multi-modal physiological signals
CN115089190B (en) Pilot multi-mode physiological signal synchronous acquisition system based on simulator
CN113180701A (en) Electroencephalogram signal depth learning method for image label labeling
CN109326341A (en) A kind of rehabilitation motion guiding method and apparatus
CN103390193A (en) Automatic training device for navigation-oriented rat robot, and rat behavior identification method and training method
Tang et al. A shared-control based BCI system: For a robotic arm control
Li et al. An adaptive P300 model for controlling a humanoid robot with mind
CN112140113B (en) Robot control system and control method based on brain-computer interface
CN113009931B (en) Man-machine and unmanned-machine mixed formation cooperative control device and method
CN108319367A (en) A kind of brain-machine interface method

Legal Events

Code — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant