CN105549733A - Brain-computer interface system and method based on steady-state visual evoked potentials in an intelligent space - Google Patents

Brain-computer interface system and method based on steady-state visual evoked potentials in an intelligent space

Info

Publication number
CN105549733A
CN105549733A (application CN201510900849.9A)
Authority
CN
China
Prior art keywords
subject
visual
main control
control system
led
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510900849.9A
Other languages
Chinese (zh)
Other versions
CN105549733B (en)
Inventor
张进华
洪军
王润泽
王保增
张程
邱志惠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201510900849.9A priority Critical patent/CN105549733B/en
Publication of CN105549733A publication Critical patent/CN105549733A/en
Application granted granted Critical
Publication of CN105549733B publication Critical patent/CN105549733B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Dermatology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention discloses a brain-computer interface system and method based on steady-state visual evoked potentials in an intelligent space. The visual evoked potential produced when a subject gazes at an LED visual stimulus display is acquired and processed by an EEG device and input to a main control host; a camera obtains the subject's position in the intelligent space and the surrounding environment information; the main control host processes the EEG signal together with the position and environment information to provide the subject with a navigation function comprising motion-time estimation and motion-path planning; and the main control host generates control commands that drive a lower-extremity exoskeleton to help the subject move to a preset target. The disclosed system and method remove the restrictions that the visual stimulator places on the subject's position and orientation, strengthen the subject's awareness of active movement, and raise the level of intelligence of the subject's daily living.

Description

Brain-computer interface system and method based on steady-state visual evocation in an intelligent space
[Technical field]
The present invention relates to the interdisciplinary field of rehabilitation medicine, information science and brain-computer interface technology, and in particular to a brain-computer interface system and method based on steady-state visual evocation in an intelligent space.
[Background art]
A brain-computer interface is a direct communication and control channel established between the human brain and a computer or other electronic device. Through this channel a subject can express intentions or operate a device directly with the brain, without language or movement, which effectively strengthens the subject's ability to communicate with the outside world or to control the external environment.
A steady-state visual evoked potential (SSVEP) is the electrical activity produced when the nervous system receives a visual stimulus flickering at a frequency above 6 Hz; the recorded EEG then contains rhythmic components at the stimulation frequency. This periodic response is called the steady-state visual evoked potential. Because it is concentrated at specific frequencies, its acquisition and processing are relatively simple and its accuracy is high.
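Because the SSVEP response concentrates spectral power at the flicker frequency of the attended stimulus, the attended command can in principle be identified by comparing EEG power at the candidate frequencies. The following is a minimal illustrative sketch of that idea in Python (NumPy/SciPy); the sampling rate, candidate frequencies and use of a single occipital channel are assumptions for illustration, not values specified by this patent.

```python
import numpy as np
from scipy.signal import welch

def detect_ssvep_frequency(eeg, fs, candidate_freqs, band=0.5):
    """Return the candidate flicker frequency with the most EEG power.

    eeg             : 1-D array, one occipital channel (e.g. Oz)
    fs              : sampling rate in Hz
    candidate_freqs : flicker frequencies assigned to the stimulation instructions
    band            : half-width in Hz of the window used to sum power
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs) * 2)   # 2-second analysis windows
    scores = []
    for f in candidate_freqs:
        mask = (freqs >= f - band) & (freqs <= f + band)
        scores.append(psd[mask].sum())
    return candidate_freqs[int(np.argmax(scores))]

# Illustrative use: 250 Hz sampling, instructions flickering at 8, 10, 12 and 15 Hz
# eeg = ...  # one recorded occipital channel
# attended = detect_ssvep_frequency(eeg, fs=250, candidate_freqs=[8, 10, 12, 15])
```

In practice multi-channel methods such as canonical correlation analysis are commonly used for SSVEP detection; the single-channel power comparison above is only meant to make the frequency-coding idea concrete.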
When visual stimulation is delivered in a traditional home environment, the subject must stay within about 100 cm of the visual stimulator, and a traditional visual stimulator is usually a single screen. The subject is therefore constrained by the position and size of the stimulator and cannot receive visual evocation from an arbitrary position or direction in the home environment, which greatly limits the subject's range of activity. In addition, steady-state visual evocation is limited by the refresh rate of the stimulator, and a conventional screen refreshes at only about 60 Hz.
[Summary of the invention]
The object of the invention is to solve the above technical problems by providing a brain-computer interface method based on steady-state visual evocation in an intelligent space. The method removes the restrictions that the visual stimulator imposes on the subject and raises the level of intelligence of the subject's daily living.
To achieve this object, the invention adopts the following technical solution:
A visual stimulator comprises an LED visual stimulus display; the LED visual stimulus display is mounted on the ceiling by a connecting rod and interacts with a main control host through a visual stimulus drive system; the LED visual stimulus display presents the stimulation instructions used for visual evocation at a refresh rate of at least 100 Hz.
Further improvements of the invention are as follows:
The LED visual stimulus display is a rectangular cuboid enclosed by four side light-emitting blocks, one bottom light-emitting block and a top baffle, the four side blocks and the bottom block displaying identical content.
The length, width and height of the side light-emitting blocks are 1.3 m, 1 m and 1.5 m respectively.
The top of the connecting rod is bolted to the ceiling, and its bottom is welded to the top baffle of the LED visual stimulus display.
The visual stimulus drive system comprises a single-chip microcontroller that makes all LED light-emitting blocks of the LED visual stimulus display show identical content, and transistors that drive all LED light-emitting blocks to emit light.
The content displayed by the LED light-emitting blocks includes text, arrows or Newton-ring instructions.
A brain-computer interface system based on steady-state visual evocation in an intelligent space comprises:
a visual stimulator, for displaying stimulation instructions;
a main control host, for computing the shortest path from the subject's motion intention and from position and environment information, converting the shortest path into control commands and sending them to a lower-limb exoskeleton;
a camera, for acquiring the subject's position and the surrounding environment information;
an EEG acquisition and processing device, for acquiring the EEG signal produced by the subject's cerebral cortex while observing the visual stimulator, processing it and sending it to the main control host;
a lower-limb exoskeleton, worn on the subject's lower limbs, for receiving the action commands of the main control host and assisting the subject to move to the target point along the shortest path.
A brain-computer interface method based on steady-state visual evocation in an intelligent space comprises the following steps:
1) the subject gazes at the stimulation instruction on the visual stimulator corresponding to the target point; the visual evoked potential produced by the cerebral cortex is processed and input to the main control host, which recognizes the subject's motion intention from the EEG signal;
2) the camera obtains the subject's position and the surrounding environment information;
3) the main control host processes the position and environment information, treating the subject and the target point as coordinate points and each obstacle as a polygon according to its position; combining this with the EEG signal, the host provides the subject with a navigation function comprising motion-time estimation and motion-path planning and obtains the shortest path;
4) the main control host controls the lower-limb exoskeleton to assist the subject in moving to the target point along the shortest path, completing the instruction.
In step 1), processing of the visual evoked potential comprises signal preprocessing, feature extraction and classification, where preprocessing includes removal of the DC component, signal filtering, removal of ocular and muscle artifacts and ICA removal of interference components; feature extraction uses wavelet analysis; and classification uses an artificial neural network.
In step 3), motion-time estimation and motion-path planning are carried out as follows:
The motion time is estimated from the distance between the subject's starting position and the intended destination and from the subject's own walking speed; the subject's position and the surrounding environment information are used to plan, in real time with the visibility-graph method, a motion path that avoids the obstacles in the intelligent space.
The path planning is specifically as follows:
The subject and the target point are each regarded as a point and each obstacle as a polygon enclosed by its vertices; the subject, the target point and every obstacle vertex are connected by straight lines, keeping only the lines that do not cross any obstacle; this forms a visibility graph, on which breadth-first search is then used to find the shortest path.
Compared with the prior art, the invention has the following beneficial effects:
The LED visual stimulus display is fixed to the ceiling and measures 1.3 m × 1 m × 1.5 m, which guarantees that the subject can receive visual evocation at any position in the intelligent space, and its all-around layout guarantees that the subject can receive visual evocation from any direction, removing the restrictions on the subject's position and orientation. The subject's EEG signal is used as the basis for the motion of the lower-limb exoskeleton, which strengthens the subject's awareness of active movement, and the intelligent space provides the subject with a navigation function comprising motion-time estimation and motion-path planning, raising the level of intelligence of the subject's daily living.
[Brief description of the drawings]
Fig. 1 is a workflow diagram of the present invention;
Fig. 2 is a structural schematic diagram of the present invention.
In the figures: 1 is the ceiling; 2 is the visual stimulus drive system; 3 is the connecting rod; 4 is a side light-emitting block; 5 is the LED visual stimulus display; 6 is the bottom light-emitting block.
[Detailed description of the embodiments]
The present invention is described in further detail below with reference to the accompanying drawings:
The invention designs an intelligent space to make up for the deficiencies of visual stimulation in a traditional home environment; an intelligent space here means a space equipped with a camera, a main control host, embedded computing devices and multi-modal sensors. As shown in Fig. 2, the visual stimulator comprises an LED visual stimulus display 5; the LED visual stimulus display 5 is mounted on the ceiling 1 by a connecting rod 3 and interacts with the main control host through the visual stimulus drive system 2. The LED visual stimulus display 5 is a rectangular cuboid enclosed by four side light-emitting blocks 4, a bottom light-emitting block 6 and a top baffle, and the four side blocks 4 and the bottom block 6 display identical content. The length, width and height of the side light-emitting blocks 4 are 1.3 m, 1 m and 1.5 m, which ensures that the subject can receive visual evocation at any position in the intelligent space and from any direction, and the refresh rate of the LED display can reach 100 Hz or higher, above that of a conventional screen, removing the frequency limitation that the stimulator imposes on the subject. The top of the connecting rod 3 is bolted to the ceiling 1 and its bottom is welded to the top baffle of the LED visual stimulus display 5. The visual stimulus drive system 2 comprises a single-chip microcontroller that makes all LED light-emitting blocks of the display 5 show identical content, and transistors that drive all LED light-emitting blocks to emit light. The content displayed by the LED blocks includes text, arrows or Newton-ring instructions.
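On a display with a fixed refresh rate, a stimulation instruction can only flicker at a frequency whose half-period is an integer number of refresh frames. The short sketch below is a hypothetical illustration, not part of the patent: it enumerates the flicker frequencies realizable with a simple on/off frame pattern at a 100 Hz refresh rate and builds the frame pattern that the drive system could repeat for one of them.

```python
def realizable_flicker_freqs(refresh_hz=100, max_half_frames=8):
    """Flicker frequencies whose half-period is a whole number of refresh frames."""
    return [refresh_hz / (2 * k) for k in range(1, max_half_frames + 1)]

def frame_pattern(flicker_hz, refresh_hz=100):
    """On/off frame sequence for one flicker cycle (1 = LEDs on, 0 = LEDs off)."""
    half = round(refresh_hz / (2 * flicker_hz))   # frames per half-cycle
    return [1] * half + [0] * half

print(realizable_flicker_freqs())   # approximately [50, 25, 16.7, 12.5, 10, 8.3, 7.1, 6.25] Hz
print(frame_pattern(10))            # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0] at a 100 Hz refresh rate
```

A higher refresh rate therefore enlarges the set of usable flicker frequencies above the 6 Hz threshold mentioned in the background section, which is why the 100 Hz LED display is less restrictive than a conventional 60 Hz screen.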
Using this visual stimulator, the invention designs a brain-computer interface system based on steady-state visual evocation in an intelligent space, comprising: the visual stimulator 5, for displaying stimulation instructions; a main control host, which provides the subject with the navigation function and controls the lower-limb exoskeleton to assist the subject in moving to the intended target; the main control host computes the shortest path from the subject's motion intention and from the position and environment information, converts the shortest path into control commands and sends them to the lower-limb exoskeleton; a camera, for acquiring in real time the subject's position and the surrounding environment information; an EEG acquisition and processing device, for acquiring the EEG signal produced by the subject's cerebral cortex while observing the visual stimulator 5, processing it and sending it to the main control host; and a lower-limb exoskeleton, worn on the subject's lower limbs, for receiving the action commands of the main control host and assisting the subject to move to the target point along the shortest path.
The invention also discloses a brain-computer interface method based on steady-state visual evocation in an intelligent space, comprising the following steps:
1) The subject gazes at the stimulation instruction on the visual stimulator corresponding to the target point; the visual evoked potential produced by the cerebral cortex is processed and input to the main control host, which recognizes the subject's motion intention from the EEG signal. Processing of the visual evoked potential comprises signal preprocessing, feature extraction and classification: preprocessing includes removal of the DC component, signal filtering, removal of ocular and muscle artifacts, manual removal of abnormal data and ICA removal of interference components; feature extraction uses wavelet analysis; and classification uses an artificial neural network (an illustrative processing sketch is given after step 4 below).
2) The camera obtains the subject's position and the surrounding environment information.
3) The main control host processes the position and environment information, treating the subject and the target point as coordinate points and each obstacle as a polygon according to its position; combining this with the EEG signal, the host provides the subject with a navigation function comprising motion-time estimation and motion-path planning and obtains the shortest path.
Motion-time estimation and motion-path planning are carried out as follows:
The motion time is estimated from the distance between the subject's starting position and the intended destination and from the subject's own walking speed; the subject's position and the surrounding environment information are used to plan, in real time with the visibility-graph method, a motion path that avoids the obstacles in the intelligent space.
The path planning is specifically as follows:
The subject and the target point are each regarded as a point and each obstacle as a polygon enclosed by its vertices; the subject, the target point and every obstacle vertex are connected by straight lines, keeping only the lines that do not cross any obstacle; this forms a visibility graph, on which breadth-first search is then used to find the shortest path.
4) The main control host controls the lower-limb exoskeleton to assist the subject in moving to the target point along the shortest path, completing the instruction.
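The sketch below illustrates the kind of processing chain described in step 1): DC removal and band-pass filtering, wavelet-based feature extraction and a neural-network classifier. It is a minimal example in Python using SciPy, PyWavelets and scikit-learn; the filter band, wavelet, network size and training interface are illustrative assumptions rather than parameters specified by the patent, and the ocular/muscle-artifact removal and ICA steps are omitted for brevity.

```python
import numpy as np
import pywt
from scipy.signal import butter, filtfilt
from sklearn.neural_network import MLPClassifier

def preprocess(eeg, fs, low=4.0, high=45.0):
    """Remove the DC component and band-pass filter one EEG channel."""
    eeg = eeg - np.mean(eeg)                                     # DC removal
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)                                   # zero-phase band-pass filtering

def wavelet_features(eeg, wavelet="db4", level=5):
    """Energy of each wavelet decomposition band, used as the feature vector."""
    coeffs = pywt.wavedec(eeg, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Illustrative training and use: each row of `trials` is one recorded trial,
# `labels` holds the corresponding stimulation instructions (motion intentions).
# fs = 250
# X = np.array([wavelet_features(preprocess(trial, fs)) for trial in trials])
# clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, labels)
# intention = clf.predict(X[:1])   # recognized motion intention for one trial
```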
Principle of the invention:
In the brain-computer interface method based on steady-state visual evocation in an intelligent space according to the invention, visual evoked potentials are elicited by the LED visual stimulus display, acquired and processed by the EEG device in the intelligent space, and used by the main control host to control the lower-limb exoskeleton. The visual stimulator consists of the LED visual stimulus display, the connecting rod and the LED visual stimulus drive system. The LED visual stimulus display is formed by a left, front, bottom, rear and right LED light-emitting block together with a top baffle, i.e. a rectangular cuboid enclosed by five LED light-emitting blocks and one baffle, with length, width and height of 1.3 m, 1 m and 1.5 m respectively. The connecting rod is welded to the LED visual stimulus display and bolted to the ceiling of the intelligent space. The LED visual stimulus drive system consists of transistors and a single-chip microcontroller: the transistors light the LED light-emitting blocks, and the microcontroller makes all LED blocks of the display show identical content, including text, arrows or Newton-ring instructions, each corresponding to one of the subject's daily activities and serving as a control-command stimulus.
The visual evoked potential produced when the subject watches the LED visual stimulus display is acquired and processed by the EEG device and input to the main control host. In the intelligent space the camera obtains the subject's position and the surrounding environment information; the main control host processes the EEG signal together with the position and environment information, provides the subject with a navigation function comprising motion-time estimation and motion-path planning, and finally controls the lower-limb exoskeleton to assist the subject in moving to the intended target.
A subject at any position in the intelligent space can watch the LED visual stimulus display from any direction. The LED light-emitting blocks of the display are large enough to show instructions for all of the subject's daily activities, and different instructions flicker at different frequencies. When the subject wants to perform an activity, he or she only needs to gaze at the corresponding stimulation paradigm; the visual evoked potential produced is acquired and processed by the EEG device and input to the main control host. The camera in the intelligent space obtains the subject's position and the surrounding environment information, and the main control host processes the EEG signal together with this information to provide the navigation function comprising motion-time estimation and motion-path planning, i.e. the motion time is estimated from the distance between the subject's starting position and the intended destination and from the subject's own walking speed, and the motion path is planned in real time from the subject's position and the surrounding environment information; finally the main control host controls the lower-limb exoskeleton to assist the subject in moving to the intended target.
Embodiment
Suppose the subject intends to go to the study. In the intelligent space the subject gazes at the Newton-ring stimulation instruction on the LED visual stimulus display that represents the command to go to the study; this Newton ring flickers at 10 Hz. The visual evoked potential produced is acquired and processed by the EEG device and input to the main control host, which recognizes the subject's motion intention from the EEG signal. The camera then obtains the subject's position and the surrounding environment information, and the main control host processes the EEG signal together with this information to provide the navigation function comprising motion-time estimation and motion-path planning: the motion time is estimated from the distance between the subject's starting position and the intended destination and from the subject's own walking speed, and the visibility-graph method is used to plan, in real time, a motion path that avoids the obstacles in the intelligent space. The subject and the study are each regarded as a point, and each obstacle is regarded as a polygon enclosed by its vertices; the subject, the study and every obstacle vertex are connected by straight lines that do not cross any obstacle, which forms the visibility graph. Because the endpoints of these lines are mutually visible, every path from the subject's point to the study along these lines is a collision-free path in the intelligent space; breadth-first search is then used to find the shortest path. Finally the shortest path is passed to the main control host, which controls the lower-limb exoskeleton to assist the subject in moving to the study, completing the instruction.
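The visibility-graph construction and breadth-first search described in this embodiment can be made concrete with a small sketch. The Python code below is a simplified illustration under stated assumptions: obstacles are simple polygons given by their vertices, a segment is accepted if it does not properly cross any obstacle edge (so a segment joining two vertices of the same obstacle through its interior is not rejected, which a full implementation would handle), and breadth-first search returns a path with the fewest visibility-graph edges, since the patent specifies BFS rather than a distance-weighted search. The scenario coordinates and the 0.5 m/s walking speed used for the motion-time estimate are invented for illustration.

```python
from collections import deque
from itertools import combinations
from math import dist

def segments_cross(p1, p2, q1, q2):
    """True only if the two segments properly cross (touching at an endpoint does not count)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (orient(p1, p2, q1) * orient(p1, p2, q2) < 0 and
            orient(q1, q2, p1) * orient(q1, q2, p2) < 0)

def visibility_graph(start, goal, obstacles):
    """Nodes: start, goal and all obstacle vertices; edges: segments crossing no obstacle edge."""
    nodes = [start, goal] + [v for poly in obstacles for v in poly]
    obstacle_edges = [(poly[i], poly[(i + 1) % len(poly)])
                      for poly in obstacles for i in range(len(poly))]
    edges = {n: [] for n in nodes}
    for a, b in combinations(nodes, 2):
        if not any(segments_cross(a, b, e1, e2) for e1, e2 in obstacle_edges):
            edges[a].append(b)
            edges[b].append(a)
    return edges

def bfs_path(edges, start, goal):
    """Breadth-first search on the visibility graph: path with the fewest edges."""
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in edges[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

# Illustrative scenario: subject at (0, 0), study at (6, 0), one square obstacle in between.
subject, study = (0.0, 0.0), (6.0, 0.0)
obstacle = [(2.0, -1.0), (4.0, -1.0), (4.0, 1.0), (2.0, 1.0)]
path = bfs_path(visibility_graph(subject, study, [obstacle]), subject, study)
length = sum(dist(a, b) for a, b in zip(path, path[1:]))
eta = length / 0.5        # assumed walking speed of 0.5 m/s for the motion-time estimate
print(path, round(length, 2), round(eta, 1))
```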
The above content merely illustrates the technical idea of the present invention and is not intended to limit its scope of protection; any change made to the technical solution on the basis of the technical idea proposed by the present invention falls within the scope of protection of the claims of the present invention.

Claims (10)

1. A visual stimulator, characterized in that it comprises an LED visual stimulus display (5), the LED visual stimulus display (5) being mounted on a ceiling (1) by a connecting rod (3) and interacting with a main control host through a visual stimulus drive system (2); the LED visual stimulus display (5) presents the stimulation instructions used for visual evocation at a refresh rate of at least 100 Hz.
2. The visual stimulator according to claim 1, characterized in that the LED visual stimulus display (5) is a rectangular cuboid enclosed by four side light-emitting blocks (4), one bottom light-emitting block (6) and a top baffle, the four side light-emitting blocks (4) and the bottom light-emitting block (6) displaying identical content.
3. The visual stimulator according to claim 2, characterized in that the length, width and height of the side light-emitting blocks (4) are 1.3 m, 1 m and 1.5 m respectively.
4. The visual stimulator according to claim 2, characterized in that the top of the connecting rod (3) is bolted to the ceiling (1) and its bottom is welded to the top baffle of the LED visual stimulus display (5).
5. The visual stimulator according to claim 2, characterized in that the visual stimulus drive system (2) comprises a single-chip microcontroller for making all LED light-emitting blocks of the LED visual stimulus display (5) show identical content and transistors for driving all LED light-emitting blocks to emit light.
6. The visual stimulator according to claim 2, characterized in that the content displayed by the LED light-emitting blocks comprises text, arrows or Newton-ring instructions.
7. A brain-computer interface system based on steady-state visual evocation in an intelligent space, using the visual stimulator according to any one of claims 1 to 6, characterized in that it comprises:
the visual stimulator (5), for displaying stimulation instructions;
a main control host, for computing the shortest path from the subject's motion intention and from position and environment information, converting the shortest path into control commands and sending them to a lower-limb exoskeleton;
a camera, for acquiring the subject's position and the surrounding environment information;
an EEG acquisition and processing device, for acquiring the EEG signal produced by the subject's cerebral cortex while observing the visual stimulator (5), processing it and sending it to the main control host;
a lower-limb exoskeleton, worn on the subject's lower limbs, for receiving the action commands of the main control host and assisting the subject to move to a target point along the shortest path.
8. A brain-computer interface method based on steady-state visual evocation in an intelligent space, using the system according to claim 7, characterized in that it comprises the following steps:
1) the subject gazes at the stimulation instruction on the visual stimulator corresponding to the target point; the visual evoked potential produced by the cerebral cortex is processed and input to the main control host, which recognizes the subject's motion intention from the EEG signal;
2) the camera obtains the subject's position and the surrounding environment information;
3) the main control host processes the position and environment information, treating the subject and the target point as coordinate points and each obstacle as a polygon according to its position; combining this with the EEG signal, the host provides the subject with a navigation function comprising motion-time estimation and motion-path planning and obtains the shortest path;
4) the main control host controls the lower-limb exoskeleton to assist the subject in moving to the target point along the shortest path, completing the instruction.
9. The brain-computer interface method based on steady-state visual evocation in an intelligent space according to claim 8, characterized in that in step 1) processing of the visual evoked potential comprises signal preprocessing, feature extraction and classification, wherein preprocessing includes removal of the DC component, signal filtering, removal of ocular and muscle artifacts and ICA removal of interference components, feature extraction uses wavelet analysis, and classification uses an artificial neural network.
10. The brain-computer interface method based on steady-state visual evocation in an intelligent space according to claim 8 or 9, characterized in that in step 3) the motion-time estimation and motion-path planning are specifically:
the motion time is estimated from the distance between the subject's starting position and the intended destination and from the subject's own walking speed, and the visibility-graph method is used to plan, in real time from the subject's position and the surrounding environment information, a motion path that avoids the obstacles in the intelligent space;
wherein the path planning is specifically:
the subject and the target point are each regarded as a point and each obstacle as a polygon enclosed by its vertices; the subject, the target point and every obstacle vertex are connected by straight lines, keeping only the lines that do not cross any obstacle, forming a visibility graph, on which breadth-first search is then used to find the shortest path.
CN201510900849.9A 2015-12-08 2015-12-08 Brain-computer interface system and method based on stable state vision inducting under a kind of intelligent space Active CN105549733B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510900849.9A CN105549733B (en) 2015-12-08 2015-12-08 Brain-computer interface system and method based on stable state vision inducting under a kind of intelligent space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510900849.9A CN105549733B (en) 2015-12-08 2015-12-08 Brain-computer interface system and method based on stable state vision inducting under a kind of intelligent space

Publications (2)

Publication Number Publication Date
CN105549733A true CN105549733A (en) 2016-05-04
CN105549733B CN105549733B (en) 2018-06-26

Family

ID=55828958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510900849.9A Active CN105549733B (en) 2015-12-08 2015-12-08 Brain-computer interface system and method based on stable state vision inducting under a kind of intelligent space

Country Status (1)

Country Link
CN (1) CN105549733B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101576772A (en) * 2009-05-14 2009-11-11 天津工程师范学院 Brain-computer interface system based on virtual instrument steady-state visual evoked potentials and control method thereof
CN204480175U (en) * 2014-07-16 2015-07-15 天津职业技术师范大学 A kind of view-based access control model brings out the intelligent electrical appliance control device of brain-computer interface
CN104398325A (en) * 2014-11-05 2015-03-11 西安交通大学 Brain-myoelectricity artificial limb control device and method based on scene steady-state visual evoking
CN104524689A (en) * 2014-12-03 2015-04-22 上海交通大学 System and method for realizing allogeneic biological control by brain-brain interface

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107714331A (en) * 2017-09-13 2018-02-23 西安交通大学 The intelligent wheel chair control of view-based access control model inducting brain-machine interface and method for optimizing route
CN108742957A (en) * 2018-06-22 2018-11-06 上海交通大学 A kind of artificial limb control method of multi-sensor fusion
CN112223253A (en) * 2019-07-15 2021-01-15 上海中研久弋科技有限公司 Exoskeleton system, exoskeleton identification control method, electronic device and storage medium
CN112223253B (en) * 2019-07-15 2022-08-02 上海中研久弋科技有限公司 Exoskeleton system, exoskeleton identification control method, electronic device and storage medium

Also Published As

Publication number Publication date
CN105549733B (en) 2018-06-26

Similar Documents

Publication Publication Date Title
US20230264022A1 (en) Apparatus for management of a parkinson's disease patient's gait
Lee et al. A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers
CN109394476B (en) Method and system for automatic intention recognition of brain muscle information and intelligent control of upper limbs
US10973408B2 (en) Smart eye system for visuomotor dysfunction diagnosis and its operant conditioning
CN104799984B (en) Assistance system for disabled people based on brain control mobile eye and control method for assistance system
He et al. A wireless BCI and BMI system for wearable robots
Wang et al. An asynchronous wheelchair control by hybrid EEG–EOG brain–computer interface
Deng et al. A bayesian shared control approach for wheelchair robot with brain machine interface
US20210221404A1 (en) Driver predictive mental response profile and application to automated vehicle brain interface control
McMullen et al. Demonstration of a semi-autonomous hybrid brain–machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic
EP2801389B1 (en) Neuroprosthetic device for monitoring and suppression of pathological tremors through neurostimulation of the afferent pathways
Iturrate et al. A noninvasive brain-actuated wheelchair based on a P300 neurophysiological protocol and automated navigation
CN109199786A (en) A kind of lower limb rehabilitation robot based on two-way neural interface
Heed et al. Integration of hand and finger location in external spatial coordinates for tactile localization.
CN104382595A (en) Upper limb rehabilitation system and method based on myoelectric signal and virtual reality interaction technology
CN105578954A (en) Physiological parameter measurement and feedback system
CN105549733A (en) Brain-computer interface system and method based on steady state visual evoked in intelligent space
CN110727353A (en) Control component control method and device based on two-dimensional intention definition
EP4385398A2 (en) An active closed-loop medical system
Duan et al. Shared control of a brain-actuated intelligent wheelchair
CN103106343A (en) Difficulty adjusting method for limb rehabilitation training
CN105342812A (en) Wearable walking aid for patient with Parkinson's disease
CN111258428B (en) Brain electricity control system and method
CN105137830A (en) Traditional Chinese painting mechanical hand based on visual evoking brain-machine interface, and drawing method thereof
CN106267557A (en) A kind of brain control based on wavelet transformation and support vector machine identification actively upper limb medical rehabilitation training system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant