CN109605385A - A kind of rehabilitation auxiliary robot of mixing brain-computer interface driving - Google Patents

A kind of rehabilitation auxiliary robot of mixing brain-computer interface driving Download PDF

Info

Publication number
CN109605385A
CN109605385A CN201811434591.8A CN201811434591A CN109605385A CN 109605385 A CN109605385 A CN 109605385A CN 201811434591 A CN201811434591 A CN 201811434591A CN 109605385 A CN109605385 A CN 109605385A
Authority
CN
China
Prior art keywords
user
mechanical arm
target object
brain
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811434591.8A
Other languages
Chinese (zh)
Other versions
CN109605385B (en
Inventor
徐宝国
张大林
李文龙
魏智唯
宋爱国
赵国普
李会军
曾洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201811434591.8A priority Critical patent/CN109605385B/en
Publication of CN109605385A publication Critical patent/CN109605385A/en
Application granted granted Critical
Publication of CN109605385B publication Critical patent/CN109605385B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/008Manipulators for service tasks
    • B25J11/009Nursing, e.g. carrying sick persons, pushing wheelchairs, distributing drugs
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085Force or torque sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/08Programme-controlled manipulators characterised by modular constructions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Abstract

The invention discloses a kind of rehabilitation auxiliary robots of mixing brain-computer interface driving, comprising: data acquisition module, for acquiring sight motion information and Mental imagery EEG signals, and the deep image information of acquisition user face and target object;PC processing module, for obtaining the target object that user wants crawl according to the Mental imagery EEG signals of user, image information extracts location information based on the received;Mechanical arm control module, for target object to be transported to the motion path of user face according to the location information of user face and target object planning mechanical arm, and the sight motion information of synthetic user and the motion path of planning generate mechanical arm and control signal, control the movement of mechanical arm;Mechanical arm, for the control campaign according to mechanical arm control module;Feedback module is installed on mechanical arm, for feeding back the clamping whether successful signal of target object.The characteristics of comprehensive brain-computer interface technology of the present invention and healing robot both techniques, it is more advantageous to patients ' recovery training.

Description

A kind of rehabilitation auxiliary robot of mixing brain-computer interface driving
Technical field
The present invention relates to robot field more particularly to a kind of rehabilitation auxiliary robots of mixing brain-computer interface driving.
Background technique
Traditional rehabilitation auxiliary robot is the movement for driving limbs to do repeatability by machine, to control limb motion Nervous system is stimulated and is promoted its reconstruction, and healing robot includes two kinds of healing robots of upper limb and lower limb, most of to need Want other people auxiliary that patient is helped to use, patient cannot be actively engaged in wherein, lack the mechanism accurately fed back, be unfavorable for patient The accurate feedback played an active part in rehabilitation efficacy.Traditional healing robot is devoted to the limbs nervous system of user Restore, can not be provided to user and assist help, such as patient for many severe paralysis, they can only be by by others It helps to complete activity necessary to some daily lifes, such as eats food, drinks water.
Brain-computer interface (Brain Computer Interface, BCI) is to be related to Neuscience, cognitive science, computer Science, control and multidisciplinary, the multi-field man-machine interface mode such as information science and technology, medicine, are in brain and external rings The nerve information exchange established between border and control channel.The present invention is using non-intrusion type brain-computer interface technology, by adopting Collection and the EEG signals feature for extracting brain generation accordingly generate control signal to complete brain and external equipment and carry out information biography The task with control is passed, realizes the direct interaction between central nervous system and internal or external device, patient is helped to realize Some control instruction auxiliary Rehabilitations or life are realized in the interaction of brain and computer.
It is currently, there are part existing research and combines brain-computer interface technology with intelligent robot technology, such as Publication No. The patent " a kind of system of brain wave control mechanical arm " of CN105425963A using brain wave signal acquisition attention and put Looseness parameter, to complete preset mechanical arm control action.Publication No. CN102198660A, it is entitled " to be based on brain-machine The mechanical arm control system and action command control program of interface ", the brain-computer interface of motor pattern, switching are imagined using three kinds With the motor pattern of selection mechanical arm, realizes mechanical arm and grab and put, eight deliberate action instructions of up, down, left, right, before and after. Above-mentioned patent of invention, only by brain-computer interface it is technically simple and robot technology combine, by acquiring and analyzing brain telecommunications Number feature, generate to the control instruction of robot, robot according to some movements predetermined of predefined instruction completion, Such technology lacks corresponding feedback mechanism, does not give full play of the advantage that brain-computer interface is combined with robot technology.Cause This, it would be highly desirable to it solves the above problems.
Summary of the invention
Goal of the invention: in view of the problems of the existing technology the present invention, provides a kind of auxiliary of mixing brain-computer interface driving Healing robot.
Technical solution: it is of the present invention mixing brain-computer interface driving rehabilitation auxiliary robot include:
Data acquisition module, for acquiring the sight motion information of user and the movement of generation is thought when carrying out Mental imagery As EEG signals, and according to the deep image information of PC processing module instruction acquisition user face and target object;
PC processing module, for obtaining the target object that user wants crawl according to the Mental imagery EEG signals of user, The deep image information of instruction acquisition user face and target object is sent to data acquisition module, and image is believed based on the received Breath extracts location information;
Mechanical arm control module, for planning mechanical arm by object according to the location information of user face and target object Body is transported to the motion path of user face and the sight motion information of synthetic user and the motion path of planning generates machinery Arm controls signal, controls the movement of mechanical arm;
Mechanical arm, for the control campaign according to mechanical arm control module;
Feedback module is installed on mechanical arm, for feeding back the clamping whether successful signal of target object.
Further, the data acquisition module specifically includes:
Eeg signal acquisition unit, for acquiring the Mental imagery EEG signals of user's generation when carrying out Mental imagery, And PC processing module is transmitted to after carrying out signal filtering, amplification, analog-to-digital conversion;
Image acquisition units, acquisition are located at the target object three dimensional depth image information of target position, and acquisition user The deep image information of face receives the instruction that PC processing module is sent, and therefrom parses target position and user face position Confidence breath;
Eye-controlling focus unit for acquiring the sight motion information of user, and is transmitted to mechanical arm control module.
Further, the PC processing module specifically includes:
Eeg data processing unit is pre-processed for the Mental imagery EEG signals to user, and it is special to extract brain electricity Reference number, and according to the classifier of pre-training obtain the brain electrical feature signal corresponding to object space, want to grab as user The position of the target object taken generates the acquisition instructions comprising target object location to data acquisition module;
Positioning unit is identified, for believing according to the user face of data collecting module collected and the depth image of target object Breath extracts the location information of user face and target object;
Mental imagery guidance unit, for playing the audio and video of the guidance user movement imagination.
Further, the mechanical arm control module specifically includes:
Robotic arm path planning unit, for the location information according to user face and target object, plan mechanical arm with The position of target object is clamped as starting point, motion path of the face location of user as terminal;
Manipulator motion control unit, user's sight motion information for sending data acquisition module is as mechanical arm Start stop signal is moved, mechanical arm is generated according to the motion path of planning and controls signal, control the movement of mechanical arm.
Further, the feedback module specifically includes:
Touch detection device, is installed at robot arm end effector, the pressure for detection mechanical arm end effector Variation;
Vibrational feedback unit, for issuing when touch detection device detects robot arm end effector pressure increase Pass signal is clamped, ownership goal object is prompted to clamp successfully.
Further, the eeg signal acquisition unit includes that sequentially connected standard 10-20 leads brain electrode cap, multichannel Eeg amplifier, analog-digital converter and signal transmission unit.
Further, described image acquisition unit includes two Kinect cameras, for obtaining target object and use respectively The deep image information of family face.
Further, the Eye-controlling focus unit is specially eye tracker, is being installed on PC processing module display just Lower section, for being realized by the position for the blinkpunkt for measuring eyes to oculomotor tracking.
The utility model has the advantages that compared with prior art, the present invention its remarkable advantage is:
1, the rehabilitation auxiliary robot of the present invention-mixing brain-computer interface driving, based on Mental imagery brain-computer interface technology with Rehabilitation auxiliary robot technology combines, and user is only needed to provide Mental imagery EEG signals and eye movement signal, rehabilitation auxiliary machinery The planning path work that the mechanical arm of people is generated according to system is user service, and the use of the robot is intervened without other people, used One people of family can be operated, convenient for application.
2, the present invention uses the Mental imagery brain-computer interface technology of non-intrusion type, and Mental imagery movement is left hand, the right hand, double Hand and loosen four kinds of states, user is adaptable using simple, recognition accuracy is high.
3, the present invention using power touch detection and shakes the feedback device as system, it is ensured that rehabilitation auxiliary robot Safety has better man-machine interaction.
4, the present invention using eye movement tracer technique control mechanical arm movement, avoid directly adopt EEG signals control by There is the problem of mechanical arm control fault in the classification of mistake caused by classifier, it is ensured that the safety of rehabilitation auxiliary robot.
5, the present invention combines brain-computer interface technology, Robot Control Technology, 3D vision location technology, realizes user Mechanical arm is controlled using idea, required target object can be independently selected, target object is successfully grabbed and the mouth for being sent to user is attached Closely.When target object is food, the autonomous feed of paralytic is may be implemented in the present invention, can improve the life matter of user Amount, promotes its autonomous viability.
Detailed description of the invention
Fig. 1 is the system structure diagram of the embodiment of the present invention;
Fig. 2 is that target object location and Mental imagery act corresponding relationship in the embodiment of the present invention;
Fig. 3 is the work flow diagram of the embodiment of the present invention.
Specific embodiment
Technical solution of the present invention is described further with reference to the accompanying drawing, but embodiments of the present invention are not limited to This.
As shown in Figure 1, the present invention is a kind of rehabilitation auxiliary robot of mixing brain-computer interface driving, comprising: data acquisition Module, mechanical arm control module, mechanical arm, feedback module and PC processing module.Data acquisition module is used to acquire the view of user Line motion information and when carrying out Mental imagery generation Mental imagery EEG signals, and according to PC processing module instruction acquisition The deep image information of user face and target object (such as food);PC processing module is used for the Mental imagery brain according to user Electric signal parses user's brain electricity and is intended to obtain the target object that user wants crawl, sends instruction acquisition to data acquisition module and uses The deep image information of family face and target object, and image information extracts location information based on the received;Mechanical arm control Module is used to plan that target object is transported to user face (tool by mechanical arm according to the location information of user face and target object Body is mouth) motion path and synthetic user sight motion information and planning motion path generate mechanical arm control Signal controls the movement of mechanical arm;Mechanical arm is used for the control campaign according to mechanical arm control module;Feedback module installation In on mechanical arm, clamp whether target object succeeds for feeding back.
Data acquisition module specifically includes eeg signal acquisition unit, image acquisition units and Eye-controlling focus unit.Brain electricity Signal acquisition unit be used for acquire user carry out Mental imagery when generation Mental imagery EEG signals, and carry out signal filtering, Amplification is transmitted to PC processing module after analog-to-digital conversion, the mode of signal transmission be not limited to using USB, serial communication, bluetooth, The communication modes such as WIFI can be led brain electrode cap, multichannel brain electric amplifier, modulus by standard 10-20 and turned in the specific implementation Parallel operation and signal transmission unit composition, 10-20 conduction polar cap can acquire C3, FC3, CP3, C5, C4, FC4, CP4, C6 etc. eight The EEG signals of electrode channel.The target object three dimensional depth image that image acquisition units are used to acquire positioned at target position is believed Breath, and the deep image information of acquisition user face, receive the instruction that PC processing module is sent, and therefrom parse target position Set with user's face location information, in the specific implementation, image acquisition units can be made of two Kinect cameras, be obtained respectively Deep image information is taken, is extracted for later positions.Eye-controlling focus unit is used to acquire the sight motion information of user (specially Eyeball motion information), and it is transmitted to mechanical arm control module, in the specific implementation, eye tracker can be used, PC is installed on The underface of processing module display is realized by measuring the position of blinkpunkt of eyes to oculomotor tracking.
PC processing module specifically includes eeg data processing unit, identification positioning unit and Mental imagery guidance unit.Fortune Dynamic imagination guidance unit, for playing the audio and video of the guidance user movement imagination, specifically display.The eeg data Processing unit extracts brain electrical feature signal, and according to pre- instruction for pre-processing to the Mental imagery EEG signals of user Experienced classifier classifies to brain electrical feature signal, and parsing user's brain electricity is intended to obtain object corresponding to the brain electrical feature signal Body position wants the position of the target object of crawl as user, generates the acquisition instructions comprising target object location to data Acquisition module.Identify that positioning unit is used to believe according to the user face of data collecting module collected and the depth image of target object Breath extracts the location information of user face and target object.Wherein, the training process of classifier are as follows:
A, object is placed as shown in Fig. 2, present in two-dimensional space, is left and right, lower four kinds of position distributions, it is above-mentioned it is upper, Placed at left and right three positions different objects (food, fruit, other etc.), object and the position are not placed as machinery in lower section The initial position of arm end manipulator.
B, guidance user's setting in motion imagination, the different motion obtained under different imagination movements imagine EEG signals.The imagination Guidance unit guides user's beginning and end Mental imagery according to certain regular playing animation, generates Mental imagery brain telecommunications Number, the movement of Mental imagery is left hand, the right hand, both hands and loosens four kinds of motion states, the movement of Mental imagery and target object Position be one-to-one relationship.Such as the corresponding imagination bimanual movements of object above, i.e., to control auxiliary robot crawl The object of top then imagines bimanual movements, as shown in Figure 2.
C, different types of Mental imagery EEG signals are passed through into the denoising of 8-12Hz and 19-26Hz bandpass filter, The feature vector of signal is extracted using feature extraction algorithm again, it is preferred to use CSP cospace model algorithm extracts EEG signals Feature vector.
D, it using different EEG signals feature vectors and corresponding object space as sample, completes to divide using pattern classifier The training of class device, it is preferred to use linear classifier (LDA) is classified.After training classifier, an EEG signals spy is inputted Sign vector can be obtained by the location information of an object.
Mechanical arm is specially multiple degrees of freedom (6DOF) mechanical arm, and end is manipulator mechanism.
Mechanical arm control module specifically includes robotic arm path planning unit and manipulator motion control unit, mechanical arm road Diameter planning unit is used for the location information according to user face and target object, plans mechanical arm to clamp the position of target object As starting point, motion path of face (specially mouth) position of user as terminal;Manipulator motion control unit is used for User's sight motion information that data acquisition module is sent is as manipulator motion start stop signal, according to the motion path of planning It generates mechanical arm and controls signal, control the movement of mechanical arm.When it is implemented, using user's sight motion information control start and stop tool Body may is that from the brain electricity of user intention in parsing target object location then control manipulator be moved to target object just on Side waits crawl control signal, and user actively moves eyeball sight at this time, and Eye-controlling focus unit detects watching attentively for user eyeball Point changes, and generates active auxiliary control signal all the way and sends mechanical arm motion control unit, start mechanical arm, completes target crawl It is carried with mechanical arm, after the completion of target crawl, feedback module can export feedback signal and remind user that target crawl is completed, mechanical Arm started to carry target object after delay a period of time;When target object is carried to user's mouth according to motion profile by mechanical arm Afterwards, eye tracker is watched in user's active attentively, and Eye-controlling focus unit is made to detect that the eye movement variation Eye-controlling focus unit of user produces Raw mechanical arm return control signal to mechanical arm control module, mechanical arm control module controls mechanical arm and returns to initial position.
Feedback module specifically includes touch detection device and vibrational feedback unit, and touch detection device is installed on mechanical arm end Hold joint, the pressure change for detection mechanical arm;When it is implemented, can be examined using two pressure sensors as tactile Survey device, two independent pressure sensors are installed on mechanical arm tail end claw two sides, can acquire end claw whether stress, from And judge whether mechanical arm has successfully grabbed target object (food).Vibrational feedback unit is used to detect in touch detection device When to mechanical arm pressure increase, clamping pass signal is issued, clamping pass signal is not limited to the feedback signals such as sound, light, vibration To prompt user to clamp object success.
As shown in figure 3, the course of work of robot are as follows:
1) user is sitting in the front of Mental imagery guidance unit, is adjusted to comfortable position, wears brain wave acquisition Electrode cap, opens data acquisition module, and confirmation signal acquisition state is good.
2) Eye-controlling focus unit is installed on the underface of imagination guidance unit, and after user adjusts good position, test sight is chased after Track cell operation is in good condition.
3) Mental imagery prompts, and places object.The placement of object as shown in Fig. 2, present in two-dimensional space, be left and right, lower four Kind of position distribution, placed at above-mentioned upper, left and right three positions different target objects (food, fruit, other etc.), under Side does not place object and the position is the initial position of mechanical arm tail end manipulator.
4) start image acquisition units, confirm the face of user and the required object grabbed of mechanical arm in image acquisition units Field range.
5) user's setting in motion is imagined, eeg signal acquisition unit starts according to certain regular playing animation guidance user With terminate Mental imagery, generate Mental imagery EEG signals, the movement of Mental imagery is left hand, the right hand, both hands and loosens four kinds Motion state, the movement of Mental imagery and the position of target object are one-to-one relationship.
6) the brain electrical feature of user is obtained according to the Mental imagery EEG signals that step 5) is extracted, and according to point of pre-training Class device obtains object space corresponding to the brain electrical feature signal, and the position of the target object of crawl is wanted as user, generates Acquisition instructions comprising target object location to image acquisition units, image acquisition units acquire target object and face (mouth) Image information, be input to PC processing module.Such as, user imagines bimanual movements, is intended that clamping top object, then image is adopted Collect the image information that unit obtains top target object.
7) location information is extracted according to the image information that step 6) obtains, in conjunction with the installation site of mechanical arm, rationally established Coordinate system, solves mechanical arm to clamp the position of target object as starting point, inverse solution of the mouth position of user as terminal, And plan reasonable motion path.
8) target location coordinate obtained according to step 7), control mechanical arm tail end manipulator are moved to object to be grabbed The surface of body waits crawl control signal.
9) according to step 8), the end of mechanical arm is moved into the top of object to be grabbed, and user actively moves eyeball Sight, Eye-controlling focus unit detects that the blinkpunkt of user eyeball changes, generates active auxiliary control signal all the way, which passes Mechanical arm control module is transported to, mechanical arm control module controls mechanical arm and completes target crawl and mechanical arm carrying, target crawl After the completion, touch detection device detects that the variation of pressure can export feedback signal and remind user that target crawl is completed, mechanical Arm started to carry target object after delay a period of time.
10) according to step 9) and step 6), mechanical arm anticipates user by the brain electricity that Mental imagery brain-computer interface is expressed The target object that figure-wants crawl is carried near the mouth of user.When target object is different food, user is only needed Movement slightly can be completed and take to eating for target object, realize autonomous life.
11) after user completes the use of target object, Eye-controlling focus unit is watched in user's active attentively, makes Eye-controlling focus list Member detects the eye movement variation of user, and Eye-controlling focus unit generates mechanical arm return control signal, which is transmitted to mechanical arm Control module, mechanical arm control module control mechanical arm and return to initial position.
The step 8) and step 11) eye tracker generate control signal and realize especially by step once:
A, human eye faces eye tracker.
B, its eyeball fixes point that moves left and right of human eye active changes.
C, eye tracker detects that eyeball fixes point changes meeting trigger signal output, and the signal is for controlling mechanical arm system The starting and return of system.
Above disclosed is only a preferred embodiment of the present invention, and the right model of the present invention cannot be limited with this It encloses, therefore equivalent changes made in accordance with the claims of the present invention, is still within the scope of the present invention.

Claims (8)

1. a kind of rehabilitation auxiliary robot of mixing brain-computer interface driving, characterized by comprising:
Data acquisition module, for acquire the sight motion information of user and when carrying out Mental imagery generation Mental imagery brain Electric signal, and the deep image information according to PC processing module instruction acquisition user face and target object;
PC processing module, for obtaining the target object that user wants crawl according to the Mental imagery EEG signals of user, to number The deep image information of instruction acquisition user face and target object is sent according to acquisition module, and image information mentions based on the received Extracting position information;
Mechanical arm control module, for planning that mechanical arm transports target object according to the location information of user face and target object It is sent to the motion path of user face and the sight motion information of synthetic user and the motion path of planning generates mechanical arm control Signal processed controls the movement of mechanical arm;
Mechanical arm, for the control campaign according to mechanical arm control module;
Feedback module is installed on mechanical arm, for feeding back the clamping whether successful signal of target object.
2. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 1, it is characterised in that: the data Acquisition module specifically includes:
Eeg signal acquisition unit is gone forward side by side for acquiring the Mental imagery EEG signals of user's generation when carrying out Mental imagery PC processing module is transmitted to after the filtering of row signal, amplification, analog-to-digital conversion;
Image acquisition units, for acquiring the target object three dimensional depth image information for being located at target position, and acquisition user The deep image information of face receives the instruction that PC processing module is sent, and therefrom parses target position and user face position Confidence breath;
Eye-controlling focus unit for acquiring the sight motion information of user, and is transmitted to mechanical arm control module.
3. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 1, it is characterised in that: at the PC Reason module specifically includes:
Eeg data processing unit is pre-processed for the Mental imagery EEG signals to user, extracts brain electrical feature letter Number, and user's brain electricity is parsed according to the classifier of pre-training and is intended to obtain object space corresponding to the brain electrical feature signal, make The position of the target object of crawl is wanted for user, generates the acquisition instructions comprising target object location to data acquisition module;
Positioning unit is identified, for mentioning according to the user face of data collecting module collected and the deep image information of target object Take out the location information of user face and target object;
Mental imagery guidance unit, for playing the audio and video of the guidance user movement imagination.
4. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 1, it is characterised in that: the machinery Arm control module specifically includes:
Robotic arm path planning unit plans mechanical arm to clamp for the location information according to user face and target object The position of target object is as starting point, motion path of the face location of user as terminal;
Manipulator motion control unit, user's sight motion information for sending data acquisition module is as manipulator motion Start stop signal generates mechanical arm according to the motion path of planning and controls signal, controls the movement of mechanical arm.
5. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 1, it is characterised in that: the feedback Module specifically includes:
Touch detection device, is installed at robot arm end effector, the pressure change for detection mechanical arm end effector;
Vibrational feedback unit, for issuing clamping when touch detection device detects robot arm end effector pressure increase Pass signal prompts ownership goal object to clamp successfully.
6. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 2, it is characterised in that: the brain electricity Signal acquisition unit includes that sequentially connected standard 10-20 leads brain electrode cap, multichannel brain electric amplifier, analog-digital converter and letter Number transmission unit.
7. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 2, it is characterised in that: described image Acquisition unit includes two Kinect cameras, for obtaining the deep image information of target object and user face respectively.
8. the rehabilitation auxiliary robot of mixing brain-computer interface driving according to claim 2, it is characterised in that: the sight Tracing unit is specially eye tracker, is installed on the underface of PC processing module display, for the note by measurement eyes The position of viewpoint and realize to oculomotor tracking.
CN201811434591.8A 2018-11-28 2018-11-28 Rehabilitation assisting robot driven by hybrid brain-computer interface Active CN109605385B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811434591.8A CN109605385B (en) 2018-11-28 2018-11-28 Rehabilitation assisting robot driven by hybrid brain-computer interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811434591.8A CN109605385B (en) 2018-11-28 2018-11-28 Rehabilitation assisting robot driven by hybrid brain-computer interface

Publications (2)

Publication Number Publication Date
CN109605385A true CN109605385A (en) 2019-04-12
CN109605385B CN109605385B (en) 2020-12-11

Family

ID=66006322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811434591.8A Active CN109605385B (en) 2018-11-28 2018-11-28 Rehabilitation assisting robot driven by hybrid brain-computer interface

Country Status (1)

Country Link
CN (1) CN109605385B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190387995A1 (en) * 2016-12-20 2019-12-26 South China University Of Technology Brain-Computer Interface Based Robotic Arm Self-Assisting System and Method
CN110827974A (en) * 2019-11-08 2020-02-21 上海第二工业大学 Intelligent auxiliary feeding nursing system and auxiliary feeding method thereof
CN111462906A (en) * 2020-04-26 2020-07-28 郑州大学 Visual system and man-machine interaction interface for assisting paralyzed patient to eat food
CN112085052A (en) * 2020-07-28 2020-12-15 中国科学院深圳先进技术研究院 Training method of motor imagery classification model, motor imagery method and related equipment
CN112757302A (en) * 2021-01-06 2021-05-07 北京航空航天大学 Control method of portable dining-assistant robot
CN113500611A (en) * 2021-07-22 2021-10-15 常州大学 Feeding robot system based on electroencephalogram and visual guidance
CN113849067A (en) * 2021-09-26 2021-12-28 华东理工大学 Motion imagery artificial data generation method and device based on empirical mode decomposition
CN113925742A (en) * 2021-10-20 2022-01-14 南通大学 Control method and control system of target-driven upper limb exoskeleton rehabilitation robot
CN115617046A (en) * 2022-11-01 2023-01-17 中国第一汽车股份有限公司 Path planning method and device, electronic equipment and storage medium
CN117873330A (en) * 2024-03-11 2024-04-12 河海大学 Electroencephalogram-eye movement hybrid teleoperation robot control method, system and device
CN117873330B (en) * 2024-03-11 2024-05-17 河海大学 Electroencephalogram-eye movement hybrid teleoperation robot control method, system and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140336781A1 (en) * 2013-05-13 2014-11-13 The Johns Hopkins University Hybrid augmented reality multimodal operation neural integration environment
CN104360730A (en) * 2014-08-19 2015-02-18 西安交通大学 Man-machine interaction method supported by multi-modal non-implanted brain-computer interface technology
US20160052139A1 (en) * 2014-08-21 2016-02-25 Elwha Llc Systems, devices, and methods including a wheelchair-assist robot
CN105710885A (en) * 2016-04-06 2016-06-29 济南大学 Service-oriented movable manipulator system
CN106671084A (en) * 2016-12-20 2017-05-17 华南理工大学 Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN107037883A (en) * 2017-04-13 2017-08-11 安徽大学 A kind of mixing brain machine interface system and method based on Mental imagery
CN107358026A (en) * 2017-06-14 2017-11-17 中国人民解放军信息工程大学 A kind of disabled person based on brain-computer interface and Internet of Things intelligently accompanies and attends to system
CN108646915A (en) * 2018-05-03 2018-10-12 东南大学 The method and system of object is captured in conjunction with three-dimensional eye tracking and brain-computer interface control machinery arm

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140336781A1 (en) * 2013-05-13 2014-11-13 The Johns Hopkins University Hybrid augmented reality multimodal operation neural integration environment
CN104360730A (en) * 2014-08-19 2015-02-18 西安交通大学 Man-machine interaction method supported by multi-modal non-implanted brain-computer interface technology
US20160052139A1 (en) * 2014-08-21 2016-02-25 Elwha Llc Systems, devices, and methods including a wheelchair-assist robot
CN105710885A (en) * 2016-04-06 2016-06-29 济南大学 Service-oriented movable manipulator system
CN106671084A (en) * 2016-12-20 2017-05-17 华南理工大学 Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN107037883A (en) * 2017-04-13 2017-08-11 安徽大学 A kind of mixing brain machine interface system and method based on Mental imagery
CN107358026A (en) * 2017-06-14 2017-11-17 中国人民解放军信息工程大学 A kind of disabled person based on brain-computer interface and Internet of Things intelligently accompanies and attends to system
CN108646915A (en) * 2018-05-03 2018-10-12 东南大学 The method and system of object is captured in conjunction with three-dimensional eye tracking and brain-computer interface control machinery arm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐宝国 等: "基于运动想象脑电的机器人连续控制系统研究", 《仪器仪表学报》 *
王言鑫: "基于混合视线_脑机接口与共享控制的人_机器人交互系统", 《机器人》 *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190387995A1 (en) * 2016-12-20 2019-12-26 South China University Of Technology Brain-Computer Interface Based Robotic Arm Self-Assisting System and Method
US11602300B2 (en) * 2016-12-20 2023-03-14 South China University Of Technology Brain-computer interface based robotic arm self-assisting system and method
CN110827974A (en) * 2019-11-08 2020-02-21 上海第二工业大学 Intelligent auxiliary feeding nursing system and auxiliary feeding method thereof
CN111462906A (en) * 2020-04-26 2020-07-28 郑州大学 Visual system and man-machine interaction interface for assisting paralyzed patient to eat food
CN112085052A (en) * 2020-07-28 2020-12-15 中国科学院深圳先进技术研究院 Training method of motor imagery classification model, motor imagery method and related equipment
CN112757302A (en) * 2021-01-06 2021-05-07 北京航空航天大学 Control method of portable dining-assistant robot
CN113500611A (en) * 2021-07-22 2021-10-15 常州大学 Feeding robot system based on electroencephalogram and visual guidance
CN113849067A (en) * 2021-09-26 2021-12-28 华东理工大学 Motion imagery artificial data generation method and device based on empirical mode decomposition
CN113925742A (en) * 2021-10-20 2022-01-14 南通大学 Control method and control system of target-driven upper limb exoskeleton rehabilitation robot
CN115617046A (en) * 2022-11-01 2023-01-17 中国第一汽车股份有限公司 Path planning method and device, electronic equipment and storage medium
CN117873330A (en) * 2024-03-11 2024-04-12 河海大学 Electroencephalogram-eye movement hybrid teleoperation robot control method, system and device
CN117873330B (en) * 2024-03-11 2024-05-17 河海大学 Electroencephalogram-eye movement hybrid teleoperation robot control method, system and device

Also Published As

Publication number Publication date
CN109605385B (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN109605385A (en) A kind of rehabilitation auxiliary robot of mixing brain-computer interface driving
CN103838378B (en) A kind of wear-type eyes control system based on pupil identification positioning
Güneysu et al. An SSVEP based BCI to control a humanoid robot by using portable EEG device
US20160235323A1 (en) Physiological parameter measurement and feedback system
WO2012167653A1 (en) Visualised method for guiding the blind and intelligent device for guiding the blind thereof
Schröer et al. An autonomous robotic assistant for drinking
Achic et al. Hybrid BCI system to operate an electric wheelchair and a robotic arm for navigation and manipulation tasks
US20190286234A1 (en) System and method for synchronized neural marketing in a virtual environment
CN110824979B (en) Unmanned equipment control system and method
Zhang et al. An intention-driven semi-autonomous intelligent robotic system for drinking
CN108646915B (en) Method and system for controlling mechanical arm to grab object by combining three-dimensional sight tracking and brain-computer interface
Kapeller et al. A BCI using VEP for continuous control of a mobile robot
CN101947152A (en) Electroencephalogram-voice control system and working method of humanoid artificial limb
Pathirage et al. A vision based P300 brain computer interface for grasping using a wheelchair-mounted robotic arm
Hu et al. StereoPilot: A wearable target location system for blind and visually impaired using spatial audio rendering
CN112587285A (en) Multi-mode information guide environment perception myoelectricity artificial limb system and environment perception method
CN105137830A (en) Traditional Chinese painting mechanical hand based on visual evoking brain-machine interface, and drawing method thereof
Finke et al. A hybrid brain interface for a humanoid robot assistant
CN106681509A (en) Interface operating method and system
KR102048551B1 (en) System and Method for Virtual reality rehabilitation training using Smart device
Ashok High-level hands-free control of wheelchair–a review
Schiatti et al. Soft brain-machine interfaces for assistive robotics: A novel control approach
CN110673721A (en) Robot nursing system based on vision and idea signal cooperative control
CN108008810A (en) A kind of confirmation method and system based on Mental imagery
CN210278193U (en) Comprehensive rehabilitation treatment system based on motion capture and biofeedback

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant