WO2015192610A1 - An intelligent wheelchair control method based on brain-computer interface and automatic driving technology - Google Patents
- Publication number
- WO2015192610A1 (PCT/CN2014/093071)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- wheelchair
- brain
- destination
- computer interface
- user
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61G—TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
- A61G5/00—Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
- A61G5/04—Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs motor-driven
- A61G2203/00—General characteristics of devices
- A61G2203/10—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
- A61G2203/18—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering by patient's head, eyes, facial muscles or voice
- A61G2203/22—General characteristics of devices characterised by specific control means, e.g. for adjustment or steering for automatically guiding movable devices, e.g. stretchers or wheelchairs in a hospital
- A61G2203/70—General characteristics of devices with special adaptations, e.g. for safety or comfort
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B6/00—Internal feedback arrangements for obtaining particular characteristics, e.g. proportional, integral or differential
- G05B6/02—Internal feedback arrangements for obtaining particular characteristics, e.g. proportional, integral or differential electric
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0217—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with energy consumption, time reduction or distance reduction criteria
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0238—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
- G05D1/024—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
- G05D1/0253—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/027—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising inertial navigation means, e.g. azimuth detector
- G05D1/0272—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels
- G05D1/0274—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N20/20—Ensemble learning
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Definitions
- the invention relates to the field of brain-computer interface applications and artificial intelligence, and in particular to an intelligent wheelchair control method based on brain-computer interface and automatic driving technology.
- the brain-computer interface (BCI), based on neural signals, has developed rapidly as a human-computer interaction method and has been a hot topic in brain-function research in recent years.
- the brain-computer interface, as a new interaction method for controlling electric wheelchairs, also faces new challenges: accurately expressing human intention through a brain-computer interface requires a high degree of concentration, so directly controlling the wheelchair through the brain-computer interface places a heavy mental burden on the disabled user.
- because brain signals are unstable, the prior art cannot achieve the same information transmission rate as a wheelchair joystick, and it is difficult to match the maneuverability that a joystick provides.
- a brain-computer interface is a direct communication and control channel established between the brain and a computer or other device; it does not depend on the peripheral nervous system or muscle tissue, and is an entirely new kind of human-machine interface. Brain-computer interfaces are divided into implanted and non-implanted types. Brain signals obtained by implanted interfaces have relatively high precision and signal-to-noise ratio and are easy to analyze and process, but they require performing a craniotomy on the user, which carries considerable risk, so implanted interfaces are mainly used in animal experiments.
- brain signals acquired by non-implanted brain-computer interfaces are noisy and their features are poorly distinguishable, but no brain surgery is required, and as signal-processing methods and techniques continue to advance, the processing of scalp EEG (electroencephalogram) has reached a level that makes it possible to apply brain-computer interfaces in real life.
- the brain-computer interfaces mentioned in the following are all non-implanted brain-computer interfaces.
- the signals used in non-implanted brain-computer interface research mainly include the P300, steady-state visually evoked potentials (SSVEP) and other event-related potentials (ERP), mu and beta rhythms, and slow cortical potentials (SCP).
- the brain-computer interface usually consists of three parts: 1) signal acquisition; 2) signal processing, in which the user's intention is extracted from the neural signal and the input neural signal is converted, by a specific pattern-recognition algorithm, into output commands for controlling external devices; 3) control of external devices, which are driven according to the user's intention, thereby substituting for the user's lost abilities of movement and communication.
- most current brain-controlled wheelchair systems use the brain-computer interface to control the wheelchair directly, without adding automatic driving technology; for example, the Chinese patents (A novel intelligent wheelchair system based on motor-imagery EEG control, publication number: CN 101897640 A; Wheelchair for the disabled based on motor-imagery control, publication number: CN 101953737 A).
- these systems collect the user's scalp EEG signals while he imagines left- and right-hand movements, analyze specific EEG components to determine the imagined direction, and thereby control the direction of wheelchair movement.
- the Chinese patent (Intelligent wheelchair based on a multi-modal brain-computer interface, publication number: CN 102309380 A) uses a multi-modal brain-computer interface to achieve multi-degree-of-freedom control of the wheelchair.
- the start, stop, reverse and speed control of the electric wheelchair are realized through the event-related potential P300; the direction control of the wheelchair is realized through motor imagery.
- the above inventions have the following three problems: (1) wheelchair control is multi-objective, including starting, stopping, direction control and speed control, but current brain-computer interfaces can hardly produce so many control commands; although the patent (Intelligent wheelchair based on a multi-modal brain-computer interface, publication number: CN 102309380 A) obtains a variety of control commands using a multi-modal brain-computer interface, generating precise control commands with P300 or SSVEP takes a long time and is not suitable for practical control of the wheelchair. (2) the performance of the brain-computer interface varies from person to person; for example, many people still cannot produce clearly distinguishable control signals after long motor-imagery training. (3) controlling a wheelchair through a brain-computer interface for a long time places a heavy mental burden on the user.
- a wheelchair with an automatic driving function does not require any control commands while navigating.
- however, the automatic navigation system cannot execute all control commands.
- in particular, the automatic navigation system cannot by itself recognize the user's intended destination, so a dedicated human-machine interface is required to convey destination information to it.
- for disabled people who have lost motor function, such as ALS patients, traditional human-machine interfaces (e.g. joystick, keyboard) are difficult to use.
- the object of the present invention is to overcome the shortcomings of the prior art and to provide an intelligent wheelchair control method based on brain-computer interface and automatic driving technology.
- An intelligent wheelchair control method based on brain-computer interface and automatic driving technology comprising the following sequence of steps:
- the user selects a destination through a brain-computer interface
- in step S1, the obstacle localization is accomplished through the following sequence of steps:
- the method for self-positioning the wheelchair includes the following sequence of steps:
- the straight line is extracted by a least squares fitting algorithm, and the extracted straight line is converted into a vector with direction information according to the direction of the lidar scanning;
- dead reckoning estimates the wheelchair's position at the next moment, and the coordinates obtained by the laser radar are transformed according to the dead-reckoned position;
- the step S4 is specifically to select a destination through a motor-imagery brain-computer interface, and includes the following sequence of steps:
- the candidate destinations are respectively represented by light and dark solid circles, and the two colors represent destinations of two different categories;
- the step S4 is specifically to select a destination through a P300-based brain-computer interface, and includes the following sequence of steps:
- the user first has 20 seconds to determine, from the graphical user interface, the number of the destination he wants to select;
- the EEG signal is filtered with a 0.1–20 Hz band-pass filter and downsampled (5 Hz)
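A rough illustration of this preprocessing step, assuming a 250 Hz sampling rate, a 4th-order Butterworth filter, and reading "5 Hz downsampling" as decimation by a factor of 5 (none of these details are fixed by the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_eeg(eeg, fs=250.0):
    """Band-pass 0.1-20 Hz, then downsample, as in the claimed P300
    preprocessing step. fs, filter order, and the decimation factor
    are assumptions, not values stated in the patent."""
    b, a = butter(4, [0.1 / (fs / 2), 20.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=-1)   # zero-phase band-pass
    # "5 Hz downsampling" is read here as decimation by a factor of 5
    return decimate(filtered, 5, axis=-1, zero_phase=True)
```

With an 8-channel, 1000-sample input, the output keeps 8 channels and 200 samples per channel.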
- the intelligent wheelchair control method based on brain-computer interface and automatic driving technology further includes a wheelchair-stopping process: if the user wants to stop the wheelchair and change the destination, a stop command can be sent to the wheelchair through the motor-imagery- or P300-based brain-computer interface.
- Specific steps are as follows:
- the present invention has the following advantages and beneficial effects:
- the wheelchair system introduces the concept of collaborative control, making full use of the respective advantages of human intelligence and of the precise control capability of automatic driving, and letting the two control different aspects so that they complement each other.
- the automatic navigation system locates obstacles in real time based on the obstacle information fully sensed by the sensors (web cameras fixed on the walls); from the positions of the obstacles in the room, it automatically generates the candidate destinations for the user to select and the track points for route planning.
- the user can select a destination via a motor-imagery- or P300-based brain-computer interface; based on the selected destination, the navigation system plans a shortest and safest path and automatically navigates the wheelchair to the selected destination.
- the user can send a stop command through the brain-computer interface, and use of the system proposed by the present invention can greatly reduce the user's mental burden.
- Each navigation task only requires the user to select a destination through the brain-computer interface before the wheelchair is started.
- the automatic navigation system automatically navigates the wheelchair to the destination selected by the user, and does not require any instructions from the user during navigation. Therefore, compared to other inventions, our system greatly reduces the mental burden of users;
- FIG. 1 is an application interface of the intelligent wheelchair control method based on a brain-computer interface and automatic driving technology according to the present invention;
- FIG. 2 is a graphical user interface (GUI) diagram of motor-imagery-based destination selection in the method of FIG. 1;
- FIG. 3 is a graphical user interface (GUI) diagram of P300-based destination selection in the method of FIG. 1;
- FIG. 4 is a system block diagram of the method of FIG. 1;
- FIG. 5 is a flow chart of the wheelchair self-positioning algorithm of the method of FIG. 1.
- an intelligent wheelchair control method based on brain-computer interface and automatic driving technology includes the following sequence of steps:
- the self-positioning of the wheelchair specifically includes the following sequence of steps:
- Wheelchair self-positioning is divided into two categories: initial positioning and process positioning.
- the user selects the destination through the brain-computer interface:
- the first, as shown in Figure 2, is to select the destination through the motor-imagery brain-computer interface, and includes the following sequence of steps:
- the candidate destinations are respectively represented by light and dark solid circles, and the two colors represent destinations of two different categories;
- the user first has 20 seconds to determine, from the graphical user interface, the number of the destination he wants to select;
- the EEG signal is filtered with a 0.1–20 Hz band-pass filter and downsampled (5 Hz)
- a stop command can be sent to the wheelchair through the motor-imagery- or P300-based brain-computer interface.
- the specific steps are as follows:
- the collected EEG data are transmitted to the on-board computer for real-time processing; a SICK LMS111 laser radar fixed in front of the wheelchair transmits data to the on-board computer in real time over a TCP network for wheelchair self-positioning; the odometers fixed on the left and right drive wheels of the wheelchair transmit real-time data through the serial port, which are converted into linear and angular velocity and used as feedback for the PID controller to adjust the wheelchair's current speed in real time;
- the web cameras fixed on the walls of the room are connected to the on-board computer through the wireless network; the on-board computer controls whether the current image data are transmitted, performs image processing on the transmitted image data, and segments the obstacles in the room from the floor image in order to locate the obstacles in the room;
- the automatic navigation system automatically generates the destinations that the user can select; these destinations are distributed around the obstacles and evenly spaced, 1 meter apart, on the open floor. A generalized Voronoi diagram is constructed according to the distribution of obstacles in the room, and the edges of the constructed Voronoi diagram are used as the paths the wheelchair can travel; paths formed in this way stay as far as possible from the obstacles on both sides, so they are safer to use as navigation paths. Track points are extracted every 0.2 meter along the edges of the Voronoi diagram, and the coordinate information of each track point and the adjacency between track points are input to the path-planning module; once the user selects a destination, the path-planning module plans a shortest path based on the current wheelchair location, the location of the destination, and the track-point information;
- the path-tracking module calculates the reference linear and angular velocities based on the current wheelchair position and the planned path; taking the safety and comfort of wheelchair driving into account, the linear velocity is fixed at 0.2 m/s and the angular velocity does not exceed 0.6 rad/s. The reference linear and angular velocities are transmitted to the motion-control module (i.e. the PID controller), which, using the collected odometry information as feedback on the current speed, controls the wheelchair to the destination in real time.
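The path-planning step over the extracted track points can be sketched as a standard A* search; the graph representation and the Euclidean edge weights are assumptions (the description fixes only that a shortest path is planned over the track points, using the A* algorithm):

```python
import heapq
import math

def astar(nodes, edges, start, goal):
    """Shortest path over track points with A* (Euclidean heuristic).

    `nodes` maps track-point id -> (x, y); `edges` maps id -> neighbour ids.
    A sketch of the claimed path-planning step, not the patent's exact code."""
    def h(n):
        (x1, y1), (x2, y2) = nodes[n], nodes[goal]
        return math.hypot(x2 - x1, y2 - y1)

    open_set = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best = {start: 0.0}                            # best known cost per node
    while open_set:
        _, g, n, path = heapq.heappop(open_set)
        if n == goal:
            return path
        for m in edges[n]:
            (x1, y1), (x2, y2) = nodes[n], nodes[m]
            g2 = g + math.hypot(x2 - x1, y2 - y1)  # edge cost = distance
            if g2 < best.get(m, float("inf")):
                best[m] = g2
                heapq.heappush(open_set, (g2 + h(m), g2, m, path + [m]))
    return None                                     # goal unreachable
```

In the system described here, the start would be the wheelchair's current self-positioned location snapped to the nearest track point, and the goal the track point of the selected destination.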
Abstract
An intelligent wheelchair control method based on brain-computer interface and automatic driving technology, comprising the following steps: locating obstacles from current images captured by web cameras; generating candidate destinations and track points for path planning from the obstacle information; self-positioning the wheelchair; the user selecting a destination through the brain-computer interface; planning an optimal path from the wheelchair's current position (the start point) to the user-selected destination (the end point) in combination with the track points; calculating the positional error between the wheelchair's current position and the optimal path as feedback to the PID path-tracking algorithm; the PID path-tracking algorithm calculating the reference angular and linear velocities, which are input to the PID motion controller, while odometry data are converted into the current angular and linear velocities as the PID motion controller's feedback, so that the wheelchair is controlled in real time until it reaches the destination. This intelligent wheelchair control method greatly reduces the user's mental burden, adapts to changing environments, and improves the ability of severely paralyzed patients to care for themselves.
Description
Technical Field
The present invention relates to the field of brain-computer interface applications and artificial intelligence, and in particular to an intelligent wheelchair control method based on brain-computer interface and automatic driving technology.
Background Art
Millions of disabled people around the world have lost motor function because of motor dysfunction, and tens of thousands of them depend on electric wheelchairs in their daily lives. Yet some of those who have lost motor function cannot operate a conventional electric wheelchair, for two reasons: (1) they cannot control such a wheelchair through a traditional interface (such as the wheelchair joystick); (2) they are considered unable to control such a wheelchair safely.
With the rapid development of artificial intelligence, more and more research results are being applied to assist the motor function of such people and thereby improve their quality of life. Among these, the brain-computer interface (BCI), based on neural signals, has developed especially rapidly as a human-computer interaction method, and has been a hot topic in brain-function research in recent years. However, the brain-computer interface, as a new interaction method for controlling electric wheelchairs, also faces new challenges: accurately expressing a person's intention through a brain-computer interface demands a high degree of concentration, so directly driving the wheelchair through a brain-computer interface would impose an enormous mental burden on the disabled user. Moreover, because brain signals are unstable, we currently cannot obtain, with existing technology, the same information transmission rate as a wheelchair joystick, and it is difficult to achieve the maneuverability that a joystick provides.

A brain-computer interface is a direct communication and control channel established between the brain and a computer or other device; it does not depend on the peripheral nervous system or muscle tissue, and is an entirely new kind of human-machine interface. Brain-computer interfaces are divided into implanted and non-implanted types. The brain signals obtained by implanted brain-computer interfaces are of relatively high precision, have a high signal-to-noise ratio, and are easy to analyze and process, but they require performing a craniotomy on the user, which is quite dangerous, so at present they are mainly used in animal experiments. The brain signals acquired by non-implanted brain-computer interfaces are noisy and their features are poorly distinguishable, but acquiring them requires no surgery, and as signal-processing methods and techniques continue to advance, the processing of scalp EEG (electroencephalogram) has reached a level that makes it possible to apply brain-computer interfaces in real life. The brain-computer interfaces mentioned in the remainder of this invention are all non-implanted. At present, the signals used in non-implanted brain-computer interface research mainly include the P300, steady-state visually evoked potentials (SSVEP) and other event-related potentials (ERP), mu and beta rhythms, and slow cortical potentials (SCP).
A brain-computer interface usually consists of three parts: 1) signal acquisition; 2) signal processing, in which the user's intention is extracted from the neural signals and the input neural signals are converted, by a specific pattern-recognition algorithm, into output commands for controlling external devices; 3) control of external devices, which are driven according to the user's intention, thereby substituting for the user's lost abilities of movement and communication.
At present, most brain-controlled wheelchair systems use the brain-computer interface to control the wheelchair directly, without adding automatic driving technology; see, for example, the Chinese patents (A novel intelligent wheelchair system based on motor-imagery EEG control, publication number: CN 101897640 A; Wheelchair for the disabled based on motor-imagery control, publication number: CN 101953737 A). These systems collect the scalp EEG signals of a person imagining left- and right-hand movements, analyze specific EEG components to determine the direction the user is imagining, and thereby control the direction of wheelchair movement. The Chinese patent (Intelligent wheelchair based on a multi-modal brain-computer interface, publication number: CN 102309380 A) uses a multi-modal brain-computer interface to achieve multi-degree-of-freedom control of the wheelchair: starting, stopping, reversing and speed control of the electric wheelchair are realized through the event-related potential P300, and direction control of the wheelchair is realized through motor imagery. The inventions described above have the following three problems. (1) Wheelchair control is multi-objective, including starting, stopping, direction control and speed control, but current brain-computer interfaces can hardly produce so many control commands; although the patent (Intelligent wheelchair based on a multi-modal brain-computer interface, publication number: CN 102309380 A) obtains a variety of control commands using a multi-modal brain-computer interface, generating precise control commands with P300 or SSVEP takes a long time and is not suitable for practical control of the wheelchair. (2) The performance of the brain-computer interface varies from person to person; for example, many people still cannot produce clearly distinguishable control signals after long motor-imagery training. (3) Controlling a wheelchair through a brain-computer interface for a long time places a considerable mental burden on the user. Introducing automatic driving technology into the wheelchair control system can solve these problems: a wheelchair with an automatic driving function requires no control commands while navigating. However, the automatic navigation system cannot execute all control commands; for example, it cannot automatically recognize the user's destination instruction, so a specific human-machine interface is needed to convey destination information to the automatic navigation system. Yet for disabled people who have lost motor function, such as ALS patients, using traditional human-machine interfaces (e.g. joystick, keyboard) presents obstacles. The combination of brain-computer interface technology and automatic driving technology is therefore a promising direction for solving the above problems.
Summary of the Invention
The object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing an intelligent wheelchair control method based on a brain-computer interface and automatic driving technology.
The object of the invention is achieved by the following technical solution:
An intelligent wheelchair control method based on a brain-computer interface and automatic driving technology comprises the following steps in order:
S1. Acquire the current image from a webcam fixed on the wall and locate obstacles in the acquired image using image-processing methods.
S2. Generate candidate destinations and waypoints for path planning from the obstacle information.
S3. Perform self-localization of the wheelchair.
S4. The user selects a destination through the brain-computer interface.
S5. Taking the wheelchair's current position as the start point and the user-selected destination as the end point, plan a shortest optimal path with the A* algorithm over the waypoints generated after obstacle localization.
S6. After obtaining the optimal path, compute the position difference between the wheelchair's current position and the optimal path, feed this difference back to a PID path-tracking algorithm, and have the algorithm compute reference angular and linear velocities.
S7. Input the reference angular and linear velocities to a PID motion controller, obtain odometry data from the odometers fixed to the wheelchair's left and right wheels, convert the odometry data to current angular and linear velocities, and feed these back to the PID motion controller so as to adjust the wheelchair's control signals and drive the wheelchair to the destination in real time.
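The cascade in steps S6-S7 (an outer PID loop on the path error producing reference velocities, and an inner PID loop regulating the odometry-measured velocities towards those references) can be sketched with a minimal discrete PID. The gains, time step and error values below are illustrative assumptions; the patent does not disclose them:

```python
class PID:
    """Minimal discrete PID controller (illustrative form, not from the patent)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        # integrate the error and differentiate it numerically
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


# Outer loop (S6): the cross-track error between the wheelchair and the
# planned path drives the reference angular velocity.
path_pid = PID(kp=0.8, ki=0.0, kd=0.2)      # hypothetical gains
cross_track_error = 0.15                    # metres, from self-localization
omega_ref = -path_pid.step(cross_track_error, dt=0.1)

# Inner loop (S7): the angular velocity converted from odometry is
# regulated towards omega_ref by a second PID acting on the wheel command.
motor_pid = PID(kp=1.5, ki=0.3, kd=0.0)     # hypothetical gains
omega_measured = 0.0                        # rad/s, converted from odometry
wheel_command = motor_pid.step(omega_ref - omega_measured, dt=0.1)
```

In practice the outer loop would run once per localization update and the inner loop at the motor-control rate; both rates are design choices not specified here.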
In step S1, obstacle localization is accomplished by the following steps in order:
(1) Segment the obstacles from the floor in the image using threshold segmentation.
(2) Remove noise with a morphological opening, then rebuild the regions removed by the opening with a morphological closing, thereby obtaining the contour of each segmented region.
(3) Remove the smaller contours for further denoising, then fit the remaining contours with convex hulls.
(4) Map the convex-hull vertices to the global coordinate system, i.e. the ground-plane coordinate system, using a correspondence matrix, which represents the mapping between the image pixel coordinate system and the ground-plane coordinate system.
(5) Compute the intersection, in the global coordinate system, of the convex hulls obtained from the individual images; the obstacles' positions in the coordinate system can be approximated by these intersection regions.
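The pipeline above can be sketched as follows. This is a simplified NumPy illustration: the morphological opening/closing of step (2) and the multi-camera intersection of step (5) are omitted, the convex hull is computed with Andrew's monotone chain, and the homography `H` standing in for the correspondence matrix of step (4) is an assumed input, not a value from the patent:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; points is an (N, 2) array of pixel coords."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return np.array(pts, dtype=float)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        part = []
        for p in seq:
            while len(part) >= 2 and cross(part[-2], part[-1], p) <= 0:
                part.pop()
            part.append(p)
        hull.extend(part[:-1])
    return np.array(hull, dtype=float)

def locate_obstacle(gray, H, thresh=100):
    """Steps (1), (3), (4): threshold, hull-fit, map to the ground plane."""
    ys, xs = np.nonzero(gray < thresh)    # (1) obstacle pixels darker than floor
    hull = convex_hull(np.stack([xs, ys], axis=1))       # (3) convex-hull fit
    homog = np.hstack([hull, np.ones((len(hull), 1))])
    world = (H @ homog.T).T               # (4) pixel -> ground-plane homography
    return world[:, :2] / world[:, 2:3]
```

An OpenCV implementation would replace these helpers with `cv2.morphologyEx`, `cv2.findContours` and `cv2.convexHull`.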
In step S3, self-localization of the wheelchair comprises the following steps in order:
A. Initial localization
(1) From the range points acquired by the lidar, extract straight lines with a least-squares fitting algorithm, and convert the extracted lines into vectors carrying direction information according to the lidar's scan direction.
(2) Match the extracted vectors against the vectors in the environment map and compute the wheelchair's current position from the matched vector pairs.
B. In-process localization
(1) From the wheelchair's position at the previous instant and the odometry data, dead-reckon its position at the next instant, and transform the coordinates of the vectors acquired by the lidar according to the dead-reckoned position.
(2) Match the transformed vectors against the vectors in the environment map and compute the wheelchair's position at the current instant from the matched vector pairs.
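The line-extraction and vector-matching steps above can be sketched as below. Total least squares via SVD is one reasonable reading of the unspecified "least-squares fitting", and matching by heading angle alone is a deliberate simplification of the patent's vector matching:

```python
import numpy as np

def fit_line_vector(points):
    """Fit a direction vector to lidar points assumed to lie on one wall.

    Total least squares via SVD; the vector is oriented to follow the
    order in which the lidar scanned the points (the scan direction).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                      # principal direction of the points
    if np.dot(pts[-1] - pts[0], direction) < 0:
        direction = -direction             # orient along the scan direction
    return centroid, direction

def match_heading(direction, map_vectors, tol_deg=10.0):
    """Index of the map vector with the closest heading, or None within tol."""
    ang = np.degrees(np.arctan2(direction[1], direction[0]))
    best, best_d = None, tol_deg
    for i, v in enumerate(map_vectors):
        a = np.degrees(np.arctan2(v[1], v[0]))
        d = abs((ang - a + 180.0) % 360.0 - 180.0)   # wrapped angle difference
        if d < best_d:
            best, best_d = i, d
    return best
```

A full implementation would also split the scan into segments before fitting and use the matched pairs to solve for the wheelchair pose.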
In step S4, specifically, the destination is selected through a motor-imagery brain-computer interface by the following steps in order:
(1) The candidate destinations are shown as light and dark solid circles, the two colors representing two different classes of destination.
(2) To select a light/dark destination, the user performs at least 2 seconds of left/right-hand motor imagery according to the color of the horizontal bar in the graphical user interface (GUI). When the BCI system detects left/right-hand motor imagery, the light/dark destinations are retained in the GUI and further divided into two classes, again distinguished by light and dark colors, while the other destinations disappear from the GUI.
(3) The user repeats this selection process until only one destination remains, and finally performs another 2 seconds of left/right-hand motor imagery to accept/reject the selected destination.
The motor-imagery detection algorithm is as follows:
(1) Extract 200 ms of EEG signal and apply common average reference (CAR) filtering and 8-30 Hz band-pass filtering.
(2) Project the filtered EEG signal with common spatial patterns (CSP) to form the feature vector.
(3) Input the feature vector to an SVM classifier to obtain the predicted class and the corresponding SVM output value; if the SVM output exceeds a set threshold, the corresponding class is taken as the output.
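Steps (1)-(2) can be sketched in plain NumPy as below. The band-pass filter and the SVM of step (3) are omitted (a thresholded linear score would stand in for the SVM), and the composite-whitening formulation of CSP is one common construction, not necessarily the one used in the patent:

```python
import numpy as np

def car_filter(eeg):
    """Common average reference: subtract the cross-channel mean.

    eeg is a (channels, samples) array.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

def csp_filters(cov_a, cov_b, n_pairs=1):
    """Common spatial patterns from the two class covariance matrices.

    Whitens the composite covariance, eigendecomposes class A in the
    whitened space, and keeps n_pairs filters from each end of the
    eigenvalue spectrum (the most discriminative ones).
    """
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T   # whitening transform
    s, u = np.linalg.eigh(W @ cov_a @ W)
    u = u[:, np.argsort(s)[::-1]]                         # descending eigenvalues
    filters = u.T @ W
    keep = list(range(n_pairs)) + list(range(-n_pairs, 0))
    return filters[keep]

def mi_features(eeg, filters):
    """Log-variance of the CSP-projected signals, the usual MI feature vector."""
    z = filters @ eeg
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The class covariances would be estimated from labelled calibration trials recorded before use.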
In step S4, specifically, the destination is selected through a P300 brain-computer interface by the following steps in order:
(1) The user first has 20 seconds to identify, in the graphical user interface, the number of the destination he wants to select.
(2) After 20 seconds, the P300 GUI appears on the screen, where the number of each flashing key matches the number in the corresponding solid circle in the graphical user interface.
(3) Using the on-screen P300 brain-computer interface, the user selects the destination by gazing at the flashing key with the corresponding number.
(4) After selecting a destination, the user continues to gaze at the flashing key 'O/S' for further verification; otherwise, the user gazes at the flashing key 'Delete' to reject the previous selection and select a destination again.
The P300 detection algorithm is as follows:
(1) The EEG signal is band-pass filtered at 0.1-20 Hz and down-sampled to 5 Hz.
(2) For each flashing key in the P300 GUI, a segment of EEG (the 600 ms following the flash) is extracted from each channel to form a vector, and the vectors from all channels are combined into the feature vector.
(3) The SVM classifier is applied to these feature vectors to obtain a value for each of the 40 flashing keys.
(4) After 4 rounds, the SVM values for each key are summed and the largest and second-largest sums are found; if the difference between them exceeds a threshold, the flashing key with the largest sum is output; otherwise detection continues for the next 4 rounds until the threshold condition is met. One round is defined as every flashing key having flashed once in random order.
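Step (4)'s accumulate-and-threshold rule can be sketched as below. Accumulating scores cumulatively across successive groups of 4 rounds is an assumption (the text only says detection continues until the threshold condition is met), and the per-round classifier scores are taken as given inputs:

```python
import numpy as np

def p300_decision(scores_per_round, threshold):
    """Pick a flashing key from per-round classifier scores.

    scores_per_round: (n_rounds, n_keys) array, one score per key per round.
    Every 4 rounds the accumulated sums are checked: if the best key beats
    the runner-up by more than `threshold`, that key's index is returned.
    """
    totals = np.zeros(scores_per_round.shape[1])
    for r, scores in enumerate(scores_per_round, start=1):
        totals += scores
        if r % 4 == 0:
            order = np.argsort(totals)[::-1]
            if totals[order[0]] - totals[order[1]] > threshold:
                return int(order[0])       # threshold condition met
    return None                            # no key was confident enough
```

Returning `None` models the "keep flashing" state in which no selection is emitted yet.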
The intelligent wheelchair control method further includes: while the wheelchair is moving, if the user wants to stop the wheelchair and change the destination, a stop command can be sent to the wheelchair through the motor-imagery- or P300-based brain-computer interface, as follows:
(1) Stopping the wheelchair through the motor-imagery BCI: while the wheelchair is moving, if left-hand motor imagery lasting more than 3 seconds exceeds the preset threshold, then on the one hand the BCI system sends a stop command directly to the wheelchair controller, and on the other hand the onboard computer displays the user interface for destination selection.
(2) Stopping the wheelchair through the P300-based BCI: while the wheelchair is moving, the user simply gazes at the flashing key 'O/S' in Fig. 3; once the BCI system detects the P300 corresponding to 'O/S', then on the one hand it sends a stop command directly to the wheelchair controller, and on the other hand the onboard computer displays the user interface for destination selection so that the user can choose a new destination.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The wheelchair system introduces the concept of shared control, fully exploiting both human intelligence and the precise control capability of automatic driving, and letting each control different aspects so that the two complement each other. The automatic navigation system locates obstacles in real time from the obstacle information fully sensed by the sensor (a webcam fixed on the wall). From the obstacles' positions in the room, candidate destinations for the user and waypoints for path planning are generated automatically. The user selects a destination through a motor-imagery- or P300-based BCI; the navigation system then plans the shortest and safest path and automatically navigates the wheelchair to the selected destination. While the wheelchair travels to the destination, the user can send a stop command through the BCI. Each navigation task only requires the user to select a destination through the BCI before the wheelchair starts; the automatic navigation system then drives the wheelchair to the selected destination without any further commands from the user during navigation. Compared with other inventions, our system therefore greatly reduces the user's mental burden.
2. The wheelchair's path is generated automatically from the current environment rather than predefined offline, so our system adapts better to changing environments.
Brief Description of the Drawings
Fig. 1 is the application interface of the intelligent wheelchair control method based on a brain-computer interface and automatic driving technology according to the present invention;
Fig. 2 is the graphical user interface (GUI) for motor-imagery-based destination selection of the method of Fig. 1;
Fig. 3 is the graphical user interface (GUI) for P300-based destination selection of the method of Fig. 1;
Fig. 4 is the system block diagram of the method of Fig. 1;
Fig. 5 is the flowchart of the wheelchair self-localization algorithm of the method of Fig. 1.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the embodiments and drawings, but the embodiments of the invention are not limited thereto.
Embodiment 1
As shown in Figs. 1-5, an intelligent wheelchair control method based on a brain-computer interface and automatic driving technology comprises the following steps in order:
S1. Acquire the current image from a webcam fixed on the wall and locate obstacles in the acquired image using image-processing methods; obstacle localization is accomplished by the following steps in order:
(1) Segment the obstacles from the floor in the image using threshold segmentation.
(2) Remove noise with a morphological opening, then rebuild the regions removed by the opening with a morphological closing, thereby obtaining the contour of each segmented region.
(3) Remove the smaller contours for further denoising, then fit the remaining contours with convex hulls.
(4) Map the convex-hull vertices to the global coordinate system, i.e. the ground-plane coordinate system, using a correspondence matrix, which represents the mapping between the image pixel coordinate system and the ground-plane coordinate system.
(5) Compute the intersection, in the global coordinate system, of the convex hulls obtained from the individual images; the obstacles' positions in the coordinate system can be approximated by these intersection regions.
S2. Generate candidate destinations and waypoints for path planning from the obstacle information.
S3. As shown in Fig. 5, perform self-localization of the wheelchair. Self-localization is of two kinds, initial localization and in-process localization, and specifically comprises the following steps in order:
S31. Initial localization: (1) From the range points acquired by the lidar, extract straight lines with a least-squares fitting algorithm, and convert the extracted lines into vectors carrying direction information according to the lidar's scan direction. (2) Match the extracted vectors against the vectors in the environment map and compute the wheelchair's current position from the matched vector pairs.
S32. In-process localization: (1) From the wheelchair's position at the previous instant and the odometry data, dead-reckon its position at the next instant, and transform the coordinates of the vectors acquired by the lidar according to the dead-reckoned position. (2) Match the transformed vectors against the vectors in the environment map and compute the wheelchair's position at the current instant from the matched vector pairs.
S4. The user selects a destination through the brain-computer interface:
First method: as shown in Fig. 2, select the destination through the motor-imagery BCI by the following steps in order:
(1) The candidate destinations are shown as light and dark solid circles, the two colors representing two different classes of destination.
(2) To select a light/dark destination, the user performs at least 2 seconds of left/right-hand motor imagery according to the color of the horizontal bar in the GUI. When the BCI system detects left/right-hand motor imagery, the light/dark destinations are retained in the GUI and further divided into two classes, again distinguished by light and dark colors, while the other destinations disappear from the GUI.
(3) The user repeats this selection process until only one destination remains, and finally performs another 2 seconds of left/right-hand motor imagery to accept/reject the selected destination.
The motor-imagery detection algorithm is as follows:
(1) Extract 200 ms of EEG signal and apply common average reference (CAR) filtering and 8-30 Hz band-pass filtering.
(2) Project the filtered EEG signal with common spatial patterns (CSP) to form the feature vector.
(3) Input the feature vector to an SVM classifier to obtain the predicted class and the corresponding SVM output value; if the SVM output exceeds a set threshold, the corresponding class is taken as the output.
Second method: as shown in Fig. 3, select the destination through the P300 BCI by the following steps in order:
(1) The user first has 20 seconds to identify, in the graphical user interface, the number of the destination he wants to select.
(2) After 20 seconds, the P300 GUI (Fig. 3) appears on the screen, where the number of each flashing key matches the number in the corresponding solid circle in the graphical user interface (Fig. 1).
(3) Using the on-screen P300 BCI shown in Fig. 3, the user selects the destination by gazing at the flashing key with the corresponding number.
(4) After selecting a destination, the user continues to gaze at the flashing key 'O/S' for further verification; otherwise, the user gazes at the flashing key 'Delete' to reject the previous selection and select a destination again.
The P300 detection algorithm is as follows:
(1) The EEG signal is band-pass filtered at 0.1-20 Hz and down-sampled to 5 Hz.
(2) For each flashing key in the P300 GUI, a segment of EEG (the 600 ms following the flash) is extracted from each channel to form a vector, and the vectors from all channels are combined into the feature vector.
(3) The SVM classifier is applied to these feature vectors to obtain a value for each of the 40 flashing keys.
(4) After 4 rounds, the SVM values for each key are summed and the largest and second-largest sums are found; if the difference between them exceeds a threshold, the flashing key with the largest sum is output; otherwise detection continues for the next 4 rounds until the threshold condition is met. One round is defined as every flashing key having flashed once in random order.
S5. Taking the wheelchair's current position as the start point and the user-selected destination as the end point, plan a shortest optimal path with the A* algorithm over the waypoints generated after obstacle localization.
S6. After obtaining the optimal path, compute the position difference between the wheelchair's current position and the optimal path, feed this difference back to a PID path-tracking algorithm, and have the algorithm compute reference angular and linear velocities.
S7. Input the reference angular and linear velocities to a PID motion controller, obtain odometry data from the odometers fixed to the wheelchair's left and right wheels, convert the odometry data to current angular and linear velocities, and feed these back to the PID motion controller so as to adjust the wheelchair's control signals and drive the wheelchair to the destination in real time.
S8. If the user wants to stop the wheelchair and change the destination, a stop command can be sent to the wheelchair through the motor-imagery- or P300-based BCI, as follows:
(1) Stopping the wheelchair through the motor-imagery BCI: while the wheelchair is moving, if left-hand motor imagery lasting more than 3 seconds exceeds the preset threshold, then on the one hand the BCI system sends a stop command directly to the wheelchair controller, and on the other hand the onboard computer displays the user interface for destination selection.
(2) Stopping the wheelchair through the P300-based BCI: while the wheelchair is moving, the user simply gazes at the flashing key 'O/S' in Fig. 3; once the BCI system detects the P300 corresponding to 'O/S', then on the one hand it sends a stop command directly to the wheelchair controller, and on the other hand the onboard computer displays the user interface for destination selection so that the user can choose a new destination.
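The A* planning of step S5 over a waypoint graph can be sketched as follows; the dictionary graph representation and the Euclidean (straight-line) heuristic are illustrative choices, not details disclosed in the patent:

```python
import heapq
import math

def a_star(coords, neighbours, start, goal):
    """Shortest path on a waypoint graph.

    coords: {waypoint: (x, y)}; neighbours: {waypoint: [adjacent waypoints]}.
    Uses straight-line distance to the goal as an admissible heuristic.
    """
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), 0.0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in neighbours[node]:
            (x1, y1), (x2, y2) = coords[node], coords[nb]
            ng = g + math.hypot(x2 - x1, y2 - y1)
            if ng < best_g.get(nb, float("inf")):
                best_g[nb] = ng
                heapq.heappush(frontier, (ng + h(nb), ng, nb, path + [nb]))
    return None                                    # goal unreachable
```

The start node would be the waypoint nearest the wheelchair's self-localized position and the goal the user-selected destination.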
Embodiment 2
The present invention is described below through a more specific embodiment:
EEG signals are acquired through an electrode cap worn on the user's head.
The acquired EEG data are transmitted to the onboard computer for real-time processing. At the same time, a SICK LMS111 lidar fixed at the front of the wheelchair transmits data to the onboard computer in real time over a TCP network for wheelchair self-localization. Odometers fixed to the wheelchair's left and right driving wheels transmit real-time data over a serial port; the data are converted to linear and angular velocities and used as feedback for the PID controller to adjust the wheelchair's current speed in real time.
A webcam fixed on the wall of the room is connected to the onboard computer over a wireless network; the onboard computer controls whether the current image data are transmitted, applies image processing to the transmitted images, and segments the obstacles in the room from the floor in order to locate the obstacles in the room.
After obstacle localization, the automatic navigation system automatically generates destinations for the user to choose from; these destinations are distributed around the obstacles and evenly over the free space at 1-metre intervals. A generalized Voronoi diagram is constructed from the distribution of obstacles in the room, and its edges are used as the paths along which the wheelchair may travel; paths formed in this way stay as far as possible from the obstacles on either side and are therefore the safest for navigation. Waypoints are extracted every 0.2 metres along the Voronoi edges, and the coordinates of each waypoint together with the adjacency between waypoints are input to the path-planning module. Once the user selects a destination, the path-planning module plans a shortest path from the wheelchair's current position, the destination's position and the waypoint information.
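The waypoint extraction just described (a point every 0.2 m along each Voronoi edge) can be sketched as below; constructing the generalized Voronoi diagram itself is omitted, and treating each edge as a straight segment is our simplifying assumption:

```python
import numpy as np

def sample_waypoints(edge_start, edge_end, spacing=0.2):
    """Place waypoints every `spacing` metres along one straight Voronoi edge."""
    a = np.asarray(edge_start, dtype=float)
    b = np.asarray(edge_end, dtype=float)
    length = float(np.linalg.norm(b - a))
    # number of whole spacings that fit; the epsilon tolerates round-off
    n = int(length / spacing + 1e-9)
    ts = np.arange(n + 1) * spacing / length
    return a + ts[:, None] * (b - a)       # includes the edge's start point
```

Consecutive waypoints along an edge are mutually adjacent, which is the adjacency information passed to the path-planning module.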
The path-tracking module computes the reference linear and angular velocities from the wheelchair's current position and the planned path. Considering safety and comfort while driving the wheelchair, the linear velocity is fixed at 0.2 m/s and the angular velocity does not exceed 0.6 rad/s. The reference linear and angular velocities are sent to the motion-control module (the PID controller), which uses the collected odometry as feedback on the current speed and controls the wheelchair to the destination in real time.
The above embodiments are preferred embodiments of the present invention, but the embodiments of the invention are not limited to them; any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the invention shall be an equivalent replacement and is included within the scope of protection of the invention.
Claims (8)
- An intelligent wheelchair control method based on a brain-computer interface and automatic driving technology, characterized by comprising the following steps in order: S1. acquiring the current image from a webcam fixed on the wall and locating obstacles in the acquired image using image-processing methods; S2. generating candidate destinations and waypoints for path planning from the obstacle information; S3. performing self-localization of the wheelchair; S4. the user selecting a destination through the brain-computer interface; S5. taking the wheelchair's current position as the start point and the user-selected destination as the end point, planning a shortest optimal path with the A* algorithm over the waypoints generated after obstacle localization; S6. after obtaining the optimal path, computing the position difference between the wheelchair's current position and the optimal path, feeding this difference back to a PID path-tracking algorithm, and having the PID path-tracking algorithm compute reference angular and linear velocities; S7. inputting the reference angular and linear velocities to a PID motion controller, obtaining odometry data from the odometers fixed to the wheelchair's left and right wheels, converting the odometry data to current angular and linear velocities, and feeding these back to the PID motion controller so as to adjust the wheelchair's control signals and drive the wheelchair to the destination in real time.
- The intelligent wheelchair control method according to claim 1, characterized in that in step S1 obstacle localization is accomplished by the following steps in order: (1) segmenting the obstacles from the floor in the image using threshold segmentation; (2) removing noise with a morphological opening and rebuilding the regions removed by the opening with a morphological closing, thereby obtaining the contour of each segmented region; (3) removing the smaller contours for further denoising, then fitting the remaining contours with convex hulls; (4) mapping the convex-hull vertices to the global coordinate system, i.e. the ground-plane coordinate system, using a correspondence matrix, which represents the mapping between the image pixel coordinate system and the ground-plane coordinate system; (5) computing the intersection, in the global coordinate system, of the convex hulls obtained from the individual images, the obstacles' positions in the coordinate system being approximated by these intersection regions.
- The intelligent wheelchair control method according to claim 1, characterized in that in step S3 self-localization of the wheelchair comprises the following steps in order: A. initial localization: (1) from the range points acquired by the lidar, extracting straight lines with a least-squares fitting algorithm and converting the extracted lines into vectors carrying direction information according to the lidar's scan direction; (2) matching the extracted vectors against the vectors in the environment map and computing the wheelchair's current position from the matched vector pairs; B. in-process localization: (1) from the wheelchair's position at the previous instant and the odometry data, dead-reckoning its position at the next instant and transforming the coordinates of the vectors acquired by the lidar according to the dead-reckoned position; (2) matching the transformed vectors against the vectors in the environment map and computing the wheelchair's position at the current instant from the matched vector pairs.
- The intelligent wheelchair control method according to claim 1, characterized in that step S4 specifically selects the destination through a motor-imagery brain-computer interface by the following steps in order: (1) the candidate destinations are shown as light and dark solid circles, the two colors representing two different classes of destination; (2) to select a light/dark destination, the user performs at least 2 seconds of left/right-hand motor imagery according to the color of the horizontal bar in the graphical user interface; when the brain-computer interface system detects left/right-hand motor imagery, the light/dark destinations are retained in the GUI and further divided into two classes distinguished by light and dark colors, while the other destinations disappear from the GUI; (3) the user repeats this selection process until only one destination remains, and finally performs another 2 seconds of left/right-hand motor imagery to accept/reject the selected destination.
- The intelligent wheelchair control method according to claim 4, characterized in that the motor-imagery detection algorithm is as follows: (1) extracting 200 ms of EEG signal and applying common-average-reference filtering and 8-30 Hz band-pass filtering; (2) projecting the filtered EEG signal with common spatial patterns to form the feature vector; (3) inputting the feature vector to an SVM classifier to obtain the predicted class and the corresponding SVM output value, the corresponding class being taken as the output if the SVM output exceeds a set threshold.
- The intelligent wheelchair control method according to claim 1, characterized in that step S4 specifically selects the destination through a P300 brain-computer interface by the following steps in order: (1) the user first has 20 seconds to identify, in the graphical user interface, the number of the destination he wants to select; (2) after 20 seconds the P300 GUI appears on the screen, where the number of each flashing key matches the number in the corresponding solid circle in the graphical user interface; (3) using the on-screen P300 brain-computer interface, the user selects the destination by gazing at the flashing key with the corresponding number; (4) after selecting a destination, the user continues to gaze at the flashing key 'O/S' for further verification; otherwise the user gazes at the flashing key 'Delete' to reject the previous selection and select a destination again.
- The intelligent wheelchair control method according to claim 6, characterized in that the P300 detection algorithm is as follows: (1) the EEG signal is band-pass filtered at 0.1-20 Hz and down-sampled to 5 Hz; (2) for each flashing key in the P300 GUI, a segment of EEG (the 600 ms following the flash) is extracted from each channel to form a vector, and the vectors from all channels are combined into the feature vector; (3) the SVM classifier is applied to these feature vectors to obtain a value for each of the 40 flashing keys; (4) after 4 rounds, the SVM values for each key are summed and the largest and second-largest sums are found; if the difference between them exceeds a threshold, the flashing key with the largest sum is output; otherwise detection continues for the next 4 rounds until the threshold condition is met, one round being defined as every flashing key having flashed once in random order.
- The intelligent wheelchair control method according to claim 1, characterized by further comprising: while the wheelchair is moving, if the user wants to stop the wheelchair and change the destination, sending a stop command to the wheelchair through the motor-imagery- or P300-based brain-computer interface, as follows: (1) stopping the wheelchair through the motor-imagery brain-computer interface: while the wheelchair is moving, if left-hand motor imagery lasting more than 3 seconds exceeds the preset threshold, the brain-computer interface system sends a stop command directly to the wheelchair controller, and the onboard computer displays the user interface for destination selection; (2) stopping the wheelchair through the P300-based brain-computer interface: while the wheelchair is moving, the user gazes at the flashing key 'O/S' in Fig. 3; once the brain-computer interface system detects the P300 corresponding to 'O/S', it sends a stop command directly to the wheelchair controller, and the onboard computer displays the user interface for destination selection so that the user can choose a new destination.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/380,047 US20170095383A1 (en) | 2014-06-17 | 2016-12-15 | Intelligent wheel chair control method based on brain computer interface and automatic driving technology |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410269902.5A CN104083258B (zh) | 2014-06-17 | 2014-06-17 | 一种基于脑机接口与自动驾驶技术的智能轮椅控制方法 |
CN201410269902.5 | 2014-06-17 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/380,047 Continuation-In-Part US20170095383A1 (en) | 2014-06-17 | 2016-12-15 | Intelligent wheel chair control method based on brain computer interface and automatic driving technology |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015192610A1 true WO2015192610A1 (zh) | 2015-12-23 |
Family
ID=51631095
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2014/093071 WO2015192610A1 (zh) | 2014-06-17 | 2014-12-04 | 一种基于脑机接口与自动驾驶技术的智能轮椅控制方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20170095383A1 (zh) |
CN (1) | CN104083258B (zh) |
WO (1) | WO2015192610A1 (zh) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016119729A1 (de) | 2016-10-17 | 2018-04-19 | Connaught Electronics Ltd. | Steuern eines Personenbeförderungsfahrzeugs mit Rundumsichtkamerasystem |
CN111338482A (zh) * | 2020-03-04 | 2020-06-26 | 太原理工大学 | 一种基于监督自编码的脑控字符拼写识别方法及系统 |
GB2557688B (en) * | 2016-12-15 | 2021-03-24 | Ford Global Tech Llc | Navigation method and system |
CN113085851A (zh) * | 2021-03-09 | 2021-07-09 | 傅玥 | 一种动态自适应ssvep脑机接口的实时驾驶避障系统及方法 |
CN114652532A (zh) * | 2022-02-21 | 2022-06-24 | 华南理工大学 | 基于ssvep与注意力检测的多功能脑控轮椅系统 |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104083258B (zh) * | 2014-06-17 | 2016-10-05 | 华南理工大学 | 一种基于脑机接口与自动驾驶技术的智能轮椅控制方法 |
CN104799984B (zh) * | 2015-05-14 | 2017-01-25 | 华东理工大学 | 基于脑控移动眼的残疾人辅助系统及其控制方法 |
US20170010619A1 (en) * | 2015-07-08 | 2017-01-12 | Cnh Industrial America Llc | Automation kit for an agricultural vehicle |
CN105468138A (zh) * | 2015-07-15 | 2016-04-06 | 武汉理工大学 | 基于脑机接口技术与激光雷达的智能车辆避障导航方法 |
EP4043982B1 (en) * | 2016-04-14 | 2023-11-15 | DEKA Products Limited Partnership | User control device from a transporter |
CN106020470B (zh) * | 2016-05-18 | 2019-05-14 | 华南理工大学 | 基于脑机接口的自适应家居环境控制装置及其控制方法 |
CN106726209B (zh) * | 2016-11-24 | 2018-08-14 | 中国医学科学院生物医学工程研究所 | 一种基于脑机接口与人工智能的智能轮椅控制方法 |
CN109906069B (zh) * | 2017-01-22 | 2021-12-31 | 四川金瑞麒智能科学技术有限公司 | 一种具有医疗监测及反应功能的智能轮椅系统 |
CN110234306B (zh) * | 2017-04-26 | 2021-10-15 | 深圳市元征科技股份有限公司 | 轮椅的控制方法及装置 |
CN107174418A (zh) * | 2017-06-28 | 2017-09-19 | 歌尔股份有限公司 | 一种智能轮椅及其控制方法 |
CN107440848B (zh) * | 2017-08-03 | 2019-04-02 | 宁波市智能制造产业研究院 | 基于意念的医疗转运床控制系统 |
CN107714331B (zh) * | 2017-09-13 | 2019-06-14 | 西安交通大学 | 基于视觉诱发脑机接口的智能轮椅控制及路径优化方法 |
CN107553491A (zh) * | 2017-09-15 | 2018-01-09 | 华南理工大学 | 一种脑控轮椅机械臂 |
CN107669416B (zh) * | 2017-09-30 | 2023-05-02 | 五邑大学 | 基于持续-轻快运动想象神经解码的轮椅系统及控制方法 |
WO2019127261A1 (zh) * | 2017-12-28 | 2019-07-04 | 四川金瑞麒智能科学技术有限公司 | 一种智能轮椅自动驾驶方法、系统及计算机可读介质 |
WO2019127368A1 (zh) * | 2017-12-29 | 2019-07-04 | 四川金瑞麒智能科学技术有限公司 | 一种智能轮椅系统 |
CN108536154A (zh) * | 2018-05-14 | 2018-09-14 | 重庆师范大学 | 基于生物电信号控制的低速自动驾驶智能轮椅构建方法 |
KR102094863B1 (ko) * | 2018-05-28 | 2020-03-30 | 한국과학기술연구원 | 입력 지연시간을 보상하는 이동로봇 제어 장치 및 방법 |
CN109106522A (zh) * | 2018-06-14 | 2019-01-01 | 深圳市尚荣医用工程有限公司 | 语音控制自动规划路径智能轮椅系统 |
CN109009887A (zh) * | 2018-07-17 | 2018-12-18 | 东北大学 | 一种基于脑机接口的人机交互式导航系统及方法 |
CN109240282A (zh) * | 2018-07-30 | 2019-01-18 | 王杰瑞 | 一种可操纵智能医疗机器人 |
CN109765885A (zh) * | 2018-11-21 | 2019-05-17 | 深圳市迈康信医用机器人有限公司 | 轮椅室内自动驾驶的方法及其系统 |
CN109448835A (zh) * | 2018-12-07 | 2019-03-08 | 西安科技大学 | 一种残疾人生活脑控辅助系统及方法 |
CN109646198B (zh) * | 2019-02-19 | 2021-09-03 | 西安科技大学 | 一种基于视觉跟踪的电动轮椅控制方法 |
CN109966064B (zh) * | 2019-04-04 | 2021-02-19 | 北京理工大学 | 带有侦查装置的融合脑控与自动驾驶的轮椅及控制方法 |
EP3956747A1 (en) | 2019-04-19 | 2022-02-23 | Toyota Motor Europe | Neural menu navigator and navigation methods |
US11786694B2 (en) | 2019-05-24 | 2023-10-17 | NeuroLight, Inc. | Device, method, and app for facilitating sleep |
CN110209073A (zh) * | 2019-05-31 | 2019-09-06 | 湖南大佳数据科技有限公司 | 基于增强现实的脑机交互载人移动平台系统 |
CN110119152A (zh) * | 2019-06-15 | 2019-08-13 | 大连亿斯德环境科技有限公司 | 一种多功能智能轮椅控制系统及相应的控制方法 |
CN110675950B (zh) * | 2019-08-29 | 2022-08-23 | 江苏大学 | 一种基于物联网云平台的瘫痪病人智能看护系统 |
CN110687929B (zh) * | 2019-10-10 | 2022-08-12 | 辽宁科技大学 | 基于单目视觉与运动想象的飞行器三维空间目标搜索系统 |
CN111367295A (zh) * | 2020-03-26 | 2020-07-03 | 华南理工大学 | 一种智能轮椅床的导航与避障系统及方法 |
CN111880656B (zh) * | 2020-07-28 | 2023-04-07 | 中国人民解放军国防科技大学 | 一种基于p300信号的智能脑控系统及康复设备 |
US12048655B2 (en) | 2020-09-03 | 2024-07-30 | The Board Of Trustees Of The University Of Illinois | Low-profile and high-load ball-balancing rolling system |
EP3967285A1 (fr) * | 2020-09-14 | 2022-03-16 | Tridon de Rey, Hubert | Appareillage mobile pour handicapes moteurs et systeme de guidage impliquant un tel appareillage |
CN112148011B (zh) * | 2020-09-24 | 2022-04-15 | 东南大学 | 一种未知环境下脑电移动机器人共享控制方法 |
CN112451229B (zh) * | 2020-12-09 | 2022-07-22 | 北京云迹科技股份有限公司 | 智能轮椅的出行方法及装置 |
TWI825374B (zh) * | 2020-12-22 | 2023-12-11 | 緯創資通股份有限公司 | 移動式輔具及其障礙克服方法 |
CN112947455B (zh) * | 2021-02-25 | 2023-05-02 | 复旦大学 | 一种基于视觉脑机交互的高速公路自动驾驶系统和方法 |
CN112914865B (zh) * | 2021-03-22 | 2022-06-03 | 华南理工大学 | 一种基于脑机接口的连续转向控制方法 |
CN113311823B (zh) * | 2021-04-07 | 2023-01-17 | 西北工业大学 | 一种结合脑机接口技术及orb_slam导航的移动机器人控制新方法 |
CN113138668B (zh) * | 2021-04-25 | 2023-07-18 | 清华大学 | 自动驾驶轮椅目的地选择方法、装置及系统 |
CN113288611B (zh) * | 2021-05-17 | 2022-08-23 | 安徽金百合医疗器械有限公司 | 一种基于电动轮椅行进场景的操作安全保障方法和系统 |
US20220390950A1 (en) * | 2021-06-04 | 2022-12-08 | Boston Dynamics, Inc. | Directed exploration for navigation in dynamic environments |
CN113320617B (zh) * | 2021-07-09 | 2023-05-26 | 北京优时科技有限公司 | 一种六轮差速转速控制方法和一种六轮差速转速控制装置 |
CN113888584A (zh) * | 2021-08-04 | 2022-01-04 | 北京化工大学 | 基于全方位视觉的机器人轮椅跟踪系统及控制方法 |
CN113925738A (zh) * | 2021-08-31 | 2022-01-14 | 广州医科大学附属脑科医院 | 一种手部手指神经康复训练方法及装置 |
CN113760094A (zh) * | 2021-09-09 | 2021-12-07 | 成都视海芯图微电子有限公司 | 一种基于脑机接口控制与交互的肢体运动辅助方法及系统 |
CN114312819B (zh) * | 2022-03-09 | 2022-06-28 | 武汉理工大学 | 基于胶囊神经网络的脑启发式自动驾驶辅助系统及方法 |
CN115192045B (zh) * | 2022-09-16 | 2023-01-31 | 季华实验室 | 目的地识别/轮椅控制方法、装置、电子设备及存储介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004092858A1 (ja) * | 2003-04-18 | 2004-10-28 | Miksy Limited Corporation | Automatically travelling wheelchair, wheelchair automatic travelling system, and wheelchair automatic travelling method |
CN101897640A (zh) * | 2010-08-10 | 2010-12-01 | 北京师范大学 | Novel intelligent wheelchair system controlled by motor-imagery EEG |
US20110152710A1 (en) * | 2009-12-23 | 2011-06-23 | Korea Advanced Institute Of Science And Technology | Adaptive brain-computer interface device |
CN102188311A (zh) * | 2010-12-09 | 2011-09-21 | 南昌大学 | Embedded intelligent wheelchair visual navigation control system and method |
CN102309380A (zh) * | 2011-09-13 | 2012-01-11 | 华南理工大学 | Intelligent wheelchair based on a multimodal brain-computer interface |
CN102331782A (zh) * | 2011-07-13 | 2012-01-25 | 华南理工大学 | Automatic vehicle control method using a multimodal brain-computer interface |
CN102520723A (zh) * | 2011-12-28 | 2012-06-27 | 天津理工大学 | Wheelchair indoor global video surveillance navigation system based on a suspended wireless transmission camera |
CN104083258A (zh) * | 2014-06-17 | 2014-10-08 | 华南理工大学 | Intelligent wheelchair control method based on brain-computer interface and automatic driving technology |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5999866A (en) * | 1996-11-05 | 1999-12-07 | Carnegie Mellon University | Infrastructure independent position determining system |
US7912633B1 (en) * | 2005-12-01 | 2011-03-22 | Adept Mobilerobots Llc | Mobile autonomous updating of GIS maps |
US8323189B2 (en) * | 2006-05-12 | 2012-12-04 | Bao Tran | Health monitoring appliance |
US8139109B2 (en) * | 2006-06-19 | 2012-03-20 | Oshkosh Corporation | Vision system for an autonomous vehicle |
US8577126B2 (en) * | 2007-04-11 | 2013-11-05 | Irobot Corporation | System and method for cooperative remote vehicle behavior |
CN101301244A (zh) * | 2008-06-18 | 2008-11-12 | 天津大学 | 基于脑-机接口的智能轮椅控制系统及其脑电信号处理方法 |
US20100094534A1 (en) * | 2008-10-13 | 2010-04-15 | International Business Machines Corporation | Electronic map routes based on route preferences |
CN101976115B (zh) * | 2010-10-15 | 2011-12-28 | 华南理工大学 | 一种基于运动想象与p300脑电电位的功能键选择方法 |
CN103472922A (zh) * | 2013-09-23 | 2013-12-25 | 北京理工大学 | 一种基于p300与ssvep混合式脑机接口的目的地选择系统 |
CN103705352A (zh) * | 2013-12-27 | 2014-04-09 | 南京升泰元机器人科技有限公司 | 基于脑-机接口的智能轮椅及其控制系统和控制方法 |
- 2014-06-17: CN application CN201410269902.5A, patent CN104083258B (zh), status: Active
- 2014-12-04: WO application PCT/CN2014/093071, patent WO2015192610A1 (zh), status: Application Filing
- 2016-12-15: US application US15/380,047, patent US20170095383A1 (en), status: Abandoned
Non-Patent Citations (3)
Title |
---|
LI, PENGHAI.: "Research on new technique for mobile device control with computer vision based BCI.", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY., 15 May 2012 (2012-05-15), ISSN: 1674-022X * |
LIU, TIAN ET AL.: "An assistive system based on ultrasonic sensors for brain-controlled wheelchair to avoid obstacles.", PROCEEDINGS OF THE 31ST CHINESE CONTROL CONFERENCE 2012., 27 July 2012 (2012-07-27), pages 3741 - 3744, XP032289019 * |
WANG, LIJUN.: "Planning and navigation for intelligent wheelchair in dynamic environments.", CHINESE MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY., 15 December 2010 (2010-12-15), XP055245866, ISSN: 1674-0246 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016119729A1 (de) | 2016-10-17 | 2018-04-19 | Connaught Electronics Ltd. | Steuern eines Personenbeförderungsfahrzeugs mit Rundumsichtkamerasystem |
WO2018072908A1 (en) | 2016-10-17 | 2018-04-26 | Connaught Electronics Ltd. | Controlling a vehicle for human transport with a surround view camera system |
GB2557688B (en) * | 2016-12-15 | 2021-03-24 | Ford Global Tech Llc | Navigation method and system |
CN111338482A (zh) * | 2020-03-04 | 2020-06-26 | 太原理工大学 | 一种基于监督自编码的脑控字符拼写识别方法及系统 |
CN111338482B (zh) * | 2020-03-04 | 2023-04-25 | 太原理工大学 | 一种基于监督自编码的脑控字符拼写识别方法及系统 |
CN113085851A (zh) * | 2021-03-09 | 2021-07-09 | 傅玥 | 一种动态自适应ssvep脑机接口的实时驾驶避障系统及方法 |
CN114652532A (zh) * | 2022-02-21 | 2022-06-24 | 华南理工大学 | 基于ssvep与注意力检测的多功能脑控轮椅系统 |
CN114652532B (zh) * | 2022-02-21 | 2023-07-18 | 华南理工大学 | 基于ssvep与注意力检测的多功能脑控轮椅系统 |
Also Published As
Publication number | Publication date |
---|---|
CN104083258B (zh) | 2016-10-05 |
CN104083258A (zh) | 2014-10-08 |
US20170095383A1 (en) | 2017-04-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2015192610A1 (zh) | Intelligent wheelchair control method based on brain-computer interface and automatic driving technology | |
JP6810125B2 (ja) | 仮想現実環境においてナビゲートする方法、システム、および装置 | |
Deng et al. | A bayesian shared control approach for wheelchair robot with brain machine interface | |
Aronson et al. | Eye-hand behavior in human-robot shared manipulation | |
Gautam et al. | Eye movement based electronic wheel chair for physically challenged persons | |
CN106726209B (zh) | Intelligent wheelchair control method based on brain-computer interface and artificial intelligence | |
Ktena et al. | A virtual reality platform for safe evaluation and training of natural gaze-based wheelchair driving | |
US20060281969A1 (en) | System and method for operation without touch by operators | |
Mao et al. | A brain–robot interaction system by fusing human and machine intelligence | |
CN108646915B (zh) | Method and system for controlling a robotic arm to grasp objects by combining three-dimensional gaze tracking and a brain-computer interface | |
Gips et al. | The Camera Mouse: Preliminary investigation of automated visual tracking for computer access | |
Kyrarini et al. | Robot learning of assistive manipulation tasks by demonstration via head gesture-based interface | |
Penaloza et al. | Towards intelligent brain-controlled body augmentation robotic limbs | |
KR20190136962A (ko) | 역각 시각화 장치, 로봇 및 역각 시각화 프로그램 | |
Carlson et al. | Vision-based shared control for a BCI wheelchair | |
Taher et al. | EEG control of an electric wheelchair for disabled persons | |
CN110825216A (zh) | Method and system for driver human-machine interaction while driving | |
Tong et al. | A retrofit eye gaze tracker for the da Vinci and its integration in task execution using the da Vinci Research Kit | |
Alsharif et al. | Gaze gesture-based human robot interface | |
Perez et al. | Robust human machine interface based on head movements applied to Assistive robotics | |
Perez et al. | Robotic wheelchair controlled through a vision-based interface | |
Wang et al. | Continuous shared control for robotic arm reaching driven by a hybrid gaze-brain machine interface | |
Sharma et al. | Eye gaze controlled robotic arm for persons with ssmi | |
Iáñez et al. | Interface based on electrooculography for velocity control of a robot arm | |
JPH09212082A (ja) | Eye-gaze input device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14895274 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14895274 Country of ref document: EP Kind code of ref document: A1 |