WO2022062169A1 - Sharing control method for electroencephalogram mobile robot in unknown environment - Google Patents

Sharing control method for electroencephalogram mobile robot in unknown environment

Info

Publication number
WO2022062169A1
WO2022062169A1 · PCT/CN2020/132580 · CN2020132580W
Authority
WO
WIPO (PCT)
Prior art keywords
eeg
mobile robot
signal
speed control
robot
Prior art date
Application number
PCT/CN2020/132580
Other languages
French (fr)
Chinese (zh)
Inventor
徐宝国
刘德平
王勇
张坤
宋爱国
赵国普
Original Assignee
东南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东南大学 (Southeast University)
Publication of WO2022062169A1 publication Critical patent/WO2022062169A1/en

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Feedback Control In General (AREA)

Abstract

A sharing control method for an electroencephalogram (EEG) mobile robot in an unknown environment. The sharing control method comprises: first collecting motor imagery electroencephalogram signals of a user, preprocessing the signals and performing feature extraction; performing self-adaptive weighted linear summation of the electroencephalogram signals on the left side and the right side to obtain an electroencephalogram-speed control signal; the mobile robot obtaining an autonomous obstacle avoidance-speed control signal according to autonomous path planning; and the mobile robot travelling in the unknown environment under the sharing control of the electroencephalogram-speed control signal and the autonomous obstacle avoidance-speed control signal. With the method, the magnitude of the linear speed of the mobile robot is controlled by means of the electroencephalogram-speed control signal, the direction of the linear speed is controlled by means of the autonomous obstacle avoidance-speed control signal, and the continuous sharing control makes the travelling process more stable.

Description

A shared control method for an EEG mobile robot in an unknown environment

Technical Field
The invention belongs to the technical field of robots, and in particular relates to a shared control method for an EEG mobile robot in an unknown environment.
Background Art
Brain-Computer Interface (BCI) technology is a technology that uses brain physiological signals to output control signals to external devices without going through the neuromuscular pathway. Brain-computer interface technology obtains EEG control signals by analyzing the collected EEG signals, thereby establishing a new control method between the brain and a mobile robot.
The control tasks of mobile robots can be divided into control tasks in known environments and control tasks in unknown environments according to the robot's knowledge of its working environment. A control task in a known environment refers to controlling the robot to complete tasks such as obstacle avoidance and path planning when the environment map or the locations of obstacles in the environment are known. A control task in an unknown environment means that environmental information such as the environment map or the locations of obstacles is unknown, and the robot needs to complete obstacle avoidance, path planning and similar tasks without prior knowledge of the environment. Mobile robot control in an unknown environment requires combining multiple sensors to perceive the environment reasonably, and then performing global and local path planning to complete the task.
In existing research, human-mobile-robot shared control in unknown environments is divided into discrete shared control and continuous shared control. Discrete shared control divides the control process into two stages, EEG control and automatic control: when no obstacle is detected, EEG control is used exclusively, and when an obstacle is detected, the robot is controlled autonomously. The robot uses depth cameras, inertial sensors, lidar and ultrasonic sensors to perceive pose and environmental information, relies on distance information to obstacles, and combines the artificial potential field method, the A* algorithm, the RRT algorithm and the D* algorithm for path planning and navigation. This approach places great demands on the robot's autonomous perception and decision-making capabilities, the human-computer interaction experience is poor, and the perception and decision-making process is often inefficient. Continuous shared control uses a certain strategy to combine the user's EEG control with the robot's autonomous control; for example, the EEG control signal is treated as an attractive force and the autonomous control signal as a repulsive force, and the resultant force is used to control the motion of the mobile robot. This sharing method is relatively complicated and is easily affected by the non-stationarity of the EEG signal. Other approaches use EEG signals to control the direction of the robot's linear velocity, while the magnitude of the linear velocity is set to a fixed value or controlled autonomously; this sharing method is not stable enough, the forward trajectory tends to oscillate, and its practicality is limited.
Summary of the Invention
To solve the above problems, the present invention discloses a shared control method for an EEG mobile robot in an unknown environment. The EEG signal is used to control the magnitude of the linear velocity of the mobile robot, and the autonomous obstacle-avoidance signal is used to control the direction of the linear velocity, thereby realizing human-robot shared obstacle avoidance and path planning and improving the control efficiency of the mobile robot and the human-computer interaction capability.
To achieve the above object, the technical solution of the present invention is as follows:
A shared control method for an EEG mobile robot in an unknown environment, comprising the following steps:
Step 1. Collect the user's motor imagery EEG signals and perform preprocessing and feature extraction;
Step 2. Perform an adaptive-weight linear summation of the left- and right-side EEG signals to obtain the EEG-speed control signal;
Step 3. The mobile robot obtains the autonomous obstacle avoidance-speed control signal from its autonomous path planning;
Step 4. The mobile robot travels in the unknown environment under the shared control of the EEG-speed control signal and the autonomous obstacle avoidance-speed control signal.
Further, step 1 specifically includes:
Step 1a. Use the first lead electrode to collect the motor imagery EEG signal over the left motor sensory cortex on the top of the user's head;
Step 1b. Use the second lead electrode to collect the motor imagery EEG signal over the right motor sensory cortex on the top of the user's head;
Step 1c. Preprocess using band-pass filtering and Laplacian reference filtering:
The Laplacian reference filter is computed as the lead signal minus the weighted mean of the raw signals of the adjacent leads, using the formulas:
w_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)

V_i^LAP = V_i − Σ_{j∈S_i} w_ij · V_j

where d_ij is the distance between electrode i and electrode j, S_i is the set of electrodes within a given distance centered on the i-th electrode, w_ij is the computed weight of the j-th electrode, n is the total number of lead electrodes, V_i is the raw signal collected by a single lead, and V_i^LAP is the lead signal of V_i after Laplacian filtering;
Step 1d. Use an autoregressive (AR) model to extract features from the EEG signals:
The AR model is used to extract features from the motor imagery EEG signals in the preset rhythm band of each channel electrode of the user, using the formula:

V_i^LAP(t) = Σ_{k=1}^{m} a_k · V_i^LAP(t−k) + ε

where V_i^LAP(t) is the estimated voltage signal amplitude at time t, a_k is the k-th AR model parameter, m is the AR model order, V_i^LAP(t−k) is the voltage signal amplitude at time t−k, ε is the estimation error, and n is the total number of lead electrodes.
Further, step 2 specifically includes:
Multiply the amplitude L of the preset frequency band of the EEG signal on the left side of the head and the amplitude R of the preset frequency band of the EEG signal on the right side of the head by the weights w_L and w_R respectively, and take the weighted sum to obtain the EEG-speed control signal C_Vvalue, using the formulas:
C_Vvalue = (M_Vvalue − (M_Vvalue)_min) / ((M_Vvalue)_max − (M_Vvalue)_min)
M_Vvalue = w_L·L + w_R·R + b
where w_L and w_R are the weights of the left- and right-side motor imagery EEG signals, b is a constant offset term, L and R are the left- and right-side motor imagery EEG signals, M_Vvalue is the EEG control coefficient, (M_Vvalue)_max is the maximum value of the EEG control coefficient, (M_Vvalue)_min is the minimum value of the EEG control coefficient, and C_Vvalue is the variation coefficient of the linear-velocity value of the mobile robot: when C_Vvalue is greater than a preset constant value the robot's linear-velocity value increases, and when C_Vvalue is less than the preset constant value the robot's linear-velocity value decreases.
Further, in step 2 the initial values of w_L, w_R and b are all +1.0, and the normalized least mean square (NLMS) algorithm is used to adjust the weights adaptively.
Further, step 3 specifically includes:
Step 3a. Scan the environmental information.
Step 3b. Use the Hector SLAM algorithm for simultaneous localization and mapping:
Define the grid size; each grid cell has three possible states: free, occupied and unknown. A bilinear filtering algorithm is used to estimate the probability that the environmental point-cloud information occupies a grid cell, and the robot pose information is localized.
Step 3c. Use the A* algorithm for path planning:
Take the robot's position (x_n, y_n) as the starting point, and take the boundary point between the free and unknown grid cells that has the shortest Euclidean distance to the robot's position as the target point (x_G, y_G). The heuristic function h(n) adopts the formula:
h(n) = sqrt((x_n − x_G)² + (y_n − y_G)²)
where (x_n, y_n) are the map coordinates of the current point and (x_G, y_G) are the map coordinates of the target point.
Step 3d. Obtain the autonomous obstacle avoidance-speed control signal according to the planned path.
The signal is a unit direction vector whose direction points from each parent node in the optimal path node set generated by the path planning to its next child node; the autonomous obstacle avoidance-speed control signal satisfies the formula:
[Formula image not reproduced in this text version.]
Further, step 4 specifically includes:
The control decision for the linear velocity of the mobile robot while moving forward adopts the formula:
[Formula image not reproduced in this text version.]
where the formula gives the linear velocity of the mobile robot while moving forward and V_max is the preset maximum linear velocity;
While moving forward, when the mobile robot is located at each node of the path planned in step 4a, it performs angle compensation according to the direction of the autonomous obstacle avoidance-speed control signal; at this time only the turning action is performed for the angle compensation, using a preset fixed angular velocity.
The beneficial effects of the present invention are:
1. The present invention can combine human-brain intention control with the robot's autonomous control when the environmental information is unknown, improving the control efficiency and human-computer interaction capability in unknown environments.
2. The present invention uses EEG to control the magnitude of the linear velocity of the mobile robot and uses the path autonomously planned by the mobile robot to control the direction of the linear velocity. Through this shared control method the robot completes obstacle avoidance and path planning in an unknown environment, which improves the obstacle-avoidance efficiency of mobile robots in unknown environments.
Description of the Drawings
FIG. 1 is a system block diagram of the shared control method for an EEG mobile robot in an unknown environment provided by the present invention;
FIG. 2 is a schematic diagram of the user's motor imagery task actions in the present invention;
FIG. 3 is a schematic diagram of the EEG signal collection positions in the present invention;
FIG. 4 is a schematic diagram of the bilinear filtering algorithm in the present invention;
FIG. 5 is a flowchart of the A* path planning algorithm in the present invention.
Detailed Description
The present invention is further clarified below with reference to the accompanying drawings and specific embodiments. It should be understood that the following specific embodiments are only used to illustrate the present invention and not to limit its scope.
This embodiment provides a shared control method for an EEG mobile robot in an unknown environment, including the following steps: (1) collect the user's motor imagery EEG signals and perform preprocessing and feature extraction; (2) perform an adaptive-weight linear summation of the left- and right-side EEG signals to obtain the EEG-speed control signal; (3) the mobile robot obtains the autonomous obstacle avoidance-speed control signal from its autonomous path planning; (4) the mobile robot travels in the unknown environment under the shared control of the EEG-speed control signal and the autonomous obstacle avoidance-speed control signal.
1. Collect the user's motor imagery EEG signals
A video played on the screen the user observes guides the user to generate motor imagery EEG signals. In the two states of the user imagining hand movement and resting, the first lead electrode is used to collect the motor imagery EEG signal over the left motor sensory cortex on the top of the user's head, and the second lead electrode is used to collect the motor imagery EEG signal over the right motor sensory cortex on the top of the user's head.
2. Preprocessing and feature extraction of the user's EEG signals
2.1 Preprocessing using band-pass filtering and Laplacian reference filtering
In this embodiment, an 8-32 Hz band-pass filter is applied to the collected EEG signals to filter out noise introduced during acquisition and improve the signal-to-noise ratio. The band-pass-filtered signal is then filtered with the Laplacian reference method. The Laplacian reference filter is computed as the lead signal minus the weighted mean of the raw signals of the adjacent leads, using the formulas:

w_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)

V_i^LAP = V_i − Σ_{j∈S_i} w_ij · V_j

where d_ij is the distance between electrode i and electrode j, S_i is the set of electrodes within a given distance centered on the i-th electrode, w_ij is the computed weight of the j-th electrode, n is the total number of lead electrodes, V_i is the raw signal collected by a single lead, and V_i^LAP is the lead signal of V_i after Laplacian filtering.
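A minimal Python sketch of this preprocessing chain is given below, assuming an 8-32 Hz Butterworth band-pass and inverse-distance Laplacian weights as described above; the electrode coordinates, sampling rate, filter order and neighbourhood radius are made-up example values rather than parameters from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, low=8.0, high=32.0, order=4):
    """Zero-phase 8-32 Hz band-pass filter applied per channel (channels x samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def laplacian_reference(eeg, positions, radius=0.06):
    """Subtract, from each lead, the distance-weighted mean of its neighbouring leads.

    eeg:       array (n_channels, n_samples) of band-passed signals V_i
    positions: array (n_channels, 2) of electrode coordinates in metres
    radius:    neighbourhood radius defining the set S_i (assumed value)
    """
    out = eeg.copy()
    for i in range(eeg.shape[0]):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = np.where((d > 0) & (d <= radius))[0]
        if neighbours.size == 0:
            continue
        w = 1.0 / d[neighbours]
        w /= w.sum()                            # w_ij = (1/d_ij) / sum_j (1/d_ij)
        out[i] = eeg[i] - w @ eeg[neighbours]   # V_i^LAP = V_i - sum_j w_ij V_j
    return out

# Example with synthetic data: 4 electrodes, 2 s at 250 Hz.
fs = 250
rng = np.random.default_rng(0)
raw = rng.standard_normal((4, 2 * fs))
pos = np.array([[0.00, 0.00], [0.05, 0.00], [0.00, 0.05], [0.05, 0.05]])
filtered = laplacian_reference(bandpass(raw, fs), pos)
```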
2.2 Feature extraction of EEG signals using an autoregressive (AR) model
The AR model is used to extract features from the motor imagery EEG signals in the preset rhythm band of each channel electrode of the user, using the formula:

V_i^LAP(t) = Σ_{k=1}^{m} a_k · V_i^LAP(t−k) + ε

where V_i^LAP(t) is the estimated voltage signal amplitude at time t, a_k is the k-th AR model parameter, m is the AR model order, V_i^LAP(t−k) is the voltage signal amplitude at time t−k, ε is the estimation error, and n is the total number of lead electrodes.
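The AR coefficients a_k can be estimated, for example, by ordinary least squares over lagged samples, as in the sketch below. The model order of 6 and the use of the coefficient vector itself as the feature are illustrative assumptions; the patent only states that an AR model is fitted to the preset rhythm band of each channel.

```python
import numpy as np

def ar_features(signal, order=6):
    """Fit V(t) = sum_k a_k V(t-k) + eps by least squares and return the a_k as features.

    signal: 1-D array holding one Laplacian-filtered channel (V_i^LAP).
    """
    x = np.asarray(signal, dtype=float)
    # Build the lagged design matrix: row for time t -> [V(t-1), ..., V(t-order)].
    rows = [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    X = np.column_stack(rows)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR coefficients a_1..a_m
    return a

# Example: one feature vector per channel of a filtered array as in section 2.1.
rng = np.random.default_rng(1)
filtered = rng.standard_normal((4, 500))        # stand-in for V^LAP (channels x samples)
features = np.array([ar_features(ch) for ch in filtered])
```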
3. Computing the EEG-speed control signal
Multiply the amplitude L of the preset frequency band of the EEG signal on the left side of the head and the amplitude R of the preset frequency band of the EEG signal on the right side of the head by the weights w_L and w_R respectively, and take the weighted sum to obtain the EEG-speed control signal C_Vvalue, using the formulas:
C_Vvalue = (M_Vvalue − (M_Vvalue)_min) / ((M_Vvalue)_max − (M_Vvalue)_min)
M_Vvalue = w_L·L + w_R·R + b
where w_L and w_R are the weights of the left- and right-side motor imagery EEG signals, b is a constant offset term, L and R are the left- and right-side motor imagery EEG signals, M_Vvalue is the EEG control coefficient, (M_Vvalue)_max is the maximum value of the EEG control coefficient, (M_Vvalue)_min is the minimum value of the EEG control coefficient, and C_Vvalue is the variation coefficient of the linear-velocity value of the mobile robot: when C_Vvalue is greater than a preset constant value the robot's linear-velocity value increases, and when C_Vvalue is less than the preset constant value the robot's linear-velocity value decreases.
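A direct transcription of the two formulas above into Python might look like the following sketch, where the running minimum and maximum of M_Vvalue are tracked online and the preset constant against which C_Vvalue is compared is assumed to be 0.5 for illustration.

```python
class EEGSpeedSignal:
    """Track M_Vvalue = w_L*L + w_R*R + b and normalize it to C_Vvalue in [0, 1]."""

    def __init__(self, w_left=1.0, w_right=1.0, bias=1.0):
        self.w_left, self.w_right, self.bias = w_left, w_right, bias
        self.m_min = float("inf")
        self.m_max = float("-inf")

    def update(self, amp_left, amp_right):
        m = self.w_left * amp_left + self.w_right * amp_right + self.bias
        self.m_min = min(self.m_min, m)
        self.m_max = max(self.m_max, m)
        if self.m_max == self.m_min:               # not enough history yet
            return 0.5
        return (m - self.m_min) / (self.m_max - self.m_min)   # C_Vvalue

sig = EEGSpeedSignal()
for L, R in [(1.2, 0.8), (0.9, 1.1), (1.5, 0.7)]:
    c_value = sig.update(L, R)
    speed_up = c_value > 0.5                       # assumed preset constant of 0.5
```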
The initial values of w_L and w_R used in this embodiment are +1.0, and the normalized least mean square (NLMS) algorithm is used to adjust the weights adaptively.
Input the filtered signal into the NLMS adaptive filter; given the weight vector w, compute the output value:

y(k) = w(k)^T V(k)

Compute the estimation error e(k):

e(k) = d(k) − y(k)

Update the weights:

w(k+1) = w(k) + μ · e(k) · V(k) / (α + V(k)^T V(k))

where k is the update index, V(k) is the k-th input signal, α and μ are adjustment factors, and d(k) is the reference signal.
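A compact implementation of this NLMS update is sketched below. The step size μ, the regularization term α and the synthetic reference signal d(k) are assumed example values; arranging V(k) as the left and right band amplitudes plus a constant 1 for the offset term b is likewise an illustrative choice rather than something the patent specifies.

```python
import numpy as np

def nlms(inputs, reference, mu=0.5, alpha=1e-3):
    """Normalized LMS: adapt w so that w^T V(k) tracks the reference d(k).

    inputs:    array (n_updates, n_weights), each row is V(k)
    reference: array (n_updates,) of desired outputs d(k)
    """
    w = np.ones(inputs.shape[1])                 # initial weights +1.0 as in the embodiment
    for V, d in zip(inputs, reference):
        y = w @ V                                # y(k) = w(k)^T V(k)
        e = d - y                                # e(k) = d(k) - y(k)
        w = w + mu * e * V / (alpha + V @ V)     # normalized weight update
    return w

# Example: V(k) = [L, R, 1] so the third weight plays the role of the offset b.
rng = np.random.default_rng(2)
V_hist = np.column_stack([rng.random(200), rng.random(200), np.ones(200)])
d_hist = 0.8 * V_hist[:, 0] + 0.3 * V_hist[:, 1] + 0.1
weights = nlms(V_hist, d_hist)
```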
4. Autonomous path planning of the mobile robot
4.1 Scanning the environmental information
A two-dimensional lidar is used to scan the surrounding environment and obtain environmental information. The angle covered by each lidar frame depends on the resolution; the resolution is set to 360, so each degree corresponds to one value. The two-dimensional lidar acquires environmental information by scanning in a circle on the two-dimensional plane; the lidar information is the distance to the detected obstacle and represents the distance value of the real obstacle.
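Before the ranges are placed into the occupancy grid they are typically converted from the polar form returned by the lidar into Cartesian points. The short sketch below assumes one range value per degree (resolution 360) and a maximum valid range of 10 m; both are example values.

```python
import numpy as np

def scan_to_points(ranges, pose, max_range=10.0):
    """Convert a 360-value lidar scan to world-frame (x, y) points.

    ranges: array of 360 distances, one per degree
    pose:   (x, y, yaw) of the robot in the world frame
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.deg2rad(np.arange(len(ranges)))
    valid = (ranges > 0) & (ranges < max_range)
    r = ranges[valid]
    a = angles[valid] + pose[2]                  # rotate beam angles by the robot heading
    x = pose[0] + r * np.cos(a)
    y = pose[1] + r * np.sin(a)
    return np.column_stack([x, y])

points = scan_to_points(np.full(360, 2.5), pose=(0.0, 0.0, 0.0))
```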
4.2 Simultaneous localization and mapping using the Hector SLAM algorithm
Define the grid size and discretize the actual environment map. The basic principle is: take the coordinates of the robot's position as the center and spread outward in units of the grid size until the virtual grid covers the plane; then place the two-dimensional lidar information into the corresponding grid cells, and represent all the lidar information that falls into a cell by the occupancy probability value of that cell. A bilinear filtering algorithm is used to estimate the probability that the environmental point-cloud information occupies a grid cell; each grid cell has three possible states: free, occupied and unknown.
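The bookkeeping of dropping scan points into the discretized grid can be sketched as follows. The map size, resolution and hit-count threshold are assumed example values, and the free/occupied/unknown probabilistic update actually used by Hector SLAM is not reproduced here.

```python
import numpy as np

class OccupancyGrid:
    """Minimal hit-count grid: each lidar endpoint marks its cell as occupied."""

    def __init__(self, size_m=20.0, resolution=0.05):
        n = int(size_m / resolution)
        self.resolution = resolution
        self.origin = size_m / 2.0                 # robot starts at the grid centre
        self.hits = np.zeros((n, n))               # number of endpoints seen per cell

    def add_points(self, points):
        """points: (N, 2) array of world-frame scan endpoints in metres."""
        ix = ((points[:, 0] + self.origin) / self.resolution).astype(int)
        iy = ((points[:, 1] + self.origin) / self.resolution).astype(int)
        ok = (ix >= 0) & (ix < self.hits.shape[1]) & (iy >= 0) & (iy < self.hits.shape[0])
        np.add.at(self.hits, (iy[ok], ix[ok]), 1.0)  # accumulate hits per grid cell

    def occupied(self, threshold=1):
        """Boolean map of cells considered occupied (threshold is an assumed value)."""
        return self.hits >= threshold

grid = OccupancyGrid()
grid.add_points(np.array([[1.0, 0.0], [1.0, 0.05], [2.5, -0.4]]))
obstacle_map = grid.occupied()
```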
Given a continuous map coordinate P_m with occupancy value M(P_m), the gradient ∇M(P_m) = (∂M/∂x (P_m), ∂M/∂y (P_m)) is approximated with the four surrounding grid points P_ij (i, j = 0, 1) by linear interpolation along the x- and y-axes:

M(P_m) ≈ ((y − y_0)/(y_1 − y_0)) · [ ((x − x_0)/(x_1 − x_0)) · M(P_11) + ((x_1 − x)/(x_1 − x_0)) · M(P_01) ] + ((y_1 − y)/(y_1 − y_0)) · [ ((x − x_0)/(x_1 − x_0)) · M(P_10) + ((x_1 − x)/(x_1 − x_0)) · M(P_00) ]

The partial derivatives can be approximated as:

∂M/∂x (P_m) ≈ ((y − y_0)/(y_1 − y_0)) · (M(P_11) − M(P_01)) + ((y_1 − y)/(y_1 − y_0)) · (M(P_10) − M(P_00))

∂M/∂y (P_m) ≈ ((x − x_0)/(x_1 − x_0)) · (M(P_11) − M(P_10)) + ((x_1 − x)/(x_1 − x_0)) · (M(P_01) − M(P_00))

After the grid map has been built, the grid map is aligned with the two-dimensional lidar information using the Gauss-Newton method. For the robot pose

ξ = (P_x, P_y, ψ)^T

solve for the ξ that satisfies

ξ* = argmin_ξ Σ_{i=1}^{n} [1 − M(S_i(ξ))]²

at which point the laser scan is optimally aligned with the map. Here S_i(ξ) is the global coordinate of the scan endpoint S_i = (S_{i,x}, S_{i,y})^T, i.e. S_i(ξ) expresses the endpoint in the global coordinate system of the robot:

S_i(ξ) = [ cos ψ  −sin ψ ; sin ψ  cos ψ ] · (S_{i,x}, S_{i,y})^T + (P_x, P_y)^T

The function M(S_i(ξ)) returns the map value at S_i(ξ). Given an initial estimate of ξ, the estimate is refined by optimizing the measurement error according to the following equation:

Σ_{i=1}^{n} [1 − M(S_i(ξ + Δξ))]² → min

Taking the first-order Taylor expansion of M(S_i(ξ + Δξ)) gives:

Σ_{i=1}^{n} [1 − M(S_i(ξ)) − ∇M(S_i(ξ)) · (∂S_i(ξ)/∂ξ) · Δξ]²

Setting the partial derivative with respect to Δξ to zero minimizes this expression, and solving for Δξ gives the Gauss-Newton minimization step:

Δξ = H⁻¹ · Σ_{i=1}^{n} [∇M(S_i(ξ)) · (∂S_i(ξ)/∂ξ)]^T · [1 − M(S_i(ξ))]

where:

H = Σ_{i=1}^{n} [∇M(S_i(ξ)) · (∂S_i(ξ)/∂ξ)]^T · [∇M(S_i(ξ)) · (∂S_i(ξ)/∂ξ)]

and ∇M(S_i(ξ)) is the partial derivative (gradient) of the occupancy grid at S_i(ξ), approximated by the bilinear interpolation above.
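The bilinear interpolation of the occupancy value and its gradient, which the scan matcher evaluates at every iteration, can be sketched as follows. The grid resolution and the toy map are illustrative; the log-odds map update and the full Gauss-Newton pose refinement are omitted.

```python
import numpy as np

def interp_map(grid, p, resolution=0.05):
    """Bilinearly interpolate the occupancy value M(P_m) and its gradient.

    grid:       2-D array of cell occupancy values, indexed [iy, ix]
    p:          continuous map coordinate (x, y) in metres
    resolution: cell size in metres (assumed example value)
    """
    gx, gy = p[0] / resolution, p[1] / resolution
    x0, y0 = int(np.floor(gx)), int(np.floor(gy))
    tx, ty = gx - x0, gy - y0                       # position inside the cell, in [0, 1)
    m00, m10 = grid[y0, x0], grid[y0, x0 + 1]       # P_00, P_10
    m01, m11 = grid[y0 + 1, x0], grid[y0 + 1, x0 + 1]   # P_01, P_11
    value = (ty * (tx * m11 + (1 - tx) * m01)
             + (1 - ty) * (tx * m10 + (1 - tx) * m00))
    ddx = (ty * (m11 - m01) + (1 - ty) * (m10 - m00)) / resolution
    ddy = (tx * (m11 - m10) + (1 - tx) * (m01 - m00)) / resolution
    return value, np.array([ddx, ddy])

toy_map = np.zeros((20, 20))
toy_map[8:12, 8:12] = 1.0                           # a small occupied block
m, grad = interp_map(toy_map, (0.47, 0.52))
```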
4.3 Path planning using the A* algorithm
Take the robot's position (x_n, y_n) as the starting point, and take the boundary point between the free and unknown grid cells that has the shortest Euclidean distance to the robot's position as the target point (x_G, y_G). The heuristic function h(n) adopts the formula:
h(n) = sqrt((x_n − x_G)² + (y_n − y_G)²)
where (x_n, y_n) are the map coordinates of the current point and (x_G, y_G) are the map coordinates of the target point.
The cost for the mobile robot to reach each node is then:
f(n) = g(n) + h(n)
where f(n) is the total cost of node (x_n, y_n); when selecting the next node to traverse, the node with the lowest total cost, i.e. the highest priority, is selected. g(n) is the cost of node (x_n, y_n) from the starting point (x_0, y_0).
The flow of the A* algorithm is as follows:
1) Initialize open_set and close_set;
2) Add the starting point to open_set and set its cost value to 0 (highest priority);
3) If open_set is not empty, select the node n with the highest priority from open_set, delete node n from open_set, and add it to close_set;
4) If node n is the end point, trace the parent nodes step by step from the end point back to the starting point, return the resulting path, and the algorithm ends;
If node n is not the end point, check whether node n has an adjacent node m;
5) If the adjacent node m does not exist, jump to step 3);
If the adjacent node m exists, add node m to open_set and jump to step 3).
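A compact grid-based A* that follows this flow is sketched below. It assumes 4-connected motion on a boolean obstacle grid with unit step cost and a Euclidean heuristic, and it records only the first parent found for each neighbour, which is a simplification compared with a production planner.

```python
import heapq, math

def astar(grid, start, goal):
    """A* on a 2-D grid. grid[y][x] == 1 marks an obstacle; returns a list of (x, y)."""
    h = lambda p: math.hypot(p[0] - goal[0], p[1] - goal[1])   # Euclidean heuristic
    open_set = [(h(start), 0.0, start)]                        # heap entries are (f, g, node)
    came_from = {start: None}
    closed = set()
    while open_set:
        f, g, node = heapq.heappop(open_set)                   # node with the lowest total cost f
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                                       # trace parents back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and not grid[ny][nx] and (nx, ny) not in closed):
                if (nx, ny) not in came_from:
                    came_from[(nx, ny)] = node
                heapq.heappush(open_set, (g + 1 + h((nx, ny)), g + 1, (nx, ny)))
    return []                                                  # no path found

demo_grid = [[0] * 6 for _ in range(6)]
demo_grid[2][1:5] = [1, 1, 1, 1]                               # a wall with gaps at both ends
path = astar(demo_grid, (0, 0), (5, 5))
```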
4.4 Computing the autonomous obstacle avoidance-speed control signal
The mobile robot changes its direction of travel according to the autonomously planned path, and this signal controls the direction of the linear velocity. The signal is a unit direction vector whose direction points from each parent node in the optimal path node set generated by the path planning to its next child node; the autonomous obstacle avoidance-speed control signal satisfies the formula:
[Formula image not reproduced in this text version.]
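Converting the planned node sequence into the unit direction vector described above is a small step; the sketch assumes the path is a list of grid coordinates and that the robot currently sits at the first node.

```python
import numpy as np

def direction_from_path(path):
    """Unit vector pointing from the current (parent) path node to the next (child) node."""
    parent, child = np.asarray(path[0], float), np.asarray(path[1], float)
    delta = child - parent
    norm = np.linalg.norm(delta)
    return delta / norm if norm > 0 else np.zeros_like(delta)

direction = direction_from_path([(0, 0), (1, 1), (2, 1)])   # -> array([0.707..., 0.707...])
```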
5. Shared control of the mobile robot by the EEG-speed control signal and the autonomous obstacle avoidance-speed control signal
The control decision for the linear velocity of the mobile robot while moving forward adopts the formula:
[Formula image not reproduced in this text version.]
where the formula gives the linear velocity of the mobile robot while moving forward and V_max is the preset maximum linear velocity;
While moving forward, when the mobile robot is located at each node of the path it has autonomously planned, it performs angle compensation according to the direction of the autonomous obstacle avoidance-speed control signal; at this time only the turning action is performed for the angle compensation, using a preset fixed angular velocity.
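Because the decision formula itself is not reproduced in this text version, the sketch below implements one plausible reading of it under stated assumptions: the linear-velocity magnitude is raised or lowered by a fixed step depending on whether C_Vvalue lies above or below a preset constant (0.5 here) and clamped to [0, V_max], while at each path node the heading is turned toward the planned direction with a preset fixed angular velocity. The step size, threshold and angular velocity are example values, not taken from the patent.

```python
import math

def update_speed(v, c_value, v_max=0.5, step=0.05, threshold=0.5):
    """Raise or lower the linear-velocity magnitude from the EEG coefficient C_Vvalue."""
    if c_value > threshold:
        v += step
    elif c_value < threshold:
        v -= step
    return min(max(v, 0.0), v_max)              # keep the speed within [0, V_max]

def angle_compensation(current_yaw, target_direction, omega=0.6, dt=0.1):
    """Turn toward the planned direction with a preset fixed angular velocity omega."""
    target_yaw = math.atan2(target_direction[1], target_direction[0])
    error = math.atan2(math.sin(target_yaw - current_yaw),
                       math.cos(target_yaw - current_yaw))   # wrap the error to [-pi, pi]
    turn = math.copysign(min(abs(error), omega * dt), error)
    return current_yaw + turn

v, yaw = 0.0, 0.0
for c in (0.7, 0.8, 0.3):                       # example C_Vvalue readings
    v = update_speed(v, c)
    yaw = angle_compensation(yaw, (1.0, 1.0))
```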
The technical means disclosed in the solution of the present invention are not limited to the technical means disclosed in the above embodiments, but also include technical solutions composed of any combination of the above technical features.

Claims (6)

  1. A shared control method for an EEG mobile robot in an unknown environment, characterized by comprising the following steps:
    Step 1. Collect the user's motor imagery EEG signals and perform preprocessing and feature extraction;
    Step 2. Perform an adaptive-weight linear summation of the left- and right-side EEG signals to obtain the EEG-speed control signal;
    Step 3. The mobile robot obtains the autonomous obstacle avoidance-speed control signal from its autonomous path planning;
    Step 4. The mobile robot travels in the unknown environment under the shared control of the EEG-speed control signal and the autonomous obstacle avoidance-speed control signal.
  2. The shared control method for an EEG mobile robot in an unknown environment according to claim 1, characterized in that in step 1 the user's motor imagery EEG signals are collected and subjected to preprocessing and feature extraction, specifically:
    Step 1a. Use the first lead electrode to collect the motor imagery EEG signal over the left motor sensory cortex on the top of the user's head;
    Step 1b. Use the second lead electrode to collect the motor imagery EEG signal over the right motor sensory cortex on the top of the user's head;
    Step 1c. Preprocess using band-pass filtering and Laplacian reference filtering:
    The Laplacian reference filter is computed as the lead signal minus the weighted mean of the raw signals of the adjacent leads, using the formulas:
    w_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)
    V_i^LAP = V_i − Σ_{j∈S_i} w_ij · V_j
    where d_ij is the distance between electrode i and electrode j, S_i is the set of electrodes within a given distance centered on the i-th electrode, w_ij is the computed weight of the j-th electrode, n is the total number of lead electrodes, V_i is the raw signal collected by a single lead, and V_i^LAP is the lead signal of V_i after Laplacian filtering;
    Step 1d. Use an autoregressive (AR) model to extract features from the EEG signals:
    The AR model is used to extract features from the motor imagery EEG signals in the preset rhythm band of each channel electrode of the user, using the formula:
    V_i^LAP(t) = Σ_{k=1}^{m} a_k · V_i^LAP(t−k) + ε
    where V_i^LAP(t) is the estimated voltage signal amplitude at time t, a_k is the k-th AR model parameter, m is the AR model order, V_i^LAP(t−k) is the voltage signal amplitude at time t−k, ε is the estimation error, and n is the total number of lead electrodes.
  3. The shared control method for an EEG mobile robot in an unknown environment according to claim 1, characterized in that in step 2 the left- and right-side EEG signals are linearly summed with adaptive weights to obtain the EEG-speed control signal, specifically:
    Multiply the amplitude L of the preset frequency band of the EEG signal on the left side of the head and the amplitude R of the preset frequency band of the EEG signal on the right side of the head by the weights w_L and w_R respectively, and take the weighted sum to obtain the EEG-speed control signal C_Vvalue, using the formulas:
    C_Vvalue = (M_Vvalue − (M_Vvalue)_min) / ((M_Vvalue)_max − (M_Vvalue)_min)
    M_Vvalue = w_L·L + w_R·R + b
    where w_L and w_R are the weights of the left- and right-side motor imagery EEG signals, b is a constant offset term, L and R are the left- and right-side motor imagery EEG signals, M_Vvalue is the EEG control coefficient, (M_Vvalue)_max is the maximum value of the EEG control coefficient, (M_Vvalue)_min is the minimum value of the EEG control coefficient, and C_Vvalue is the variation coefficient of the linear-velocity value of the mobile robot: when C_Vvalue is greater than a preset constant value the robot's linear-velocity value increases, and when C_Vvalue is less than the preset constant value the robot's linear-velocity value decreases.
  4. The shared control method for an EEG mobile robot in an unknown environment according to claim 3, characterized in that the initial values of w_L and w_R are +1.0, and the normalized least mean square (NLMS) algorithm is used to adjust the weights adaptively.
  5. The shared control method for an EEG mobile robot in an unknown environment according to claim 1, characterized in that in step 3 the mobile robot obtains the autonomous obstacle avoidance-speed control signal from its autonomous path planning, specifically:
    Step 3a. Scan the environmental information;
    Step 3b. Use the Hector SLAM algorithm for simultaneous localization and mapping:
    Define the grid size; each grid cell has three possible states: free, occupied and unknown. Use a bilinear filtering algorithm to estimate the probability that the environmental point-cloud information occupies a grid cell, and localize the robot pose information;
    Step 3c. Use the A* algorithm for path planning:
    Take the robot's position (x_n, y_n) as the starting point, and take the boundary point between the free and unknown grid cells that has the shortest Euclidean distance to the robot's position as the target point (x_G, y_G). The heuristic function h(n) adopts the formula:
    h(n) = sqrt((x_n − x_G)² + (y_n − y_G)²)
    where (x_n, y_n) are the map coordinates of the current point and (x_G, y_G) are the map coordinates of the target point;
    Step 3d. Obtain the autonomous obstacle avoidance-speed control signal according to the planned path.
    The signal is a unit direction vector whose direction points from each parent node in the optimal path node set generated by the path planning to its next child node. [Formula image not reproduced in this text version.]
  6. The shared control method for an EEG mobile robot in an unknown environment according to claim 1, wherein in step 4 the mobile robot travels in the unknown environment under the shared control of the EEG-speed control signal and the autonomous obstacle avoidance-speed control signal, specifically:
    the control decision for the linear velocity of the mobile robot while moving forward adopts the formula shown in image PCTCN2020132580-appb-100009, in which the symbol shown in image PCTCN2020132580-appb-100010 denotes the linear velocity of the mobile robot while moving forward and V_max denotes the preset maximum linear velocity; while moving forward, when the mobile robot is located at each node of the path planned in step 4a, angle compensation is performed according to the direction of the autonomous obstacle avoidance-speed control signal (the vector shown in image PCTCN2020132580-appb-100011); during this angle compensation only the turning action is executed, using the preset fixed angular velocity shown in image PCTCN2020132580-appb-100012.
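The mapping in the tail of claim 3, from the left/right motor imagery EEG signals to an increase or decrease of the robot's linear velocity, can be illustrated with the short sketch below. The actual combination formula for M_Vvalue and the normalization that yields C_Vvalue are given in the claim's formula images and are not reproduced here, so the linear combination, the min-max normalization, the 0.5 threshold, and all function and parameter names below are assumptions for illustration only, not the claimed method.

```python
# Hypothetical sketch of the EEG-to-linear-velocity mapping described in claim 3.
# The linear combination and the min-max normalization are assumptions; the patent
# defines M_Vvalue and C_Vvalue in formula images that are not reproduced here.

def eeg_control_coefficient(L, R, w_L=1.0, w_R=1.0, b=0.0):
    """Assumed weighted combination of the left/right motor imagery EEG signals."""
    return w_L * L + w_R * R + b

def velocity_variation_coefficient(m_value, m_min, m_max):
    """Assumed min-max normalization of the EEG control coefficient into [0, 1]."""
    return (m_value - m_min) / (m_max - m_min)  # assumes m_max > m_min

def update_linear_speed(v_current, c_value, threshold=0.5, step=0.05, v_max=1.0):
    """Raise the speed when C_Vvalue exceeds the preset constant, lower it otherwise."""
    if c_value > threshold:
        v_new = v_current + step
    elif c_value < threshold:
        v_new = v_current - step
    else:
        v_new = v_current
    return min(max(v_new, 0.0), v_max)  # keep the commanded speed within [0, V_max]
```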
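Claim 4 initializes both weights to +1.0 and adapts them with the normalized least mean square (NLMS) rule. The sketch below shows a standard NLMS step for a two-element weight vector; the learning rate mu, the regularizer eps, and the choice of target signal d are not specified in the claim and are assumptions made for illustration.

```python
import numpy as np

def nlms_update(w, x, d, mu=0.5, eps=1e-6):
    """One standard NLMS step: w <- w + mu * e * x / (eps + ||x||^2).

    w : current weight vector, e.g. np.array([w_L, w_R]), initialized to +1.0
    x : input vector of left/right motor imagery EEG signals, np.array([L, R])
    d : desired (target) output for this step (assumed available)
    """
    y = np.dot(w, x)                           # output with the current weights
    e = d - y                                  # instantaneous error
    w = w + mu * e * x / (eps + np.dot(x, x))  # normalized gradient step
    return w, e

# Example: both weights start at +1.0 as stated in claim 4; inputs and targets are illustrative.
w = np.array([1.0, 1.0])
for x, d in [(np.array([0.2, 0.8]), 1.0), (np.array([0.7, 0.1]), -1.0)]:
    w, e = nlms_update(w, x, d)
```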
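Step 3b of claim 5 estimates how the laser point cloud occupies the grid with a bilinear filtering algorithm. The sketch below shows generic bilinear interpolation of an occupancy-probability grid at a continuous map coordinate, in the spirit of how Hector SLAM reads its map; the grid layout, the probability convention, and the function name are assumptions for illustration.

```python
import numpy as np

def bilinear_occupancy(grid, x, y):
    """Bilinearly interpolate the occupancy probability at a continuous point (x, y).

    grid : 2-D array of per-cell occupancy probabilities
           (convention assumed here: 0.0 = free, 1.0 = occupied, 0.5 = unknown)
    x, y : continuous map coordinates in cell units; the point is assumed to lie
           strictly inside the grid so that all four neighbouring cells exist.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    p00 = grid[y0, x0]          # lower-left neighbour
    p10 = grid[y0, x0 + 1]      # lower-right neighbour
    p01 = grid[y0 + 1, x0]      # upper-left neighbour
    p11 = grid[y0 + 1, x0 + 1]  # upper-right neighbour
    # Weighted average of the four surrounding cells.
    return (p00 * (1 - tx) * (1 - ty) + p10 * tx * (1 - ty)
            + p01 * (1 - tx) * ty + p11 * tx * ty)
```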
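Steps 3c and 3d of claim 5 use the Euclidean distance as the A* heuristic and convert each parent-to-child hop of the planned path into a unit direction vector. A minimal sketch of those two pieces is given below; the full A* open/closed-list bookkeeping is omitted, and the function names are placeholders rather than names from the patent.

```python
import math

def heuristic(node, goal):
    """Euclidean-distance heuristic h(n) between the current node and the goal point."""
    (x_n, y_n), (x_g, y_g) = node, goal
    return math.hypot(x_n - x_g, y_n - y_g)

def unit_direction_vectors(path):
    """Unit vectors pointing from each parent node of the planned path to its child node."""
    vectors = []
    for (x0, y0), (x1, y1) in zip(path[:-1], path[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy)
        if norm > 0.0:                        # skip degenerate (repeated) nodes
            vectors.append((dx / norm, dy / norm))
    return vectors

# Example with an illustrative node set returned by the planner.
path = [(0, 0), (1, 0), (1, 1), (2, 2)]
directions = unit_direction_vectors(path)     # [(1.0, 0.0), (0.0, 1.0), (0.707..., 0.707...)]
```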
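Claim 6 combines the EEG-speed control signal with the autonomous obstacle avoidance-speed control signal: the linear velocity is capped by V_max, and at each planned node the robot performs angle compensation by turning at a preset fixed angular velocity toward the direction of the avoidance signal before driving on. The decision formula itself appears only as an image in the claim, so the blending below (gating forward motion on heading alignment and clipping the EEG-commanded speed to V_max) is one plausible reading with made-up parameter names, not the claimed formula.

```python
import math

def shared_control_step(v_eeg, heading, target_direction, v_max=0.5,
                        omega_fixed=0.3, angle_tol=0.05):
    """One hypothetical shared-control decision at a planned path node.

    v_eeg            : linear speed requested by the EEG-speed control signal
    heading          : current robot heading in radians
    target_direction : direction of the obstacle avoidance-speed control signal in radians
    Returns a (linear_velocity, angular_velocity) command pair.
    """
    # Smallest signed angle from the current heading to the target direction.
    error = math.atan2(math.sin(target_direction - heading),
                       math.cos(target_direction - heading))
    if abs(error) > angle_tol:
        # Angle compensation: only the turning action is executed, at the fixed rate.
        return 0.0, math.copysign(omega_fixed, error)
    # Heading aligned: drive forward at the EEG-commanded speed, capped at V_max.
    return min(max(v_eeg, 0.0), v_max), 0.0
```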
PCT/CN2020/132580 2020-09-24 2020-11-30 Sharing control method for electroencephalogram mobile robot in unknown environment WO2022062169A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011013015.3A CN112148011B (en) 2020-09-24 2020-09-24 Electroencephalogram mobile robot sharing control method under unknown environment
CN202011013015.3 2020-09-24

Publications (1)

Publication Number Publication Date
WO2022062169A1 true WO2022062169A1 (en) 2022-03-31

Family

ID=73896343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132580 WO2022062169A1 (en) 2020-09-24 2020-11-30 Sharing control method for electroencephalogram mobile robot in unknown environment

Country Status (2)

Country Link
CN (1) CN112148011B (en)
WO (1) WO2022062169A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115609595A (en) * 2022-12-16 2023-01-17 北京中海兴达建设有限公司 Trajectory planning method, device and equipment of mechanical arm and readable storage medium
CN115639823A (en) * 2022-10-27 2023-01-24 山东大学 Terrain sensing and movement control method and system for robot under rugged and undulating terrain

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113311823B (en) * 2021-04-07 2023-01-17 西北工业大学 New mobile robot control method combining brain-computer interface technology and ORB _ SLAM navigation
CN113760094A (en) * 2021-09-09 2021-12-07 成都视海芯图微电子有限公司 Limb movement assisting method and system based on brain-computer interface control and interaction
CN117666586B (en) * 2023-12-07 2024-07-05 西北工业大学深圳研究院 Brain-controlled robot control system and method based on self-adaptive sharing control

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109521880B (en) * 2018-11-27 2022-06-24 东南大学 Teleoperation robot system and method based on mixed bioelectricity signal driving
CN109947119B (en) * 2019-04-23 2021-06-29 东北大学 Mobile robot autonomous following method based on multi-sensor fusion
CN110955251A (en) * 2019-12-25 2020-04-03 华侨大学 Petri network-based mobile robot brain-computer cooperative control method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116279A (en) * 2013-01-16 2013-05-22 大连理工大学 Vague discrete event shared control method of brain-controlled robotic system
CN104083258A (en) * 2014-06-17 2014-10-08 华南理工大学 Intelligent wheel chair control method based on brain-computer interface and automatic driving technology
US20190056731A1 (en) * 2017-08-21 2019-02-21 Honda Motor Co., Ltd. Methods and systems for preventing an autonomous vehicle from transitioning from an autonomous driving mode to a manual driving mode based on a risk model
CN110916652A (en) * 2019-10-21 2020-03-27 昆明理工大学 Data acquisition device and method for controlling robot movement based on motor imagery through electroencephalogram and application of data acquisition device and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG YANXIN;JI PENG;ZENG HONG;SONG AIGUO;WU CHANGCHENG;XU BAOGUO;LI HUIJUN: "A Human-Robot Interaction System Based on Hybrid Gaze Brain-Machine Interface and Shared Control", ROBOT, vol. 40, no. 4, 6 July 2018 (2018-07-06), pages 431 - 439, XP055915623, ISSN: 1002-0446, DOI: 10.13973/j.cnki.robot.180134 *
XU BAOGUO;HE XIAOHANG;WEI ZHIWEI;SONG AIGUO;ZHAO GUOPU: "Research on Continuous Control System for Robot Based on Motor Imagery EEG", CHINESE JOURNAL OF SCIENTIFIC INSTRUMENT, vol. 39, no. 9, 15 September 2018 (2018-09-15), pages 10 - 19, XP055915626, ISSN: 0254-3087, DOI: 10.19650/j.cnki.cjsi.J1803518 *

Also Published As

Publication number Publication date
CN112148011A (en) 2020-12-29
CN112148011B (en) 2022-04-15

Similar Documents

Publication Publication Date Title
WO2022062169A1 (en) Sharing control method for electroencephalogram mobile robot in unknown environment
KR102533690B1 (en) Terrain aware step planning system
WO2022041740A1 (en) Method and apparatus for detecting obstacle, self-propelled robot, and storage medium
CN110703747A (en) Robot autonomous exploration method based on simplified generalized Voronoi diagram
WO2007033101A2 (en) Hybrid control device
CN110673633B (en) Power inspection unmanned aerial vehicle path planning method based on improved APF
CN112347900B (en) Monocular vision underwater target automatic grabbing method based on distance estimation
CN110744544B (en) Service robot vision grabbing method and service robot
Kon et al. Mixed integer programming-based semiautonomous step climbing of a snake robot considering sensing strategy
CN114407030A (en) Autonomous navigation distribution network live working robot and working method thereof
Bustamante et al. Towards information-based feedback control for binaural active localization
Shi et al. Adaptive image-based visual servoing for hovering control of quad-rotor
Chen et al. A review of autonomous obstacle avoidance technology for multi-rotor UAVs
WO2022238189A1 (en) Method of acquiring sensor data on a construction site, construction robot system, computer program product, and training method
Iida et al. Navigation in an autonomous flying robot by using a biologically inspired visual odometer
CN109960278B (en) LGMD-based bionic obstacle avoidance control system and method for unmanned aerial vehicle
Diao et al. Design and realization of a novel obstacle avoidance algorithm for intelligent wheelchair bed using ultrasonic sensors
CN116477505A (en) Tower crane real-time path planning system and method based on deep learning
CN117718962A (en) Multi-task-oriented brain-control composite robot control system and method
CN113311709A (en) Intelligent wheelchair compound control system and method based on brain-computer interface
CN108062102A (en) A kind of gesture control has the function of the Mobile Robot Teleoperation System Based of obstacle avoidance aiding
CN112731918B (en) Ground unmanned platform autonomous following system based on deep learning detection tracking
CN114879719A (en) Intelligent obstacle avoidance method suitable for hybrid electric unmanned aerial vehicle
Castro et al. Reactive local navigation
Maimone et al. Autonomous rock tracking and acquisition from a mars rover

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20955011

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20955011

Country of ref document: EP

Kind code of ref document: A1
