CN114089628A - Brain-driven mobile robot control system and method based on steady-state visual stimulation - Google Patents

Brain-driven mobile robot control system and method based on steady-state visual stimulation Download PDF

Info

Publication number
CN114089628A
CN114089628A CN202111242723.9A CN202111242723A CN114089628A CN 114089628 A CN114089628 A CN 114089628A CN 202111242723 A CN202111242723 A CN 202111242723A CN 114089628 A CN114089628 A CN 114089628A
Authority
CN
China
Prior art keywords
speed
brain
mobile robot
control
driven mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111242723.9A
Other languages
Chinese (zh)
Other versions
CN114089628B (en
Inventor
李鸿岐
付沛荣
张仕进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202111242723.9A priority Critical patent/CN114089628B/en
Publication of CN114089628A publication Critical patent/CN114089628A/en
Application granted granted Critical
Publication of CN114089628B publication Critical patent/CN114089628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/04Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G05B13/042Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a brain-driven mobile robot control system and method based on steady-state visual stimulation, which comprises a user interface, an SSVEP-BMI, a speed conversion module, a layered robust prediction control framework and a brain-driven mobile robot; the user induces EEG signals to generate control intentions; the SSVEP-BMI translates the control intention so as to output a qualitative brain-driven mobile robot speed control command; the speed conversion module is used for converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal; a model predictive controller of a hierarchical robust predictive control framework design optimizes a cost function and constraint conditions, the expected control speed of the brain-driven mobile robot is solved in real time in each control period, a discrete sliding mode manifold is constructed, and an actual control signal is output to control the robot to move. The invention can output the actual control signal which can realize robust tracking, ensures the safety of the system and simultaneously improves the robustness of the system.

Description

Brain-driven mobile robot control system and method based on steady-state visual stimulation
Technical Field
The invention belongs to the technical field of robot control, and particularly relates to a brain-driven mobile robot control system and method.
Background
The intelligent wheeled mobile robot equipment has been widely applied and explored in the fields of military, industry, civilian use and scientific research. They have the advantages of higher mobility, stronger traction, simple wheel structure and the like. In order to improve the exercise and autonomous movement abilities of patients with limb motion impairment such as motor neuron diseases, stroke and amyotrophic lateral sclerosis, researchers develop a wheel type mobile robot intelligent auxiliary system based on Brain Machine Interface (BMI). The intelligent auxiliary system is also called a brain-driven mobile robot, is an auxiliary mobile platform constructed by fusing a user, a brain-computer interface and the mobile robot with corresponding control devices, provides a new interaction mode for the user and the mobile robot, and has wide application prospect.
A brain-computer interface is a system that provides a direct information communication channel between the human brain and a physical device by decoding the brain activity of a user from neurophysiological signals into appropriate commands. Among a plurality of Electroencephalogram modes of the brain-computer interface system, a scalp Electroencephalogram (EEG) has lower spatial resolution and lower acquisition cost, and can reflect brain activities of a user in real time. Among them, the brain-computer interface based on the Steady-State Visual Evoked Potentials (SSVEP) EEG induces the user electroencephalogram signal by external Visual stimulation, and is widely applied to the development and use of brain-driven mobile robots due to the higher information transmission rate, higher accuracy and relatively less training time.
The key technology of the brain-driven mobile robot system based on the steady-state visual stimulation is a brain-computer interface (SSVEP-BMI) technology oriented to the steady-state visual stimulation and an auxiliary control technology of the brain-driven system. The former is generally divided into four major parts, namely signal acquisition (including acquisition and amplification of EEG signals), artifact filtering, feature extraction and feature classification. Since the concept of brain-computer interface is proposed, the detailed technologies of the above parts have been greatly developed, but due to the unsteady and time-varying characteristics of the EEG itself, the SSVEP-BMI still can only output limited discrete control commands so far, thereby limiting the performance of the brain-driven mobile robot system. The auxiliary control technology further improves the overall performance of the system by designing an auxiliary control device from the viewpoint of intelligent control. In the auxiliary control research of the existing brain-driven mobile robot system, most developers still focus on the output of a robot speed control signal, and the subsequent speed is controlled by the traditional PID. The research and development of the brain-driven mobile robot system oriented to the navigation performance, the safety performance and the robustness performance of the brain-driven system are few. The model predictive control can realize the optimized control under the conditions of multivariable and multiple constraints, and the sliding mode control can realize the robust tracking of the bottom layer speed.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a brain-driven mobile robot control system and method based on steady-state visual stimulation, which comprises a user interface, an SSVEP-BMI, a speed conversion module, a layered robust prediction control framework and a brain-driven mobile robot; the user induces EEG signals to generate control intentions; the SSVEP-BMI translates the control intention so as to output a qualitative brain-driven mobile robot speed control command; the speed conversion module is used for converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal; a model predictive controller of a hierarchical robust predictive control framework design optimizes a cost function and constraint conditions, the expected control speed of the brain-driven mobile robot is solved in real time in each control period, a discrete sliding mode manifold is constructed, and an actual control signal is output to control the robot to move. The invention can output the actual control signal which can realize robust tracking, ensures the safety of the system and simultaneously improves the robustness of the system.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a brain-driven mobile robot control system based on steady-state visual stimulation comprises a user interface, an SSVEP-BMI, a speed conversion module, a layered robust predictive control framework and a brain-driven mobile robot; the brain-driven mobile robot is provided with a speed sensor, a position sensor and an environment sensor;
the environment sensor is used for acquiring environment information of the brain-driven mobile robot in operation; the position sensor is used for acquiring the position information of the brain-driven mobile robot; the speed sensor is used for acquiring speed information of the brain-driven mobile robot; the position information and the speed information of the brain-driven mobile robot form the self state of the brain-driven mobile robot;
the user interface induces EEG (electroencephalogram) of a user according to the self state and the running environment information of the brain-driven mobile robot to generate a control intention for controlling the brain-driven mobile robot;
the SSVEP-BMI translates the control intention expressed by EEG of the user, thereby outputting a qualitative brain-driven mobile robot speed control command;
the speed conversion module is used for converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal;
the layered robust prediction control framework consists of a model prediction controller at a high layer and a sliding mode controller at a bottom layer; the model prediction controller receives the brain-driven mobile robot expected speed signal and the self state of the brain-driven mobile robot, the model prediction controller is designed to optimize a cost function and a constraint condition, and the expected control speed of the brain-driven mobile robot is solved in real time in each control period; the sliding mode controller receives the expected control speed of the brain-driven mobile robot and the actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates the speed error of the brain-driven mobile robot and the speed sensor, constructs a discrete sliding mode manifold, calculates a control torque signal driven by the bottom layer of the robot according to a sliding mode approximation rule, and outputs an actual control signal to control the robot to move.
Further, the environment information of the brain-driven mobile robot comprises initial position information, target position information, obstacle information and safety boundary information in the environment of the brain-driven mobile robot; the obstacle information includes static obstacles and dynamic obstacles.
Further, the brain-driven mobile robot state comprises the transverse position, the longitudinal position, the orientation angle, the lateral rotation angular velocity and the longitudinal straight line velocity of the mobile robot.
Further, the control intention of the brain-driven mobile robot includes speed maintenance, left turn, right turn, longitudinal acceleration, longitudinal deceleration, and instantaneous stop of the brain-driven mobile robot.
Further, the qualitative brain-driven mobile robot speed control command is specifically expressed as B e {1,2,3,4,5,6}, wherein 1 represents longitudinal acceleration, 2 represents left turn, 3 represents speed maintenance, 4 represents right turn, 5 represents longitudinal deceleration, and 6 represents instantaneous stop.
Further, the quantitative brain-driven mobile robot expected speed signal is specifically expressed as follows:
Figure BDA0003320055750000031
wherein, B (n) represents the speed control command received by the speed conversion module;
Figure BDA0003320055750000032
and
Figure BDA0003320055750000033
respectively representing the straight-line speed and the rotation angle speed output by the speed conversion module at the sampling moment of n hours,
Figure BDA0003320055750000034
and
Figure BDA0003320055750000035
respectively representing the straight-moving speed and the turning speed output by the speed conversion module when the sampling time is n-1, wherein the initial output of the speed conversion module is 0; delta u and delta omega are speed control increments of the straight line and the rotation angle of the robot; u. ofmaxAnd ωmaxThe maximum values of the straight movement and the turning speed of the robot are respectively; u. ofminAnd ωminThe minimum values of the straight-going speed and the turning speed of the robot are respectively.
Further, the model predictive controller optimizes the cost function and the constraint condition as follows:
minimize
Figure BDA0003320055750000041
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+ik),i=0,...,Np-1 (2c)
Figure BDA0003320055750000042
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h) wherein (2a) is an optimization function of the model predictive controller and (2b) - (2h) are constraint functions of the model predictive controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veIn order to optimize the speed error between the function output and the speed conversion module output, delta v is the change of the predicted speed input, and Q and R are weighting matrixes;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged.
Further, the sliding mode controller receives an expected control speed of the brain-driven mobile robot and an actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates a speed error between the expected control speed and the actual speed, constructs a discrete sliding mode manifold, calculates a control torque signal of the bottom layer drive of the robot according to a sliding mode approach law, and outputs an actual control signal to control the robot to move, which is specifically as follows:
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is sampling time of a sliding mode controller at the bottom layer, q is a parameter for increasing convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (.) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure BDA0003320055750000051
where δ is a positively determined parameter matrix.
A brain-driven mobile robot control method based on steady-state visual stimulation comprises the following steps:
step 1: initializing a brain-driven mobile robot system, and initializing the straight-moving speed and the turning speed of the mobile robot to 0;
the system is started, and the connection state of each component of the brain-driven mobile robot and the sensor and the working state of each component, including whether the environment sensor, the position sensor and the speed sensor can work normally, are detected; when the system is started, the interfaces of the sensors are connected with the speed conversion module and the controller of the layered robust predictive control framework, and the controller is connected with the direct-current speed reduction driving motor at the bottom layer of the robot, so that the brain-driven mobile robot is ensured to be normally started and used;
initializing a clock, and ensuring that the starting running time of an environment sensor, a position sensor and a speed sensor is synchronous with the starting time of a controller of a hierarchical robust predictive control framework; in the initialization process, the interruption is forbidden, the data cache in each sensor is cleared, the data in the hierarchical robust prediction control frame is cleared, and the input/output port and the register of the hierarchical robust prediction controller are defined;
step 2: translating the control intention expressed by the EEG of the user through the SSVEP-BMI, and outputting a qualitative brain-driven mobile robot speed control command;
and step 3: converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal through a speed conversion module;
and 4, step 4: the hierarchical robust prediction control framework receives and corrects the expected speed signal of the brain-driven mobile robot and outputs an actual control signal capable of realizing safe robust tracking;
step 4-1: sensor dynamic scanning, allowing for interrupts;
the environment sensor dynamically scans the periphery of the mobile robot to obtain the real-time environment around the robot, wherein the environment comprises boundary information and obstacle information; the position sensor and the speed sensor dynamically monitor and transmit state parameters of the brain-driven mobile robot, wherein the state parameters comprise the transverse position, the longitudinal position, the orientation angle, the lateral rotation angular speed and the longitudinal straight line speed of the robot; in the dynamic scanning of the environment sensor, the position sensor and the speed sensor, the sensors are allowed to be interrupted by other events; the sampling time of the environment sensor, the position sensor and the speed sensor is less than the sampling time of a controller of the hierarchical robust predictive control framework;
step 4-2: the sensor is communicated with the layered robust prediction control framework;
acquiring surrounding environment information and state information of the robot through an environment sensor, a position sensor and a speed sensor, and inputting the surrounding environment information and the state information into a layered robust predictive control framework; in the communication process, a memory storage unit is required to be arranged in a controller of the hierarchical robust predictive control framework to store the information;
step 4-3: solving an optimization problem under a constraint condition by a model predictive controller in a hierarchical robust predictive control framework;
the model predictive controller optimizes a cost function and constraint conditions as follows:
minimize
Figure BDA0003320055750000071
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+ik),i=0,...,Np-1 (2c)
Figure BDA0003320055750000072
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h) wherein (2a) is an optimization function of the model predictive controller and (2b) - (2h) are constraint functions of the model predictive controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veIn order to optimize the speed error between the function output and the speed conversion module output, delta v is the change of the predicted speed input, and Q and R are weighting matrixes;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged.
In the optimization problem solving of the model predictive controller, if the optimization problem has a solution, a safe speed control signal is obtained, and the step 4-4 is carried out; otherwise, if the optimization problem falls into no solution, the control framework fails: the brain-driven mobile robot suspends receiving a control instruction of a user, stops moving, enables the robot to enter a safe area, then restarts the brain-driven mobile robot and completes initialization, and starts receiving the user instruction to realize brain-driven control;
step 4-4: a sliding mode controller in a layered robust prediction control framework realizes robust tracking of speed;
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is sampling time of a sliding mode controller at the bottom layer, q is a parameter for increasing convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (.) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure BDA0003320055750000081
wherein δ is a positively determined parameter matrix;
and 4-5: and (4) applying the control input torque obtained in the step (4-4) as a control quantity to the brain-driven mobile robot, repeating the steps (4-1) to (4-4) at the next sampling moment, and applying a new control signal to the brain-driven mobile robot.
The invention has the following beneficial effects:
after the output of the brain-computer interface and the speed conversion module facing the steady-state visual stimulation, a layered robust predictive control framework is designed, the framework is used for receiving an expected speed control signal and the real-time state of the mobile robot measured by the speed and position sensors, correcting the longitudinal expected speed control signal at the side of the mobile robot under the requirements of the safety and the robustness of a brain drive system, and finally outputting an actual control signal capable of realizing robust tracking, so that the safety of the system is ensured, and the robustness of the system is improved.
Compared with the traditional directly-controlled brain-driven mobile robot system, the invention depends on a high-level model predictive controller in a layered robust predictive control framework, and meets the safety constraint condition through solving the problem of online real-time optimization, thereby realizing the obstacle avoidance of the mobile robot, namely ensuring the safety of the system; compared with a brain-driven mobile robot system which only ensures safety, the brain-driven mobile robot system relies on a bottom sliding mode controller in a layered robust prediction control frame, and gives robot torque of the bottom layer through sliding mode control instead of a traditional reference speed control signal, so that the resistance characteristic of the system to external interference is enhanced, namely the robustness of the system is improved.
Drawings
Fig. 1 is a schematic structural diagram of a brain-driven mobile robot control system based on steady-state visual stimulation according to the present invention.
FIG. 2 is a flow diagram of a hierarchical robust predictive control framework of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, a brain-driven mobile robot control system based on steady-state visual stimulation comprises a user interface, an SSVEP-BMI, a speed conversion module, a hierarchical robust predictive control framework and a brain-driven mobile robot; the brain-driven mobile robot is provided with a speed sensor, a position sensor and an environment sensor;
the environment sensor is used for acquiring environment information of the brain-driven mobile robot in operation; the position sensor is used for acquiring the position information of the brain-driven mobile robot; the speed sensor is used for acquiring speed information of the brain-driven mobile robot; the position information and the speed information of the brain-driven mobile robot form the self state of the brain-driven mobile robot;
the user interface induces EEG (electroencephalogram) of a user according to the self state and the running environment information of the brain-driven mobile robot to generate a control intention for controlling the brain-driven mobile robot;
the SSVEP-BMI translates the control intention expressed by EEG of the user, thereby outputting a qualitative brain-driven mobile robot speed control command;
the speed conversion module is used for converting a qualitative brain-driven mobile robot speed control instruction into an actually controllable side longitudinal speed signal increment or a given zero value, and keeping the speed constant before the next sampling moment comes, so that a quantitative brain-driven mobile robot expected speed signal is obtained;
the layered robust prediction control framework consists of a model prediction controller at a high layer and a sliding mode controller at a bottom layer; the model prediction controller receives the brain-driven mobile robot expected speed signal and the self state of the brain-driven mobile robot, the model prediction controller is designed to optimize a cost function and a constraint condition, and the expected control speed of the brain-driven mobile robot is solved in real time in each control period; the sliding mode controller receives the expected control speed of the brain-driven mobile robot and the actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates the speed error of the brain-driven mobile robot and the speed sensor, constructs a discrete sliding mode manifold, calculates a control torque signal driven by the bottom layer of the robot according to a sliding mode approximation rule, and outputs an actual control signal to control the robot to move.
The layered robust prediction control framework receives state information of the robot through the position and speed sensors and receives obstacle information in a certain range through the environment sensor. The model predictive controller, in combination with a predictive model (mobile robot model), determines whether the robot is within safety limits within a prediction window width. If the robot can guarantee that the robot does not touch the obstacle within the prediction window width, namely safety is guaranteed, the model prediction controller outputs and tracks the output of a user through a brain-computer interface and a speed conversion module, and the smooth control of the speed of the robot is guaranteed; on the contrary, if the robot possibly collides with the obstacle within the prediction window width, the model prediction controller outputs a corrected speed control signal meeting the system safety under the control constraint requirement through the cost function. And under the support of a sliding mode theory, the sliding mode controller at the bottom layer firstly obtains the tracking error of the speed signal, and then further solves the actual robot motor torque signal by designing the sliding mode manifold so as to complete the robust tracking of the speed signal output by the high-rise model predictive controller.
Further, the environment information of the brain-driven mobile robot comprises initial position information, target position information, obstacle information and safety boundary information in the environment of the brain-driven mobile robot; the obstacle information includes static obstacles and dynamic obstacles.
Further, the brain-driven mobile robot state comprises the transverse position, the longitudinal position, the orientation angle, the lateral rotation angular velocity and the longitudinal straight line velocity of the mobile robot.
Further, the control intention of the brain-driven mobile robot includes speed maintenance, left turn, right turn, longitudinal acceleration, longitudinal deceleration, and instantaneous stop of the brain-driven mobile robot.
Further, the qualitative brain-driven mobile robot speed control command is specifically expressed as B e {1,2,3,4,5,6}, wherein 1 represents longitudinal acceleration, 2 represents left turn, 3 represents speed maintenance, 4 represents right turn, 5 represents longitudinal deceleration, and 6 represents instantaneous stop.
Further, the quantitative brain-driven mobile robot expected speed signal is specifically expressed as follows:
Figure BDA0003320055750000111
wherein, B (n) represents the speed control command received by the speed conversion module;
Figure BDA0003320055750000112
and
Figure BDA0003320055750000113
respectively representing the straight-line speed and the rotation angle speed output by the speed conversion module at the sampling moment of n hours,
Figure BDA0003320055750000114
and
Figure BDA0003320055750000115
respectively representing the straight-moving speed and the turning speed output by the speed conversion module when the sampling time is n-1, wherein the initial output of the speed conversion module is 0; delta u and delta omega are speed control increments of the straight line and the rotation angle of the robot; u. ofmaxAnd ωmaxThe maximum values of the straight movement and the turning speed of the robot are respectively; u. ofminAnd ωminThe minimum values of the straight-going speed and the turning speed of the robot are respectively.
Further, the model predictive controller optimizes the cost function and the constraint condition as follows:
minimize
Figure BDA0003320055750000116
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+i|k),i=0,...,Np-1 (2c)
Figure BDA0003320055750000117
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h) wherein (2a) is an optimization function of the model predictive controller and (2b) - (2h) are constraint functions of the model predictive controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veIn order to optimize the speed error between the function output and the speed conversion module output, delta v is the change of the predicted speed input, and Q and R are weighting matrixes;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged.
Further, the sliding mode controller receives an expected control speed of the brain-driven mobile robot and an actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates a speed error between the expected control speed and the actual speed, constructs a discrete sliding mode manifold, calculates a control torque signal of the bottom layer drive of the robot according to a sliding mode approach law, and outputs an actual control signal to control the robot to move, which is specifically as follows:
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is a bottom layer slideSampling time of a mode controller, wherein q is a parameter for increasing the convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (logn) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure BDA0003320055750000131
where δ is a positively determined parameter matrix.
A brain-driven mobile robot control method based on steady-state visual stimulation comprises the following steps:
step 1: initializing a brain-driven mobile robot system, and initializing the straight-moving speed and the turning speed of the mobile robot to 0;
the system is started, and the connection state of each component of the brain-driven mobile robot and the sensor and the working state of each component, including whether the environment sensor, the position sensor and the speed sensor can work normally, are detected; when the system is started, the interfaces of the sensors are connected with the speed conversion module and the controller of the layered robust prediction control framework through the A/D module, and the controller is connected with the direct-current speed reduction driving motor at the bottom layer of the robot, so that the brain-driven mobile robot is ensured to be normally started and used;
initializing a clock, and ensuring that the starting running time of an environment sensor, a position sensor and a speed sensor is synchronous with the starting time of a controller of a hierarchical robust predictive control framework; in the initialization process, the interruption is forbidden, the data cache in each sensor is cleared, the data in the hierarchical robust prediction control frame is cleared, and the input/output port and the register of the hierarchical robust prediction controller are defined;
step 2: translating the control intention expressed by the EEG of the user through the SSVEP-BMI, and outputting a qualitative brain-driven mobile robot speed control command;
and step 3: converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal through a speed conversion module;
and 4, step 4: as shown in fig. 2, the hierarchical robust predictive control framework receives and corrects an expected speed signal of the brain-driven mobile robot, and outputs an actual control signal capable of realizing safe robust tracking;
step 4-1: sensor dynamic scanning, allowing for interrupts;
the environment sensor dynamically scans the periphery of the mobile robot to obtain the real-time environment around the robot, wherein the environment comprises boundary information and obstacle information; the position sensor and the speed sensor dynamically monitor and transmit state parameters of the brain-driven mobile robot, wherein the state parameters comprise the transverse position, the longitudinal position, the orientation angle, the lateral rotation angular speed and the longitudinal straight line speed of the robot; in the dynamic scanning of the environment sensor, the position sensor and the speed sensor, the sensors are allowed to be interrupted by other events; the sampling time of the environment sensor, the position sensor and the speed sensor is less than the sampling time of a controller of the hierarchical robust predictive control framework;
step 4-2: the sensor is communicated with the layered robust prediction control framework;
acquiring surrounding environment information and state information of the robot through an environment sensor, a position sensor and a speed sensor, and inputting the surrounding environment information and the state information into a layered robust predictive control framework; in the communication process, a memory storage unit is required to be arranged in a controller of the hierarchical robust predictive control framework to store the information;
step 4-3: solving an optimization problem under a constraint condition by a model predictive controller in a hierarchical robust predictive control framework;
the model predictive controller optimizes a cost function and constraint conditions as follows:
minimize
Figure BDA0003320055750000151
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+ik),i=0,...,Np-1 (2c)
Figure BDA0003320055750000152
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h) wherein (2a) is an optimization function of the model predictive controller and (2b) - (2h) are constraint functions of the model predictive controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veFor optimizing the speed error of the function output and the speed conversion module output, Δ v is the change of the predicted speed input, Q and R areA weighting matrix;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged.
In the optimization problem solving of the model predictive controller, if the optimization problem has a solution, a safe speed control signal is obtained, and the step 4-4 is carried out; otherwise, if the optimization problem falls into no solution, the control framework fails: the brain-driven mobile robot suspends receiving a control instruction of a user, stops moving, enables the robot to enter a safe area, then restarts the brain-driven mobile robot and completes initialization, and starts receiving the user instruction to realize brain-driven control;
step 4-4: a sliding mode controller in a layered robust prediction control framework realizes robust tracking of speed;
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is sampling time of a sliding mode controller at the bottom layer, q is a parameter for increasing convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (.) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure BDA0003320055750000161
wherein δ is a positively determined parameter matrix;
and 4-5: and (4) applying the control input torque obtained in the step (4-4) as a control quantity to the brain-driven mobile robot, repeating the steps (4-1) to (4-4) at the next sampling moment, and applying a new control signal to the brain-driven mobile robot.

Claims (9)

1. A brain-driven mobile robot control system based on steady-state visual stimulation is characterized by comprising a user interface, an SSVEP-BMI, a speed conversion module, a layered robust predictive control framework and a brain-driven mobile robot; the brain-driven mobile robot is provided with a speed sensor, a position sensor and an environment sensor;
the environment sensor is used for acquiring environment information of the brain-driven mobile robot in operation; the position sensor is used for acquiring the position information of the brain-driven mobile robot; the speed sensor is used for acquiring speed information of the brain-driven mobile robot; the position information and the speed information of the brain-driven mobile robot form the self state of the brain-driven mobile robot;
the user interface induces EEG (electroencephalogram) of a user according to the self state and the running environment information of the brain-driven mobile robot to generate a control intention for controlling the brain-driven mobile robot;
the SSVEP-BMI translates the control intention expressed by EEG of the user, thereby outputting a qualitative brain-driven mobile robot speed control command;
the speed conversion module is used for converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal;
the layered robust prediction control framework consists of a model prediction controller at a high layer and a sliding mode controller at a bottom layer; the model prediction controller receives the brain-driven mobile robot expected speed signal and the self state of the brain-driven mobile robot, the model prediction controller is designed to optimize a cost function and a constraint condition, and the expected control speed of the brain-driven mobile robot is solved in real time in each control period; the sliding mode controller receives the expected control speed of the brain-driven mobile robot and the actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates the speed error of the brain-driven mobile robot and the speed sensor, constructs a discrete sliding mode manifold, calculates a control torque signal driven by the bottom layer of the robot according to a sliding mode approximation rule, and outputs an actual control signal to control the robot to move.
2. The brain-driven mobile robot control system based on steady-state visual stimulation according to claim 1, wherein the environment information of the operation of the brain-driven mobile robot comprises start position information, target position information, obstacle information and safety boundary information in the environment of the operation of the brain-driven mobile robot; the obstacle information includes static obstacles and dynamic obstacles.
3. The brain-driven mobile robot control system based on steady-state visual stimulation, according to claim 1, wherein the brain-driven mobile robot state comprises the lateral position, the longitudinal position, the orientation angle, the lateral rotation angular velocity and the longitudinal linear velocity of the mobile robot.
4. The brain-driven mobile robot control system based on steady-state visual stimulation, according to claim 1, wherein the control intentions of the brain-driven mobile robot include speed maintenance, left turn, right turn, longitudinal acceleration, longitudinal deceleration and instantaneous stop of the brain-driven mobile robot.
5. The brain-driven mobile robot control system based on steady-state visual stimulation according to claim 4, wherein the qualitative brain-driven mobile robot speed control command is expressed as B e {1,2,3,4,5,6}, wherein 1 represents longitudinal acceleration, 2 represents left turn, 3 represents speed hold, 4 represents right turn, 5 represents longitudinal deceleration, and 6 represents instant stop.
6. The brain-driven mobile robot control system based on steady-state visual stimulation according to claim 5, wherein the quantitative brain-driven mobile robot expected speed signal is expressed by the following formula:
Figure FDA0003320055740000021
wherein, B (n) represents the speed control command received by the speed conversion module;
Figure FDA0003320055740000022
and
Figure FDA0003320055740000023
respectively representing the straight-line speed and the rotation angle speed output by the speed conversion module at the sampling moment of n hours,
Figure FDA0003320055740000024
and
Figure FDA0003320055740000025
respectively representing the straight-moving speed and the turning speed output by the speed conversion module when the sampling time is n-1, wherein the initial output of the speed conversion module is 0; delta u and delta omega are speed control increments of the straight line and the rotation angle of the robot; u. ofmaxAnd ωmaxThe maximum values of the straight movement and the turning speed of the robot are respectively; u. ofminAnd ωminThe minimum values of the straight-going speed and the turning speed of the robot are respectively.
7. The brain-driven mobile robot control system based on steady-state visual stimulation according to claim 6, wherein the model predictive controller optimizes cost function and constraint conditions as follows:
Figure FDA0003320055740000031
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+i|k),i=0,...,Np-1 (2c)
Figure FDA0003320055740000032
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h)
wherein, the formula (2a) is an optimization function of the model prediction controller, and the formulae (2b) - (2h) are constraint functions of the model prediction controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veIn order to optimize the speed error between the function output and the speed conversion module output, delta v is the change of the predicted speed input, and Q and R are weighting matrixes;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged.
8. The brain-driven mobile robot control system based on steady-state visual stimulation according to claim 7, wherein the sliding-mode controller receives a desired control speed of the brain-driven mobile robot and an actual speed of the brain-driven mobile robot obtained by the speed sensor, calculates a speed error between the desired control speed and the actual speed, constructs a discrete sliding-mode manifold, calculates a control torque signal of a bottom-layer drive of the robot according to a sliding-mode approximation law, and outputs an actual control signal to control the robot to move, specifically as follows:
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is sampling time of a sliding mode controller at the bottom layer, q is a parameter for increasing convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (.) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure FDA0003320055740000041
where δ is a positively determined parameter matrix.
9. A brain-driven mobile robot control method based on steady-state visual stimulation is characterized by comprising the following steps:
step 1: initializing a brain-driven mobile robot system, and initializing the straight-moving speed and the turning speed of the mobile robot to 0;
the system is started, and the connection state of each component of the brain-driven mobile robot and the sensor and the working state of each component, including whether the environment sensor, the position sensor and the speed sensor can work normally, are detected; when the system is started, the interfaces of the sensors are connected with the speed conversion module and the controller of the layered robust predictive control framework, and the controller is connected with the direct-current speed reduction driving motor at the bottom layer of the robot, so that the brain-driven mobile robot is ensured to be normally started and used;
initializing a clock, and ensuring that the starting running time of an environment sensor, a position sensor and a speed sensor is synchronous with the starting time of a controller of a hierarchical robust predictive control framework; in the initialization process, the interruption is forbidden, the data cache in each sensor is cleared, the data in the hierarchical robust prediction control frame is cleared, and the input/output port and the register of the hierarchical robust prediction controller are defined;
step 2: translating the control intention expressed by the EEG of the user through the SSVEP-BMI, and outputting a qualitative brain-driven mobile robot speed control command;
and step 3: converting a qualitative brain-driven mobile robot speed control instruction into a quantitative brain-driven mobile robot expected speed signal through a speed conversion module;
and 4, step 4: the hierarchical robust prediction control framework receives and corrects the expected speed signal of the brain-driven mobile robot and outputs an actual control signal capable of realizing safe robust tracking;
step 4-1: sensor dynamic scanning, allowing for interrupts;
the environment sensor dynamically scans the periphery of the mobile robot to obtain the real-time environment around the robot, wherein the environment comprises boundary information and obstacle information; the position sensor and the speed sensor dynamically monitor and transmit state parameters of the brain-driven mobile robot, wherein the state parameters comprise the transverse position, the longitudinal position, the orientation angle, the lateral rotation angular speed and the longitudinal straight line speed of the robot; in the dynamic scanning of the environment sensor, the position sensor and the speed sensor, the sensors are allowed to be interrupted by other events; the sampling time of the environment sensor, the position sensor and the speed sensor is less than the sampling time of a controller of the hierarchical robust predictive control framework;
step 4-2: the sensor is communicated with the layered robust prediction control framework;
acquiring surrounding environment information and state information of the robot through an environment sensor, a position sensor and a speed sensor, and inputting the surrounding environment information and the state information into a layered robust predictive control framework; in the communication process, a memory storage unit is required to be arranged in a controller of the hierarchical robust predictive control framework to store the information;
step 4-3: solving an optimization problem under a constraint condition by a model predictive controller in a hierarchical robust predictive control framework;
the model predictive controller optimizes a cost function and constraint conditions as follows:
Figure FDA0003320055740000061
subject to qk+i+1|k=qk+i|k+Jk+i|kvk+i|k (2b)
D≤dk+i+1|k(dobsobs,vk+i|k),i=0,...,Np-1 (2c)
Figure FDA0003320055740000062
vk+i|k=vk+i-1|k+△vk+i|k,i=0,...,Nc-1 (2e)
vmin≤νk+i|k≤vmax,i=0,...,Nc-1 (2f)
△vmin≤△vk+i|k≤△vmax,i=0,...,Nc-1 (2g)
△vk+i|k=0,i=Nc,...,Np-1 (2h)
wherein, the formula (2a) is an optimization function of the model prediction controller, and the formulae (2b) - (2h) are constraint functions of the model prediction controller;
(2a) the optimization of the first term in the formula aims at minimizing the error between the output of the optimization function and the output of the user through the speed conversion module, and the effect is that the speed signal output by the model predictive controller tracks the output of the user through the brain-computer interface and the speed conversion module; (2a) the optimization objective of the second term in the equation is to minimize the change in predicted speed increments in the sense of keeping the speed steady throughout the control process;
in the formula (2a), k is the current time, NcPredicting the control window width, N, of the controller for the modelpTo predict window width, veIn order to optimize the speed error between the function output and the speed conversion module output, delta v is the change of the predicted speed input, and Q and R are weighting matrixes;
equations (2b) to (2g) are constraints of the model predictive controller; the position of the brain-driven mobile robot is constrained by the kinematic model formula (2 b); in order to ensure the safety of the brain drive system, the position of the robot is limited by the constraint expressions (2c) and (2 d); the expression (2e) is a physical relation between the speed variation and the speed time; equations (2f) and (2g) are speed changes due to output limitation of a direct current gear motor of the brain-driven mobile robot; equation (2h) indicates that outside the control window width, the control increment of speed remains unchanged;
in the optimization problem solving of the model predictive controller, if the optimization problem has a solution, a safe speed control signal is obtained, and the step 4-4 is carried out; otherwise, if the optimization problem falls into no solution, the control framework fails: the brain-driven mobile robot suspends receiving a control instruction of a user, stops moving, enables the robot to enter a safe area, then restarts the brain-driven mobile robot and completes initialization, and starts receiving the user instruction to realize brain-driven control;
step 4-4: a sliding mode controller in a layered robust prediction control framework realizes robust tracking of speed;
first, the velocity error e at time k is calculatedkThe method comprises the following steps:
ek=vr,k-vk (3)
wherein: v. ofr,kA desired control speed for the brain-driven mobile robot at time k; v. ofkThe speed of the actual brain-driven mobile robot at the moment k is acquired by a speed sensor;
the change of the speed state of the robot is constrained by the dynamic model, and the change is as follows:
vk+1=Avk+Bτk+dk v0=v(0) (4)
wherein: v. ofk+1The speed of the actual brain-driven mobile robot at the moment k +1 is acquired by a speed sensor; a is the system state matrix, B is the input matrix, τkFor the torque input of the DC speed reducing motor of the brain-driven mobile robot at the time of k, dkThe external interference suffered by the robot k at the moment; v. of0Acquiring an actual brain-driven mobile robot initial speed for the speed sensor;
in order to realize robust tracking of speed, a discrete sliding mode manifold is designed as follows:
Sk=Μek (5)
wherein: skIs the sliding mode manifold expression at the moment k; m is a symmetric positive definite parameter matrix;
according to the design theory of a discrete sliding mode controller, in order to calculate the actual control torque, a sliding mode approach law is selected as follows:
Sk+1-Sk=-qTSk-εTsgn(Sk) (6)
wherein S isk+1Is the sliding mode manifold expression at the moment k; t is sampling time of a sliding mode controller at the bottom layer, q is a parameter for increasing convergence speed of the sliding mode controller in a sliding mode approach law, epsilon is a parameter for compensating interference on a system in the sliding mode approach law, and sgn (.) is a sign function;
in order to solve the control input, formula (3) -formula (5) is substituted into formula (6) to obtain the control input torque of the direct-current speed reduction motor of the brain-driven mobile robot:
τk=(ΜB)-1(Μvr,k-ΜAvk)-(ΜB)-1((1-qT)Sk-εTsgn(Sk)) (7)
meanwhile, in order to reduce buffeting caused by the control input of the driver, a St function of an equation (8) is adopted to replace a sign function in an equation (7), wherein the St function is specifically as follows:
Figure FDA0003320055740000081
wherein δ is a positively determined parameter matrix;
and 4-5: and (4) applying the control input torque obtained in the step (4-4) as a control quantity to the brain-driven mobile robot, repeating the steps (4-1) to (4-4) at the next sampling moment, and applying a new control signal to the brain-driven mobile robot.
CN202111242723.9A 2021-10-25 2021-10-25 Brain-driven mobile robot control system and method based on steady-state visual stimulation Active CN114089628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111242723.9A CN114089628B (en) 2021-10-25 2021-10-25 Brain-driven mobile robot control system and method based on steady-state visual stimulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111242723.9A CN114089628B (en) 2021-10-25 2021-10-25 Brain-driven mobile robot control system and method based on steady-state visual stimulation

Publications (2)

Publication Number Publication Date
CN114089628A true CN114089628A (en) 2022-02-25
CN114089628B CN114089628B (en) 2022-11-04

Family

ID=80297598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111242723.9A Active CN114089628B (en) 2021-10-25 2021-10-25 Brain-driven mobile robot control system and method based on steady-state visual stimulation

Country Status (1)

Country Link
CN (1) CN114089628B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117055361A (en) * 2023-10-12 2023-11-14 纳博特南京科技有限公司 Mobile robot control method based on synovial membrane model predictive control

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309380A (en) * 2011-09-13 2012-01-11 华南理工大学 Intelligent wheelchair based on multimode brain-machine interface
CN105584479A (en) * 2016-01-18 2016-05-18 北京理工大学 Computer-controlled vehicle-oriented model prediction control method and computer-controlled vehicle utilizing method
CN107121977A (en) * 2017-06-02 2017-09-01 南京邮电大学 Mechanical arm actuator failures fault-tolerant control system and its method based on double-decker
CN107168319A (en) * 2017-06-01 2017-09-15 同济大学 A kind of unmanned vehicle barrier-avoiding method based on Model Predictive Control
CN108287961A (en) * 2018-01-18 2018-07-17 东南大学 Brain control Vehicular system modeling and simulation method suitable for different brain-computer interface types
CN108491071A (en) * 2018-03-05 2018-09-04 东南大学 A kind of brain control vehicle Compliance control method based on fuzzy control
CN111007725A (en) * 2019-12-23 2020-04-14 昆明理工大学 Method for controlling intelligent robot based on electroencephalogram neural feedback
CN112034828A (en) * 2020-09-16 2020-12-04 北京理工大学 Discrete integral sliding mode control device and method of brain-controlled mobile robot
CN112114670A (en) * 2020-09-10 2020-12-22 季华实验室 Man-machine co-driving system based on hybrid brain-computer interface and control method thereof

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102309380A (en) * 2011-09-13 2012-01-11 华南理工大学 Intelligent wheelchair based on multimode brain-machine interface
CN105584479A (en) * 2016-01-18 2016-05-18 北京理工大学 Computer-controlled vehicle-oriented model prediction control method and computer-controlled vehicle utilizing method
CN107168319A (en) * 2017-06-01 2017-09-15 同济大学 A kind of unmanned vehicle barrier-avoiding method based on Model Predictive Control
CN107121977A (en) * 2017-06-02 2017-09-01 南京邮电大学 Mechanical arm actuator failures fault-tolerant control system and its method based on double-decker
CN108287961A (en) * 2018-01-18 2018-07-17 东南大学 Brain control Vehicular system modeling and simulation method suitable for different brain-computer interface types
CN108491071A (en) * 2018-03-05 2018-09-04 东南大学 A kind of brain control vehicle Compliance control method based on fuzzy control
CN111007725A (en) * 2019-12-23 2020-04-14 昆明理工大学 Method for controlling intelligent robot based on electroencephalogram neural feedback
CN112114670A (en) * 2020-09-10 2020-12-22 季华实验室 Man-machine co-driving system based on hybrid brain-computer interface and control method thereof
CN112034828A (en) * 2020-09-16 2020-12-04 北京理工大学 Discrete integral sliding mode control device and method of brain-controlled mobile robot

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FUJIAN HE 等: "Model Predictive Control for a Brain-controlled Mobile Robot", 《2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS,MAN,AND CYBERNETICS》 *
HONGQI LI 等: "Discrete-Time Integer Sliding Mode Control for Brain-Controlled Mobile Robots", 《2020 IEEE INTERNATIONAL CONFERENCE ON REAL-TIME COMPUTING AND ROBOTICS》 *
HONGQI LI 等: "Sliding-Mode Nonlinear Predictive Control of Brain-Controlled Mobile Robots", 《IEEE TRANSACTIONS ON CYBERNETICS》 *
和福建: "基于模型预测控制的脑控移动机器人控制系统研究", 《中国优秀硕士学位论文全文数据库 (医药卫生科技辑)》 *
李鸿岐 等: "脑-控移动机器人的非线性模型预测控制", 《2018中国自动化大会(CAC2018)论文集》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117055361A (en) * 2023-10-12 2023-11-14 纳博特南京科技有限公司 Mobile robot control method based on synovial membrane model predictive control
CN117055361B (en) * 2023-10-12 2023-12-19 纳博特南京科技有限公司 Mobile robot control method based on sliding mode model predictive control

Also Published As

Publication number Publication date
CN114089628B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
US11772264B2 (en) Neural network adaptive tracking control method for joint robots
Deng et al. A learning-based hierarchical control scheme for an exoskeleton robot in human–robot cooperative manipulation
US10717191B2 (en) Apparatus and methods for haptic training of robots
CN109177974B (en) Man-machine co-driving type lane keeping auxiliary method for intelligent automobile
CN108524187B (en) six-degree-of-freedom upper limb rehabilitation robot control system
CN107053179B (en) A kind of mechanical arm Compliant Force Control method based on Fuzzy Reinforcement Learning
US20180311817A1 (en) Predictive robotic controller apparatus and methods
CN105773623B (en) SCARA robotic tracking control methods based on the study of forecasting type Indirect iteration
Yanco et al. Preliminary investigation of a semi-autonomous robotic wheelchair directed through electrodes
CN110450156B (en) Optimal design method of self-adaptive fuzzy controller of multi-degree-of-freedom mechanical arm system
CN114089628B (en) Brain-driven mobile robot control system and method based on steady-state visual stimulation
CN103786157B (en) Based on the embedded control system of upper limbs ectoskeleton power-assisting robot
CN109330819B (en) Master-slave type upper limb exoskeleton rehabilitation robot control system and control method thereof
CN105584479B (en) A kind of model predictive control method towards brain control vehicle and the brain control vehicle using this method
CN112417755A (en) Master-slave mode surgical robot track prediction control method
Li et al. Sliding-mode nonlinear predictive control of brain-controlled mobile robots
CN112034828B (en) Discrete integral sliding mode control device and method of brain-controlled mobile robot
WO2023065781A1 (en) Control method, device, and system for hybrid robot
CN110175349B (en) Independent suspension structure vehicle control method based on MPC algorithm
CN111605729B (en) Star detection vehicle wheel active following control method and system and star detection vehicle
CN109454625A (en) A kind of non-moment sensor industrial robot dragging teaching method
Miao et al. Rehabilitation robot following motion control algorithm based on human behavior intention
CN108776432B (en) Airport runway detection robot prediction control method based on network
CN108670412A (en) A kind of exoskeleton rehabilitation manipulator and method with force feedback mechanism
CN113805610A (en) Trajectory tracking control method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant