CN117718962A - Multi-task-oriented brain-control composite robot control system and method - Google Patents

Multi-task-oriented brain-control composite robot control system and method

Info

Publication number
CN117718962A
CN117718962A (application CN202311765753.7A)
Authority
CN
China
Prior art keywords
robot
mechanical arm
algorithm
brain
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311765753.7A
Other languages
Chinese (zh)
Inventor
高亚鹏
李宇晗
李海芳
邓红霞
高志熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202311765753.7A priority Critical patent/CN117718962A/en
Publication of CN117718962A publication Critical patent/CN117718962A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Manipulator (AREA)

Abstract

The invention belongs to the technical field of robot control and specifically relates to a multi-task-oriented brain-controlled composite robot control system and method. The system comprises an electroencephalogram cap, a notebook computer, a mechanical arm, a chassis robot, an upper computer, an end effector and a depth camera. The electroencephalogram cap is worn on the user's head and connected to the notebook computer by a wired connection; after the notebook computer verifies the electroencephalogram signal, it runs a stimulation interface program. The mechanical arm and the chassis robot are connected to the upper computer through a local area network, the notebook computer is connected to the upper computer, the depth camera is electrically connected to the mechanical arm, and the mechanical arm is electrically connected to the end effector. The invention designs a multi-level stimulation interface, which makes task handling more diverse and allows the user to make more flexible and varied task selections in the current environment.

Description

Multi-task-oriented brain-control composite robot control system and method
Technical Field
The invention belongs to the technical field of robot control and specifically relates to a multi-task-oriented brain-controlled composite robot control system and method.
Background
Driven by technological development and innovation-oriented policies, brain-controlled robots, as a crossover field of artificial intelligence and neuroscience, are regarded as one of the important directions of future technological development. National policy advocates extending human perception, movement and processing capability through brain-control technology, thereby achieving revolutionary breakthroughs in medical treatment, the military, the service industry and other fields. In the medical field, brain-controlled robots can be used in rehabilitation therapy to help patients with impaired motor ability recover. In the military field, brain-controlled robots can be used for remote operation and the execution of dangerous tasks, enhancing the combat capability of the armed forces: soldiers can control robots through brain-control technology to complete complex tasks, reducing risk and improving combat effectiveness. In the service field, brain-controlled robots can control smart home devices such as lighting, air conditioners and televisions through brain signals. In certain training areas, such as the training of flight personnel and distance education, brain-controlled robots can provide virtual experiments and simulation tasks to increase the degree of specialization. As technology continues to develop, more innovative applications will emerge. The development of brain-controlled robots will bring more possibilities to the medical, military and life-service fields and improve quality of life and efficiency.
Existing mechanical arms generally realize their functions through direct computer programming, but this requires the manipulated objects to be located at relatively fixed positions and to be of a consistent type. Such a control mode adapts poorly to the environment and has high upgrade and maintenance costs. Although the motion trajectory of a mechanical arm can also be defined by the traditional teach-and-reproduce method, the teach pendant is not flexible enough and its operation is complex.
Disclosure of Invention
To address the technical problem that existing mechanical arm control modes adapt poorly to the environment, the invention provides a multi-task-oriented brain-controlled composite robot control system and method.
In order to solve the technical problems, the invention adopts the following technical scheme:
the multi-task-oriented brain-controlled composite robot control system comprises an electroencephalogram cap, a notebook computer, a mechanical arm, a chassis robot, an upper computer, an end effector and a depth camera. The electroencephalogram cap is worn on the user's head and connected to the notebook computer by a wired connection; after the notebook computer verifies the electroencephalogram signal, it runs a stimulation interface program. The mechanical arm and the chassis robot are connected to the upper computer through a local area network, the notebook computer is connected to the upper computer, the depth camera is electrically connected to the mechanical arm, and the mechanical arm is electrically connected to the end effector.
A multi-task-oriented brain-controlled composite robot control method comprises the following steps:
S1, after the user puts on the electroencephalogram cap, blinking three times starts the stimulation flicker; the user gazes at a task stimulation block to generate an electroencephalogram signal, and by identifying the frequency of the gazed-at function the corresponding control signal is transmitted to the upper computer in the form of a TCP socket; the upper computer obtains the command, translates it into ROS topic communication and publishes it (a minimal bridge sketch is given after step S3), and the mechanical-arm and chassis-robot nodes receive the command by subscribing to the topics, thereby controlling the execution of different tasks;
S2, when a grabbing task is executed, the depth camera component performs environment judgment and object pose recognition, and the position information acquired by the depth camera is sent to the subscriber node of the mechanical arm through the ROS topic communication mechanism, so that the mechanical arm can execute the grabbing task more flexibly without presetting the grabbing position and trajectory; after the mechanical arm reaches the specified grabbing/placing position, the upper computer controls the mechanical-arm node to publish an arrival topic, the mechanical-arm node sends an instruction to the Pico board by subscribing to this topic, and the Pico board controls the opening and closing of the end effector through high/low level signals;
S3, when a moving task is executed, the chassis robot receives a target coordinate in the working map from the upper computer, and through the laser radar and IMU components the robot autonomously senses, localizes, avoids obstacles and moves to the target position.
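By way of illustration only, the following is a minimal sketch of the bridge described in S1, in which the upper computer receives a command string over a TCP socket and republishes it as ROS topic messages. The port number, command encoding and topic names (/arm_task, /chassis_task) are assumptions made for this sketch and are not prescribed by the invention.

```python
# Minimal sketch (assumed port, encoding and topic names) of the upper-computer bridge:
# receive a command string over a TCP socket and republish it on a ROS topic.
import socket
import rospy
from std_msgs.msg import String

def run_bridge(host="0.0.0.0", port=9000):
    rospy.init_node("bci_command_bridge")
    arm_pub = rospy.Publisher("/arm_task", String, queue_size=10)       # subscribed by the arm node
    base_pub = rospy.Publisher("/chassis_task", String, queue_size=10)  # subscribed by the chassis node

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()                  # the notebook running the SSVEP interface connects here

    while not rospy.is_shutdown():
        data = conn.recv(1024)
        if not data:
            break
        cmd = data.decode("utf-8").strip()  # e.g. "grasp:item1" or "goto:lab401" (assumed encoding)
        if cmd.startswith("grasp"):
            arm_pub.publish(String(cmd))
        elif cmd.startswith("goto"):
            base_pub.publish(String(cmd))
    conn.close()

if __name__ == "__main__":
    run_bridge()
```

The mechanical-arm and chassis-robot nodes would then subscribe to these topics to receive their instructions.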
The method by which the mechanical arm executes the grabbing task in S2 is as follows:
The mechanical arm and the depth camera are combined to automatically recognize the pose of an object and grab it. This requires hand-eye calibration of the mechanical arm and the depth camera: the pixel coordinates of the depth camera are transformed into the spatial coordinate system of the mechanical arm through the calibrated transformation matrix. The pose of the mechanical arm end is sent to the hand-eye calibration algorithm through the end_position method, the pose of the object recognized by the camera is published as a pose topic named /aruco_single/position through the aruco_ros function package, and the hand-eye calibration algorithm subscribes to the tcp and aruco topics. Hand-eye calibration first needs to find several invariant quantities: the rotation-translation relation X between the camera and the end of the mechanical arm is constant, and the relation between the calibration board and the mechanical arm base is also constant. Once the rotation-translation relation X is found, a pose acquired from the camera can be converted into pose information referenced to the base coordinate system of the mechanical arm, so that the mechanical arm can grasp accurately.
The method for determining the rotation-translation relation X is as follows:
Let A_i denote the pose of the mechanical arm end in the base frame at posture i, B_i the pose of the calibration board in the camera frame at posture i, X the constant pose of the camera in the end frame, and C the constant pose of the calibration board in the base frame. According to the two invariant relations, for any posture i:
C = A_i · X · B_i    (1)
After the posture of the mechanical arm is changed from posture 1 to posture 2, the left-hand side remains the same, so:
A_1 · X · B_1 = A_2 · X · B_2    (2)
where A_i is known, being the relation between the mechanical arm base and the mechanical arm end effector read out from the mechanical arm during hand-eye calibration, and B_i is the relation between the calibration board and the depth camera obtained by Zhang's calibration, i.e. the camera extrinsics. Combining formulas (1) and (2) gives:
(A_2^-1 · A_1) · X = X · (B_2 · B_1^-1)
which reduces to the matrix equation A·X = X·B, whose unknown matrix X is the hand-eye relation.
The hand-eye relation is solved by taking the robot base coordinates as the world coordinate system and converting the calibration board coordinate system into the robot coordinate system, so that the final relation simplifies to A·X = X·B, from which the rotation-translation relation X is obtained.
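The invention does not prescribe a particular solver for A·X = X·B. As one hedged illustration, the sketch below uses OpenCV's calibrateHandEye (Tsai's method), assuming the end-effector poses A_i and the board poses B_i are already available as rotation/translation lists; the surrounding function names are illustrative only.

```python
# Sketch of solving A·X = X·B for the camera-to-end-effector transform with OpenCV.
# Assumed inputs: R_base_end/t_base_end from robot kinematics for each posture,
# R_cam_board/t_cam_board from Zhang calibration of the board in the same postures.
import numpy as np
import cv2

def solve_hand_eye(R_base_end, t_base_end, R_cam_board, t_cam_board):
    # OpenCV expects gripper-to-base and target-to-camera pose lists.
    R_x, t_x = cv2.calibrateHandEye(
        R_gripper2base=R_base_end,
        t_gripper2base=t_base_end,
        R_target2cam=R_cam_board,
        t_target2cam=t_cam_board,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_x, t_x.ravel()   # camera pose in the end-effector frame
    return X

def object_in_base(T_base_end, X, T_cam_obj):
    # Convert an object pose seen by the camera into the arm base frame for grasping.
    return T_base_end @ X @ T_cam_obj
```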
The method in S3 for the robot to autonomously sense, localize, avoid obstacles and move to the target position through the laser radar and IMU components is as follows:
A mapping algorithm is used to construct a global map that assists the robot in obstacle avoidance and navigation. The robot scans the task area with the laser radar and draws a two-dimensional grid map with the RBPF particle filtering algorithm; the generated map is combined with the adaptive Monte Carlo localization algorithm to provide a high-precision localization result. The IMU and the odometer are fused to estimate the robot's change of position in space more accurately; this is realized with an extended Kalman filter, using the extended Kalman filtering algorithm provided by the ekf_localization_node in the robot_localization package of ROS to fuse the two sensor data streams and accurately localize the robot in motion. Global planning is performed with the Dijkstra algorithm and local path planning with the DWA algorithm to realize autonomous movement of the robot.
The method for drawing the two-dimensional grid map with the RBPF particle filtering algorithm is as follows:
The RBPF particle filtering algorithm uses a particle swarm to describe the estimate of the robot's pose and map; each particle contains one possible historical trajectory of the robot together with an associated map. Through continuous updating and resampling, the particle set converges onto a few particles with higher weight coefficients, and a two-dimensional grid map is drawn.
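As a hedged illustration of the update-and-resample step that makes the particle set converge onto a few high-weight particles, the sketch below shows a generic particle filter step; the likelihood function is a placeholder and does not reproduce the scan matcher of an RBPF SLAM implementation such as gmapping.

```python
# Minimal particle-filter update/resample step (placeholder likelihood, not gmapping's
# scan matcher): particles whose map agrees poorly with the scan lose weight and vanish.
import numpy as np

def resample(poses, maps, weights):
    # poses: (N, 3) array of particle poses; maps: list of per-particle grid maps.
    weights = weights / weights.sum()
    n = len(weights)
    # Systematic resampling: high-weight particles are duplicated, low-weight ones dropped.
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return poses[idx], [maps[i] for i in idx], np.full(n, 1.0 / n)

def update(poses, maps, weights, scan, likelihood):
    # likelihood(pose, grid_map, scan) -> how well the scan fits that particle's map.
    weights = weights * np.array([likelihood(p, m, scan) for p, m in zip(poses, maps)])
    n_eff = 1.0 / np.sum((weights / weights.sum()) ** 2)   # effective sample size
    if n_eff < 0.5 * len(weights):                         # resample only when degenerate
        return resample(poses, maps, weights)
    return poses, maps, weights
```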
The two-dimensional grid map is combined with the adaptive Monte Carlo localization algorithm amcl, which uses a particle filter to track the robot's pose against the existing map and provides a high-precision localization result. Using the amcl package in ROS, the pose information of the robot in the map coordinate system /map can be estimated, and the TF transforms between /base, /odom and /map are provided.
The method for estimating the robot's change of position in space is as follows: the IMU and the odometer are fused to estimate the position change more accurately. Multi-sensor fusion is realized through the robot_localization package provided by ROS; the extended Kalman filtering algorithm provided by the ekf_localization_node in the package fuses the two sensor data streams to accurately localize the robot in motion. The EKF approximates the true state by linearizing the nonlinear functions and performs state estimation by combining the observation data with the motion model. The function package is run with rosrun, and the configuration parameters of the extended Kalman filter node are stored in a yaml file. After the parameters are set, robot_localization outputs the fused data; the topic it publishes is named /odometry/filtered, and the robot state is represented by (x, y, z, roll, pitch, yaw) together with the corresponding velocity components.
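In practice the extended Kalman filter is provided ready-made by robot_localization and is only configured; the toy sketch below merely illustrates the predict-with-odometry / correct-with-IMU-yaw idea on a 3-state (x, y, yaw) model, with made-up noise values.

```python
# Toy 3-state (x, y, yaw) EKF: predict with wheel-odometry velocities, correct with IMU yaw.
# Noise values are illustrative only; robot_localization handles the full multi-state case.
import numpy as np

class SimpleEKF:
    def __init__(self):
        self.x = np.zeros(3)                   # state [x, y, yaw]
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.01, 0.01, 0.005])  # process noise (assumed)
        self.R_yaw = np.array([[0.02]])        # IMU yaw measurement noise (assumed)

    def predict(self, v, w, dt):
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1, 0, -v * np.sin(th) * dt],
                      [0, 1,  v * np.cos(th) * dt],
                      [0, 0, 1]])              # Jacobian of the motion model
        self.P = F @ self.P @ F.T + self.Q

    def correct_yaw(self, yaw_meas):
        H = np.array([[0.0, 0.0, 1.0]])
        y = np.array([yaw_meas - self.x[2]])   # innovation
        S = H @ self.P @ H.T + self.R_yaw
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```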
The method for global planning with the Dijkstra algorithm is as follows: in the constructed grid map, global planning is performed with the Dijkstra algorithm; after the position of the target end point is set, the Dijkstra algorithm finds the single-source shortest path. At each step, the node with the shortest distance from the start point is taken out from the nodes whose shortest path has not yet been fixed, and this node is used as a bridge to refresh the distances of its neighbouring nodes.
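A minimal sketch of Dijkstra's algorithm on a two-dimensional occupancy grid is given below; the grid encoding (0 = free, 1 = occupied) and 4-connected neighbourhood are assumptions of the sketch, not requirements of the invention.

```python
# Dijkstra on a 2-D occupancy grid (0 = free, 1 = occupied), 4-connected; returns the
# shortest path from start to goal as a list of cells, or None if the goal is unreachable.
import heapq

def dijkstra(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    parent = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)            # take the unfixed node nearest to the start
        if cell == goal:
            path = [cell]
            while cell in parent:
                cell = parent[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue                           # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0                   # use this node as a bridge to relax neighbours
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    parent[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None
```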
The method for the DWA algorithm to perform local path planning comprises the following steps:
After the global path planning algorithm has finished, the DWA algorithm samples the control space according to the current positions of the robot, the obstacles and the end point, thereby completing local path planning. According to the mobile robot's current position state and velocity state, a sampling velocity space satisfying the robot's hardware constraints is determined in the velocity space (v, ω); the trajectories of the robot moving for a certain time under each of these velocities are predicted and evaluated with an evaluation function, and finally the velocity corresponding to the best-evaluated trajectory is selected as the robot's motion velocity, the process repeating until the robot reaches the target point. The robot velocity sampling space Vs is therefore first restricted in three main ways: the velocity limit Vm, the acceleration limit Vd and the environmental obstacle limit Va; the final velocity sampling space of the mobile robot is the intersection of the three velocity spaces, Vs = Vm ∩ Vd ∩ Va. After the velocity sampling space Vs is determined, the DWA algorithm samples it uniformly at a certain sampling interval, and the number of sampled velocity pairs is:
n = [(v_high - v_low)/Ev] · [(ω_high - ω_low)/Ew]
where v_high, v_low, ω_high and ω_low are the upper and lower limits of the velocity space, and Ev and Ew are the sampling resolutions. After a set of velocities in Vs has been sampled, trajectory prediction is carried out through the kinematic model of the mobile robot, and the sampled groups of trajectories are evaluated with the following evaluation function:
G(v,ω) = σ(α·heading(v,ω)) + σ(β·dist(v,ω)) + σ(γ·velocity(v,ω))
where heading is the azimuth evaluation function, dist is the distance evaluation function, velocity is the speed evaluation function, α, β and γ are the coefficients of the evaluation function, and σ denotes normalization; from this a path is derived that both avoids obstacles and travels quickly toward the target point.
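The sketch below illustrates the DWA sampling-and-scoring loop for a differential-drive robot under the evaluation function above; the window bounds, the weights α, β, γ and the point-obstacle clearance model are illustrative assumptions.

```python
# Sketch of the DWA sampling-and-scoring loop for a differential-drive robot.
# Window bounds, weights and the obstacle model are illustrative assumptions.
import numpy as np

def simulate(state, v, w, dt=0.1, horizon=2.0):
    x, y, th = state
    traj = []
    for _ in range(int(horizon / dt)):         # forward-simulate the constant (v, w) command
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return traj

def dwa_plan(state, goal, obstacles, v_window, w_window, ev=0.05, ew=0.05,
             alpha=0.8, beta=1.0, gamma=0.3):
    candidates = []
    for v in np.arange(v_window[0], v_window[1] + 1e-9, ev):
        for w in np.arange(w_window[0], w_window[1] + 1e-9, ew):
            x, y, th = simulate(state, v, w)[-1]
            # heading term: larger when the end of the trajectory faces the goal
            # (angle wrapping ignored for brevity)
            heading = np.pi - abs(np.arctan2(goal[1] - y, goal[0] - x) - th)
            # dist term: clearance to the nearest point obstacle (obstacles assumed non-empty)
            dist = min(np.hypot(ox - x, oy - y) for ox, oy in obstacles)
            candidates.append((v, w, heading, dist, v))
    arr = np.array(candidates)
    for col in (2, 3, 4):                       # sigma: normalise each term over all samples
        arr[:, col] /= max(arr[:, col].sum(), 1e-9)
    scores = alpha * arr[:, 2] + beta * arr[:, 3] + gamma * arr[:, 4]
    v_best, w_best = arr[np.argmax(scores), :2]
    return float(v_best), float(w_best)
```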
Compared with the prior art, the invention has the beneficial effects that:
the invention designs the multi-stage stimulation interface, so that the task processing is more various, and a user can make more flexible and various task selections in the current environment. The invention adds the chassis robot module and the depth camera module, and after adding the chassis robot module, the invention can move to execute tasks and realize obstacle avoidance and navigation in the execution process, and is not limited to simple grabbing of the mechanical arm; the depth camera can acquire the pose of an object, so that the mechanical arm can realize flexible grabbing independently, and the grabbing path is not fixed by manually prescribing grabbing positions; the multi-mode data acquired by the laser radar and the depth camera of the chassis robot can better enable the robot to sense the external environment, so that tasks can be better executed. According to the mechanical arm end effector, the design of the command transmission path of the upper computer-pico board-control board is adopted, so that the controllable control of the mechanical arm end effector is realized with lower cost.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It will be apparent to those skilled in the art that the drawings described below are merely exemplary and that other embodiments can be derived from the provided drawings without inventive effort.
The structures, proportions, sizes and the like shown in this specification are intended only for illustration and description and do not limit the scope of the invention, which is defined by the claims; any structural modification, change of proportion or adjustment of size that does not affect the effect or purpose achieved by the invention shall fall within the scope of the invention.
FIG. 1 is a block diagram of the structure of the present invention;
FIG. 2 is an organizational chart of the present invention;
FIG. 3 is a control flow diagram of the present invention;
FIG. 4 is a first multi-level stimulation interface diagram of the present invention;
FIG. 5 is a second multi-level stimulation interface diagram of the present invention;
FIG. 6 is a third multi-level stimulation interface diagram of the present invention;
FIG. 7 is a fourth multi-level stimulation interface diagram of the present invention;
FIG. 8 is a fifth multi-level stimulation interface diagram of the present invention.
In the figures: 1 - electroencephalogram cap; 2 - notebook computer; 3 - mechanical arm; 4 - chassis robot; 5 - upper computer; 6 - end effector; 7 - depth camera.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below. Obviously, the described embodiments are only some embodiments of the present application, not all of them, and these descriptions are only intended to further illustrate the features and advantages of the present invention rather than to limit its claims; all other embodiments obtained by a person of ordinary skill in the art without inventive effort on the basis of the present disclosure fall within the scope of protection of the present disclosure.
The following describes in further detail the embodiments of the present invention with reference to the drawings and examples. The following examples are illustrative of the invention and are not intended to limit the scope of the invention.
In the description of the present application, it should be noted that, unless otherwise explicitly specified and limited, the terms "mounted", "connected" and "coupled" are to be understood broadly: a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of these terms in this application can be understood by a person of ordinary skill in the art according to the specific context.
Embodiment one:
In this embodiment, as shown in figs. 1 and 2, the multi-task-oriented brain-controlled composite robot control system comprises an electroencephalogram cap 1, a notebook computer 2, an upper computer 3, a mechanical arm 4, a chassis robot 5, a depth camera 6 and an end effector 7. First, a professional puts the electroencephalogram cap 1 on the user and decides whether to open the SSVEP stimulation interface according to the EOG signal from the frontal lobe area of the brain. The user gazes at the stimulation interface running on the notebook computer 2, and the frequency of the gazed-at function is identified from the SSVEP signal of the occipital lobe area of the brain. The instruction corresponding to the gazed-at frequency is sent to the upper computer 3 of the robot system through a TCP socket; after receiving it, the upper computer 3 interprets it as the corresponding ROS topic and publishes it, and the depth camera 6, mechanical arm 4 and chassis robot 5 nodes perform the corresponding control actions by subscribing to different topics, achieving the goal of completing tasks through brain-controlled instructions in a multi-task scenario.
Embodiment two:
this embodiment is substantially the same as the first embodiment, and is characterized in that:
In this embodiment, the multi-level stimulation interface has three or more levels of task interfaces. The first-level interface contains a variety of tasks, such as opening/closing the door, turning a lamp on/off, and taking/placing articles. After an article has been picked up, the second-level interface allows a map location to be selected, such as the corridor, laboratory No. 1, laboratory No. 3 or the stairwell. After a laboratory is selected, the third-level interface can be entered, where the station on which the article is to be placed can be selected, such as station No. 1 or station No. 2. By gazing at different stimulus blocks and activating them, the robot can complete different tasks.
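As a hedged illustration of one level of such a flickering interface, the sketch below draws three task blocks that flicker at distinct frequencies by sampling a sinusoid at the monitor refresh rate (a common SSVEP stimulation scheme); the layout, labels and frequencies are assumptions for the sketch and do not reproduce the interface of figs. 4-8.

```python
# Sketch of a single-level SSVEP stimulation screen: each task block flickers at its own
# frequency. Block layout, labels and frequencies are illustrative assumptions.
import math
import pygame

BLOCKS = {                       # task label -> (flicker frequency in Hz, screen rect)
    "open door": (8.0,  pygame.Rect(100, 100, 200, 120)),
    "take item": (10.0, pygame.Rect(400, 100, 200, 120)),
    "move":      (12.0, pygame.Rect(700, 100, 200, 120)),
}

def run(refresh_hz=60):
    pygame.init()
    screen = pygame.display.set_mode((1000, 400))
    clock = pygame.time.Clock()
    t = 0.0
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((0, 0, 0))
        for freq, rect in BLOCKS.values():
            # Sampled sinusoidal stimulation: block is bright while the sinusoid is positive.
            on = math.sin(2.0 * math.pi * freq * t) > 0.0
            pygame.draw.rect(screen, (255, 255, 255) if on else (40, 40, 40), rect)
        pygame.display.flip()
        t += 1.0 / refresh_hz
        clock.tick(refresh_hz)
    pygame.quit()

if __name__ == "__main__":
    run()
```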
The mechanical arm 4 and the depth camera 6 are combined to automatically recognize the pose of an object and grab it. This requires hand-eye calibration of the mechanical arm 4 and the depth camera 6: the target pose acquired by the depth camera 6 is converted into the spatial coordinate system of the mechanical arm 4 through the calibrated transformation matrix between the end effector 7 and the depth camera 6, and the mechanical arm 4 performs an inverse kinematics solution so that it can move to the target position autonomously.
Embodiment three:
This embodiment is substantially identical to the previous embodiment, except that:
In this embodiment, as shown in fig. 3, the multi-task-oriented brain-controlled composite robot control method operates with the system structure and the multi-level stimulation interface described in the foregoing embodiments; the specific operation steps are as follows:
Step one, the electroencephalogram cap 1 is connected to the notebook computer 2 by a wired connection, and the stimulation interface program is run after the notebook computer 2 has verified the electroencephalogram signal;
Step two, the mechanical arm 4 and the chassis robot 5 are connected to the upper computer through a local area network;
Step three, after the user puts on the electroencephalogram cap 1, blinking three times starts the stimulation flicker; the user gazes at a task stimulation block to generate an electroencephalogram signal, which is acquired by the electroencephalogram cap;
Step four, the program on the notebook computer 2 converts the electroencephalogram signal into a command and transmits the corresponding control signal within the local area network in the form of a TCP socket;
Step five, the TCP socket message is transmitted to the upper computer 3, which parses the command and publishes it as ROS topic messages;
Step six, the mechanical-arm and chassis-robot nodes receive the instructions by subscribing to the topics, so that the execution of different tasks is controlled;
Step seven, the upper computer 3 judges whether a grabbing task is to be executed; if the topic is a working topic controlling the mechanical arm 4, the depth camera 6 component performs environment judgment and object pose recognition, and the position information acquired by the depth camera 6 is sent to the subscriber node of the mechanical arm through the ROS topic communication mechanism, so that the mechanical arm 4 can execute the grabbing task more flexibly without presetting the grabbing position and trajectory;
Step eight, after the mechanical arm 4 reaches the specified grabbing/placing position, the upper computer controls the mechanical arm 4 node to publish an arrival topic; the mechanical arm 4 node sends an instruction to the Pico board by subscribing to this topic, and the Pico board controls the opening and closing of the end effector 7 through high/low level signals;
Step nine, the upper computer 3 judges whether a moving task is to be executed; if the topic is a working topic controlling the chassis robot 5, the chassis robot receives a target coordinate in the working map from the upper computer 3, and through components such as the laser radar and the IMU the robot autonomously senses, localizes, avoids obstacles and moves to the target position.
Further, the mechanical arm and the depth camera are combined to automatically recognize the pose of an object. This requires hand-eye calibration of the mechanical arm and the depth camera: the pixel coordinates of the depth camera are transformed into the spatial coordinate system of the mechanical arm through the calibrated transformation matrix. The pose of the mechanical arm end is sent to the hand-eye calibration algorithm through the end_position method, the pose of the object recognized by the camera is published as a pose topic named /aruco_single/position through the aruco_ros function package, and the hand-eye calibration algorithm subscribes to the tcp and aruco topics. Hand-eye calibration first needs to find several invariant quantities: the rotation-translation relation X between the camera and the end of the mechanical arm is constant, and the relation between the calibration board and the mechanical arm base is also constant. Once X is found, a pose acquired from the camera can be converted into pose information referenced to the base coordinate system of the mechanical arm, so that the mechanical arm can grasp accurately.
Let A_i denote the pose of the mechanical arm end in the base frame at posture i, B_i the pose of the calibration board in the camera frame at posture i, X the constant pose of the camera in the end frame, and C the constant pose of the calibration board in the base frame. According to the two invariant relations, for any posture i:
C = A_i · X · B_i    (1)
After the posture of the mechanical arm is changed from posture 1 to posture 2, the left-hand side remains the same, so:
A_1 · X · B_1 = A_2 · X · B_2    (2)
where A_i is known, being the relation between the mechanical arm base and the mechanical arm end effector read out from the mechanical arm during hand-eye calibration, and B_i is the relation between the calibration board and the depth camera obtained by Zhang's calibration, i.e. the camera extrinsics. Combining formulas (1) and (2) gives:
(A_2^-1 · A_1) · X = X · (B_2 · B_1^-1)
which reduces to the matrix equation A·X = X·B, whose unknown matrix X is the hand-eye relation.
The hand-eye relation is solved by taking the robot base coordinates as the world coordinate system and converting the calibration board coordinate system into the robot coordinate system, so that the final relation simplifies to A·X = X·B, from which the rotation-translation relation X is obtained.
Further, the IMU and the odometer are fused to estimate the robot's change of position in space more accurately. Multi-sensor fusion is realized through the robot_localization package provided by ROS; the extended Kalman filtering algorithm provided by the ekf_localization_node in the package fuses the two sensor data streams to accurately localize the robot in motion. The EKF approximates the true state by linearizing the nonlinear functions and performs state estimation by combining the observation data with the motion model. The function package is run with rosrun, and the configuration parameters of the extended Kalman filter node are stored in a yaml file. After the parameters are set, robot_localization outputs the fused data; the topic it publishes is named /odometry/filtered, and the robot state is represented by (x, y, z, roll, pitch, yaw) together with the corresponding velocity components.
A mapping algorithm is used to construct a global map that assists the robot in obstacle avoidance and navigation. The robot scans the task area with the laser radar, and the RBPF particle filtering algorithm uses a particle swarm to describe the estimate of the robot's pose and map; each particle contains one possible historical trajectory of the robot together with an associated map. Through continuous updating and resampling, the particle set converges onto a few particles with higher weight coefficients, and a two-dimensional grid map is drawn.
The generated map is combined with the adaptive Monte Carlo localization algorithm (amcl), a probabilistic method that uses a particle filter to track the robot's pose against the existing map and provides a high-precision localization result; using the amcl package in ROS, the pose information of the robot in the map coordinate system /map is estimated, and the TF transforms between /base, /odom and /map are provided.
In the constructed grid map, global planning is performed with the Dijkstra algorithm; after the position of the target end point is set, the Dijkstra algorithm finds the single-source shortest path. Its core idea is to repeatedly take out, from the nodes whose shortest path has not yet been fixed, the node with the shortest distance from the start point, and to use this node as a bridge to refresh the distances of its neighbouring nodes.
After global path planning has finished, the DWA algorithm samples the control space (linear and angular velocity) according to the current positions of the robot, the obstacles and the end point, thereby completing local path planning. According to the mobile robot's current position state and velocity state, a sampling velocity space satisfying the robot's hardware constraints is determined in the velocity space (v, ω); the trajectories of the robot moving for a certain time under each of these velocities are predicted and evaluated with an evaluation function, and finally the velocity corresponding to the best-evaluated trajectory is selected as the robot's motion velocity, the process repeating until the robot reaches the target point.
Therefore, the robot velocity sampling space Vs is first restricted in three main ways: the velocity limit Vm, the acceleration limit Vd and the environmental obstacle limit Va; the final velocity sampling space of the mobile robot is the intersection of the three velocity spaces, Vs = Vm ∩ Vd ∩ Va. After the velocity sampling space Vs is determined, the DWA algorithm samples it uniformly at a certain sampling interval (resolution), and the number of sampled velocity pairs is
n = [(v_high - v_low)/Ev] · [(ω_high - ω_low)/Ew]
where v_high, v_low, ω_high and ω_low are the upper and lower limits of the velocity space, and Ev and Ew are the sampling resolutions. After a set of velocities in Vs has been sampled, trajectory prediction is carried out through the kinematic model of the mobile robot, and the sampled groups of trajectories are evaluated with the following evaluation function
G(v,ω) = σ(α·heading(v,ω)) + σ(β·dist(v,ω)) + σ(γ·velocity(v,ω))
where heading is the azimuth evaluation function, dist is the distance evaluation function, velocity is the speed evaluation function, α, β and γ are the coefficients of the evaluation function, and σ denotes normalization; from this a path can be obtained that both avoids obstacles and travels quickly toward the target point.
Further, the control instructions for the end effector 7 are transmitted along the path upper computer - Pico board - control board: the upper computer 3 sends the instruction to the Pico board via its serial port id number, and after receiving the instruction the Pico board changes the high/low level at the output connected to the control board, thereby controlling the opening and closing of the end effector 7.
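A minimal MicroPython sketch of the Pico-board side of this path is given below; the GPIO pin number and the textual command strings are assumptions for illustration, not part of the invention.

```python
# MicroPython sketch (assumed pin number and command strings) for the Pico board:
# read a command line from the host over USB serial and drive a GPIO level that
# opens or closes the end effector's driver board.
import sys
from machine import Pin

gripper = Pin(15, Pin.OUT)        # output pin wired to the effector control board (assumed)

while True:
    line = sys.stdin.readline()   # blocking read of one command from the upper computer
    if not line:
        continue
    cmd = line.strip().lower()
    if cmd == "open":
        gripper.value(1)          # high level: open the end effector
    elif cmd == "close":
        gripper.value(0)          # low level: close the end effector
```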
Further, as shown in figs. 4-8, if the user wants to grasp an article, the user first gazes at the "get thing" stimulus block while the blocks are flickering; the interface then responds by entering the next task stimulation interface, and gazing at any article selected on the article interface completes one sending of a grabbing instruction to the mechanical arm 4. At this moment the process of the robot executing the grabbing task can be observed through the interface of the depth camera 6. After grabbing is finished, the task stimulation interface for transporting the article is entered; this stimulation interface uses a map as its background. If the user wants the robot to enter laboratory 401, the user gazes at the 401 block while the interface flickers; a movement instruction and the target position are sent, and the robot reaches the specified position according to the instruction. After the movement instruction has been completed, the next-level task interface is entered, which completes the task of placing the grabbed article: by gazing at the "station" stimulus block, the placement-position instruction is sent and the robot places the article.
The preferred embodiments of the present invention have been described in detail, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention, and the various changes are included in the scope of the present invention.

Claims (10)

1. A multi-task-oriented brain-controlled composite robot control system, characterized in that: it comprises an electroencephalogram cap (1), a notebook computer (2), a mechanical arm (3), a chassis robot (4), an upper computer (5), an end effector (6) and a depth camera (7); the electroencephalogram cap (1) is worn on the user's head and is connected to the notebook computer (2) by a wired connection, and the notebook computer (2) runs a stimulation interface program after testing the electroencephalogram signal; the mechanical arm (3) and the chassis robot (4) are connected to the upper computer (5) through a local area network, the notebook computer (2) is connected to the upper computer (5), the depth camera (7) is electrically connected to the mechanical arm (3), and the mechanical arm (3) is electrically connected to the end effector (6).
2. A multi-task-oriented brain-controlled composite robot control method, used with the multi-task-oriented brain-controlled composite robot control system as claimed in claim 1, characterized in that it comprises the following steps:
S1, after the user puts on the electroencephalogram cap, blinking three times starts the stimulation flicker; the user gazes at a task stimulation block to generate an electroencephalogram signal, and by identifying the frequency of the gazed-at function the corresponding control signal is transmitted to the upper computer in the form of a TCP socket; the upper computer obtains the command, translates it into ROS topic communication and publishes it, and the mechanical-arm and chassis-robot nodes receive the command by subscribing to the topics, thereby controlling the execution of different tasks;
S2, when a grabbing task is executed, the depth camera component performs environment judgment and object pose recognition, and the position information acquired by the depth camera is sent to the subscriber node of the mechanical arm through the ROS topic communication mechanism, so that the mechanical arm can execute the grabbing task more flexibly without presetting the grabbing position and trajectory; after the mechanical arm reaches the specified grabbing/placing position, the upper computer controls the mechanical-arm node to publish an arrival topic, the mechanical-arm node sends an instruction to the Pico board by subscribing to this topic, and the Pico board controls the opening and closing of the end effector through high/low level signals;
S3, when a moving task is executed, the chassis robot receives a target coordinate in the working map from the upper computer, and through the laser radar and IMU components the robot autonomously senses, localizes, avoids obstacles and moves to the target position.
3. The multi-task-oriented brain-controlled composite robot control method according to claim 2, characterized in that the method by which the mechanical arm executes the grabbing task in S2 is as follows:
the mechanical arm and the depth camera are combined to automatically recognize the pose of an object and grab it; this requires hand-eye calibration of the mechanical arm and the depth camera: the pixel coordinates of the depth camera are transformed into the spatial coordinate system of the mechanical arm through the calibrated transformation matrix; the pose of the mechanical arm end is sent to the hand-eye calibration algorithm through the end_position method, the pose of the object recognized by the camera is published as a pose topic named /aruco_single/position through the aruco_ros function package, and the hand-eye calibration algorithm subscribes to the tcp and aruco topics; hand-eye calibration first needs to find several invariant quantities: the rotation-translation relation X between the camera and the end of the mechanical arm is constant, and the relation between the calibration board and the mechanical arm base is also constant; once X is found, a pose acquired from the camera can be converted into pose information referenced to the base coordinate system of the mechanical arm, so that the mechanical arm can grasp accurately.
4. The multi-task-oriented brain-controlled composite robot control method according to claim 3, characterized in that the method for determining the rotation-translation relation X is as follows:
let A_i denote the pose of the mechanical arm end in the base frame at posture i, B_i the pose of the calibration board in the camera frame at posture i, X the constant pose of the camera in the end frame, and C the constant pose of the calibration board in the base frame; according to the two invariant relations, for any posture i:
C = A_i · X · B_i    (1)
after the posture of the mechanical arm is changed from posture 1 to posture 2, the left-hand side remains the same, so:
A_1 · X · B_1 = A_2 · X · B_2    (2)
where A_i is known, being the relation between the mechanical arm base and the mechanical arm end effector read out from the mechanical arm during hand-eye calibration, and B_i is the relation between the calibration board and the depth camera obtained by Zhang's calibration, i.e. the camera extrinsics; combining formulas (1) and (2) gives:
(A_2^-1 · A_1) · X = X · (B_2 · B_1^-1)
which reduces to the matrix equation A·X = X·B, whose unknown matrix X is the hand-eye relation;
the hand-eye relation is solved by taking the robot base coordinates as the world coordinate system and converting the calibration board coordinate system into the robot coordinate system, so that the final relation simplifies to A·X = X·B, from which the rotation-translation relation X is obtained.
5. The multi-task-oriented brain-controlled composite robot control method according to claim 2, characterized in that the method in S3 for the robot to autonomously sense, localize, avoid obstacles and move to the target position through the laser radar and IMU components is as follows:
a mapping algorithm is used to construct a global map that assists the robot in obstacle avoidance and navigation; the robot scans the task area with the laser radar and draws a two-dimensional grid map with the RBPF particle filtering algorithm; the generated map is combined with the adaptive Monte Carlo localization algorithm to provide a high-precision localization result; the IMU and the odometer are fused to estimate the robot's change of position in space more accurately, which is realized with an extended Kalman filter, using the extended Kalman filtering algorithm provided by the ekf_localization_node in the robot_localization package of ROS to fuse the two sensor data streams and accurately localize the robot in motion; global planning is performed with the Dijkstra algorithm and local path planning with the DWA algorithm to realize autonomous movement of the robot.
6. The multi-task-oriented brain-controlled composite robot control method according to claim 5, characterized in that the method for drawing the two-dimensional grid map with the RBPF particle filtering algorithm is as follows:
the RBPF particle filtering algorithm uses a particle swarm to describe the estimate of the robot's pose and map; each particle contains one possible historical trajectory of the robot together with an associated map, and through continuous updating and resampling the particle set converges onto a few particles with higher weight coefficients, so that a two-dimensional grid map is drawn.
7. The multi-task-oriented brain-controlled composite robot control method according to claim 6, characterized in that: the two-dimensional grid map is combined with the adaptive Monte Carlo localization algorithm amcl, which uses a particle filter to track the robot's pose against the existing map and provides a high-precision localization result; using the amcl package in ROS, the pose information of the robot in the map coordinate system /map can be estimated, and the TF transforms between /base, /odom and /map are provided.
8. The multi-task-oriented brain-controlled composite robot control method according to claim 5, characterized in that the method for estimating the robot's change of position in space is as follows: the IMU and the odometer are fused to estimate the position change more accurately; multi-sensor fusion is realized through the robot_localization package provided by ROS, and the extended Kalman filtering algorithm provided by the ekf_localization_node in the package fuses the two sensor data streams to accurately localize the robot in motion; the EKF approximates the true state by linearizing the nonlinear functions and performs state estimation by combining the observation data with the motion model; the function package is run with rosrun, and the configuration parameters of the extended Kalman filter node are stored in a yaml file; after the parameters are set, robot_localization outputs the fused data, the topic it publishes is named /odometry/filtered, and the robot state is represented by (x, y, z, roll, pitch, yaw) together with the corresponding velocity components.
9. The multi-task-oriented brain-controlled composite robot control method according to claim 5, characterized in that the method for global planning with the Dijkstra algorithm is as follows: in the constructed grid map, global planning is performed with the Dijkstra algorithm; after the position of the target end point is set, the Dijkstra algorithm finds the single-source shortest path: at each step, the node with the shortest distance from the start point is taken out from the nodes whose shortest path has not yet been fixed, and this node is used as a bridge to refresh the distances of its neighbouring nodes.
10. The multi-task-oriented brain-controlled composite robot control method according to claim 5, characterized in that the method for local path planning with the DWA algorithm is as follows:
after the global path planning algorithm has finished, the DWA algorithm samples the control space according to the current positions of the robot, the obstacles and the end point, thereby completing local path planning; according to the mobile robot's current position state and velocity state, a sampling velocity space satisfying the robot's hardware constraints is determined in the velocity space (v, ω), the trajectories of the robot moving for a certain time under each of these velocities are predicted and evaluated with an evaluation function, and finally the velocity corresponding to the best-evaluated trajectory is selected as the robot's motion velocity, the process repeating until the robot reaches the target point; the robot velocity sampling space Vs is therefore first restricted in three main ways: the velocity limit Vm, the acceleration limit Vd and the environmental obstacle limit Va; the final velocity sampling space of the mobile robot is the intersection of the three velocity spaces, Vs = Vm ∩ Vd ∩ Va; after the velocity sampling space Vs is determined, the DWA algorithm samples it uniformly at a certain sampling interval, and the number of sampled velocity pairs is:
n = [(v_high - v_low)/Ev] · [(ω_high - ω_low)/Ew]
where v_high, v_low, ω_high and ω_low are the upper and lower limits of the velocity space, and Ev and Ew are the sampling resolutions; after a set of velocities in Vs has been sampled, trajectory prediction is carried out through the kinematic model of the mobile robot, and the sampled groups of trajectories are evaluated with the following evaluation function:
G(v,ω) = σ(α·heading(v,ω)) + σ(β·dist(v,ω)) + σ(γ·velocity(v,ω))
where heading is the azimuth evaluation function, dist is the distance evaluation function, velocity is the speed evaluation function, α, β and γ are the coefficients of the evaluation function, and σ denotes normalization; from this a path is derived that both avoids obstacles and travels quickly toward the target point.
CN202311765753.7A 2023-12-21 2023-12-21 Multi-task-oriented brain-control composite robot control system and method Pending CN117718962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311765753.7A CN117718962A (en) 2023-12-21 2023-12-21 Multi-task-oriented brain-control composite robot control system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311765753.7A CN117718962A (en) 2023-12-21 2023-12-21 Multi-task-oriented brain-control composite robot control system and method

Publications (1)

Publication Number Publication Date
CN117718962A true CN117718962A (en) 2024-03-19

Family

ID=90205000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311765753.7A Pending CN117718962A (en) 2023-12-21 2023-12-21 Multi-task-oriented brain-control composite robot control system and method

Country Status (1)

Country Link
CN (1) CN117718962A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106671084A (en) * 2016-12-20 2017-05-17 华南理工大学 Mechanical arm self-directed auxiliary system and method based on brain-computer interface
CN110555889A (en) * 2019-08-27 2019-12-10 西安交通大学 CALTag and point cloud information-based depth camera hand-eye calibration method
CN111571619A (en) * 2020-04-17 2020-08-25 上海大学 Life assisting system and method based on SSVEP brain-controlled mechanical arm grabbing
CN113805694A (en) * 2021-08-26 2021-12-17 上海大学 Auxiliary grabbing system and method based on brain-computer interface and computer vision
CN115145387A (en) * 2022-03-09 2022-10-04 上海大学 Brain-controlled mobile grabbing robot system based on machine vision and control method
CN115599099A (en) * 2022-10-25 2023-01-13 中科璀璨机器人(成都)有限公司(Cn) ROS-based autonomous navigation robot
US20230339112A1 (en) * 2023-03-17 2023-10-26 University Of Electronic Science And Technology Of China Method for robot assisted multi-view 3d scanning measurement based on path planning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
熊光明 (Xiong Guangming): "智能车辆理论与应用：慕课版" [Intelligent Vehicle Theory and Applications: MOOC Edition], Beijing Institute of Technology Press, 31 December 2021, pages 153-156 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118209098A (en) * 2024-05-20 2024-06-18 西南科技大学 Unknown radiation field distribution map construction method of robot

Similar Documents

Publication Publication Date Title
US8577126B2 (en) System and method for cooperative remote vehicle behavior
CN109955254B (en) Mobile robot control system and teleoperation control method for robot end pose
CN117718962A (en) Multi-task-oriented brain-control composite robot control system and method
Ballard et al. Principles of animate vision
US10612934B2 (en) System and methods for robotic autonomous motion planning and navigation
US20090180668A1 (en) System and method for cooperative remote vehicle behavior
WO2020221311A1 (en) Wearable device-based mobile robot control system and control method
Brown Gaze controls with interactions and decays
Yuan et al. Human gaze-driven spatial tasking of an autonomous MAV
CN106737673A (en) A kind of method of the control of mechanical arm end to end based on deep learning
US20170348858A1 (en) Multiaxial motion control device and method, in particular control device and method for a robot arm
CN105912980A (en) Unmanned plane and unmanned plane system
WO2022062169A1 (en) Sharing control method for electroencephalogram mobile robot in unknown environment
CN112965507B (en) Cluster unmanned aerial vehicle cooperative work system and method based on intelligent optimization
Bustamante et al. Towards information-based feedback control for binaural active localization
Grando et al. Deep reinforcement learning for mapless navigation of unmanned aerial vehicles
Tian et al. A universal self-adaption workspace mapping method for human–robot interaction using kinect sensor data
Zhang et al. Bio-inspired motion planning for reaching movement of a manipulator based on intrinsic tau jerk guidance
Tresa et al. A study on internet of things: overview, automation, wireless technology, robotics
CN111134974B (en) Wheelchair robot system based on augmented reality and multi-mode biological signals
Gromov et al. Guiding quadrotor landing with pointing gestures
Zhao et al. An adaptive real-time gesture detection method using EMG and IMU series for robot control
Kong et al. An investigation of spatial behavior in agile guidance tasks
Mavsar et al. RoverNet: Vision-based adaptive human-to-robot object handovers
Huo et al. A BCI-based motion control system for heterogeneous robot swarm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination