CN116149313A - Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method - Google Patents
Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method Download PDFInfo
- Publication number
- CN116149313A CN116149313A CN202211437290.7A CN202211437290A CN116149313A CN 116149313 A CN116149313 A CN 116149313A CN 202211437290 A CN202211437290 A CN 202211437290A CN 116149313 A CN116149313 A CN 116149313A
- Authority
- CN
- China
- Prior art keywords
- module
- aerial robot
- robot
- control
- speed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000000007 visual effect Effects 0.000 title claims abstract description 55
- 238000000034 method Methods 0.000 title claims abstract description 28
- 230000033001 locomotion Effects 0.000 claims abstract description 61
- 238000012545 processing Methods 0.000 claims abstract description 39
- 238000011217 control strategy Methods 0.000 claims abstract description 23
- 238000004891 communication Methods 0.000 claims abstract description 20
- 238000013507 mapping Methods 0.000 claims abstract description 20
- 230000004927 fusion Effects 0.000 claims abstract description 11
- 238000004364 calculation method Methods 0.000 claims description 14
- 230000015572 biosynthetic process Effects 0.000 claims description 12
- 238000006073 displacement reaction Methods 0.000 claims description 12
- 230000004888 barrier function Effects 0.000 claims description 11
- 239000003795 chemical substances by application Substances 0.000 claims description 8
- 239000002245 particle Substances 0.000 claims description 4
- 238000013016 damping Methods 0.000 claims description 2
- 238000005457 optimization Methods 0.000 claims description 2
- 230000008569 process Effects 0.000 claims description 2
- 239000000758 substrate Substances 0.000 claims description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 6
- 238000005516 engineering process Methods 0.000 description 5
- 238000010586 diagram Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 206010063385 Intellectualisation Diseases 0.000 description 1
- 230000001133 acceleration Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 230000001373 regressive effect Effects 0.000 description 1
- 230000008054 signal transmission Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Manipulator (AREA)
Abstract
The invention relates to the field of teleoperation of aerial robots, in particular to a teleoperation system of an aerial robot with visual and tactile fusion and a following auxiliary control method. The system comprises a master end, a slave end, a control strategy module and a network communication module, wherein the master end comprises a joystick signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, the slave end comprises a leader and a follower, the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module. The invention adopts a method of tactile feedback to provide follow-up auxiliary control guidance, so that a teleoperation system can safely complete complex tasks; compared with the control of the aerial robot in an unstructured environment by means of visual feedback signals, the control method reduces the workload of operators and improves the safety of control.
Description
Technical Field
The invention relates to the field of teleoperation of aerial robots, in particular to a teleoperation system of an aerial robot integrating visual and tactile feedback and a following auxiliary control method.
Background
As the teleoperation technology of aerial robots has matured, it has become increasingly widely used in dangerous, complex unstructured environments, such as high altitudes, fire sites, nuclear radiation areas, where human access is difficult. In addition, multi-agent collaboration techniques may take advantage of the heterogeneity between man-machine or multi-machine in unstructured environments where the collaboration is operated by an operator to complete tasks.
The teleoperation technology is a technology of remote interaction between a person and a robot, and is characterized in that on one hand, control intention of an operator is captured, a control command of the operator is transmitted to a remote robot controller, and on the other hand, state information of the environment and the robot is fed back to the operator through multi-sensor equipment arranged on the robot, so that the operator obtains a basis of control decision.
The existing remote control method of the aerial robot mainly depends on unilateral remote control based on feedback of human eyes or visual sensors to an image, and has larger cognitive load on an operator in a highly unstructured environment, such as an environment with poor sight. Furthermore, multi-agent technology treats each moving subject capable of autonomous or steered in a man-machine formation or multi-machine formation as an agent. In the course of the formation movement, there are often people or robots that act as leader roles to lead the rest of the robots to move. Whether the remaining robots can safely follow the leader is a concern in recent years. However, although the current robot can complete the task of following and avoiding obstacles according to the optimal path by means of target tracking and autonomous navigation technologies, the autonomous method is difficult to adapt to the unexpected and unreachable risks.
Disclosure of Invention
In order to solve the problems in the prior art, the invention utilizes the haptic feedback to provide the following auxiliary control for an operator, combines the first visual angle information of vision and constructs a teleoperation system of the vision-haptic fusion type aerial robot; according to state information, environment information and operator commands of other intelligent agents in formation, the air robot following auxiliary control method based on visual and tactile fusion is provided, teleoperation safety is improved, and workload of operators is reduced.
The invention provides an aerial robot teleoperation system with visual and tactile fusion, which comprises:
the main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module;
a control strategy module;
a network communication module;
the slave end comprises a leader and a follower, wherein the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module; the positioning module is respectively connected with the network communication module and the motion planning module;
the control lever signal acquisition module acquires the rotation amount and corresponding time information of a control lever rotating shaft of the mobile tactile feedback device and outputs a tactile feedback device signal to the signal processing module;
the haptic feedback module receives the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, and controls the haptic feedback device operating lever to render the corresponding touch sense so as to assist in controlling the movement of the haptic device operating lever;
the signal processing module receives the information of the control lever signal acquisition module and processes the information into an air robot speed control instruction; receiving the following auxiliary control speed signal calculated by the control strategy module, and processing the following auxiliary control speed signal into a desired tactile feedback equipment joystick displacement signal to the tactile feedback module; receiving a first visual angle image signal acquired by a visual acquisition module of the slave aerial robot and transmitted by a network communication module, and transmitting the first visual angle image signal to a visual feedback module;
the visual feedback module displays and outputs the first visual angle image signal of the aerial robot, which is transmitted by the signal processing module;
the positioning module acquires and calculates three-dimensional position coordinates and corresponding time information of the leader and the follower;
the motion control module receives positioning information from the positioning mapping module and a final slave-end aerial robot speed control instruction calculated by the control strategy module so as to control the motion of the aerial robot;
the positioning mapping module acquires point cloud map information and positioning information of the slave aerial robot, outputs the point cloud map information and the positioning information to the motion planning module, and transmits the positioning information to the control strategy module through the network communication module;
the motion planning module receives point cloud map information and positioning information of the slave aerial robot positioning and mapping module, and receives positioning information of a leader as a following target; combining a dynamics model of the slave-end aerial robot, processing to obtain a motion track meeting dynamics and obstacle avoidance, and transmitting track information to a control strategy module through a network communication module;
the visual acquisition module acquires a first visual angle image signal from the visual sensor in real time and transmits the first visual angle image signal to the signal processing module through the network communication module;
the control strategy module receives the speed control instruction of the slave-end aerial robot, the positioning information of the slave-end aerial robot and the expected track information from the signal processing module, calculates and outputs the final speed control instruction of the slave-end aerial robot to the motion control module, and outputs a following auxiliary control speed signal to the signal processing module.
The invention provides a follow-up auxiliary control method, which is based on the visual and tactile fusion air robot teleoperation system and specifically comprises the following steps:
the signal processing module matches the space displacement amount of the tail end point of the control rod of the tactile feedback device and the rotation amount of the tail end rotating shaft into a space movement speed instruction expected by the slave end aerial robot and a deflection angular speed instruction thereof;
and calculating the auxiliary resultant force on the haptic feedback device according to the guiding auxiliary force followed by the slave aerial robot and the repulsive auxiliary force of collision avoidance among the aerial robot formation clusters.
Compared with the prior art, the invention has the following technical effects:
1. according to the teleoperation system and the follow-up auxiliary control method of the air robot based on the haptic fusion, the experience of an operator and the self-capacity of the air robot are combined, and the follow-up auxiliary control guidance is provided for the operator by adopting a haptic feedback method, so that the teleoperation system can safely complete complex tasks; compared with the control of the aerial robot in an unstructured environment by means of visual feedback signals, the control method reduces the workload of operators and improves the safety of control.
2. According to the invention, the tactile feedback force is divided into the guiding force for following the unmanned aerial vehicle and the repulsive force for avoiding the obstacle of the dynamic obstacle among multiple machines, so that the track following precision of the aerial robot is improved, and the safety among multiple machines is improved.
3. The invention adopts the teleoperation system structure of the multi-machine formation, can endow basic intellectualization to different aerial robots and configure different equipment suitable for different tasks, and compared with the scheme that all functional modules are configured in one aerial robot, the invention ensures that each robot in the formation has the characteristic of light weight, completes complex tasks through multi-machine cooperation, and improves the flexibility of the system.
4. The following auxiliary control method adopted by the invention fuses the autonomy of the person and the auxiliary prompt of the automatic system, fully utilizes the experience of the person and the advantages of the autonomous system, and improves the capability of the system to adapt to different tasks.
Drawings
FIG. 1 is a schematic diagram of a teleoperation system according to an embodiment of the present invention;
FIG. 2 is an example of a conceptual diagram of a velocity obstacle in a location space in an embodiment of the invention;
fig. 3 is an example of a conceptual diagram of a speed obstacle in a speed space in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in fig. 1, the embodiment provides a teleoperation system of an air robot with visual and tactile fusion, which comprises a master end, a slave end, a wireless network communication module and a control strategy module. The main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module. The slave comprises a leader and a follower, wherein the leader is a person or a leader robot in a remote environment and is provided with a positioning module; the follower is an aerial robot, and comprises a special task module, a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module, wherein the positioning and mapping module can be realized by adopting a positioning and mapping series sensor, and the vision acquisition module can be realized by adopting a vision sensor. The motion planning module, the positioning mapping module, the motion control module and the motion control module are sequentially connected; the positioning module is respectively connected with the network communication module and the motion planning module. Wherein, the liquid crystal display device comprises a liquid crystal display device,
the control lever signal acquisition module is used for acquiring the rotation amounts and corresponding time information of six rotating shafts from the control lever of the mobile tactile feedback device and outputting a tactile feedback device signal to the signal processing module; moving the tactile feedback device joystick is operable by an operator;
the haptic feedback module is used for receiving the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, controlling the haptic feedback device operating lever to render corresponding touch feeling so as to assist an operator to control the haptic device operating lever to move;
the signal processing module is used for receiving the information of the control lever signal acquisition module and processing the information into an aerial robot speed control instruction, wherein the aerial robot speed control instruction comprises an aerial robot space movement speed instruction and an aerial robot deflection angular speed instruction; the control strategy module is used for calculating a suggested follow-up auxiliary control speed signal according to the control strategy, and processing the follow-up auxiliary control speed signal into a desired tactile feedback device joystick displacement signal to the tactile feedback module; the visual feedback module is used for receiving the first visual angle image signals acquired by the visual acquisition module of the slave aerial robot and transmitted by the wireless network communication module, and transmitting the first visual angle image signals of the aerial robot to the visual feedback module at a specified frame rate and format;
the visual feedback module is used for displaying and outputting the first visual angle image signal of the aerial robot transmitted by the signal processing module, and generally can display and output the first visual angle image signal on a display screen;
the positioning module is used for acquiring and calculating three-dimensional position coordinates and corresponding time information of the leader/leader robot and other followers in formation, and the position coordinates calculated by the positioning module of each robot are in the same coordinate system;
the special task module is used for receiving an instruction of the motion control module for running a special function and is responsible for executing a special task;
the motion control module is used for receiving the positioning information from the positioning mapping module and the final speed control instruction of the slave-end aerial robot calculated by the control strategy module, outputting PWM signals after processing to control the motor rotating speed of the power system of the slave-end aerial robot, so as to control the motion of the slave-end aerial robot; the control module is used for sending a control command signal to control the operation of the special task module;
the positioning map building module is used for acquiring point cloud information of a positioning map building series sensor, IMU information of an inertial measurement sensor and other positioning sensor information of the aerial robot, outputting the point cloud map information and the positioning information to the motion planning module, outputting positioning information of the aerial robot in a world coordinate system and giving the positioning information to the control strategy module through the wireless network communication module;
the motion planning module is used for receiving point cloud map information and positioning information of the aerial robot positioning and mapping module, receiving positioning information of a leader/leader robot as a following target, combining a dynamic model of the aerial robot, and obtaining a motion track meeting dynamics and obstacle avoidance through online processing of an aerial robot on-board computer, wherein the track is required to have path point position information; transmitting the track information to a control strategy module through a wireless network communication module;
the visual acquisition module is used for acquiring the first visual angle image signal from the visual sensor in real time and transmitting the first visual angle image signal to the signal processing module through the wireless network communication module;
the wireless network communication module is used for being responsible for signal transmission among the master end, the slave end aerial robots, the leader/leader robots and other intelligent agents;
the control strategy module is used for receiving the air robot speed control instruction from the signal processing module, the positioning information of the positioning modules of other followers, the air robot positioning information and the expected track information, outputting the final air robot speed control instruction to the motion control module and outputting the following auxiliary control speed signal to the signal processing module through the following auxiliary control method calculation of the air robot teleoperation system.
In this embodiment, the positioning module is a UWB positioning tag or motion capture system tag fitted at the leader/robot center; the special task modules are sensors and/or actuators, such as infrared detectors, robotic arms, mounted on the aerial robot.
In this embodiment, the aerial robot includes a flight controller, an onboard computer; the motion control module is arranged in the flight controller, acquires control instructions and positioning information of the top layer from the airborne computer, collects and analyzes motion state information of the robot according to sensors such as a gyroscope, an accelerometer and a barometer on the flight controller, generates PWM signals to control the motor rotating speed of the braking system, and further controls the space pose of the robot. In addition, the motion control module is responsible for carrying out information interaction with the special task module, sending information to the special task module through the onboard computer or the flight controller, and receiving information collected by the special task module.
In this embodiment, the positioning mapping module runs a simultaneous positioning and mapping algorithm, and the positioning mapping series sensor may use a sensor scheme including, but not limited to, the following: multi-line laser radar+IMU module, multi-line laser radar, monocular/binocular camera+IMU, binocular camera+depth camera+IMU, sense tracking camera, etc.; the positioning map building module calculates and obtains a three-dimensional point cloud map, three-dimensional positioning coordinate information of the robot and quaternion attitude information of the robot through a simultaneous positioning and map building algorithm.
In a preferred embodiment, the vision acquisition module is an RGB camera disposed on the aerial robot.
Based on the same inventive concept, the embodiment also provides a following auxiliary control method, which is based on the visual and tactile fusion aerial robot teleoperation system and specifically comprises the following steps:
s1, a signal processing module matches the space displacement of the tail end point of a control rod of the haptic feedback equipment and the rotation quantity of a tail end rotating shaft into a space movement speed instruction expected by a slave end aerial robot and a deflection angular speed instruction thereof, and a displacement-speed matching formula is as follows:
wherein the space movement speed command expected by the aerial robot and the deflection angular speed command thereof Line speed command, ω, representing an airborne robot z,com Representing yaw rate command, +.>Representing the displacement of the end position of a joystick of a haptic feedback device, ψ HD Representing the rotation amount of the rotation shaft at the end of the joystick of the haptic feedback device, coefficient +.> Is a displacement-velocity matching parameter;
s2, calculating to obtain auxiliary resultant force on the haptic feedback device according to the guide auxiliary force followed by the slave aerial robot and the rejection auxiliary force for enquiring and avoiding collision of the aerial robot formation cluster, wherein the formula is as follows:
F=F at +F re
wherein, the liquid crystal display device comprises a liquid crystal display device,guiding assistance force for the aerial robot to follow, < >>Rejection assistance force for inter-cluster collision avoidance for aerial robots, < ->An assist force generated for the haptic feedback device.
Guiding assisting force F followed by aerial robot at Can be further decomposed into:
F at =F ⊥ +F ||
wherein, the liquid crystal display device comprises a liquid crystal display device,for the regression assistance force in the vertical direction of the trajectory calculated towards the motion planning module,a follow-up assisting force for the advancing direction of the track calculated towards the motion planning module;
regression assisting force F ⊥ The calculation formula of (2) is as follows:
K HD,P for the elastic coefficient, K, of the joystick of the haptic feedback device HD,D For the damping coefficient of the joystick of the haptic feedback device,to define the robot body coordinate system O in the air B,F Space speed of aerial robot desired by lower motion planning module, +.>A speed of movement of the joystick tip for the haptic feedback device; wherein (1)>The calculation formula of (2) is as follows:
represents a coordinate system O taking the initial pose of the aerial robot as the origin of coordinates and the direction F ,/>The calculation formula of the regressive air robot moving speed in the vertical direction of the expected track calculated by the control strategy module is as follows:
K ⊥,P 、K ⊥,I and K ⊥,D Proportional, integral and derivative parameters, x, of a PID controller with path regression, respectively F In a coordinate system O for an aerial robot F Is provided with a coordinate of the position of (c),and calculating the nearest path point to the aerial robot on the expected path for the motion planning module.
The following assisting force F || The aim is to guide an operator to control an aerial robot to follow a leader (i.e. the leader or the leader robot) in the track direction planned by a motion planning module at a safe speed through a tactile feedback device, and the calculation method is as follows:
when the aerial robot is very close to the leader, the following speed formula of the aerial robot in the track direction is as follows:
wherein K is PP 、K PI And K PD For the proportional, integral and derivative parameters of the PID controller, D rel D is the current distance between the aerial robot and the leader default A relative distance when the aerial robot is expected to be stationary with the leader;
when the aerial robot is far from the leader, the following acceleration formula of the aerial robot in the track direction is as follows:
wherein K is VP 、K VI And K VD Proportional, integral and derivative parameters, v, of a PID controller followed by a path, respectively f,tra For the speed of the aerial robot in the direction of the trajectory,in inertial coordinate system O for leader F The following movement speed, and further, the speed formula can be obtained as follows:
thus, the speed is derived from the distance of the aerial robot from the leaderCan define an aerial robot body coordinate system O in the track direction B,F Velocity vector +.>Its mould size is +.>
Thus, the follow-up assist force F || The calculation formula of (2) is as follows:
the repulsive assisting force F followed by the aerial robot re The calculation method comprises the following steps:
F re the core of the calculation method is that a speed barrier (VO) is built for each intelligent body except the current aerial robot, and finally the speed barrier space of the current aerial robot is the superposition of each speed barrier VO;
for each speed obstacle VO, the current airborne robot is reduced to a particle with radius 0, and the remaining agents, including the leader, expand into a sphere with radius r Le The calculation method comprises the following steps:
r Le =r L +r e
r L r is the sum of the radius of the smallest sphere surrounding the aerial robot and the radius of the smallest sphere surrounding the corresponding agent e Is an extended safety error radius.
As shown in fig. 2, taking the formation of one aerial robot and one leader as an example, they are given the numbers F and L respectively,indicating that the aerial robot is scaled down to mass points, +.>Indicates that the leader swells into balls, +.>Indicating the speed of the aerial robot,representing the leaderCollision cone VO 'of velocity, velocity obstacle VO' F|L Can be expressed as:
as shown in FIG. 3, a further optimization of the velocity obstacle VO is in the collision cone VO' F|L On the basis of (a) a feasible region is set so that the control strategy only pays attention to a future period T h Collision in the viable region VO H The method comprises the following steps:
wherein d m Particle shrinking for airborne robotsBall to leader expansion->Is the closest to the surface of the substrate. The optimized VO space area of the speed barrier is as follows:
definition of the rejection assistance force F re Is of the direction of (2)Quick waste of aerial robot>Vector end point is perpendicular to speed obstacle space region +.>The direction of the surface, the repulsive assist force Fre is calculated by:
wherein F is set Is a suitable force module length value which is adjusted manually.
In the present embodiment, the speed of the aerial robot can be intuitively seen in the position space by fig. 2Leader speed +.>And judging at which position of the speed barrier VO the speed after superposition is, and judging whether the speed after superposition is in the speed barrier VO, so as to judge whether the aerial robot and the leader collide in the future. In the speed space according to FIG. 3, the speed obstacle VO can be set to the leader speed +.>Direction and size movement, can judge the speed of the aerial robot +.>Whether the pointing position is in the speed obstacle VO or not is judged, and whether the aerial robot collides with the leader or not in the future is judged.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.
Claims (9)
1. An air robot teleoperation system for visual and tactile fusion, comprising:
the main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module;
a control strategy module;
a network communication module;
the slave end comprises a leader and a follower, wherein the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module; the positioning module is respectively connected with the network communication module and the motion planning module;
the control lever signal acquisition module acquires the rotation amount and corresponding time information of a control lever rotating shaft of the mobile tactile feedback device and outputs a tactile feedback device signal to the signal processing module;
the haptic feedback module receives the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, and controls the haptic feedback device operating lever to render the corresponding touch sense so as to assist in controlling the movement of the haptic device operating lever;
the signal processing module receives the information of the control lever signal acquisition module and processes the information into an air robot speed control instruction; receiving the following auxiliary control speed signal calculated by the control strategy module, and processing the following auxiliary control speed signal into a desired tactile feedback equipment joystick displacement signal to the tactile feedback module; receiving a first visual angle image signal acquired by a visual acquisition module of the slave aerial robot and transmitted by a network communication module, and transmitting the first visual angle image signal to a visual feedback module;
the visual feedback module displays and outputs the first visual angle image signal of the aerial robot, which is transmitted by the signal processing module;
the positioning module acquires and calculates three-dimensional position coordinates and corresponding time information of the leader and the follower;
the motion control module receives positioning information from the positioning mapping module and a final slave-end aerial robot speed control instruction calculated by the control strategy module so as to control the motion of the aerial robot;
the positioning mapping module acquires point cloud map information and positioning information of the slave aerial robot, outputs the point cloud map information and the positioning information to the motion planning module, and transmits the positioning information to the control strategy module through the network communication module;
the motion planning module receives point cloud map information and positioning information of the slave aerial robot positioning and mapping module, and receives positioning information of a leader as a following target; combining a dynamics model of the slave-end aerial robot, processing to obtain a motion track meeting dynamics and obstacle avoidance, and transmitting track information to a control strategy module through a network communication module;
the visual acquisition module acquires a first visual angle image signal from the visual sensor in real time and transmits the first visual angle image signal to the signal processing module through the network communication module;
the control strategy module receives the speed control instruction of the slave-end aerial robot, the positioning information of the slave-end aerial robot and the expected track information from the signal processing module, calculates and outputs the final speed control instruction of the slave-end aerial robot to the motion control module, and outputs a following auxiliary control speed signal to the signal processing module.
2. The air robot teleoperation system of claim 1, wherein the positioning module is a UWB positioning tag or a motion capture system tag.
3. The teleoperational system for an aerial robot of claim 1, wherein the aerial robot further comprises a task specific module coupled to the motion control module, the task specific module being a sensor and/or an actuator mounted on the aerial robot.
4. The aerial robot teleoperation system with visual and tactile fusion according to claim 1, wherein the positioning and mapping module calculates and obtains three-dimensional point cloud map information, three-dimensional positioning coordinate information of the robot and quaternion gesture information of the robot through a simultaneous positioning and mapping algorithm.
5. A follow-up auxiliary control method based on the visual haptic fusion aerial robot teleoperation system of any one of claims 1-4; the control method is characterized by comprising the following steps:
the signal processing module matches the space displacement amount of the tail end point of the control rod of the tactile feedback device and the rotation amount of the tail end rotating shaft into a space movement speed instruction expected by the slave end aerial robot and a deflection angular speed instruction thereof;
and calculating the auxiliary resultant force on the haptic feedback device according to the guiding auxiliary force followed by the slave aerial robot and the repulsive auxiliary force of collision avoidance among the aerial robot formation clusters.
6. The following assist control method according to claim 5, wherein the matching formula of the spatial movement speed command and the yaw rate command thereof is:
wherein the space movement speed command and deflection angular speed command expected by the slave aerial robot Representing the linear velocity command, ω, of the slave aerial robot z,com Representing yaw rate command, P HD Representing the displacement of the end position of a joystick of a haptic feedback device, ψ HD Representing the amount of rotation of the rotational axis at the end of the joystick of the haptic feedback device, the coefficient α is a displacement-velocity matching parameter.
7. The following assistance control method according to claim 5, wherein the calculation formula of the assistance force on the haptic feedback device is:
F=F at +F re
wherein F is at F for guiding the assisting force to follow from the aerial robot re For the repulsive assistance force of collision avoidance between slave aerial robot formation clusters, F is the assistance force generated by the haptic feedback device.
8. The following assist control method according to claim 7, wherein the guiding assist force F is followed from an aerial robot at The method comprises the following steps of:
F at =F ⊥ +F ||
wherein F is ⊥ For returning the assisting force in the vertical direction of the trajectory calculated by the motion planning module, F || A follow-up assisting force for the advancing direction of the track calculated towards the motion planning module;
regression assisting force F ⊥ The calculation formula of (2) is as follows:
K HD,P for the elastic coefficient, K, of the joystick of the haptic feedback device HD,D Damping coefficient, P, for a joystick of a haptic feedback device HD Representing the displacement of the position of the end of the joystick of the haptic feedback device,to define the body coordinate system O of the robot in the slave end air B,F Space velocity, v, of the slave aerial robot expected by the lower motion planning module HD A speed of movement of the joystick tip for the haptic feedback device;
the following assisting force F || The calculation formula of (2) is as follows:
9. The following assist control method according to claim 7, wherein the rejection assist force F following from the aerial robot re The calculation method comprises the following steps:
constructing a speed barrier for each intelligent body except the current aerial robot, wherein the speed barrier space of the current aerial robot is the superposition of each speed barrier VO;
for each speed obstacle VO, the current airborne robot is reduced to a particle with radius 0, and the remaining agents, including the leader, expand into a sphere;
is provided withIndicating that the aerial robot is scaled down to mass points, +.>Indicates that the leader swells into balls, +.>Representing the speed of an aerial robot, +.>Collision cone VO 'representing the velocity of the leader, velocity obstacle VO' F|L Expressed as:
velocity barrier VO optimization, in collision cone VO' F|L On the basis of (a) a feasible region is set so that the control strategy only pays attention to a future period T h Collision in the viable region VO H The method comprises the following steps:
wherein d m Particle shrinking for airborne robotsBall to leader expansion->Is the closest to the surface of the substrate; the optimized VO space area of the speed barrier is as follows:
definition of the rejection assistance force F re Is of the direction of (2)For the speed of the aerial robot->Vector end point is perpendicular to speed obstacle space region +.>Direction of surface, repulsive assist force F re The calculation method of (1) is as follows:
wherein F is set Is a suitable force module length value which is adjusted manually.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211437290.7A CN116149313A (en) | 2022-11-17 | 2022-11-17 | Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211437290.7A CN116149313A (en) | 2022-11-17 | 2022-11-17 | Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116149313A true CN116149313A (en) | 2023-05-23 |
Family
ID=86357187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211437290.7A Pending CN116149313A (en) | 2022-11-17 | 2022-11-17 | Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116149313A (en) |
-
2022
- 2022-11-17 CN CN202211437290.7A patent/CN116149313A/en active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8447440B2 (en) | Autonomous behaviors for a remote vehicle | |
CN109388150B (en) | Multi-sensor environment mapping | |
EP3398022B1 (en) | Systems and methods for adjusting uav trajectory | |
US8214098B2 (en) | System and method for controlling swarm of remote unmanned vehicles through human gestures | |
Kim et al. | Accurate modeling and robust hovering control for a quad-rotor VTOL aircraft | |
CN104950885A (en) | UAV (unmanned aerial vehicle) fleet bilateral remote control system and method thereof based on vision and force sense feedback | |
WO2022252221A1 (en) | Mobile robot queue system, path planning method and following method | |
Sathiyanarayanan et al. | Gesture controlled robot for military purpose | |
CN110825076A (en) | Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback | |
Xiao et al. | Visual servoing for teleoperation using a tethered uav | |
CN112445232A (en) | Portable somatosensory control autonomous inspection robot | |
Sato et al. | A simple autonomous flight control of multicopter using only web camera | |
JP5969903B2 (en) | Control method of unmanned moving object | |
Hou | Haptic teleoperation of a multirotor aerial robot using path planning with human intention estimation | |
Kim et al. | Single 2D lidar based follow-me of mobile robot on hilly terrains | |
CN116149313A (en) | Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method | |
CN116100565A (en) | Immersive real-time remote operation platform based on exoskeleton robot | |
CN116774691A (en) | Controlled area management system and method, mobile management system, and non-transitory storage medium | |
JP6949417B1 (en) | Vehicle maneuvering system and vehicle maneuvering method | |
Horan et al. | Bilateral haptic teleoperation of an articulated track mobile robot | |
EP2147386B1 (en) | Autonomous behaviors for a remote vehicle | |
Nemec et al. | Safety Aspects of the Wheeled Mobile Robot | |
CN113781676B (en) | Security inspection system based on quadruped robot and unmanned aerial vehicle | |
EP4024155B1 (en) | Method, system and computer program product of control of unmanned aerial vehicles | |
WO2021140916A1 (en) | Moving body, information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |