CN116149313A - Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method - Google Patents

Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method Download PDF

Info

Publication number
CN116149313A
CN116149313A CN202211437290.7A CN202211437290A CN116149313A CN 116149313 A CN116149313 A CN 116149313A CN 202211437290 A CN202211437290 A CN 202211437290A CN 116149313 A CN116149313 A CN 116149313A
Authority
CN
China
Prior art keywords
module
aerial robot
robot
control
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211437290.7A
Other languages
Chinese (zh)
Inventor
陈铭楠
尹选春
文晟
张建桃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202211437290.7A priority Critical patent/CN116149313A/en
Publication of CN116149313A publication Critical patent/CN116149313A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to the field of teleoperation of aerial robots, in particular to a teleoperation system of an aerial robot with visual and tactile fusion and a following auxiliary control method. The system comprises a master end, a slave end, a control strategy module and a network communication module, wherein the master end comprises a joystick signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, the slave end comprises a leader and a follower, the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module. The invention adopts a method of tactile feedback to provide follow-up auxiliary control guidance, so that a teleoperation system can safely complete complex tasks; compared with the control of the aerial robot in an unstructured environment by means of visual feedback signals, the control method reduces the workload of operators and improves the safety of control.

Description

Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method
Technical Field
The invention relates to the field of teleoperation of aerial robots, in particular to a teleoperation system of an aerial robot integrating visual and tactile feedback and a following auxiliary control method.
Background
As the teleoperation technology of aerial robots has matured, it has become increasingly widely used in dangerous, complex unstructured environments, such as high altitudes, fire sites, nuclear radiation areas, where human access is difficult. In addition, multi-agent collaboration techniques may take advantage of the heterogeneity between man-machine or multi-machine in unstructured environments where the collaboration is operated by an operator to complete tasks.
The teleoperation technology is a technology of remote interaction between a person and a robot, and is characterized in that on one hand, control intention of an operator is captured, a control command of the operator is transmitted to a remote robot controller, and on the other hand, state information of the environment and the robot is fed back to the operator through multi-sensor equipment arranged on the robot, so that the operator obtains a basis of control decision.
The existing remote control method of the aerial robot mainly depends on unilateral remote control based on feedback of human eyes or visual sensors to an image, and has larger cognitive load on an operator in a highly unstructured environment, such as an environment with poor sight. Furthermore, multi-agent technology treats each moving subject capable of autonomous or steered in a man-machine formation or multi-machine formation as an agent. In the course of the formation movement, there are often people or robots that act as leader roles to lead the rest of the robots to move. Whether the remaining robots can safely follow the leader is a concern in recent years. However, although the current robot can complete the task of following and avoiding obstacles according to the optimal path by means of target tracking and autonomous navigation technologies, the autonomous method is difficult to adapt to the unexpected and unreachable risks.
Disclosure of Invention
In order to solve the problems in the prior art, the invention utilizes the haptic feedback to provide the following auxiliary control for an operator, combines the first visual angle information of vision and constructs a teleoperation system of the vision-haptic fusion type aerial robot; according to state information, environment information and operator commands of other intelligent agents in formation, the air robot following auxiliary control method based on visual and tactile fusion is provided, teleoperation safety is improved, and workload of operators is reduced.
The invention provides an aerial robot teleoperation system with visual and tactile fusion, which comprises:
the main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module;
a control strategy module;
a network communication module;
the slave end comprises a leader and a follower, wherein the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module; the positioning module is respectively connected with the network communication module and the motion planning module;
the control lever signal acquisition module acquires the rotation amount and corresponding time information of a control lever rotating shaft of the mobile tactile feedback device and outputs a tactile feedback device signal to the signal processing module;
the haptic feedback module receives the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, and controls the haptic feedback device operating lever to render the corresponding touch sense so as to assist in controlling the movement of the haptic device operating lever;
the signal processing module receives the information of the control lever signal acquisition module and processes the information into an air robot speed control instruction; receiving the following auxiliary control speed signal calculated by the control strategy module, and processing the following auxiliary control speed signal into a desired tactile feedback equipment joystick displacement signal to the tactile feedback module; receiving a first visual angle image signal acquired by a visual acquisition module of the slave aerial robot and transmitted by a network communication module, and transmitting the first visual angle image signal to a visual feedback module;
the visual feedback module displays and outputs the first visual angle image signal of the aerial robot, which is transmitted by the signal processing module;
the positioning module acquires and calculates three-dimensional position coordinates and corresponding time information of the leader and the follower;
the motion control module receives positioning information from the positioning mapping module and a final slave-end aerial robot speed control instruction calculated by the control strategy module so as to control the motion of the aerial robot;
the positioning mapping module acquires point cloud map information and positioning information of the slave aerial robot, outputs the point cloud map information and the positioning information to the motion planning module, and transmits the positioning information to the control strategy module through the network communication module;
the motion planning module receives point cloud map information and positioning information of the slave aerial robot positioning and mapping module, and receives positioning information of a leader as a following target; combining a dynamics model of the slave-end aerial robot, processing to obtain a motion track meeting dynamics and obstacle avoidance, and transmitting track information to a control strategy module through a network communication module;
the visual acquisition module acquires a first visual angle image signal from the visual sensor in real time and transmits the first visual angle image signal to the signal processing module through the network communication module;
the control strategy module receives the speed control instruction of the slave-end aerial robot, the positioning information of the slave-end aerial robot and the expected track information from the signal processing module, calculates and outputs the final speed control instruction of the slave-end aerial robot to the motion control module, and outputs a following auxiliary control speed signal to the signal processing module.
The invention provides a follow-up auxiliary control method, which is based on the visual and tactile fusion air robot teleoperation system and specifically comprises the following steps:
the signal processing module matches the space displacement amount of the tail end point of the control rod of the tactile feedback device and the rotation amount of the tail end rotating shaft into a space movement speed instruction expected by the slave end aerial robot and a deflection angular speed instruction thereof;
and calculating the auxiliary resultant force on the haptic feedback device according to the guiding auxiliary force followed by the slave aerial robot and the repulsive auxiliary force of collision avoidance among the aerial robot formation clusters.
Compared with the prior art, the invention has the following technical effects:
1. according to the teleoperation system and the follow-up auxiliary control method of the air robot based on the haptic fusion, the experience of an operator and the self-capacity of the air robot are combined, and the follow-up auxiliary control guidance is provided for the operator by adopting a haptic feedback method, so that the teleoperation system can safely complete complex tasks; compared with the control of the aerial robot in an unstructured environment by means of visual feedback signals, the control method reduces the workload of operators and improves the safety of control.
2. According to the invention, the tactile feedback force is divided into the guiding force for following the unmanned aerial vehicle and the repulsive force for avoiding the obstacle of the dynamic obstacle among multiple machines, so that the track following precision of the aerial robot is improved, and the safety among multiple machines is improved.
3. The invention adopts the teleoperation system structure of the multi-machine formation, can endow basic intellectualization to different aerial robots and configure different equipment suitable for different tasks, and compared with the scheme that all functional modules are configured in one aerial robot, the invention ensures that each robot in the formation has the characteristic of light weight, completes complex tasks through multi-machine cooperation, and improves the flexibility of the system.
4. The following auxiliary control method adopted by the invention fuses the autonomy of the person and the auxiliary prompt of the automatic system, fully utilizes the experience of the person and the advantages of the autonomous system, and improves the capability of the system to adapt to different tasks.
Drawings
FIG. 1 is a schematic diagram of a teleoperation system according to an embodiment of the present invention;
FIG. 2 is an example of a conceptual diagram of a velocity obstacle in a location space in an embodiment of the invention;
fig. 3 is an example of a conceptual diagram of a speed obstacle in a speed space in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
As shown in fig. 1, the embodiment provides a teleoperation system of an air robot with visual and tactile fusion, which comprises a master end, a slave end, a wireless network communication module and a control strategy module. The main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module. The slave comprises a leader and a follower, wherein the leader is a person or a leader robot in a remote environment and is provided with a positioning module; the follower is an aerial robot, and comprises a special task module, a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module, wherein the positioning and mapping module can be realized by adopting a positioning and mapping series sensor, and the vision acquisition module can be realized by adopting a vision sensor. The motion planning module, the positioning mapping module, the motion control module and the motion control module are sequentially connected; the positioning module is respectively connected with the network communication module and the motion planning module. Wherein, the liquid crystal display device comprises a liquid crystal display device,
the control lever signal acquisition module is used for acquiring the rotation amounts and corresponding time information of six rotating shafts from the control lever of the mobile tactile feedback device and outputting a tactile feedback device signal to the signal processing module; moving the tactile feedback device joystick is operable by an operator;
the haptic feedback module is used for receiving the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, controlling the haptic feedback device operating lever to render corresponding touch feeling so as to assist an operator to control the haptic device operating lever to move;
the signal processing module is used for receiving the information of the control lever signal acquisition module and processing the information into an aerial robot speed control instruction, wherein the aerial robot speed control instruction comprises an aerial robot space movement speed instruction and an aerial robot deflection angular speed instruction; the control strategy module is used for calculating a suggested follow-up auxiliary control speed signal according to the control strategy, and processing the follow-up auxiliary control speed signal into a desired tactile feedback device joystick displacement signal to the tactile feedback module; the visual feedback module is used for receiving the first visual angle image signals acquired by the visual acquisition module of the slave aerial robot and transmitted by the wireless network communication module, and transmitting the first visual angle image signals of the aerial robot to the visual feedback module at a specified frame rate and format;
the visual feedback module is used for displaying and outputting the first visual angle image signal of the aerial robot transmitted by the signal processing module, and generally can display and output the first visual angle image signal on a display screen;
the positioning module is used for acquiring and calculating three-dimensional position coordinates and corresponding time information of the leader/leader robot and other followers in formation, and the position coordinates calculated by the positioning module of each robot are in the same coordinate system;
the special task module is used for receiving an instruction of the motion control module for running a special function and is responsible for executing a special task;
the motion control module is used for receiving the positioning information from the positioning mapping module and the final speed control instruction of the slave-end aerial robot calculated by the control strategy module, outputting PWM signals after processing to control the motor rotating speed of the power system of the slave-end aerial robot, so as to control the motion of the slave-end aerial robot; the control module is used for sending a control command signal to control the operation of the special task module;
the positioning map building module is used for acquiring point cloud information of a positioning map building series sensor, IMU information of an inertial measurement sensor and other positioning sensor information of the aerial robot, outputting the point cloud map information and the positioning information to the motion planning module, outputting positioning information of the aerial robot in a world coordinate system and giving the positioning information to the control strategy module through the wireless network communication module;
the motion planning module is used for receiving point cloud map information and positioning information of the aerial robot positioning and mapping module, receiving positioning information of a leader/leader robot as a following target, combining a dynamic model of the aerial robot, and obtaining a motion track meeting dynamics and obstacle avoidance through online processing of an aerial robot on-board computer, wherein the track is required to have path point position information; transmitting the track information to a control strategy module through a wireless network communication module;
the visual acquisition module is used for acquiring the first visual angle image signal from the visual sensor in real time and transmitting the first visual angle image signal to the signal processing module through the wireless network communication module;
the wireless network communication module is used for being responsible for signal transmission among the master end, the slave end aerial robots, the leader/leader robots and other intelligent agents;
the control strategy module is used for receiving the air robot speed control instruction from the signal processing module, the positioning information of the positioning modules of other followers, the air robot positioning information and the expected track information, outputting the final air robot speed control instruction to the motion control module and outputting the following auxiliary control speed signal to the signal processing module through the following auxiliary control method calculation of the air robot teleoperation system.
In this embodiment, the positioning module is a UWB positioning tag or motion capture system tag fitted at the leader/robot center; the special task modules are sensors and/or actuators, such as infrared detectors, robotic arms, mounted on the aerial robot.
In this embodiment, the aerial robot includes a flight controller, an onboard computer; the motion control module is arranged in the flight controller, acquires control instructions and positioning information of the top layer from the airborne computer, collects and analyzes motion state information of the robot according to sensors such as a gyroscope, an accelerometer and a barometer on the flight controller, generates PWM signals to control the motor rotating speed of the braking system, and further controls the space pose of the robot. In addition, the motion control module is responsible for carrying out information interaction with the special task module, sending information to the special task module through the onboard computer or the flight controller, and receiving information collected by the special task module.
In this embodiment, the positioning mapping module runs a simultaneous positioning and mapping algorithm, and the positioning mapping series sensor may use a sensor scheme including, but not limited to, the following: multi-line laser radar+IMU module, multi-line laser radar, monocular/binocular camera+IMU, binocular camera+depth camera+IMU, sense tracking camera, etc.; the positioning map building module calculates and obtains a three-dimensional point cloud map, three-dimensional positioning coordinate information of the robot and quaternion attitude information of the robot through a simultaneous positioning and map building algorithm.
In a preferred embodiment, the vision acquisition module is an RGB camera disposed on the aerial robot.
Based on the same inventive concept, the embodiment also provides a following auxiliary control method, which is based on the visual and tactile fusion aerial robot teleoperation system and specifically comprises the following steps:
s1, a signal processing module matches the space displacement of the tail end point of a control rod of the haptic feedback equipment and the rotation quantity of a tail end rotating shaft into a space movement speed instruction expected by a slave end aerial robot and a deflection angular speed instruction thereof, and a displacement-speed matching formula is as follows:
Figure BDA0003947434180000051
wherein the space movement speed command expected by the aerial robot and the deflection angular speed command thereof
Figure BDA0003947434180000052
Figure BDA0003947434180000053
Line speed command, ω, representing an airborne robot z,com Representing yaw rate command, +.>
Figure BDA0003947434180000054
Representing the displacement of the end position of a joystick of a haptic feedback device, ψ HD Representing the rotation amount of the rotation shaft at the end of the joystick of the haptic feedback device, coefficient +.>
Figure BDA0003947434180000055
Figure BDA0003947434180000056
Is a displacement-velocity matching parameter;
s2, calculating to obtain auxiliary resultant force on the haptic feedback device according to the guide auxiliary force followed by the slave aerial robot and the rejection auxiliary force for enquiring and avoiding collision of the aerial robot formation cluster, wherein the formula is as follows:
F=F at +F re
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0003947434180000057
guiding assistance force for the aerial robot to follow, < >>
Figure BDA0003947434180000058
Rejection assistance force for inter-cluster collision avoidance for aerial robots, < ->
Figure BDA0003947434180000059
An assist force generated for the haptic feedback device.
Guiding assisting force F followed by aerial robot at Can be further decomposed into:
F at =F +F ||
wherein, the liquid crystal display device comprises a liquid crystal display device,
Figure BDA0003947434180000061
for the regression assistance force in the vertical direction of the trajectory calculated towards the motion planning module,
Figure BDA0003947434180000062
a follow-up assisting force for the advancing direction of the track calculated towards the motion planning module;
regression assisting force F The calculation formula of (2) is as follows:
Figure BDA0003947434180000063
K HD,P for the elastic coefficient, K, of the joystick of the haptic feedback device HD,D For the damping coefficient of the joystick of the haptic feedback device,
Figure BDA0003947434180000064
to define the robot body coordinate system O in the air B,F Space speed of aerial robot desired by lower motion planning module, +.>
Figure BDA0003947434180000065
A speed of movement of the joystick tip for the haptic feedback device; wherein (1)>
Figure BDA0003947434180000066
The calculation formula of (2) is as follows:
Figure BDA0003947434180000067
Figure BDA0003947434180000068
represents a coordinate system O taking the initial pose of the aerial robot as the origin of coordinates and the direction F ,/>
Figure BDA0003947434180000069
The calculation formula of the regressive air robot moving speed in the vertical direction of the expected track calculated by the control strategy module is as follows:
Figure BDA00039474341800000610
K ⊥,P 、K ⊥,I and K ⊥,D Proportional, integral and derivative parameters, x, of a PID controller with path regression, respectively F In a coordinate system O for an aerial robot F Is provided with a coordinate of the position of (c),
Figure BDA00039474341800000611
and calculating the nearest path point to the aerial robot on the expected path for the motion planning module.
The following assisting force F || The aim is to guide an operator to control an aerial robot to follow a leader (i.e. the leader or the leader robot) in the track direction planned by a motion planning module at a safe speed through a tactile feedback device, and the calculation method is as follows:
when the aerial robot is very close to the leader, the following speed formula of the aerial robot in the track direction is as follows:
Figure BDA00039474341800000612
wherein K is PP 、K PI And K PD For the proportional, integral and derivative parameters of the PID controller, D rel D is the current distance between the aerial robot and the leader default A relative distance when the aerial robot is expected to be stationary with the leader;
when the aerial robot is far from the leader, the following acceleration formula of the aerial robot in the track direction is as follows:
Figure BDA00039474341800000613
wherein K is VP 、K VI And K VD Proportional, integral and derivative parameters, v, of a PID controller followed by a path, respectively f,tra For the speed of the aerial robot in the direction of the trajectory,
Figure BDA0003947434180000071
in inertial coordinate system O for leader F The following movement speed, and further, the speed formula can be obtained as follows:
Figure BDA0003947434180000072
thus, the speed is derived from the distance of the aerial robot from the leader
Figure BDA0003947434180000073
Can define an aerial robot body coordinate system O in the track direction B,F Velocity vector +.>
Figure BDA0003947434180000074
Its mould size is +.>
Figure BDA0003947434180000075
Thus, the follow-up assist force F || The calculation formula of (2) is as follows:
Figure BDA0003947434180000076
the repulsive assisting force F followed by the aerial robot re The calculation method comprises the following steps:
F re the core of the calculation method is that a speed barrier (VO) is built for each intelligent body except the current aerial robot, and finally the speed barrier space of the current aerial robot is the superposition of each speed barrier VO;
for each speed obstacle VO, the current airborne robot is reduced to a particle with radius 0, and the remaining agents, including the leader, expand into a sphere with radius r Le The calculation method comprises the following steps:
r Le =r L +r e
r L r is the sum of the radius of the smallest sphere surrounding the aerial robot and the radius of the smallest sphere surrounding the corresponding agent e Is an extended safety error radius.
As shown in fig. 2, taking the formation of one aerial robot and one leader as an example, they are given the numbers F and L respectively,
Figure BDA0003947434180000077
indicating that the aerial robot is scaled down to mass points, +.>
Figure BDA0003947434180000078
Indicates that the leader swells into balls, +.>
Figure BDA0003947434180000079
Indicating the speed of the aerial robot,
Figure BDA00039474341800000710
representing the leaderCollision cone VO 'of velocity, velocity obstacle VO' F|L Can be expressed as:
Figure BDA00039474341800000711
Figure BDA00039474341800000712
as shown in FIG. 3, a further optimization of the velocity obstacle VO is in the collision cone VO' F|L On the basis of (a) a feasible region is set so that the control strategy only pays attention to a future period T h Collision in the viable region VO H The method comprises the following steps:
Figure BDA00039474341800000713
wherein d m Particle shrinking for airborne robots
Figure BDA00039474341800000714
Ball to leader expansion->
Figure BDA00039474341800000715
Is the closest to the surface of the substrate. The optimized VO space area of the speed barrier is as follows:
Figure BDA0003947434180000081
definition of the rejection assistance force F re Is of the direction of (2)
Figure BDA0003947434180000082
Quick waste of aerial robot>
Figure BDA0003947434180000083
Vector end point is perpendicular to speed obstacle space region +.>
Figure BDA0003947434180000084
The direction of the surface, the repulsive assist force Fre is calculated by:
Figure BDA0003947434180000085
wherein F is set Is a suitable force module length value which is adjusted manually.
In the present embodiment, the speed of the aerial robot can be intuitively seen in the position space by fig. 2
Figure BDA0003947434180000086
Leader speed +.>
Figure BDA0003947434180000087
And judging at which position of the speed barrier VO the speed after superposition is, and judging whether the speed after superposition is in the speed barrier VO, so as to judge whether the aerial robot and the leader collide in the future. In the speed space according to FIG. 3, the speed obstacle VO can be set to the leader speed +.>
Figure BDA0003947434180000088
Direction and size movement, can judge the speed of the aerial robot +.>
Figure BDA0003947434180000089
Whether the pointing position is in the speed obstacle VO or not is judged, and whether the aerial robot collides with the leader or not in the future is judged.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.

Claims (9)

1. An air robot teleoperation system for visual and tactile fusion, comprising:
the main end comprises a control lever signal acquisition module, a touch feedback module, a visual feedback module and a signal processing module, wherein the signal processing module is respectively connected with the control lever signal acquisition module, the touch feedback module and the visual feedback module;
a control strategy module;
a network communication module;
the slave end comprises a leader and a follower, wherein the leader is provided with a positioning module, and the follower is an aerial robot; the aerial robot comprises a motion control module, a positioning and mapping module, a motion planning module and a vision acquisition module; the positioning module is respectively connected with the network communication module and the motion planning module;
the control lever signal acquisition module acquires the rotation amount and corresponding time information of a control lever rotating shaft of the mobile tactile feedback device and outputs a tactile feedback device signal to the signal processing module;
the haptic feedback module receives the expected haptic feedback device operating lever displacement signal calculated by the signal processing module, and controls the haptic feedback device operating lever to render the corresponding touch sense so as to assist in controlling the movement of the haptic device operating lever;
the signal processing module receives the information of the control lever signal acquisition module and processes the information into an air robot speed control instruction; receiving the following auxiliary control speed signal calculated by the control strategy module, and processing the following auxiliary control speed signal into a desired tactile feedback equipment joystick displacement signal to the tactile feedback module; receiving a first visual angle image signal acquired by a visual acquisition module of the slave aerial robot and transmitted by a network communication module, and transmitting the first visual angle image signal to a visual feedback module;
the visual feedback module displays and outputs the first visual angle image signal of the aerial robot, which is transmitted by the signal processing module;
the positioning module acquires and calculates three-dimensional position coordinates and corresponding time information of the leader and the follower;
the motion control module receives positioning information from the positioning mapping module and a final slave-end aerial robot speed control instruction calculated by the control strategy module so as to control the motion of the aerial robot;
the positioning mapping module acquires point cloud map information and positioning information of the slave aerial robot, outputs the point cloud map information and the positioning information to the motion planning module, and transmits the positioning information to the control strategy module through the network communication module;
the motion planning module receives point cloud map information and positioning information of the slave aerial robot positioning and mapping module, and receives positioning information of a leader as a following target; combining a dynamics model of the slave-end aerial robot, processing to obtain a motion track meeting dynamics and obstacle avoidance, and transmitting track information to a control strategy module through a network communication module;
the visual acquisition module acquires a first visual angle image signal from the visual sensor in real time and transmits the first visual angle image signal to the signal processing module through the network communication module;
the control strategy module receives the speed control instruction of the slave-end aerial robot, the positioning information of the slave-end aerial robot and the expected track information from the signal processing module, calculates and outputs the final speed control instruction of the slave-end aerial robot to the motion control module, and outputs a following auxiliary control speed signal to the signal processing module.
2. The air robot teleoperation system of claim 1, wherein the positioning module is a UWB positioning tag or a motion capture system tag.
3. The teleoperational system for an aerial robot of claim 1, wherein the aerial robot further comprises a task specific module coupled to the motion control module, the task specific module being a sensor and/or an actuator mounted on the aerial robot.
4. The aerial robot teleoperation system with visual and tactile fusion according to claim 1, wherein the positioning and mapping module calculates and obtains three-dimensional point cloud map information, three-dimensional positioning coordinate information of the robot and quaternion gesture information of the robot through a simultaneous positioning and mapping algorithm.
5. A follow-up auxiliary control method based on the visual haptic fusion aerial robot teleoperation system of any one of claims 1-4; the control method is characterized by comprising the following steps:
the signal processing module matches the space displacement amount of the tail end point of the control rod of the tactile feedback device and the rotation amount of the tail end rotating shaft into a space movement speed instruction expected by the slave end aerial robot and a deflection angular speed instruction thereof;
and calculating the auxiliary resultant force on the haptic feedback device according to the guiding auxiliary force followed by the slave aerial robot and the repulsive auxiliary force of collision avoidance among the aerial robot formation clusters.
6. The following assist control method according to claim 5, wherein the matching formula of the spatial movement speed command and the yaw rate command thereof is:
Figure FDA0003947434170000021
wherein the space movement speed command and deflection angular speed command expected by the slave aerial robot
Figure FDA0003947434170000022
Figure FDA0003947434170000023
Figure FDA0003947434170000024
Representing the linear velocity command, ω, of the slave aerial robot z,com Representing yaw rate command, P HD Representing the displacement of the end position of a joystick of a haptic feedback device, ψ HD Representing the amount of rotation of the rotational axis at the end of the joystick of the haptic feedback device, the coefficient α is a displacement-velocity matching parameter.
7. The following assistance control method according to claim 5, wherein the calculation formula of the assistance force on the haptic feedback device is:
F=F at +F re
wherein F is at F for guiding the assisting force to follow from the aerial robot re For the repulsive assistance force of collision avoidance between slave aerial robot formation clusters, F is the assistance force generated by the haptic feedback device.
8. The following assist control method according to claim 7, wherein the guiding assist force F is followed from an aerial robot at The method comprises the following steps of:
F at =F +F ||
wherein F is For returning the assisting force in the vertical direction of the trajectory calculated by the motion planning module, F || A follow-up assisting force for the advancing direction of the track calculated towards the motion planning module;
regression assisting force F The calculation formula of (2) is as follows:
Figure FDA0003947434170000031
K HD,P for the elastic coefficient, K, of the joystick of the haptic feedback device HD,D Damping coefficient, P, for a joystick of a haptic feedback device HD Representing the displacement of the position of the end of the joystick of the haptic feedback device,
Figure FDA0003947434170000032
to define the body coordinate system O of the robot in the slave end air B,F Space velocity, v, of the slave aerial robot expected by the lower motion planning module HD A speed of movement of the joystick tip for the haptic feedback device;
the following assisting force F || The calculation formula of (2) is as follows:
Figure FDA0003947434170000033
Figure FDA0003947434170000034
in order to obtain the coordinate system O of the robot body in the track direction from the end air B,F Is a velocity vector of (a).
9. The following assist control method according to claim 7, wherein the rejection assist force F following from the aerial robot re The calculation method comprises the following steps:
constructing a speed barrier for each intelligent body except the current aerial robot, wherein the speed barrier space of the current aerial robot is the superposition of each speed barrier VO;
for each speed obstacle VO, the current airborne robot is reduced to a particle with radius 0, and the remaining agents, including the leader, expand into a sphere;
is provided with
Figure FDA0003947434170000035
Indicating that the aerial robot is scaled down to mass points, +.>
Figure FDA0003947434170000036
Indicates that the leader swells into balls, +.>
Figure FDA0003947434170000037
Representing the speed of an aerial robot, +.>
Figure FDA0003947434170000038
Collision cone VO 'representing the velocity of the leader, velocity obstacle VO' F|L Expressed as:
Figure FDA0003947434170000039
Figure FDA00039474341700000310
velocity barrier VO optimization, in collision cone VO' F|L On the basis of (a) a feasible region is set so that the control strategy only pays attention to a future period T h Collision in the viable region VO H The method comprises the following steps:
Figure FDA00039474341700000311
wherein d m Particle shrinking for airborne robots
Figure FDA00039474341700000312
Ball to leader expansion->
Figure FDA00039474341700000313
Is the closest to the surface of the substrate; the optimized VO space area of the speed barrier is as follows:
Figure FDA00039474341700000314
definition of the rejection assistance force F re Is of the direction of (2)
Figure FDA0003947434170000041
For the speed of the aerial robot->
Figure FDA0003947434170000042
Vector end point is perpendicular to speed obstacle space region +.>
Figure FDA0003947434170000043
Direction of surface, repulsive assist force F re The calculation method of (1) is as follows:
Figure FDA0003947434170000044
wherein F is set Is a suitable force module length value which is adjusted manually.
CN202211437290.7A 2022-11-17 2022-11-17 Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method Pending CN116149313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211437290.7A CN116149313A (en) 2022-11-17 2022-11-17 Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211437290.7A CN116149313A (en) 2022-11-17 2022-11-17 Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method

Publications (1)

Publication Number Publication Date
CN116149313A true CN116149313A (en) 2023-05-23

Family

ID=86357187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211437290.7A Pending CN116149313A (en) 2022-11-17 2022-11-17 Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method

Country Status (1)

Country Link
CN (1) CN116149313A (en)

Similar Documents

Publication Publication Date Title
US8447440B2 (en) Autonomous behaviors for a remote vehicle
CN109388150B (en) Multi-sensor environment mapping
EP3398022B1 (en) Systems and methods for adjusting uav trajectory
US8214098B2 (en) System and method for controlling swarm of remote unmanned vehicles through human gestures
Kim et al. Accurate modeling and robust hovering control for a quad-rotor VTOL aircraft
CN104950885A (en) UAV (unmanned aerial vehicle) fleet bilateral remote control system and method thereof based on vision and force sense feedback
WO2022252221A1 (en) Mobile robot queue system, path planning method and following method
Sathiyanarayanan et al. Gesture controlled robot for military purpose
CN110825076A (en) Mobile robot formation navigation semi-autonomous control method based on sight line and force feedback
Xiao et al. Visual servoing for teleoperation using a tethered uav
CN112445232A (en) Portable somatosensory control autonomous inspection robot
Sato et al. A simple autonomous flight control of multicopter using only web camera
JP5969903B2 (en) Control method of unmanned moving object
Hou Haptic teleoperation of a multirotor aerial robot using path planning with human intention estimation
Kim et al. Single 2D lidar based follow-me of mobile robot on hilly terrains
CN116149313A (en) Visual and tactile integrated aerial robot teleoperation system and following auxiliary control method
CN116100565A (en) Immersive real-time remote operation platform based on exoskeleton robot
CN116774691A (en) Controlled area management system and method, mobile management system, and non-transitory storage medium
JP6949417B1 (en) Vehicle maneuvering system and vehicle maneuvering method
Horan et al. Bilateral haptic teleoperation of an articulated track mobile robot
EP2147386B1 (en) Autonomous behaviors for a remote vehicle
Nemec et al. Safety Aspects of the Wheeled Mobile Robot
CN113781676B (en) Security inspection system based on quadruped robot and unmanned aerial vehicle
EP4024155B1 (en) Method, system and computer program product of control of unmanned aerial vehicles
WO2021140916A1 (en) Moving body, information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination