CN113721639B - Path planning method and control method for docking of small underwater robot - Google Patents


Info

Publication number: CN113721639B (granted); application publication CN113721639A
Application number: CN202111011001.2A
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: robot, coordinate system, path, docking station, docking
Inventors: 邢会明, 叶秀芬, 刘畅, 刘文智, 李海波, 王璘
Assignee (original and current): Harbin Engineering University

Classifications

    • G05D1/0692 — Control of position, course, altitude or attitude of vehicles; rate of change of altitude or depth specially adapted for under-water vehicles
    • Y02P90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

A path planning method and a control method for docking a small underwater robot relate to the technical field of underwater robot docking control and address the problem that existing robot docking methods have low docking efficiency because the robot's underwater path is not planned effectively. The invention establishes a virtual cylinder with the robot as reference, such that both the robot's initial position and the three-dimensional path end point lie on the cylinder surface; a trajectory is generated on that surface so that the robot reaches the path end point along the trajectory and then continues into the docking station along the centerline. Because the three-dimensional path is generated from the relative position and orientation of the robot and the docking station, the docking success rate is improved. The three-dimensional path is further decomposed into a horizontal path and a vertical path, and the robot is controlled to travel along the generated path using the forces and torques calculated in the horizontal and vertical directions. The invention can be applied to path planning and control of an underwater robot before docking.

Description

Path planning method and control method for docking of small underwater robot
Technical Field
The invention relates to the technical field of underwater docking control for robots, and in particular to a path planning method and a control method for docking a small underwater robot.
Background
As ocean development activities become more frequent and go deeper, so does the demand for ocean exploration technology and equipment. Conventional underwater vehicles such as AUVs and UUVs often face problems of underwater charging and recovery because of their limited power supply. Autonomous docking of underwater robots is therefore of great importance. Current underwater robot recovery comprises two stages: in the first stage the robot is far from the docking station and underwater acoustic positioning is typically used; when the robot moves close to the docking station, the second stage begins, in which docking is typically performed using vision and similar methods. In the second stage, current approaches position the robot in front of the docking station; the robot adjusts its pose and position so that its head is aligned with the docking station entrance, and then proceeds to dock. This docking process is complex, requires multiple adjustments by the robot, is time-consuming, and is inefficient.
Thus, existing underwater robot docking has the following problems. 1) Docking relies on underwater acoustic, electromagnetic, or visual positioning. Acoustic signals are easily disturbed by reflecting surfaces, reducing positioning accuracy; electromagnetic signals attenuate rapidly in water, limiting their application; and visual positioning suffers from low visibility in turbid underwater environments. To increase the visual positioning distance, high-brightness blue-green lamps (blue-green light attenuates little in water) are often used as signal lights to guide the underwater robot to dock. However, when the robot approaches the signal lights, the light spots become too large and the visual positioning result jumps strongly; as the distance decreases further, the lights may leave the robot's field of view entirely, making positioning impossible. 2) When docking under-actuated vehicles such as AUVs and UUVs, the vehicle is first positioned in front of the docking station, adjusts its attitude, and then keeps moving toward the entrance. When the vehicle is not directly in front of the docking station, however, an angle easily arises between the docking station centerline and the vehicle's direction of motion, causing docking to fail.
Disclosure of Invention
In view of these problems, the invention provides a path planning method and a control method for docking a small underwater robot, to solve the problem that existing robot docking methods have low docking efficiency because the robot's underwater path is not planned effectively.
According to an aspect of the invention, a path planning method for docking a small underwater robot is proposed, the method comprising the steps of:
step one, establishing a docking station coordinate system, and obtaining initial position coordinates of the robot under the docking station coordinate system based on a vision measurement method;
step two, setting a point on the centerline directly in front of the docking station as the three-dimensional path end point, and obtaining the position coordinates of the end point in the docking station coordinate system;
step three, generating a three-dimensional path for the robot's underwater travel using the relative position and orientation of the robot and the docking station; specifically:
step 3-1, establishing a virtual cylinder with the robot as reference, such that both the robot's initial position and the three-dimensional path end point lie on the cylinder surface; the cylinder coordinate system O_C-X_C Y_C Z_C corresponding to the virtual cylinder is established as follows: the center O_C of the virtual cylinder lies on the perpendicular bisector of the line connecting the robot and the three-dimensional path end point; the positive Y_C axis points vertically upward; the positive X_C axis points from the center O_C to the three-dimensional path end point; and the Z_C axis is established according to the right-hand rule;
step 3-2, converting the initial position coordinates of the robot and the position coordinates of the three-dimensional path end point from the docking station coordinate system into the cylindrical coordinate system;
step 3-3, calculating the radius of the virtual cylinder from the position coordinates of the robot's initial position and of the three-dimensional path end point in the cylindrical coordinate system;
step 3-4, calculating the three-dimensional path from the position coordinates of the robot's initial position in the cylindrical coordinate system, the robot's initial course angle, and the radius of the virtual cylinder.
Further, in step 3-2, the coordinates of the robot's initial position and of the three-dimensional path end point in the cylindrical coordinate system are obtained by:

^C P_R = R ^D P_R + T

wherein ^D P_R = (x_d, y_d, z_d) denotes the position coordinates of the robot in the docking station coordinate system, and ^C P_R = (x_c, y_c, z_c) denotes the position coordinates of the robot in the cylindrical coordinate system; R and T are the rotation matrix and translation vector between the docking station coordinate system and the cylindrical coordinate system, with R = I_{3×3} and T = [r, 0, l]^T; r denotes the radius of the virtual cylinder; l denotes the horizontal distance of the three-dimensional path end point from the docking station.
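Since R = I_{3×3}, the transform above reduces to a pure translation by T = [r, 0, l]^T. A minimal sketch (the function name and example values are illustrative, not from the patent; it assumes the end point lies at (0, 0, −l) in the docking station frame, so that it maps onto the X_C axis at radius r):

```python
# Sketch of the docking-station -> cylinder frame conversion described above.
# With R = I (3x3 identity), C_P_R = D_P_R + T, where T = [r, 0, l]^T.

def dock_to_cylinder(p_dock, r, l):
    """Convert a point from docking-station coordinates to cylinder coordinates."""
    x_d, y_d, z_d = p_dock
    return (x_d + r, y_d, z_d + l)

# The three-dimensional path end point on the front centerline, assumed at
# (0, 0, -l) in the docking-station frame, maps to (r, 0, 0) in the cylinder
# frame, i.e. onto the X_C axis at radius r, as the construction requires.
print(dock_to_cylinder((0.0, 0.0, -2.0), r=1.5, l=2.0))  # -> (1.5, 0.0, 0.0)
```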
Further, the radius r of the virtual cylinder calculated in step 3-3 is (the original formula image is not reproduced; the expression below is reconstructed from the surrounding definitions, using the facts that both points lie on the cylinder surface and that the end point lies on the X_C axis at (x_c^e, z_c^e) = (r, 0)):

r = [(x_c^s − x_c^e)² + (z_c^s − z_c^e)²] / [2(x_c^e − x_c^s)]

wherein (x_c^s, z_c^s) denote the X-axis and Z-axis coordinates of the robot's initial position in the cylindrical coordinate system, and (x_c^e, z_c^e) denote the X-axis and Z-axis coordinates of the three-dimensional path end point in the cylindrical coordinate system.
Further, the three-dimensional path obtained in step 3-4 is (the original formula image is not reproduced; a parametrization consistent with the construction is):

x_c = r cos α,  y_c = y_c^s · (α / α_0),  z_c = r sin α

wherein (x_c, y_c, z_c) denotes the desired coordinates of the robot's travel in the cylindrical coordinate system; y_c^s denotes the Y-axis coordinate of the robot's initial position in the cylindrical coordinate system; α_0 denotes the robot's initial course angle; and α denotes the central angle corresponding to the line connecting the robot's real-time position and the three-dimensional path end point.
Further, the central angle α corresponding to the line connecting the robot's real-time position and the three-dimensional path end point is smaller than 120°.
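The path construction of steps 3-1 through 3-4 can be sketched numerically. The parametrization below is an assumption consistent with the construction (the patent's formula image is not reproduced in the source): the horizontal projection is the circular arc of radius r, and the depth is interpolated linearly in the central angle α from the start depth down to the end point.

```python
import math

# Hedged sketch of the virtual-cylinder path: alpha runs from a0 (start) to 0
# (end point on the X_C axis), x/z trace the arc, y interpolates the depth.

def cylinder_path(r, y0, a0, steps=10):
    """Sample the path from the start (alpha = a0) to the end point (alpha = 0)."""
    pts = []
    for i in range(steps + 1):
        a = a0 * (1 - i / steps)      # central angle runs a0 -> 0
        x = r * math.cos(a)
        z = r * math.sin(a)
        y = y0 * (a / a0)             # linear depth interpolation
        pts.append((x, y, z))
    return pts

path = cylinder_path(r=1.5, y0=0.5, a0=math.radians(60))
# Every sample lies on the cylinder surface of radius r, and the path
# terminates at the end point (r, 0, 0).
assert all(abs(math.hypot(x, z) - 1.5) < 1e-9 for x, _, z in path)
print(path[-1])  # -> (1.5, 0.0, 0.0)
```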
According to another aspect of the present invention, a control method based on the path planning method is provided, and the control method includes the following steps:
first, a desired three-dimensional path is generated according to the path planning method and projected onto the horizontal and vertical directions;
the path control in the horizontal direction is: using a line-of-sight path tracking control method, the robot's sight angle is calculated from its real-time position and the waypoint coordinates, and the heading deviation is calculated from the sight angle, yielding the force and moment of the robot in the horizontal direction;
the path control in the vertical direction is: an ADRC (active disturbance rejection) controller uses the vertical coordinate obtained by the vision measurement method as feedback to calculate the vertical error, from which the force and moment of the robot in the vertical direction are obtained;
finally, from the forces and torques calculated in the horizontal and vertical directions, real-time dynamic thrust vector allocation generates the thrust and rotation angle of each propeller of the robot, controlling the robot to travel along the desired three-dimensional path and thereby realizing three-dimensional path tracking control.
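The final allocation step above can be illustrated for a single vectored thruster. This is a generic decomposition sketch, not the patent's actual "real-time dynamic thrust vector" algorithm (which is not detailed in the source): the horizontal and vertical force components are combined into one thrust magnitude and one rotation angle.

```python
import math

# Illustrative-only sketch: combine a horizontal and a vertical force demand
# into (thrust magnitude, rotation angle) for one rotatable propeller.

def vector_thrust(f_horizontal, f_vertical):
    """Return (thrust magnitude, rotation angle in rad); 0 rad = purely horizontal."""
    thrust = math.hypot(f_horizontal, f_vertical)
    angle = math.atan2(f_vertical, f_horizontal)
    return thrust, angle

t, a = vector_thrust(3.0, 4.0)
print(t, math.degrees(a))  # -> 5.0 53.13010235415598
```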
Further, the robot is positioned in segments using a signal light and an ArUco code, which are coplanar with the front door of the docking station, to obtain the robot's real-time position. The positioning process is: within a fixed distance of the robot's underwater travel, the camera on the robot captures the signal light, the position coordinates of the signal light are obtained, and the position coordinates of the robot relative to the docking station are obtained by the vision measurement method; when the camera on the robot can no longer capture the signal light, it captures the ArUco code instead, the position coordinates of the ArUco code are obtained, and the position coordinates of the robot relative to the docking station are again obtained by the vision measurement method.
Further, the vision measurement method is a PnP (perspective-n-point) algorithm.
Further, in the line-of-sight path tracking control method, the robot's sight angle is given by (the original formula image is not reproduced; the standard line-of-sight form is):

ψ_los = arctan[(y_{k+1} − y_c) / (x_{k+1} − x_c)]

wherein (x_{k+1}, y_{k+1}) denotes the next desired waypoint of the robot in the cylindrical coordinate system, and (x_c, y_c) denotes the real-time position coordinates of the robot in the cylindrical coordinate system;
the robot's desired horizontal heading angle α_φ is given by (likewise a standard form, reconstructed):

α_φ = α_k − arctan(e / L)

wherein e denotes the lateral tracking error; α_k is the angle between the desired track and the X axis of the world coordinate system; L denotes a multiple of the robot length;
the heading deviation is:

ψ_e = α_φ − ψ

wherein ψ is the actual heading angle.
the beneficial technical effects of the invention are as follows:
in order to ensure that an underwater robot enters along the central line of a docking station, the invention provides a three-dimensional path generating method based on the surface of a virtual cylinder, wherein a virtual cylinder is established by taking the robot as a reference, the initial position and the three-dimensional path end point of the robot are both on the surface of the virtual cylinder, and a track is generated based on the virtual cylindrical surface of the virtual cylinder, so that the robot reaches the three-dimensional path end point along the track, and then enters the docking station along the central line continuously; according to the invention, the relative positions and directions of the robot and the docking station are utilized to generate the three-dimensional path, so that the docking success rate of the robot is improved; further, dividing the three-dimensional path into a horizontal path and a vertical path, taking visual measurement as feedback, adopting horizontal path tracking control based on a sight line method and depth path control based on ADRC, calculating according to the force and torque calculated in the horizontal direction and the vertical direction, adopting real-time dynamic thrust vector to calculate, generating thrust of each propeller and the rotation angle of the propeller, and controlling the robot to walk according to the generated three-dimensional path so as to ensure that the robot is successfully docked; the robot position is positioned in real time through the signal lamp and the sectional positioning of the two-dimensional code, so that the accuracy of estimating the position and the gesture of the aircraft in the docking process is ensured.
Drawings
The invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, which further illustrate the preferred embodiments of the invention and, together with the detailed description below, explain the principles and advantages of the invention.
FIG. 1 is a schematic view of a docking station and robot of the present invention;
FIG. 2 is a schematic illustration of a segmented positioning based on signal lights and ArUco codes in accordance with the present invention;
FIG. 3 is a schematic diagram of the conversion relationship between the docking station coordinate system and the camera coordinate system in the present invention;
FIG. 4 is a schematic view of a robotic underwater docking process in accordance with the present invention;
FIG. 5 is a schematic representation of virtual cylinder based three-dimensional path generation in accordance with the present invention;
FIG. 6 is a schematic view of a three-dimensional path projected in a horizontal plane in accordance with the present invention;
FIG. 7 is a block diagram of a robotic underwater docking control of the present invention;
FIG. 8 is a schematic diagram of a line-of-sight based path tracking control in accordance with the present invention;
FIG. 9 is a view of a clockwise docking scenario in the present invention;
FIG. 10 is a video screenshot of a robot docking clockwise in the present invention;
FIG. 11 is a graph of the results of the clockwise docking experiment of the present invention, wherein panel (a) shows the three-dimensional trajectory data; panel (b) shows the robot trajectory in the XY plane; panel (c) shows the yaw data; and panel (d) shows the vertical-direction data.
Detailed Description
In order that those skilled in the art will better understand the present invention, exemplary embodiments or examples of the present invention will be described below with reference to the accompanying drawings. It is apparent that the described embodiments or examples are only implementations or examples of a part of the invention, not all. All other embodiments or examples, which may be made by one of ordinary skill in the art without undue burden, are intended to be within the scope of the present invention based on the embodiments or examples herein.
The invention addresses exploration and sampling tasks in complex underwater environments, such as inspection and maintenance inside submarine pipelines, monitoring of organisms in coral reefs, and collection of minerals in seabed rock joints. Conventional underwater vehicles such as AUVs carry limited battery power; when a task is unfinished and the robot's charge is insufficient, the robot must be charged underwater in time, and when the task is complete, the robot must be recovered underwater in time. Underwater charging and recovery are usually carried out with a docking station: the robot completes the docking operation by identifying and locating the docking station. During docking, unreasonable path planning easily leads to collision with the docking station and damage to the robot. The invention therefore provides a path planning method and a control method for docking a small underwater robot, improving both docking efficiency and docking safety.
A path planning method for docking a small underwater robot comprising the steps of:
step one, establishing a docking station coordinate system, and obtaining initial position coordinates of the robot under the docking station coordinate system based on a vision measurement method;
setting a point on a central line right in front of the docking station as a three-dimensional path end point, and obtaining a position coordinate of the three-dimensional path end point under a docking station coordinate system;
according to the embodiment of the invention, in order to ensure that the robot successfully enters the docking station, the docking station needs to adopt an LED signal lamp or an ArUco code so as to ensure that the underwater robot acquires the position and the posture of the underwater robot relative to the docking station. The invention adopts a monocular camera-based LED lamp and ArUco code mark positioning method. As shown in FIG. 1, four LED lamps are respectively arranged at the top of a square, the square is centered on the position of ArUco code, and the side length is l 1 And the plane is formed with the front door of the dock base station. As shown in fig. 2, when the robot is far away from the docking station, positioning is performed by using a signal lamp; as the distance of the robot from the docking station decreases, the robot cannot capture the light emitted by the LED lights, further using ArUco code markers to position the robot. The signal lamp and the ArUco code are in the same plane with the front door of the docking station, and in a fixed distance of underwater walking of the robot, the position of the signal lamp is acquired by the robot with a camera, the position coordinate of the signal lamp is obtained, and the position coordinate of the robot relative to the docking station is obtained based on a vision measurement method; when the signal lamp cannot be captured by the camera on the robot, the position of the ArUco code is acquired by the camera on the robot, the position coordinate of the ArUco code is obtained, and the position coordinate of the robot relative to the docking station is obtained based on a vision measurement method.
As shown in FIG. 3, the four LED lights on the docking station are denoted UL, UR, DL, and DR, where U represents Up, D represents Down, L represents Left, and R represents Right. Their positions in the camera coordinate system are P_UL, P_UR, P_DL, and P_DR, respectively. With the center of the square as the origin O_D, the Y_D and Z_D axes are established along the horizontal and vertical directions of the square, respectively, and the X_D axis is established according to the right-hand rule, giving the docking station coordinate system O_D-X_D Y_D Z_D. The relationship between the docking station coordinate system O_D-X_D Y_D Z_D and the camera coordinate system can be described as (the original formula image is not reproduced; the standard rigid-body relation is):

P_c = R_D^C P_d + T_D^C

wherein R_D^C and T_D^C are the rotation matrix and translation vector between the docking station coordinate system and the camera coordinate system; P_c denotes position coordinates in the camera coordinate system; P_d denotes position coordinates in the docking station coordinate system.
Further, the position coordinates and attitude angles of the robot in the docking station coordinate system can be obtained through a PnP algorithm. The position coordinates [x_d y_d z_d]^T are obtained from the above formula. Denoting the element in the i-th row and j-th column of the rotation matrix by r_ij, the attitude angles can be obtained by (only the roll expression survives in the source; the pitch and yaw expressions below are the standard extraction):

φ = arctan(r_32 / r_33)
θ = arctan(−r_31 / sqrt(r_32² + r_33²))
ψ = arctan(r_21 / r_11)

In summary, by the above conventional vision measurement method, the position coordinates [x_d y_d z_d]^T and attitude angles (φ, θ, ψ) of the robot in the docking station coordinate system are first obtained.
Step three, generating a three-dimensional path for the robot's underwater travel using the relative position and orientation of the robot and the docking station;
according to an embodiment of the invention, the first key point has been obtained for the docking operation as described above: estimating the position and attitude of the robot relative to the docking station, another key point for successful docking is explained next: and (5) track tracking control. During the docking process, the docking station has a center line, as shown in fig. 4, perpendicular to the vertical center line of the square, along which the robot preferably enters. While the prior art uses PID control algorithms to generate reference pitch and attitude angles to perform trajectory tracking control, this approach may guide the underwater vehicle AUV to the docking station, when the underwater vehicle AUV enters the docking station entrance, the docking station centerline is at an angle to the direction of movement of the underwater vehicle AUV, thereby making docking unsuccessful. To overcome the above disadvantages, the present invention generates a three-dimensional path using the relative positions and directions of the robot and the docking station, the three-dimensional path being capable of assisting the robot in vertically entering the docking station entrance.
To ensure successful docking, as shown in FIG. 5, a virtual cylinder is built with the robot as reference, based on the coordinates of the underwater robot in the docking station frame; the robot's initial position and the three-dimensional path end point both lie on the cylinder surface. A trajectory T is generated on this surface; the underwater robot follows T to the path end point and then enters the docking station along the centerline. The virtual cylinder is built as follows:
Assume the three-dimensional path end point lies on the front centerline of the docking station, with l denoting the horizontal distance from the end point to the docking station. The coordinates of the end point in the docking station coordinate system are denoted ^D P_E = (x_d^e, y_d^e, z_d^e), and the starting position coordinates of the robot in the docking station coordinate system are denoted ^D P_S = (x_d^s, y_d^s, z_d^s). The cylindrical coordinate system O_C-X_C Y_C Z_C is established as follows: the center O_C of the virtual cylinder lies on the perpendicular bisector of the line connecting the underwater robot and the three-dimensional path end point, with radius r; the positive Y_C axis points vertically upward; the positive X_C axis points from the center O_C to the three-dimensional path end point; and the Z_C axis is established according to the right-hand rule.
The conversion between the cylindrical coordinate system O_C-X_C Y_C Z_C and the docking station coordinate system O_D-X_D Y_D Z_D can be calculated by:

^C P_R = R ^D P_R + T

wherein ^D P_R = (x_d, y_d, z_d) denotes the position coordinates of the underwater robot in the docking station coordinate system, and ^C P_R = (x_c, y_c, z_c) denotes the position coordinates of the underwater robot in the cylindrical coordinate system; R and T are the rotation matrix and translation vector between the docking station coordinate system and the cylindrical coordinate system, with R = I_{3×3} and T = [r, 0, l]^T. Through this conversion, the coordinates of the three-dimensional path end point in the cylindrical coordinate system, denoted ^C P_E = (x_c^e, y_c^e, z_c^e), and the starting position of the robot in the cylindrical coordinate system, denoted ^C P_S = (x_c^s, y_c^s, z_c^s), are obtained. The virtual cylinder radius r is calculated by the following process.
Let α_0 be the initial course angle, equal to the central angle corresponding to the chord between the robot's initial position and the three-dimensional path end point, and let β_0 be the angle between the robot's initial position and the front centerline of the docking station. From the geometry shown in FIG. 6, it is easy to obtain:

α_0 = 2β_0   (2)

From formulas (2) and (3), the initial course angle α_0 is obtained (formula (3) and the resulting formula (4) are images not reproduced in the source). The virtual cylinder radius r is then obtained from formulas (1) and (4); a reconstruction consistent with the construction, with the end point on the X_C axis at (x_c^e, z_c^e) = (r, 0), is:

r = [(x_c^s − x_c^e)² + (z_c^s − z_c^e)²] / [2(x_c^e − x_c^s)]   (5)
in a cylindrical coordinate system, the three-dimensional trajectory T of the underwater robot can be expressed as:
wherein alpha represents a central angle corresponding to the connecting line of the real-time position and the three-dimensional path end point of the underwater robot.
In order to track the trajectory, the robot needs to rotate according to formula (3) until the central angle α is equal to the initial yaw angle α 0 The generated trajectory can be tracked. Since the field angle of the monocular camera is 120 degrees, it is necessary to ensure that the docking station is within the visual range of the monocular camera on the underwater robot to generate the track. When underwater machineThe person cannot capture the LED lights and then needs to rotate on one point until the robot detects four LED lights. Therefore, the precondition for generating the trajectory is that the angle α must be smaller than 120 °.
To track the path trajectory, the robot path tracking is decoupled into horizontal tracking and vertical tracking, corresponding to a line-of-sight based horizontal controller and an ADRC based depth controller, respectively.
With the three-dimensional path generated as above, another embodiment of the invention proposes a path tracking control method, shown in FIG. 7. First, the position and attitude (orientation) of the robot relative to the docking station coordinate system are obtained by the vision measurement method, and a desired three-dimensional path is generated according to the path planning method above. The motion control of the robot is divided into horizontal-plane control and vertical-plane control. The desired three-dimensional path is projected onto the horizontal plane to form an arc path; to track this arc, line-of-sight path tracking control is adopted: the robot's sight angle is calculated from its current position and the waypoint coordinates and used as the input of the line-of-sight tracking algorithm, which computes the force and moment of the robot in the horizontal plane. In the vertical direction, the rotation of the propellers makes the robot's control model highly nonlinear, so an ADRC controller with strong disturbance rejection is adopted: the vertical coordinate from visual measurement is used as feedback, and the vertical error is calculated and fed to the depth controller, which outputs the vertical force. Finally, real-time dynamic thrust vector allocation converts the calculated horizontal and vertical forces and torques into the thrust and rotation angle of each propeller, controlling the robot and realizing three-dimensional path tracking.
The ADRC (active disturbance rejection control) depth controller used in the vertical direction is prior art and is not repeated here; the line-of-sight path tracking control in the horizontal direction is described in detail below.
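For context, a minimal first-order linear ADRC depth loop can be sketched as follows; the bandwidth-parameterized observer gains, the first-order depth model, and all numeric values are illustrative assumptions, not the patent's controller:

```python
class ADRCDepth:
    """Minimal first-order linear ADRC depth controller (a sketch; gains
    and the depth model z' = f + b0*u are assumptions for illustration)."""

    def __init__(self, b0=1.0, wo=10.0, kp=4.0, dt=0.01):
        self.b0, self.kp, self.dt = b0, kp, dt
        self.beta1, self.beta2 = 2 * wo, wo * wo  # observer gains from bandwidth wo
        self.z1 = 0.0   # estimated depth
        self.z2 = 0.0   # estimated total (lumped) disturbance
        self.u = 0.0    # last control output

    def update(self, depth_meas, depth_ref):
        e = self.z1 - depth_meas
        # Extended state observer: track depth and the lumped disturbance.
        self.z1 += self.dt * (self.z2 + self.b0 * self.u - self.beta1 * e)
        self.z2 += self.dt * (-self.beta2 * e)
        # Control law: proportional tracking plus disturbance cancellation.
        self.u = (self.kp * (depth_ref - self.z1) - self.z2) / self.b0
        return self.u
```

The extended state observer lumps unmodeled dynamics and external disturbance into z2 and cancels that estimate in the control law, which is what gives ADRC its disturbance-rejection character.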
Assume the current expected waypoint is P_k = (x_k, y_k)^T, the next expected waypoint is P_{k+1} = (x_{k+1}, y_{k+1})^T, and the current position of the underwater robot is P_c = (x_c, y_c)^T; P_k and P_{k+1} can be measured. The line-of-sight angle is given by:

ψ_los = arctan2(y_{k+1} − y_c, x_{k+1} − x_c)
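The line-of-sight angle is a four-quadrant arctangent of the vector from the robot to the next expected waypoint; a minimal sketch:

```python
import math

def line_of_sight_angle(p_next, p_robot):
    """Angle of the line from the robot's current position to the next
    expected waypoint, measured in the horizontal plane; atan2 keeps the
    angle valid in all four quadrants."""
    return math.atan2(p_next[1] - p_robot[1], p_next[0] - p_robot[0])
```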
the path tracking control method based on the line-of-sight method can be used for linear track tracking and curved track tracking, and is realized by using the visual range L. As shown in fig. 8, P' is the current position P of the robot to the expected trajectoryThe visual range L is P' to the next expected waypoint +.>Is a distance of (2); l=nl b (n=2~5),L b Is the robot length. Therefore, the robot expects the horizontal heading angle α φ Given by the formula:
where e is the lateral tracking error, i.e. the distance between P and P′, and α_k is the angle between the expected track and the X axis of the world coordinate system.
ψ is the actual heading angle of the robot, so the heading bias is ψ_e = α_φ − ψ. When ψ_e approaches zero, the robot sails toward the expected waypoint and the trajectory is successfully tracked.
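A sketch of the desired-heading computation and the heading bias; here e is taken as a signed cross-track error (the text describes it as a distance), and the angle-wrapping helper is an implementation detail assumed for illustration:

```python
import math

def desired_heading(alpha_k, e, L):
    """LOS guidance: path tangent angle alpha_k corrected by the lateral
    tracking error e over the visual range L."""
    return alpha_k - math.atan(e / L)

def heading_bias(alpha_phi, psi):
    """psi_e = alpha_phi - psi, wrapped into (-pi, pi] so the controller
    always turns the shorter way."""
    err = alpha_phi - psi
    return math.atan2(math.sin(err), math.cos(err))
```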
While tracking the current expected waypoint, the robot switches to the next expected waypoint upon reaching it, i.e. k = k + 1. For this judgment, a circle is defined with the next expected waypoint P_{k+1} as its center and R_0 as its radius; the condition for deciding that the expected waypoint has been reached is:

(x_{k+1} − x_c)² + (y_{k+1} − y_c)² ≤ R_0²
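The waypoint-switching rule can be sketched as an acceptance-circle test; the list-based waypoint bookkeeping is an assumption for illustration:

```python
def waypoint_reached(p_robot, p_wp, r0):
    """True once the robot enters the circle of radius r0 centred on the
    expected waypoint."""
    dx, dy = p_wp[0] - p_robot[0], p_wp[1] - p_robot[1]
    return dx * dx + dy * dy <= r0 * r0

def update_waypoint_index(k, p_robot, waypoints, r0):
    """Switch to the next expected waypoint (k = k + 1) on arrival;
    hold the last waypoint once the list is exhausted."""
    if k < len(waypoints) - 1 and waypoint_reached(p_robot, waypoints[k], r0):
        k += 1
    return k
```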
further experiments prove the technical effect of the invention.
Docking can be divided into clockwise and counterclockwise docking according to the position of the robot relative to the docking station; the two modes are symmetric, and clockwise docking is taken as an example below. As shown in fig. 9, the docking station is identified by a high-brightness LED lamp and a two-dimensional code. The initial position of the robot is on the right side of the docking station centerline, so the robot performs a clockwise motion to dock; the planned path is shown as the long-dashed line in fig. 9. The robot adopts an "H"-type motion mode; a screenshot of the docking video is shown in fig. 10, and docking is completed after 20 seconds.
Fig. 11 (a) and (b) show the three-dimensional trajectory data in the docking station coordinate system and its projection onto the XY plane of that coordinate system, respectively. In the initial stage, the actual trajectory of the underwater robot shows a relatively large error with respect to the reference trajectory, but after adjustment the planned reference trajectory is tracked. The actual trajectory swings outward relative to the reference trajectory, mainly because the speed regulation system of the robot's propellers has a certain lag: the yaw direction is not adjusted to the desired angle in time in the early stage of tracking, so the yaw angle must be adjusted while advancing. Fig. 11 (c) and (d) show the experimental data for the yaw angle and the vertical direction, respectively.
The trajectory tracking error statistics are shown in Table 1. The median error of the robot along the X_D axis of the docking station coordinate system is not more than 40 cm, the median error along the Y_D axis is not more than 10 cm, the median error along the Z_D axis is not more than 10 cm, and the median yaw-angle error is not more than 5°; the tracking error is thus within one robot length. In terms of both overall effect and experimental data, the robot accomplishes the autonomous docking task and meets the performance indices.
Table 1 clockwise docking experimental data statistics
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments are contemplated within the scope of the invention as described herein. The disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is defined by the appended claims.

Claims (7)

1. A path planning method for docking a small underwater robot, comprising the steps of:
step one, establishing a docking station coordinate system, and obtaining initial position coordinates of the robot under the docking station coordinate system based on a vision measurement method;
step two, setting a point on the centerline directly in front of the docking station as the three-dimensional path end point, and obtaining the position coordinates of the three-dimensional path end point in the docking station coordinate system;
step three, generating a three-dimensional path for the robot to travel underwater by utilizing the relative position and orientation of the robot and the docking station, which specifically comprises the following steps:
step three (one), establishing a virtual cylinder by taking the robot as a reference, the initial position of the robot and the three-dimensional path end point both lying on the surface of the virtual cylinder, wherein the cylinder coordinate system O_C-X_C Y_C Z_C corresponding to the virtual cylinder is established as follows: the center O_C of the virtual cylinder lies on the perpendicular bisector of the line connecting the robot and the three-dimensional path end point, the positive Y_C axis points vertically upward, the positive X_C axis points from the center O_C to the three-dimensional path end point, and the Z_C axis is established according to the right-hand rule; and converting the initial position coordinates of the robot in the docking station coordinate system and the position coordinates of the three-dimensional path end point in the docking station coordinate system into position coordinates in the cylindrical coordinate system;
step three (two), calculating the radius of the virtual cylinder from the position coordinates of the robot's initial position and of the three-dimensional path end point in the cylindrical coordinate system; the calculated radius r of the virtual cylinder is:

r = ((x_c^0 − x_c^e)² + (z_c^0 − z_c^e)²) / (2 (x_c^e − x_c^0))

wherein (x_c^0, z_c^0) represent the X-axis and Z-axis coordinates of the initial position of the robot in the cylindrical coordinate system, and (x_c^e, z_c^e) represent the X-axis and Z-axis coordinates of the three-dimensional path end point in the cylindrical coordinate system;
step three (three), calculating the three-dimensional path from the position coordinates of the robot's initial position in the cylindrical coordinate system, the initial course angle of the robot, and the radius of the virtual cylinder; the calculated three-dimensional path is:

x_c = r cos α,  y_c = y_c^0 · α / α_0,  z_c = r sin α

wherein (x_c, y_c, z_c) represents the expected coordinates of the robot travelling in the cylindrical coordinate system; y_c^0 represents the Y-axis coordinate of the initial position of the robot in the cylindrical coordinate system; α_0 represents the initial course angle of the robot; and α represents the central angle corresponding to the line connecting the robot's real-time position and the three-dimensional path end point.
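The planning steps recited in claim 1 can be sketched as follows, assuming the path end point sits at central angle zero on the virtual cylinder and the depth coordinate varies linearly with the central angle (both assumptions made for illustration):

```python
import math

def cylinder_radius(p0, pe):
    """Radius of the virtual cylinder from the horizontal (X, Z) coordinates
    of the robot's start p0 and the path end point pe in the cylinder frame."""
    return ((p0[0] - pe[0]) ** 2 + (p0[1] - pe[1]) ** 2) / (2 * (pe[0] - p0[0]))

def helix_path(r, y0, alpha0, n=50):
    """Sample the helical docking path on the cylinder surface: the central
    angle alpha sweeps from alpha0 down to 0 while depth goes from y0 to 0."""
    pts = []
    for i in range(n + 1):
        a = alpha0 * (1 - i / n)
        pts.append((r * math.cos(a), y0 * a / alpha0, r * math.sin(a)))
    return pts
```

With the start placed on the cylinder at central angle π/3, the recovered radius matches the construction and the sampled path ends exactly at the end point (r, 0, 0).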
2. The path planning method for docking a small underwater robot according to claim 1, wherein in step three (one) the position coordinates of the robot's initial position and of the three-dimensional path end point in the cylindrical coordinate system are calculated using the following formula:

^C P_R = R · ^D P_R + T

wherein ^D P_R = (x_d, y_d, z_d) represents the position coordinates of the robot in the docking station coordinate system, and ^C P_R = (x_c, y_c, z_c) represents the position coordinates of the robot in the cylindrical coordinate system; R and T are respectively the rotation matrix and translation vector between the docking station coordinate system and the cylindrical coordinate system, R = I_{3×3}, T = [r, 0, l]^T, where r represents the radius of the virtual cylinder and l represents the horizontal distance of the three-dimensional path end point from the docking station.
3. The path planning method for docking a small underwater robot according to claim 1, wherein the central angle α corresponding to the line connecting the robot's real-time position and the three-dimensional path end point is smaller than 120°.
4. A control method based on the path planning method according to any one of claims 1-3, characterized by comprising the steps of:
firstly, generating an expected three-dimensional path according to the path planning method, and projecting the expected three-dimensional path in the horizontal direction and the vertical direction;
the path control in the horizontal direction is: calculating a robot sight angle according to the real-time position and the path point coordinates of the robot by using a path tracking control method based on a sight line method, and calculating course deviation according to the sight angle so as to obtain the force and moment of the robot in the horizontal direction;
the path control in the vertical direction is: calculating a vertical error by using a vertical coordinate obtained based on a visual measurement method as feedback by using an ADRC active disturbance rejection controller, and calculating and obtaining the force and moment of the robot in the vertical direction according to the vertical error;
and finally, calculating by adopting a real-time dynamic thrust vector according to the forces and the torques obtained by calculation in the horizontal direction and the vertical direction, and generating the thrust and the rotation angle of each propeller of the robot so as to control the robot to walk according to the expected three-dimensional path, thereby realizing the three-dimensional path tracking control of the robot.
5. The control method according to claim 4, wherein the robot is positioned in a segmented manner using a signal lamp and an ArUco code to obtain the real-time position of the robot, the signal lamp and the ArUco code lying in the same plane as the docking station portal; the positioning process comprises: within a fixed distance of the robot's underwater travel, capturing the signal lamp with the camera on the robot and obtaining its position coordinates, thereby obtaining the position coordinates of the robot relative to the docking station based on the vision measurement method; and when the camera on the robot cannot capture the signal lamp, capturing the ArUco code with the camera, obtaining its position coordinates, and obtaining the position coordinates of the robot relative to the docking station based on the vision measurement method.
6. The control method according to claim 5, wherein the vision measurement method is a PnP (perspective-n-point) algorithm.
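A PnP solver yields the rotation R and translation t of the marker (signal lamp pattern or ArUco code) in the camera frame; the camera's, and hence the robot's, position in the marker frame follows by inverting that transform, p = -Rᵀt. A self-contained sketch with plain-list matrix helpers (the frame convention and helpers are illustrative assumptions):

```python
def transpose(R):
    """Transpose of a 3x3 rotation matrix given as nested lists."""
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_vec(R, v):
    """3x3 matrix times 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def camera_in_marker_frame(R, t):
    """Invert the marker-in-camera pose returned by PnP: p = -R^T t."""
    return [-x for x in mat_vec(transpose(R), t)]
```

In practice R and t would come from a routine such as OpenCV's solvePnP; a further fixed transform from the marker plane to the docking station frame then gives the robot's docking-station coordinates.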
7. The control method according to claim 6, wherein the robot's line-of-sight angle in the line-of-sight path tracking control method is given by:

ψ_los = arctan2(y_{k+1} − y_c, x_{k+1} − x_c)

wherein P_{k+1} = (x_{k+1}, y_{k+1}) represents the next expected waypoint of the robot in the cylindrical coordinate system, and (x_c, y_c) represents the real-time position coordinates of the robot in the cylindrical coordinate system;
the robot's desired horizontal heading angle α_φ is given by:

α_φ = α_k − arctan(e / L)

wherein e represents the lateral tracking error; α_k is the angle between the expected track and the X axis of the world coordinate system; and L represents the visual range, a multiple of the robot length;
the heading bias is:
ψ e =α φ -ψ。
CN202111011001.2A 2021-08-31 2021-08-31 Path planning method and control method for docking of small underwater robot Active CN113721639B (en)

Publications (2)

Publication Number Publication Date
CN113721639A CN113721639A (en) 2021-11-30
CN113721639B true CN113721639B (en) 2024-03-15







Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant