CN117369474A - Visual guidance docking recovery method and system for unmanned surface vehicle

Visual guidance docking recovery method and system for unmanned surface vehicle

Info

Publication number
CN117369474A
Authority
CN
China
Prior art keywords
unmanned ship
reference mark
recovery device
camera
pose
Prior art date
Legal status
Pending
Application number
CN202311490179.9A
Other languages
Chinese (zh)
Inventor
向先波
肖咏昕
向巩
黄骁
陶浩
杨少龙
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202311490179.9A
Publication of CN117369474A
Legal status: Pending

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a visual guidance docking and recovery method and system for an unmanned surface vehicle, comprising the following steps: performing preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device; taking the first fiducial marker as the target, performing target detection and localization on video frames acquired by the camera to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range; and taking a second fiducial marker at the closed end of the recovery device as the target, performing target detection and localization on the video frames acquired by the camera to visually guide the unmanned ship until it docks with the recovery device. The second fiducial marker uses a different encoding from the first. The invention improves the terminal accuracy of autonomous docking and the overall recovery success rate of the unmanned ship.

Description

Visual guidance docking recovery method and system for unmanned surface vehicle
Technical Field
The invention belongs to the technical field of autonomous recovery of unmanned surface vehicles, and particularly relates to a visual guidance docking and recovery method and system for an unmanned surface vehicle.
Background
An unmanned surface vehicle (USV) is a small surface platform capable of autonomous control and autonomous navigation based on onboard sensing of its surroundings; its main operating modes include remote control, execution of pre-programmed missions, and fully autonomous operation. USVs can be deployed in many fields to perform tasks such as offshore monitoring, maritime target detection, seabed exploration, and search and rescue. By taking over time-consuming, labor-intensive, or dangerous tasks, they improve working efficiency and protect personnel safety.
Recovery of a USV means that, after the vehicle completes its task on the water, its position is acquired in some way and, guided by the obtained signals, it slowly drives into a recovery cradle, which then lifts it out of the water. At present, navigation and control during recovery are still largely manual remote-control operations, with high support requirements and a low level of autonomy. A few automatic recovery techniques exist that localize and navigate by GPS or inertial navigation, which improves autonomy to some extent; however, GPS and inertial-navigation errors introduce deviations into the controlled route. This is especially apparent during final docking: the recovery cage has a limited opening width, so a large terminal docking error greatly reduces the recovery success rate.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a visual guidance docking and recovery method and system for an unmanned surface vehicle, so as to solve the low terminal docking precision and low recovery success rate of existing automatic USV recovery techniques.
In order to achieve the above object, in a first aspect, the invention provides a visual guidance docking and recovery method for an unmanned surface vehicle, comprising the following steps, for which a control-loop sketch follows the list:
step S110, performing preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
step S120, taking the first fiducial marker as the target, performing target detection and localization on video frames acquired by the camera so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
step S130, taking a second fiducial marker at the closed end of the recovery device as the target, performing target detection and localization on the video frames acquired by the camera so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
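For orientation, a minimal control-loop sketch of the three steps above is given below. It is illustrative only: the helper calls (gps_heading_step, marker_visible, bow_aligned, marker_distance, visual_guidance_step, docked) are assumed placeholder interfaces rather than elements of the invention, and the marker IDs and the 5 m preset-range value are taken from the embodiment described later.

    PRESET_RANGE_M = 5.0  # preset range boundary used in the embodiment

    def dock(boat, camera, detector):
        # Step S110: preliminary heading guidance until the first marker is seen.
        while not detector.marker_visible(camera.read(), marker_id=0):
            boat.gps_heading_step()
        # Step S120: coarse visual guidance on the first marker (open end).
        while not (boat.bow_aligned() and boat.marker_distance(0) <= PRESET_RANGE_M):
            boat.visual_guidance_step(marker_id=0)
        # Step S130: fine visual guidance on the second marker (closed end).
        while not boat.docked():
            boat.visual_guidance_step(marker_id=2)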
In an alternative example, step S120 specifically includes:
step S121, performing target detection and localization of the first fiducial marker on a video frame acquired by the camera to obtain the pose of the first fiducial marker relative to the camera;
step S122, calculating the distance between the first fiducial marker and the camera based on the pose of the first fiducial marker relative to the camera;
step S123, taking the pose of the first fiducial marker relative to the camera as the pose of the recovery device relative to the unmanned ship, and calculating the heading error of the unmanned ship;
step S124, visually guiding the unmanned ship based on its heading error until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within the preset range.
In an alternative example, step S123 specifically includes:
calculating the included angle between the centerline of the recovery device and the centerline of the unmanned ship based on the rotation matrix in the pose of the recovery device relative to the unmanned ship;
calculating the offset distance based on the displacement matrix in the pose of the recovery device relative to the unmanned ship and on the included angle between the centerline of the recovery device and the centerline of the unmanned ship, and calculating the guidance angle based on the offset distance;
and calculating the heading error of the unmanned ship based on the included angle between the centerline of the recovery device and the centerline of the unmanned ship and on the guidance angle.
In an alternative example, step S121 specifically includes:
performing target detection and localization of the first fiducial marker on the video frame to obtain the coordinate position of the first fiducial marker in the video frame;
and calculating the pose of the first fiducial marker relative to the camera based on the coordinate position of the first fiducial marker in the video frame, its coordinate position in the world coordinate system, and the intrinsic matrix of the camera.
In an alternative example, the first fiducial marker is larger than the second fiducial marker; the first fiducial marker is mounted at the upper part of the open end of the recovery device, and the second fiducial marker is mounted at the middle of the closed end of the recovery device.
In an alternative example, the preliminary heading guidance is realized by GPS and inertial navigation units installed on the recovery device and GPS and inertial navigation units installed on the unmanned boat.
In a second aspect, the invention provides a visual guidance docking and recovery system for an unmanned surface vehicle, comprising:
a preliminary heading guidance module, configured to perform preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
a first visual guidance module, configured to take the first fiducial marker as the target and perform target detection and localization on video frames acquired by the camera, so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
a second visual guidance module, configured to take a second fiducial marker at the closed end of the recovery device as the target and perform target detection and localization on the video frames acquired by the camera, so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
In an alternative example, the first visual guidance module specifically includes:
a pose acquisition unit, configured to perform target detection and localization of the first fiducial marker on the video frames acquired by the camera to obtain the pose of the first fiducial marker relative to the camera;
a distance calculation unit, configured to calculate the distance between the first fiducial marker and the camera based on the pose of the first fiducial marker relative to the camera;
a heading-error calculation unit, configured to take the pose of the first fiducial marker relative to the camera as the pose of the recovery device relative to the unmanned ship and to calculate the heading error of the unmanned ship;
a visual guidance unit, configured to visually guide the unmanned ship based on its heading error until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within the preset range.
In an alternative example, the heading-error calculation unit is specifically configured to:
calculate the included angle between the centerline of the recovery device and the centerline of the unmanned ship based on the rotation matrix in the pose of the recovery device relative to the unmanned ship;
calculate the offset distance based on the displacement matrix in the pose of the recovery device relative to the unmanned ship and on the included angle between the centerline of the recovery device and the centerline of the unmanned ship, and calculate the guidance angle based on the offset distance;
and calculate the heading error of the unmanned ship based on the included angle between the centerline of the recovery device and the centerline of the unmanned ship and on the guidance angle.
In an alternative example, the pose acquisition unit is specifically configured to:
perform target detection and localization of the first fiducial marker on the video frame to obtain the coordinate position of the first fiducial marker in the video frame;
and calculate the pose of the first fiducial marker relative to the camera based on the coordinate position of the first fiducial marker in the video frame, its coordinate position in the world coordinate system, and the intrinsic matrix of the camera.
In general, compared with the prior art, the technical solutions conceived by the invention have the following beneficial effects:
The invention provides a visual guidance docking and recovery method and system for an unmanned surface vehicle. Fiducial markers with different encodings are arranged in advance at the front and rear of the recovery device. The unmanned ship first receives preliminary heading guidance until the camera detects the first fiducial marker, and then enters the terminal visual-guidance phase, which is subdivided into two stages using the first and the second fiducial marker as reference targets respectively; target detection and localization are performed on the video frames acquired by the camera, so the unmanned ship is guided visually and precisely, achieving coarse adjustment followed by fine adjustment, improving the terminal accuracy of autonomous docking and the overall recovery success rate of the unmanned ship.
Drawings
Fig. 1 is the first flow chart of the visual guidance docking and recovery method for an unmanned surface vehicle according to an embodiment of the present invention;
Fig. 2 is a schematic plan view of the solved parameters according to an embodiment of the present invention;
Fig. 3 is a schematic view of a fiducial marker and its mounting positions according to an embodiment of the present invention;
Fig. 4 is the second flow chart of the visual guidance docking and recovery method for an unmanned surface vehicle according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the coordinate conversion method according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the docking process from the unmanned ship's viewpoint according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the docking process from the recovery device's viewpoint according to an embodiment of the present invention;
Fig. 8 is a structural diagram of the visual guidance docking and recovery system for an unmanned surface vehicle according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides a visual guidance docking and recovery method for an unmanned surface vehicle, which aims to improve the terminal docking precision of the unmanned ship and thereby the overall recovery success rate.
Fig. 1 is the first flow chart of the visual guidance docking and recovery method for an unmanned surface vehicle according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
step S110, performing preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
step S120, taking the first fiducial marker as the target, performing target detection and localization on video frames acquired by the camera so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
step S130, taking a second fiducial marker at the closed end of the recovery device as the target, performing target detection and localization on the video frames acquired by the camera so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
Preferably, the first fiducial marker and the second fiducial marker may adopt STag (A Stable Fiducial Marker System), a fiducial marker package that focuses mainly on the stability of pose-estimation measurements. The package provides a number of markers, and different markers are distinguished by their different encodings. In the invention, the first fiducial marker is mounted in advance on the stern of the recovery device and the second fiducial marker on the bow of the recovery device; the stern of the recovery device is its open end, and the bow of the recovery device is its closed end.
The heading is adjusted initially to complete the lateral approach to the recovery device, until the camera mounted on the unmanned ship can detect the first fiducial marker at the stern of the recovery device. Preferably, to improve the accuracy and stability of the system, during the preliminary heading-guidance stage one frame captured by the camera is taken at a preset interval and checked for the first fiducial marker; if the marker is recognized in a preset number of consecutive sampled frames, recognition is considered stable and the subsequent visual-guidance stage can be entered. The specific methods used for the preliminary heading guidance and for the visual heading guidance are not particularly limited in the embodiments of the invention.
Preferably, the camera may be a global-shutter camera, which reduces the blurring of the formed image caused by the pitching of the unmanned ship. After the current video frame is acquired, preprocessing such as graying, defogging, and histogram equalization may be applied to it; target detection and localization are then performed on the preprocessed frame, and the heading of the unmanned ship is adjusted according to the recognition result, realizing visual heading guidance of the unmanned ship.
Considering that when the unmanned ship is close to the recovery device its camera can hardly detect the first fiducial marker at the stern of the device, in the initial visual-guidance stage the embodiment of the invention adjusts the heading of the unmanned ship with the first fiducial marker as the reference target and checks whether the relative distance computed from each frame falls within the preset range. Once the bow of the unmanned ship faces the open end of the recovery device and the relative distance first drops into the preset range, the reference target is switched to the second fiducial marker and kept unchanged thereafter, and visual guidance is executed repeatedly until the unmanned ship docks with the recovery device.
Here, the preset range may be obtained through repeated experiments, or through theoretical calculation from parameters such as the mounting positions of the markers on the recovery device, the length and width of the recovery device, the mounting position of the camera on the unmanned ship, and the draft of the unmanned ship in still water; the embodiments of the invention do not particularly limit this.
Based on STag fiducial markers, the method can accurately identify the markers in the image in real time and thereby obtain more accurate information. Using the different encodings of STag, two different fiducial markers are arranged at the rear and the front, achieving coarse adjustment followed by fine adjustment and further increasing the docking success rate. At the docking terminal, once within visual range, a more accurate visual method is adopted, effectively overcoming the larger errors of GPS and inertial navigation.
According to the method provided by the embodiment of the invention, fiducial markers with different encodings are arranged in advance at the front and rear of the recovery device; preliminary heading guidance is performed on the unmanned ship until the camera detects the first fiducial marker, after which the terminal visual-guidance phase is entered. This phase is subdivided into two stages using the first and the second fiducial marker as reference targets respectively, with target detection and localization performed on the video frames collected by the camera, so that the unmanned ship is guided visually and precisely. Coarse adjustment is thus followed by fine adjustment, the terminal accuracy of autonomous docking is improved, and the overall recovery success rate of the unmanned ship is increased.
Based on the above embodiment, step S120 specifically includes:
step S121, performing target detection and localization of the first fiducial marker on a video frame acquired by the camera to obtain the pose of the first fiducial marker relative to the camera;
step S122, calculating the distance between the first fiducial marker and the camera based on the pose of the first fiducial marker relative to the camera;
step S123, taking the pose of the first fiducial marker relative to the camera as the pose of the recovery device relative to the unmanned ship, and calculating the heading error of the unmanned ship;
step S124, visually guiding the unmanned ship based on its heading error until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within the preset range.
It should be noted that the specific visual-guidance method of step S130 may refer to step S120; the difference is that step S130 takes as its target the second fiducial marker at the closed end of the recovery device.
Based on any of the above embodiments: although there is also a technique that guides unmanned-boat recovery with a beacon light signal, processing and analyzing the original camera image containing the beacon signal to obtain the azimuth and offset angle of the recovery cradle relative to the unmanned boat, a beacon signal carries too little information. It lacks pose information that is important for subsequent guidance, such as distance and offset, so the terminal docking accuracy of the unmanned ship cannot be guaranteed.
In this regard, step S123 in the embodiment of the invention specifically includes:
calculating the included angle between the centerline of the recovery device and the centerline of the unmanned ship based on the rotation matrix in the pose of the recovery device relative to the unmanned ship;
calculating the offset distance based on the displacement matrix in the pose of the recovery device relative to the unmanned ship and on the included angle between the centerline of the recovery device and the centerline of the unmanned ship, and calculating the guidance angle based on the offset distance;
and calculating the heading error of the unmanned ship based on the included angle between the centerline of the recovery device and the centerline of the unmanned ship and on the guidance angle.
Preferably, the vision guidance phase may employ LOS (line-of-sight) guidance.
Fig. 2 is a schematic plan view of the solved parameters provided by an embodiment of the invention. As shown in Fig. 2, the distance between the recovery device and the unmanned ship and the heading error of the unmanned ship are calculated as follows:

$$x_u = t_x, \qquad y_u = t_z$$

$$\theta = \arccos\frac{(R\,\vec{z}_w)\cdot\vec{z}_c}{\lVert R\,\vec{z}_w\rVert\,\lVert\vec{z}_c\rVert}, \qquad \mathrm{dist} = \sqrt{x_u^2 + y_u^2}$$

$$y_e = x_u\sin\theta - y_u\cos\theta$$

$$\psi_{\mathrm{LOS}} = \arctan\frac{y_e}{\Delta_h}, \qquad \psi_e = \theta + \psi_{\mathrm{LOS}}$$

where $[t_x, t_y, t_z]^T$ is the displacement matrix of the recovery device relative to the unmanned boat, $R$ is the rotation matrix of the recovery device relative to the unmanned boat, $\vec{z}_w$ is the z-axis direction of the world coordinate system, $\vec{z}_c$ is the z-axis direction of the camera coordinate system, $\theta$ is the included angle between the centerline of the recovery device and the centerline of the unmanned ship, dist is the distance between the recovery device and the unmanned ship, i.e., between the fiducial marker and the camera, $y_e$ is the offset distance, $\Delta_h$ takes a fixed value, empirically 3 to 5 times the ship length, $\psi_{\mathrm{LOS}}$ is the LOS lead angle, and $\psi_e$ is the finally calculated heading error of the unmanned ship. The heading error is transmitted to the guidance controller of the unmanned ship to complete the whole guidance control process.
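A minimal numerical sketch of the above calculation follows, assuming NumPy, a camera frame with x to the right and z forward, and the sign conventions of the formulas as reconstructed above; the function name heading_error is illustrative.

    import numpy as np

    def heading_error(R, t, Delta_h):
        # R: 3x3 rotation matrix of the recovery device relative to the boat.
        # t: displacement vector [t_x, t_y, t_z]; Delta_h: look-ahead distance.
        x_u, y_u = t[0], t[2]                      # device position in boat frame
        z_w = R @ np.array([0.0, 0.0, 1.0])        # rotated world z-axis
        z_c = np.array([0.0, 0.0, 1.0])            # camera z-axis
        cos_theta = z_w @ z_c / np.linalg.norm(z_w)
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # centerline angle
        dist = np.hypot(x_u, y_u)                  # marker-to-camera distance
        y_e = x_u * np.sin(theta) - y_u * np.cos(theta)   # offset distance
        psi_e = theta + np.arctan2(y_e, Delta_h)   # angle plus LOS lead angle
        return dist, theta, y_e, psi_e

The returned heading error psi_e is what is sent to the guidance controller.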
It should be noted that although in the final stage, i.e., the stage using the second fiducial marker as the reference target, the bow of the unmanned ship already faces the open end of the recovery device, wind, waves, and other disturbances prevent completely stable alignment; the included angle θ between the centerline of the recovery device and the centerline of the unmanned ship must therefore be computed continuously in the above manner, so that the heading of the unmanned ship is adjusted continuously.
The invention studies and explains in detail how, after the image position of the target is obtained by recognition, the key information about the relative pose of the recovery device, such as the heading angle, offset angle, and distance, is computed. The data obtained in real time from the video stream can better guide the subsequent navigation of the unmanned ship, which has both theoretical and practical engineering value.
Based on any of the above embodiments, step S121 specifically includes:
performing target detection and localization of the first fiducial marker on the video frame to obtain the coordinate position of the first fiducial marker in the video frame;
and calculating the pose of the first fiducial marker relative to the camera based on the coordinate position of the first fiducial marker in the video frame, its coordinate position in the world coordinate system, and the intrinsic matrix of the camera.
Here, the world coordinate system may be established with the center of the first fiducial marker as its origin. The coordinate position of the first fiducial marker in the video frame may specifically be taken as the coordinate positions of the corners of its outer boundary.
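As one concrete way to realize this pose computation, the sketch below passes the four detected corner pixels of the marker to OpenCV's solvePnP. The 50 mm marker side length, the corner ordering, and the function name are assumptions for illustration; any PnP solver over the four corner correspondences would serve equally.

    import cv2
    import numpy as np

    # World frame at the marker center; corners ordered top-left, top-right,
    # bottom-right, bottom-left (the ordering required by SOLVEPNP_IPPE_SQUARE).
    s = 0.05 / 2  # assumed 50 mm marker
    OBJECT_PTS = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], dtype=np.float64)

    def marker_pose(image_pts, K, dist_coeffs):
        # image_pts: 4x2 array of detected corner pixels; K: intrinsic matrix.
        ok, rvec, tvec = cv2.solvePnP(OBJECT_PTS, image_pts, K, dist_coeffs,
                                      flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the marker w.r.t. camera
        return R, tvec              # tvec is the displacement matrix t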
Based on any of the above embodiments, the first fiducial marker is larger than the second fiducial marker; the first fiducial marker is mounted at the upper part of the open end of the recovery device, and the second fiducial marker is mounted at the middle of the closed end of the recovery device.
Specifically, Fig. 3 is a schematic diagram of a fiducial marker and its mounting positions according to an embodiment of the invention, where Fig. 3(a) shows the marker itself and Fig. 3(b) shows its mounting. As shown in Fig. 3, both fiducial markers are square, and the coordinate position of a marker in a video frame may specifically be the coordinates of the four corners of its outer square boundary. Besides the outer square boundary that serves as the prior, a STag also contains a circular pattern at its center in which the coding region is located; different fiducial markers can be distinguished by this coding. The "0" and "HD23" at the upper left of Fig. 3(a) are the marker number, which helps identify the marker.
As shown in Fig. 3, fiducial markers with different IDs and different sizes are mounted on the bow and the stern of the recovery device: the stern marker, i.e., the first fiducial marker, is larger and mounted above the stern of the recovery device, while the bow marker, i.e., the second fiducial marker, is smaller and mounted in the middle of the bow. For example, a 50 mm × 50 mm marker with ID = 0 is placed above the stern of the recovery device, and a 30 mm × 30 mm marker with ID = 2 is placed in the middle of the bow.
It should be noted that because the first fiducial marker is larger and placed above the stern of the recovery device while the second fiducial marker is smaller and placed in the middle of the bow, both the initial visual-guidance stage based on the first marker and the final visual-guidance stage based on the second marker are further ensured, realizing docking and recovery of the unmanned boat with higher precision.
Based on any of the above embodiments, the preliminary heading guidance is realized by GPS and inertial navigation units installed on the recovery device and on the unmanned ship. With both the recovery device and the unmanned ship carrying GPS and inertial navigation, the initial approach of the unmanned ship to the recovery device can be achieved through mutual feedback of position information.
The invention adopts combined GPS and inertial navigation at long range and switches to the more accurate visual method at the docking terminal, once within visual range, effectively overcoming the larger errors of GPS and inertial navigation. Moreover, when the distance is too large, for example more than thirty meters, the fiducial marker appears visually small and the camera may fail to recognize it, so the combined GPS and inertial navigation mode is more stable and efficient there.
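The text does not prescribe how the two position fixes are turned into a preliminary heading; one plausible realization, sketched here under that assumption, is to steer toward the great-circle bearing from the boat's GPS fix to the recovery device's GPS fix.

    import math

    def initial_bearing(lat1, lon1, lat2, lon2):
        # Bearing in degrees from the boat's fix to the recovery device's fix.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        x = math.sin(dlon) * math.cos(p2)
        y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
        return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0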
Based on any of the above embodiments, the invention provides a visual guidance docking and recovery method for an unmanned surface vehicle based on fiducial-marker recognition: a camera is mounted on the unmanned ship, and fiducial markers with different characteristics are mounted on the bow and the stern of the recovery bucket, i.e., the recovery device. The video stream obtained by the camera is processed and analyzed to obtain the pose of the recovery bucket relative to the unmanned ship, and the unmanned ship is guided into the recovery bucket according to this pose, thereby recovering the unmanned ship.
Fig. 4 is a second flow chart of a visual guidance docking recovery method for a surface unmanned ship according to an embodiment of the present invention, as shown in fig. 4, the specific process includes the following steps:
step 1: preliminary approach to
The heading is roughly adjusted to complete the lateral approach to the recovery bucket until the camera mounted on the unmanned ship can detect the fiducial marker at the stern of the recovery bucket. Taking a preset interval of 0.2 s and a preset count of five frames as an example: during the initial approach, one frame captured by the camera is taken every 0.2 s, preprocessed, and checked to judge whether the fiducial marker can be recognized. If the camera recognizes the marker at the stern of the recovery bucket, i.e., the target marker, in five consecutive sampled frames, recognition is considered stable, the visual-guidance stage is entered, and the next step is executed.
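This stable-recognition check can be sketched as follows; camera.read() and detector.detect() are assumed interfaces, and preprocess is the preprocessing routine sketched under step 2 below.

    import time

    SAMPLE_PERIOD_S = 0.2     # preset sampling interval from this embodiment
    CONSECUTIVE_FRAMES = 5    # preset frame count from this embodiment

    def wait_for_stable_marker(camera, detector, target_id=0):
        # Sample one frame every 0.2 s; succeed once the stern marker is
        # recognized in five consecutive sampled frames.
        hits = 0
        while hits < CONSECUTIVE_FRAMES:
            frame = camera.read()
            ids = detector.detect(preprocess(frame))
            hits = hits + 1 if target_id in ids else 0
            time.sleep(SAMPLE_PERIOD_S)
        return True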
Step 2: identifying targets
Taking a preset range of [0, 5) m as an example: the global-shutter camera on the unmanned ship reads the video stream, and after preprocessing, all fiducial markers in the current frame image are recognized, giving the image coordinates of the markers at the bow and the stern of the recovery bucket. The stern marker is used first as the reference image coordinates. When the unmanned ship has gradually closed to 5 m from the recovery bucket and its bow faces the bucket opening, the bow marker is taken as the reference image coordinates instead; on entering this stage a flag is set to 1 and the reference no longer changes until docking succeeds, i.e., the bow marker of the recovery bucket remains the reference target to the end.
The specific implementation flow is as follows:
The current frame of the video stream acquired in real time is captured for detection: the image is first converted to grayscale and then defogged, gray-level histogram equalization is used to make the exposure of the whole image more uniform, and afterwards the number and IDs of the fiducial markers contained in the frame and their positions in the image coordinate system are detected.
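A minimal OpenCV sketch of this preprocessing chain follows; the text does not name a specific defogging operator, so a CLAHE pass stands in for it here, and that substitution and the parameter values are assumptions.

    import cv2

    def preprocess(frame_bgr):
        # Graying, then a CLAHE pass as a stand-in for defogging, then
        # global gray-level histogram equalization to even out exposure.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.equalizeHist(clahe.apply(gray))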
This step requires installation and preparation on the recovery bucket in advance. Before docking and recovery, fiducial markers with different IDs are mounted on the bow and the stern of the recovery bucket: the stern marker is larger and mounted above the recovery bucket, while the bow marker is smaller and mounted in the middle of the bow, as shown in Fig. 3(b). For example, a 50 mm × 50 mm marker with ID = 0 is placed above the stern of the recovery bucket, and a 30 mm × 30 mm marker with ID = 2 is placed in the middle of the bow.
The type of fiducial marker used in this step is STag, a fiducial marker package shown in Fig. 3(a) that focuses mainly on the stability of pose-estimation measurements. Besides the outer square border that serves as the prior, a STag contains a circular pattern at its center. After line segmentation and quadrilateral detection on the square border, the initial homography of a detected marker is computed; then, by detecting the circular pattern at the marker center, an ellipse fit is used to refine the initial homography. Compared with using the quadrilateral alone, the ellipse fit provides better localization and recognition under occlusion, and this refinement step increases measurement stability. The circular pattern is the coding region: 48 circular discs are packed into the circular area using a simulated annealing method, and different fiducial markers are distinguished by their different codes.
The detection of a STag marker, i.e., the target detection of step 2, is divided into three successive stages: a candidate detection stage, a candidate verification stage, and a homography refinement stage. Candidate detection comprises edge-segment detection, line-segment detection, corner detection, and quadrilateral detection; candidate verification comprises perspective verification and decoding; homography refinement comprises ellipse localization and homography refinement.
After target detection is completed, the IDs of the STag markers in the frame image and the positions of their corner points in the image coordinate system are obtained.
In this embodiment, a detected STag with ID = 0 is the marker above the stern of the recovery bucket, while a STag with ID = 2 is the marker in the middle of the bow of the recovery bucket.
At this point the choice is made according to the relative distance dist between the unmanned ship and the recovery bucket computed in steps 2 and 3 for the previous frame: if dist ≥ 5 m, the image-coordinate data of the STag with ID = 0 is stored in preparation for the calculation of step 3; if dist < 5 m, the image-coordinate data of the STag with ID = 2 is stored instead. If this is the first frame of the visual-guidance phase, the image-coordinate data of the STag with ID = 0 (the marker above the stern of the recovery bucket) is stored in preparation for the calculation of step 3.
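The marker-selection rule with its latch can be sketched as follows; the class and attribute names are illustrative, while the IDs and the 5 m threshold are the values of this embodiment.

    STERN_ID, BOW_ID = 0, 2   # marker IDs used in this embodiment
    SWITCH_DIST_M = 5.0       # boundary of the preset range

    class MarkerSelector:
        # Stern marker (ID 0) while dist >= 5 m; bow marker (ID 2) once
        # dist < 5 m, latched (flag set to 1) until docking succeeds.
        def __init__(self):
            self.latched_to_bow = False

        def select(self, prev_dist):
            if self.latched_to_bow:
                return BOW_ID
            if prev_dist is None:           # first frame of visual guidance
                return STERN_ID
            if prev_dist < SWITCH_DIST_M:
                self.latched_to_bow = True  # never switches back
                return BOW_ID
            return STERN_ID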
Step 3: calculating pose
The relative pose of the recovery bucket and the unmanned ship is computed by combining the camera intrinsics with the coordinate positions of the detection target in the image. From this pose are obtained: the distance between the recovery bucket and the unmanned ship, denoted dist; the included angle between the centerline of the recovery bucket and the centerline of the boat, denoted θ; the angle between the boat-to-bucket connecting line and the boat centerline, denoted θ′; and the two-dimensional coordinates of the recovery bucket in the boat-fixed coordinate system, x_u and y_u. A schematic diagram of the coordinate conversion method provided by the embodiment of the invention is shown in Fig. 5. The specific calculation is as follows:
(1) From the pinhole camera model, the relationship between a point's coordinates in the camera coordinate system and its coordinates in the image coordinate system is determined, as shown in Fig. 5(a):

$$Z\,P' = K\,P, \qquad Z\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}$$

where $P'=[u,v,1]^T$ are the image-coordinate-system coordinates, $P=[X,Y,Z]^T$ are the camera-coordinate-system coordinates, and $K$ is the intrinsic matrix of the camera. In this step, the intrinsic matrix of the camera is calibrated with Zhang Zhengyou's calibration method.
(2) The conversion between the camera-coordinate-system coordinates and the coordinates of the world coordinate system established with the center of the fiducial marker on the recovery bucket as the origin follows from the principle of coordinate transformation, as shown in Fig. 5(b):

$$P = R\,P_w + t$$

where $P_w=[x_w,y_w,z_w]^T$ are the world-coordinate-system coordinates, $R$ is the rotation matrix, and $t$ is the displacement matrix. Combining this with the relationship between camera and image coordinates from step (1) yields the conversion between world and image coordinates:

$$Z\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K\begin{bmatrix}R & t\end{bmatrix}\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix} = K\,T\begin{bmatrix}x_w\\ y_w\\ z_w\\ 1\end{bmatrix}$$

where $T=[R\mid t]$ is the pose matrix.
The image-coordinate-system coordinates of the target, i.e., the fiducial marker, were obtained by recognition in step 2, and the world-coordinate-system coordinates of the target are known, so the rotation matrix R and the displacement matrix t, i.e., the pose of the recovery bucket relative to the unmanned ship, can be solved.
Here the target's image coordinates are the coordinate position of the current fiducial marker in the video frame, specifically the coordinate positions of the four corners of the marker's outer square boundary, and the target's world coordinates are the marker's coordinate position in the world coordinate system. The coordinate positions of the four corners provide 12 equations, from which the parameters of the unknown matrices are solved jointly, finally yielding the rotation matrix R and the displacement matrix t.
(3) The required parameters are computed from the pose of the recovery bucket relative to the unmanned ship:

$$x_u = t_x, \qquad y_u = t_z, \qquad \mathrm{dist} = \sqrt{x_u^2 + y_u^2}, \qquad \theta' = \arctan\frac{x_u}{y_u}$$

where dist is the distance between the recovery bucket and the unmanned ship, θ is the included angle between the centerline of the recovery bucket and the centerline of the boat, θ′ is the angle between the boat-to-bucket connecting line and the boat centerline, and (x_u, y_u) are the two-dimensional coordinates of the recovery bucket in the boat-fixed coordinate system, as shown in Fig. 2.
Step 4: visual guidance
The heading deviation is computed from the obtained data and transmitted to the controller to complete the preliminary guidance. The guidance method is as follows:
In line-of-sight angle guidance, the LOS vector angle is computed by

$$\psi_{\mathrm{LOS}} = \arctan\frac{y_e}{\Delta_h}$$

The offset distance $y_e$ is computed by

$$y_e = x_u\sin\theta - y_u\cos\theta$$

where $x_u$, $y_u$, and θ are given by the visual calculation, and $\Delta_h$ takes a fixed value, empirically 3 to 5 times the ship length. The final heading error is

$$\psi_e = \theta + \psi_{\mathrm{LOS}}$$

The heading error is transmitted to the controller, completing the whole guidance control process.
Step 5: docking completion
After docking is completed, the unmanned ship is locked, the recovery cradle is lifted, and recovery of the target unmanned ship is finished. During the above process, the docking sequence from the unmanned ship's viewpoint is shown in Fig. 6: Fig. 6(a) shows docking started, with the visual-guidance range not yet entered; Fig. 6(b) shows the visual-guidance range entered and the unmanned ship on the guide line (the centerline of the recovery bucket); Fig. 6(c) shows the recovery bucket about to be entered (the stern marker still in use, about to enter the bow-marker guidance stage); Fig. 6(d) shows the recovery bucket about to be entered, with the bow-marker guidance stage begun; Fig. 6(e) shows docking completed. The docking sequence from the recovery device's viewpoint is shown in Fig. 7: Fig. 7(a) shows docking started, Fig. 7(b) shows the unmanned ship entering the guide line, Fig. 7(c) shows the visual-guidance stage, and Fig. 7(d) shows docking completed.
After the unmanned ship is roughly guided by GPS and inertial navigation until the camera can detect the fiducial marker at the stern of the recovery bucket, the video stream acquired by the camera is read and, based on the markers mounted at the bow and the stern of the recovery bucket, the position of the recovery bucket in the image is detected in real time. Combining the camera intrinsics and the known size of the designed fiducial markers, the relative pose of the recovery bucket and the unmanned ship is computed, accurately giving the relative distance, the offset angle, and the two-dimensional coordinates of the recovery bucket in the boat-fixed coordinate system. The heading deviation is then computed from these parameters and transmitted to the controller to complete guidance, so that the unmanned ship smoothly enters the recovery bucket and docking is accomplished. Compared with traditional USV docking and recovery, the method improves the accuracy of the terminal docking measurements and thus increases the docking success rate.
Based on any of the above embodiments, the invention provides a visual guidance docking and recovery system for an unmanned surface vehicle. Fig. 8 is a structural diagram of the system according to an embodiment of the invention; as shown in Fig. 8, the system comprises:
a preliminary heading guidance module 810, configured to perform preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
a first visual guidance module 820, configured to take the first fiducial marker as the target and perform target detection and localization on video frames acquired by the camera, so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
a second visual guidance module 830, configured to take a second fiducial marker at the closed end of the recovery device as the target and perform target detection and localization on the video frames acquired by the camera, so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
It can be understood that for the detailed functional implementation of each module, reference may be made to the description in the foregoing method embodiments, which is not repeated here.
In addition, an embodiment of the invention provides a visual guidance docking and recovery device for an unmanned surface vehicle, which comprises: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to implement the method in the above-described embodiments when executing the computer program.
Furthermore, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method in the above embodiments.
Based on the method in the above embodiments, an embodiment of the present invention provides a computer program product, which when run on a processor causes the processor to perform the method in the above embodiments.
It will be readily appreciated by those skilled in the art that the foregoing is merely a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (10)

1. A visual guidance docking and recovery method for an unmanned surface vehicle, characterized by comprising the following steps:
step S110, performing preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
step S120, taking the first fiducial marker as the target, performing target detection and localization on video frames acquired by the camera so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
step S130, taking a second fiducial marker at the closed end of the recovery device as the target, performing target detection and localization on the video frames acquired by the camera so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
2. The method according to claim 1, wherein step S120 specifically comprises:
step S121, performing target detection and localization of the first fiducial marker on a video frame acquired by the camera to obtain the pose of the first fiducial marker relative to the camera;
step S122, calculating the distance between the first fiducial marker and the camera based on the pose of the first fiducial marker relative to the camera;
step S123, taking the pose of the first fiducial marker relative to the camera as the pose of the recovery device relative to the unmanned ship, and calculating the heading error of the unmanned ship;
step S124, visually guiding the unmanned ship based on its heading error until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within the preset range.
3. The method according to claim 2, wherein step S123 specifically comprises:
calculating the included angle between the centerline of the recovery device and the centerline of the unmanned ship based on the rotation matrix in the pose of the recovery device relative to the unmanned ship;
calculating the offset distance based on the displacement matrix in the pose of the recovery device relative to the unmanned ship and on the included angle between the centerline of the recovery device and the centerline of the unmanned ship, and calculating the guidance angle based on the offset distance;
and calculating the heading error of the unmanned ship based on the included angle between the centerline of the recovery device and the centerline of the unmanned ship and on the guidance angle.
4. The method according to claim 2, wherein step S121 specifically comprises:
performing target detection and localization of the first fiducial marker on the video frame to obtain the coordinate position of the first fiducial marker in the video frame;
and calculating the pose of the first fiducial marker relative to the camera based on the coordinate position of the first fiducial marker in the video frame, its coordinate position in the world coordinate system, and the intrinsic matrix of the camera.
5. The method according to any one of claims 1 to 4, wherein the first fiducial marker is larger than the second fiducial marker; the first fiducial marker is mounted at the upper part of the open end of the recovery device, and the second fiducial marker is mounted at the middle of the closed end of the recovery device.
6. The method according to any one of claims 1 to 4, wherein the preliminary heading guidance is realized by GPS and inertial navigation units installed on the recovery device and GPS and inertial navigation units installed on the unmanned boat.
7. A visual guidance docking and recovery system for an unmanned surface vehicle, characterized by comprising:
a preliminary heading guidance module, configured to perform preliminary heading guidance on the unmanned ship until a camera mounted on the unmanned ship detects a first fiducial marker at the open end of the recovery device;
a first visual guidance module, configured to take the first fiducial marker as the target and perform target detection and localization on video frames acquired by the camera, so as to visually guide the unmanned ship until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within a preset range;
a second visual guidance module, configured to take a second fiducial marker at the closed end of the recovery device as the target and perform target detection and localization on the video frames acquired by the camera, so as to visually guide the unmanned ship until it docks with the recovery device; the second fiducial marker uses a different encoding from the first fiducial marker.
8. The system of claim 7, wherein the first visual guidance module specifically comprises:
a pose acquisition unit, configured to perform target detection and localization of the first fiducial marker on the video frames acquired by the camera to obtain the pose of the first fiducial marker relative to the camera;
a distance calculation unit, configured to calculate the distance between the first fiducial marker and the camera based on the pose of the first fiducial marker relative to the camera;
a heading-error calculation unit, configured to take the pose of the first fiducial marker relative to the camera as the pose of the recovery device relative to the unmanned ship and to calculate the heading error of the unmanned ship;
a visual guidance unit, configured to visually guide the unmanned ship based on its heading error until the bow of the unmanned ship faces the open end of the recovery device and the distance between the first fiducial marker and the camera is within the preset range.
9. The system according to claim 8, wherein the heading-error calculation unit is specifically configured to:
calculate the included angle between the centerline of the recovery device and the centerline of the unmanned ship based on the rotation matrix in the pose of the recovery device relative to the unmanned ship;
calculate the offset distance based on the displacement matrix in the pose of the recovery device relative to the unmanned ship and on the included angle between the centerline of the recovery device and the centerline of the unmanned ship, and calculate the guidance angle based on the offset distance;
and calculate the heading error of the unmanned ship based on the included angle between the centerline of the recovery device and the centerline of the unmanned ship and on the guidance angle.
10. The system according to claim 8, wherein the pose acquisition unit is specifically configured to:
perform target detection and localization of the first fiducial marker on the video frame to obtain the coordinate position of the first fiducial marker in the video frame;
and calculate the pose of the first fiducial marker relative to the camera based on the coordinate position of the first fiducial marker in the video frame, its coordinate position in the world coordinate system, and the intrinsic matrix of the camera.
CN202311490179.9A | Priority date 2023-11-08 | Filing date 2023-11-08 | Visual guidance docking recovery method and system for unmanned surface vehicle | Pending | Published as CN117369474A

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311490179.9A | 2023-11-08 | 2023-11-08 | Visual guidance docking recovery method and system for unmanned surface vehicle

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202311490179.9A | 2023-11-08 | 2023-11-08 | Visual guidance docking recovery method and system for unmanned surface vehicle

Publications (1)

Publication Number | Publication Date
CN117369474A | 2024-01-09

Family

ID=89402354

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202311490179.9A | Visual guidance docking recovery method and system for unmanned surface vehicle | 2023-11-08 | 2023-11-08 | Pending

Country Status (1)

Country Link
CN (1) CN117369474A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106896815A (en) * 2017-03-16 2017-06-27 南京信息工程大学 A kind of automatic mooring system of unmanned boat and method
CN107830860A (en) * 2017-10-31 2018-03-23 江苏科技大学 A kind of unmanned boat lifting recovery visual guide method
CN113148209A (en) * 2021-03-29 2021-07-23 苏州臻迪智能科技有限公司 Method for controlling unmanned aerial vehicle to return to hangar, unmanned aerial vehicle recovery device and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张梦辉: "Research on Key Technologies of a Machine-Vision-Based Terminal Guidance System for Autonomous Underwater Vehicles", China Master's Theses Full-text Database (Information Science and Technology), 15 June 2018, pages 138-1252 *
黄烨笙: "Design of an Autonomous Berthing Control System for Unmanned Surface Vehicles", China Measurement & Test, 31 October 2020, pages 111-117 *


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination