CN112184765B - Autonomous tracking method for underwater vehicle - Google Patents


Info

Publication number
CN112184765B
Authority
CN
China
Prior art keywords
camera
underwater vehicle
image
point
marker
Prior art date
Legal status
Active
Application number
CN202010988752.9A
Other languages
Chinese (zh)
Other versions
CN112184765A (en)
Inventor
张立川
任染臻
李逸琛
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University
Priority to CN202010988752.9A
Publication of CN112184765A
Application granted
Publication of CN112184765B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/13 Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/10016 Video; Image sequence (image acquisition modality)
    • Y02A90/30 Assessment of water resources (technologies for adaptation to climate change)

Abstract

The invention provides an autonomous tracking method for an underwater vehicle. In a multi-underwater-vehicle cooperative system, a camera collects visual information of the pilot vehicle, and preprocessing yields an optical beacon array image from which the target pose is extracted, recovering the true optical beacon positions from an underwater environment containing water surface reflections, ambient stray light and scattered beacon light. From the obtained optical beacon information, the pose of the piloting AUV in three-dimensional space is estimated by solving the coplanar P4P problem. Compared with the geometric method, the method removes the limitation that the arrangement of the feature points must satisfy specific conditions; the studied visual pose estimation method provides the follower with high-precision six-degree-of-freedom information of the pilot in multi-AUV cooperative motion control and serves as the basis of the follower's master-slave cooperative motion.

Description

Autonomous tracking method for underwater vehicle
Technical Field
The invention relates to the technical field of underwater vehicle vision, in particular to an autonomous tracking method for an underwater vehicle.
Background
The ocean covers about 360 million square kilometers, roughly 71% of the earth's surface area. The ocean is not only the cradle of life on earth but also contains resources far more abundant than those on land. In ocean development, the Autonomous Underwater Vehicle (AUV) is widely used as a tool to assist deep-sea exploration and has broadened our understanding of the unknown underwater environment.
Underwater rendezvous and docking is a quick and effective way to overcome the limited communication and carrying capacity of AUVs, and the identification and autonomous tracking of an AUV is the most fundamental step of the whole process. Electromagnetic sensors, acoustic sensors and visual sensors are generally used for position detection of underwater targets. Acoustic sensors have a wide measurement range of up to several kilometers, but relatively low resolution and high cost. Electromagnetic sensors are more accurate than acoustic sensors, but their measurement range is small, about 30-50 meters. Compared with acoustic and electromagnetic sensors, visual sensors have obvious advantages for short-range detection: high precision (down to the centimeter level), small volume, light weight and easy installation on a vehicle of limited volume or weight, making them suitable for short-distance, high-precision target detection and tracking.
The geometric method is the main approach applied to AUV visual localization at this stage. A camera mounted on the underwater vehicle obtains the image coordinates of n feature points on a visual target, and the position and attitude of the underwater vehicle relative to the target are solved through a PnP (Perspective-n-Point) algorithm. In the attitude-estimation setting, the PnP problem is defined as follows: given, in the target coordinate system, the coordinates of a set of points and their projections onto the image plane, and assuming the camera intrinsic parameters are known, solve for the extrinsic matrix between the target coordinate system and the camera coordinate system. However, the algorithm suffers from multiple solutions and poor robustness during solving, the arrangement of the feature points must satisfy specific conditions, and it is limited by many factors in the complex underwater environment, so it cannot be widely used to control an underwater vehicle in practice.
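For reference, the general PnP formulation described above can be exercised with off-the-shelf tooling. The sketch below is not part of the patent; it uses OpenCV's solvePnP with a planar-target solver and placeholder intrinsic and point values to recover the pose of four known coplanar points from their image projections.

```python
import numpy as np
import cv2

# Four coplanar feature points in the target coordinate system (metres) -- illustrative values.
object_points = np.array([[-0.10,  0.10, 0.0],
                          [ 0.10,  0.10, 0.0],
                          [ 0.10, -0.10, 0.0],
                          [-0.10, -0.10, 0.0]])

# Their detected projections in the image (pixels) -- illustrative values.
image_points = np.array([[310.0, 225.0],
                         [370.0, 228.0],
                         [368.0, 290.0],
                         [308.0, 287.0]])

# Assumed (placeholder) intrinsic calibration; distortion taken as already corrected.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Extrinsic parameters of the target relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE)  # IPPE is suited to planar targets
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    print("R =\n", R, "\nt =", tvec.ravel())
```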
Disclosure of Invention
The invention aims to provide an autonomous tracking method for an underwater vehicle, aiming at the defects in the prior art. The method can be used for collecting pose information in a short distance and realizing autonomous tracking of the underwater vehicle, and the pose information is used as a basis for cluster cooperative control.
The technical scheme of the invention is as follows:
The autonomous tracking method for underwater vehicles is characterized in that it is used in a multi-underwater-vehicle cooperative system comprising a piloting underwater vehicle and a following underwater vehicle. A visual identification device is installed at the rear end of the piloting underwater vehicle; it carries four luminous bodies serving as optical beacons, arranged coplanarly around the rear end of the piloting underwater vehicle by means of an optical beacon fixing frame, three of the luminous bodies having the same color, which differs from that of the fourth. A camera is installed at the front end of the following underwater vehicle to acquire its front-end image information; an azimuth and attitude measurement system is installed in the underwater vehicle to obtain its real-time angular velocity and angle information; a Doppler velocimeter is also installed in the underwater vehicle to obtain its real-time velocity information;
the following underwater vehicle adopts the following method to autonomously track and pilot the underwater vehicle:
step 1: starting from the camera model, establish the underwater vehicle visual navigation model, including establishing the camera pinhole model and determining the intrinsic and extrinsic parameters of the camera:
step 2: the front-end camera of the following AUV acquires the target under a rectangular target constraint and then preprocesses it to obtain the optical beacon array image used for extracting the target pose, recovering the true optical beacon positions from an underwater environment containing water surface reflections, ambient stray light and scattered beacon light:
step 2.1: image preprocessing: first, color space transformation and preliminary screening and filtering are applied to the original image collected by the camera to obtain a single-channel threshold map; then edge processing is performed on the threshold map, using a circle with a set diameter to make the shape of each highlight region in the threshold map approach a circle;
step 2.2: the following AUV identifies the designed optical beacon array under ambient light conditions using an identification algorithm based on the statistical characteristics of the optical beacon array:
and (3) searching edges in the binary image preprocessed in the step 2.1 by using an edge extraction algorithm:
let the set of found edges be A for the k-th set of edge points therein
B k ={(x i ,y i )|i=1...n}∈A
Its area S k Is composed of
Figure GDA0003741504940000031
And (3) carrying out threshold definition on the area of the connected domain surrounded by the edges: only the edge point set with the connected domain area larger than or equal to a set value, such as 1, is received. If the number of the remaining points is less than 3, the points are considered to be not completely identified, and the algorithm is ended; otherwise, for each acceptedSet of edge points B k Fitting the circle by using a least square method to obtain the radius R, the center (x) of the fitted circle c ,y c );
Defining a set of edge points B k Degree of non-conformity f k The variance of the distance from the edge point set to the center of the fitting circle is:
Figure GDA0003741504940000032
defining a set of edge points B k Degree of incompleteness w k The ratio of the distance between the center coordinate of the edge point set and the center coordinate of the fitting circle to the radius of the fitting circle is:
Figure GDA0003741504940000033
defining a set of edge points B k Overall brightness v of k Is B k Average luminance of the enclosed connected component:
Figure GDA0003741504940000034
wherein v is q,p Is the luminance of a point with coordinates (q, p), n k Is a B k The number of points in the enclosed connected domain.
The non-fit degree f_k of the edge point set with respect to the fitted circle, the incompleteness w_k and the overall brightness v_k defined above, together with the radius R_k of the fitted circle, form four features of the edge point set. These four features can be regarded as the coordinates of a point in 4-dimensional Euclidean space; that is, in the 4-dimensional Euclidean space R^4, the feature of the edge point set B_k is defined as

c_k = (f_k, w_k, v_k, R_k)^T
The optical beacon array is designed to have the strongest features in the environment. First, thresholds are imposed on the non-fit degree and the incompleteness: when either of these features exceeds a certain value, the corresponding point set is deleted from the candidate set. Then, to unify the scale, the features of each remaining point set are normalized per dimension:

\hat{c}_{k,j} = \frac{c_{k,j}}{\sum_{i=1}^{m} c_{i,j}}, \qquad j = 1, \dots, 4

where m is the number of remaining candidate point sets. After this processing every feature is less than or equal to 1, and each feature dimension sums to 1 over the candidates.

Taking the ratio of the number of edge points in each point set to the total number of edge points over all candidate sets as the weight, the weighted average feature is computed as

\bar{c} = \sum_{k=1}^{m} \frac{n_k}{\sum_{i=1}^{m} n_i}\,\hat{c}_k

where n_k is the number of edge points of B_k.

The Euclidean distance l_k from each feature point to the weighted average feature point is

l_k = \left\|\hat{c}_k - \bar{c}\right\| = \sqrt{\sum_{j=1}^{4}\left(\hat{c}_{k,j} - \bar{c}_j\right)^2}

and the standard deviation of the Euclidean distances from the feature points of all point sets to the weighted average feature point is

\sigma = \sqrt{\frac{1}{m}\sum_{k=1}^{m}\left(l_k - \bar{l}\right)^2}, \qquad \bar{l} = \frac{1}{m}\sum_{k=1}^{m} l_k

The point set whose feature point is farthest from the weighted average is deleted, and the computation is iterated until the standard deviation falls below a set threshold. At that point, if fewer than 4 connected domains remain, not enough optical beacons are considered to have been found; otherwise the remaining candidate connected domains can be regarded as the set of optical beacons together with their larger water-surface reflections. To eliminate the influence of water surface reflection, the four point sets whose fitted-circle centers have the smallest Y coordinates are taken from the remaining point sets as the true optical beacon array image;
step 2.3: determine the characteristic connected domains according to the average hue of the four point sets. Let the fitted-circle center of the green connected domain be O_g = (x_g, y_g) and the fitted-circle centers of the three blue connected domains be O_{b1} = (x_{b1}, y_{b1}), O_{b2} = (x_{b2}, y_{b2}), O_{b3} = (x_{b3}, y_{b3}). The geometric center O_l = (x_o, y_o) of the quadrilateral they form is

x_o = \frac{x_g + x_{b1} + x_{b2} + x_{b3}}{4}, \qquad y_o = \frac{y_g + y_{b1} + y_{b2} + y_{b3}}{4}

The vectors from O_l to O_g, O_{b1}, O_{b2}, O_{b3} are

\vec{v}_g = O_g - O_l, \quad \vec{v}_{b1} = O_{b1} - O_l, \quad \vec{v}_{b2} = O_{b2} - O_l, \quad \vec{v}_{b3} = O_{b3} - O_l

Let f(y, x) denote the four-quadrant arctangent of the vector (x, y). The angle γ_1 between \vec{v}_{b1} and \vec{v}_g can then be calculated as

\gamma_1 = f(y_{b1} - y_o, x_{b1} - x_o) - f(y_g - y_o, x_g - x_o)

and in the same way the angle γ_2 between \vec{v}_{b2} and \vec{v}_g and the angle γ_3 between \vec{v}_{b3} and \vec{v}_g are obtained. Sorting the angles by size arranges the corresponding connected domains clockwise.
Step 3: according to the optical beacon information obtained in step 2, the pose of the piloting AUV in three-dimensional space is estimated by solving the coplanar P4P problem.
Advantageous effects
Compared with the geometric method, the method removes the limitation that the arrangement of the feature points must satisfy specific conditions. The studied visual pose estimation method provides the follower with high-precision six-degree-of-freedom information of the pilot in multi-AUV cooperative motion control and serves as the basis of the follower's master-slave cooperative motion.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1: schematic diagram of the camera model coordinate systems of the invention;
FIG. 2: the two Ar markers with different IDs selected by the invention;
FIG. 3: the visual recognition device of the invention;
FIG. 4: the feature-based optical beacon array recognition algorithm of the invention;
FIG. 5: the experimental hardware platform and the pool experiment of the invention; (a) experimental hardware platform, (b) pool experiment;
FIG. 6: analysis results of the Z-, X- and Y-axis distance and yaw angle experimental data of the optical beacon array;
(a) Z-axis distance data of the optical beacon array: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(b) X-axis distance data of the optical beacon array: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(c) Y-axis distance data of the optical beacon array: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(d) yaw angle data of the optical beacon array: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
FIG. 7: analysis results of the Z-, X- and Y-axis distance and yaw angle experimental data of the Ar Marker;
(a) Z-axis distance data of the Ar Marker: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(b) X-axis distance data of the Ar Marker: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(c) Y-axis distance data of the Ar Marker: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
(d) yaw angle data of the Ar Marker: (I) comparison of the measured mean with the true value, (II) absolute error of the measured mean and variance of the measurements;
FIG. 8: depth error and yaw angle error between the follower AUV and the pilot AUV: depth error; yaw angle error;
FIG. 9: three-axis displacement and yaw angle information extracted by the visual algorithm.
Detailed Description
The following detailed description of embodiments of the invention is intended to be illustrative, and not to be construed as limiting the invention.
In the embodiment, in terms of hardware, a visual identification device is installed at the rear end of the piloting underwater vehicle. It carries four luminous bodies serving as optical beacons, arranged around the rear end of the piloting underwater vehicle by an optical beacon fixing frame; three of the luminous bodies are blue and the fourth is green. Two Ar markers with different IDs are also arranged at the rear end of the piloting underwater vehicle.
An industrial camera, preferably with a frame rate of at least 20 frames per second, is installed at the front end of the following underwater vehicle to acquire its front-end image information; an azimuth and attitude measurement system is installed in the underwater vehicle to obtain real-time angular velocity and angle information; and a Doppler velocimeter is also installed in the underwater vehicle to obtain real-time velocity information.
Step 1: starting a camera model, establishing an underwater vehicle visual navigation model, specifically comprising establishing an aperture camera model and camera internal and external parameter definition as shown in fig. 1;
step 1-1: a camera model coordinate system is defined. First, four coordinate systems are defined, respectively: world coordinate system W: namely a reference coordinate system outside the camera, wherein the camera and the shot object exist in a world coordinate system; camera coordinate system C: optical center of camera O c The position and the rotation relation of the camera under a world coordinate system are reflected as an origin; image physical coordinate system M: reflecting an image of a shot object in the camera; image pixel coordinate system U: using the upper right corner of the image as the origin of coordinates O U The coordinate system represents a certain image in the pixel set obtained by rasterizing the imageThe unit of the position of the pixel in the whole pixel set is the pixel.
Let the position of a point P in the world coordinate system be P_w = [X_w, Y_w, Z_w]^T and its position in the camera coordinate system be P_c = [X_c, Y_c, Z_c]^T. After projective transformation, the image P' of P on the imaging plane has coordinates p' = [x, y]^T, and after rasterization the pixel coordinates are p = [u, v]^T. In the conversion from the camera coordinate system C to the image physical coordinate system M, let the coordinates of P' in C be [x, y, z], where the focal length satisfies |z| = f. By the principle of similar triangles,

\frac{x}{f} = \frac{X_c}{Z_c}, \qquad \frac{y}{f} = \frac{Y_c}{Z_c}

which gives the basic formula of the camera pinhole model:

x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c}    (1-1)
step 1-2: define the intrinsic and extrinsic parameters of the camera, including the extrinsic parameter matrix T and the intrinsic parameter matrix K.
With respect to Z_c, equation (1-1) is nonlinear; to linearize it, the dimension is extended using homogeneous coordinates, defining

\tilde{p}' = [x, y, 1]^T

as the homogeneous coordinate of the image point p', related to its Cartesian coordinate p' = [x, y]^T by

\tilde{p}' = \begin{bmatrix} p' \\ 1 \end{bmatrix}

with 1 as the coordinate of the added dimension. According to the positions of the camera and of the point P in the world coordinate system, a rotation matrix R_{3×3} and a displacement vector t_{3×1} transform P_w into P_c:

P_c = R P_w + t    (1-4)

Expanding (1-4) and writing it in homogeneous-coordinate form, with T the extrinsic parameter matrix of the camera:

\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} P_w \\ 1 \end{bmatrix} = T \begin{bmatrix} P_w \\ 1 \end{bmatrix}    (1-5)
In the image physical coordinate system M, suppose each pixel of the pixel plane has horizontal length dx and vertical length dy. Since the origins of the two coordinate systems are defined differently, a point in the pixel coordinate system is obtained through the translation (c_x, c_y). Therefore:

u = \frac{x}{dx} + c_x = \alpha x + c_x, \qquad v = \frac{y}{dy} + c_y = \beta y + c_y    (1-6)

where α = 1/dx and β = 1/dy. Substituting (1-6) into (1-1) gives:

u = \alpha f \frac{X_c}{Z_c} + c_x, \qquad v = \beta f \frac{Y_c}{Z_c} + c_y    (1-7)
Let f_x = α·f and f_y = β·f, and express equation (1-7) in homogeneous coordinates; since scaling homogeneous coordinates by a constant factor leaves them unchanged, we have:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}    (1-8)

To combine this with the extrinsic parameters, the added dimension on the right-hand side of (1-8) is homogenized:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}

where

K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

is referred to as the intrinsic parameter matrix of the camera. The pinhole model of the camera can then be expressed as:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} K & 0 \end{bmatrix} T \begin{bmatrix} P_w \\ 1 \end{bmatrix}
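The projection chain above (world frame to camera frame, pinhole division, then pixel mapping through K) can be sketched in a few lines of NumPy. The numerical values of K, R and t below are illustrative assumptions, not calibration results from the patent.

```python
import numpy as np

def project(P_w, K, R, t):
    """Project a 3-D world point to pixel coordinates with the pinhole model."""
    P_c = R @ P_w + t                          # world frame -> camera frame, eq. (1-4)
    x, y = P_c[0] / P_c[2], P_c[1] / P_c[2]    # normalised image coordinates
    u = K[0, 0] * x + K[0, 2]                  # u = f_x * x + c_x
    v = K[1, 1] * y + K[1, 2]                  # v = f_y * y + c_y
    return np.array([u, v])

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])            # placeholder intrinsics
R = np.eye(3)                                  # camera aligned with the world frame
t = np.array([0.0, 0.0, 2.0])                  # target 2 m in front of the camera
print(project(np.array([0.1, -0.05, 0.0]), K, R, t))
```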
step 2: the camera acquires a target through rectangular target constraint and then carries out preprocessing to obtain an optical beacon array image used for extracting the pose of the target, and acquires a real optical beacon position from an underwater environment with water surface reflection, environment stray light and scattered light of the optical beacon.
Step 2-1: and (5) image preprocessing. The original image collected by the camera needs to be preprocessed by color space conversion, morphological operation, edge extraction and the like.
Color space transformation and preliminary screening and filtering are applied to the image collected by the AUV front-end camera to obtain a single-channel threshold map; edge processing is then performed on the threshold map, using a circle with a set diameter so that the shape of each highlight region in the threshold map approaches a circle.
Specifically: first, the image collected by the camera is converted from the RGB color space to the HSV color space and strictly filtered on hue according to the settings, while the accepted range of brightness is widened so that the thresholding can stably identify the target under different brightness conditions. After color space transformation and preliminary screening, a single-channel threshold map is obtained.
Next, edge processing is performed on the threshold map. For an image matrix A and an assigned structuring-element kernel B, dilation convolves B over A, i.e. it takes the maximum value of the pixels in the region covered by B and assigns that maximum to the pixel at the reference (anchor) point:

\mathrm{dilate}(A, B)(x, y) = \max_{(x', y') \in B} A(x + x', y + y')

Erosion of A by B takes the local minimum, enlarging low-brightness regions in the image and removing high-brightness noise:

\mathrm{erode}(A, B)(x, y) = \min_{(x', y') \in B} A(x + x', y + y')

The highlights in the threshold map are processed with a circle of set diameter, for example 3 pixels: small highlight regions are eliminated, the edges of larger highlight regions are smoothed, and adhesion between light spots is removed, so that the shape of each highlight region approaches a circle.
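A possible OpenCV realisation of this preprocessing stage is sketched below; the HSV bounds and the 3-pixel elliptical structuring element are illustrative choices rather than values fixed by the patent.

```python
import cv2
import numpy as np

def preprocess(frame_bgr, hsv_low=(90, 80, 60), hsv_high=(130, 255, 255)):
    """Colour-space transform, hue/brightness thresholding and circular morphology."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))  # single-channel threshold map

    # Circular (elliptical) kernel with a set diameter of 3 pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.dilate(mask, kernel)   # local maximum: merge and round off highlight regions
    mask = cv2.erode(mask, kernel)    # local minimum: grow dark areas, remove bright noise
    return mask
```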
Step 2-2: in consideration of the fact that the total reflection of the water-air interface and the diffuse reflection caused by the illumination light of the underwater optical beacon affect the identification of the optical beacon array, an identification algorithm based on the statistical characteristics of the optical beacon array is adopted, so that the designed optical beacon array can be accurately identified under the ambient light conditions, such as the light environment of the water pool experiment in the embodiment, following the AUV.
First, an edge extraction algorithm is used to search for edges in the binarized image preprocessed in step 2-1. Let the set of found edges be A; the k-th edge point set is B_k = {(x_i, y_i) | i = 1, ..., n} ∈ A, and its enclosed area S_k is (with (x_{n+1}, y_{n+1}) = (x_1, y_1))

S_k = \frac{1}{2}\left|\sum_{i=1}^{n}\left(x_i y_{i+1} - x_{i+1} y_i\right)\right|

Because of noise and small bright spots, a threshold must be applied to the area of the connected domain enclosed by each edge: only edge point sets with connected-domain area greater than or equal to 1 are accepted. If fewer than 3 remain, the recognition is considered incomplete and the algorithm ends. Otherwise, a circle is fitted to each accepted edge point set B_k using the least-squares method.
Let the standard equation of the circle to be fitted be

(x - x_c)^2 + (y - y_c)^2 = R_k^2    (1-14)

with fitted radius R_k and center (x_c, y_c). Let

a = -2x_c, \qquad b = -2y_c, \qquad c = x_c^2 + y_c^2 - R_k^2    (1-15)

so that (1-14) can also be written as:

x^2 + y^2 + ax + by + c = 0    (1-16)

Let

C = n\sum x_i^2 - \Big(\sum x_i\Big)^2, \quad D = n\sum x_i y_i - \sum x_i \sum y_i, \quad G = n\sum y_i^2 - \Big(\sum y_i\Big)^2,
E = n\sum x_i^3 + n\sum x_i y_i^2 - \Big(\sum x_i^2 + \sum y_i^2\Big)\sum x_i, \quad H = n\sum x_i^2 y_i + n\sum y_i^3 - \Big(\sum x_i^2 + \sum y_i^2\Big)\sum y_i    (1-17)

and set

a = \frac{HD - EG}{CG - D^2}, \qquad b = \frac{HC - ED}{D^2 - CG}, \qquad c = -\frac{\sum\big(x_i^2 + y_i^2\big) + a\sum x_i + b\sum y_i}{n}    (1-18)

to obtain the least-squares estimates of x_c, y_c and R_k:

x_c = -\frac{a}{2}, \qquad y_c = -\frac{b}{2}, \qquad R_k = \frac{1}{2}\sqrt{a^2 + b^2 - 4c}    (1-19)
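The same algebraic least-squares fit can be solved as a small linear system; the sketch below is an equivalent but assumed formulation that solves x^2 + y^2 + ax + by + c = 0 for a, b, c and then recovers the center and radius.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit; points is an (n, 2) array of edge coordinates."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)  # x^2 + y^2 + a*x + b*y + c = 0
    xc, yc = -a / 2.0, -b / 2.0
    R = np.sqrt(xc**2 + yc**2 - c)
    return xc, yc, R
```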
To describe how well the edge point set matches the fitted circle, the non-fit degree f_k of the edge point set B_k is defined as the variance of the distances from the edge points to the center of the fitted circle:

f_k = \frac{1}{n}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2, \qquad d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \quad \bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i    (1-20)

Clearly f_k ≥ 0; the larger f_k, the worse B_k fits the circle, and when f_k = 0 all points in B_k lie on the fitted circle.

To describe the completeness of the edge point set, the incompleteness w_k of B_k is defined as the ratio of the distance between the centroid of the edge point set and the center of the fitted circle to the radius of the fitted circle:

w_k = \frac{\sqrt{(\bar{x} - x_c)^2 + (\bar{y} - y_c)^2}}{R_k}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i    (1-21)

Clearly w_k ≥ 0; the larger w_k, the more incompletely B_k is distributed on the circle, and when w_k = 0 all points in B_k are symmetric about some diameter of the fitted circle.

In addition, the overall brightness v_k of the edge point set B_k is defined as the average luminance of the connected domain D_k it encloses:

v_k = \frac{1}{n_k}\sum_{(q,p)\in D_k} v_{q,p}    (1-22)

where v_{q,p} is the luminance of the point with coordinates (q, p) and n_k in (1-22) is the number of points in the connected domain enclosed by B_k.

The non-fit degree f_k with respect to the fitted circle, the incompleteness w_k and the overall brightness v_k defined above, together with the radius R_k of the fitted circle, form four features of the edge point set. These four features can be regarded as the coordinates of a point in 4-dimensional Euclidean space; that is, in the 4-dimensional Euclidean space R^4, the feature of the edge point set B_k is defined as

c_k = (f_k, w_k, v_k, R_k)^T    (1-23)
The optical beacon array is designed to have the strongest features in the environment. First, thresholds are imposed on the non-fit degree and the incompleteness: when either of these features exceeds a certain value, the corresponding point set is deleted from the candidate set. Then, to unify the scale, the features of each remaining point set are normalized per dimension:

\hat{c}_{k,j} = \frac{c_{k,j}}{\sum_{i=1}^{m} c_{i,j}}, \qquad j = 1, \dots, 4    (1-24)

where m is the number of remaining candidate point sets. After this processing every feature is less than or equal to 1, and each feature dimension sums to 1 over the candidates.

Taking the ratio of the number of edge points in each point set to the total number of edge points over all candidate sets as the weight, the weighted average feature is computed as

\bar{c} = \sum_{k=1}^{m} \frac{n_k}{\sum_{i=1}^{m} n_i}\,\hat{c}_k    (1-25)

where n_k is the number of edge points of B_k.

The Euclidean distance l_k from each feature point to the weighted average feature point is

l_k = \left\|\hat{c}_k - \bar{c}\right\| = \sqrt{\sum_{j=1}^{4}\left(\hat{c}_{k,j} - \bar{c}_j\right)^2}    (1-26)

and the standard deviation of the Euclidean distances from the feature points of all point sets to the weighted average feature point is

\sigma = \sqrt{\frac{1}{m}\sum_{k=1}^{m}\left(l_k - \bar{l}\right)^2}, \qquad \bar{l} = \frac{1}{m}\sum_{k=1}^{m} l_k    (1-27)

The point set corresponding to the farthest feature point is deleted, and formulas (1-24), (1-25), (1-26) and (1-27) are computed iteratively until the result of (1-27) falls below a set threshold. At that point, if fewer than 4 connected domains remain, not enough optical beacons are considered to have been found; otherwise the remaining candidate connected domains can be regarded as the set of optical beacons together with their larger water-surface reflections. To eliminate the influence of water surface reflection, the four point sets whose fitted-circle centers have the smallest Y coordinates are taken from the remaining point sets as the true optical beacon array image.
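The statistical selection procedure of formulas (1-20) to (1-27) could be organised as below. This is a sketch, not the patent's reference code: the interpretation of the variance-based non-fit degree, the brightness input and the stopping threshold are assumptions made for illustration.

```python
import numpy as np

def beacon_features(edge_pts, mean_brightness):
    """Four features (f_k, w_k, v_k, R_k) of one edge point set, cf. (1-20)-(1-23)."""
    x, y = edge_pts[:, 0], edge_pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, -(x**2 + y**2), rcond=None)  # least-squares circle
    xc, yc = -a / 2.0, -b / 2.0
    R = np.sqrt(xc**2 + yc**2 - c)
    d = np.hypot(x - xc, y - yc)
    f = np.var(d)                                          # non-fit degree: variance of distances
    centroid = edge_pts.mean(axis=0)
    w = np.hypot(centroid[0] - xc, centroid[1] - yc) / R   # incompleteness: centroid offset / radius
    return np.array([f, w, mean_brightness, R]), (xc, yc)

def select_beacons(candidates, sigma_thresh=0.05):
    """candidates: list of (feature_vector, circle_centre, n_edge_points).
    Returns the four beacon centres, or None if not enough beacons are found."""
    cands = list(candidates)
    while len(cands) >= 2:
        C = np.array([c[0] for c in cands])
        n = np.array([c[2] for c in cands], dtype=float)
        C_norm = C / C.sum(axis=0)                                 # (1-24)
        c_bar = (C_norm * (n / n.sum())[:, None]).sum(axis=0)      # (1-25)
        l = np.linalg.norm(C_norm - c_bar, axis=1)                 # (1-26)
        if l.std() < sigma_thresh:                                 # (1-27)
            break
        cands.pop(int(np.argmax(l)))           # drop the point set farthest from the mean feature
    if len(cands) < 4:
        return None                            # not enough light beacons found
    cands.sort(key=lambda c: c[1][1])          # smallest image Y first (reject surface reflections)
    return [c[1] for c in cands[:4]]
```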
Finally, as shown in FIG. 3, the visual recognition device of the invention uses an improved optical beacon with 4 spherical luminous bodies, which ensures that the imaged shape is isotropic in three dimensions. The optical beacons emit blue-green light with a wavelength of 400-500 nm, the range of visible light for which absorption and scattering in the underwater environment are lowest. The luminous bodies are arranged in an X shape, providing the basis for obtaining the target pose by solving the P4P problem. The optical beacon fixing frame is manufactured by 3D printing so that the optical beacons are tightly connected to the vehicle.
The green and blue characteristic connected domains are determined from the average hue of the four point sets. Let the fitted-circle center of the green connected domain be O_g = (x_g, y_g) and the fitted-circle centers of the three blue connected domains be O_{b1} = (x_{b1}, y_{b1}), O_{b2} = (x_{b2}, y_{b2}), O_{b3} = (x_{b3}, y_{b3}). The geometric center O_l = (x_o, y_o) of the quadrilateral they form is

x_o = \frac{x_g + x_{b1} + x_{b2} + x_{b3}}{4}, \qquad y_o = \frac{y_g + y_{b1} + y_{b2} + y_{b3}}{4}    (1-28)

The vectors from O_l to O_g, O_{b1}, O_{b2}, O_{b3} are

\vec{v}_g = O_g - O_l, \quad \vec{v}_{b1} = O_{b1} - O_l, \quad \vec{v}_{b2} = O_{b2} - O_l, \quad \vec{v}_{b3} = O_{b3} - O_l    (1-29)

Let f(y, x) denote the four-quadrant arctangent of the vector (x, y):

f(y, x) = \operatorname{atan2}(y, x)    (1-30)

The angle γ_1 between \vec{v}_{b1} and \vec{v}_g can then be calculated as

\gamma_1 = f(y_{b1} - y_o, x_{b1} - x_o) - f(y_g - y_o, x_g - x_o)    (1-31)

and in the same way the angle γ_2 between \vec{v}_{b2} and \vec{v}_g and the angle γ_3 between \vec{v}_{b3} and \vec{v}_g are obtained. Sorting the angles by size arranges the corresponding connected domains clockwise.
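A compact way to realise this ordering step, taking f(y, x) to be atan2 as in formula (1-30):

```python
import math

def order_blue_beacons(green, blues):
    """green: (x, y) centre of the green beacon; blues: three (x, y) blue centres.
    Returns the blue centres sorted so that their connected domains run clockwise
    starting from the green beacon."""
    pts = [green] + list(blues)
    xo = sum(p[0] for p in pts) / 4.0            # geometric centre of the quadrilateral, (1-28)
    yo = sum(p[1] for p in pts) / 4.0
    ref = math.atan2(green[1] - yo, green[0] - xo)

    def gamma(b):                                # angle relative to the green beacon, (1-31)
        return (math.atan2(b[1] - yo, b[0] - xo) - ref) % (2.0 * math.pi)

    return sorted(blues, key=gamma)
```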
Step 3: by solving the P4P problem, the pose of the target in three-dimensional space is estimated from the feature points of the beacons in the image.
Solving the coplanar P4P problem: let the spatial points be P_i with image coordinates p_i = (u_i, v_i). From formula (1-8), the imaging point of P_i on the focal-length-normalized imaging plane of the camera has coordinates

x_i = \frac{u_i - c_x}{f_x}, \qquad y_i = \frac{v_i - c_y}{f_y}

A reference coordinate system R is established. Take three of the points P_i, assume their positions in the world coordinate system are [X_i, Y_i, 0], and let their image coordinates be (u_i, v_i). The point with the smallest image coordinate u_i is taken as the origin of the R system; if two points have the same u_i, the point with the smallest v_i is taken as the origin O_r. The X_r axis points from the origin of the R system toward the point with the largest image coordinate u_i; the Y_r axis passes through the origin, is perpendicular to the X_r axis, and points toward the third reference point; Z_r is determined by the right-hand rule. Without loss of generality, let P1 be the origin O_r, let the direction from P1 to P2 be the X_r axis of the reference coordinate system, and let the direction from the foot of the perpendicular dropped from P3 onto the X_r axis toward P3 be the Y_r axis. This yields the pose ^wM_r of the R system in the world coordinate system and the positions P_ri = (X_ri, Y_ri, Z_ri) of the points P_i in the reference coordinate system.

Substituting P_ri into formula (1-5) yields:

\begin{bmatrix} X_{ci} \\ Y_{ci} \\ Z_{ci} \end{bmatrix} = \begin{bmatrix} {}^{c}n_r & {}^{c}o_r & {}^{c}a_r & {}^{c}p_r \end{bmatrix}\begin{bmatrix} X_{ri} \\ Y_{ri} \\ Z_{ri} \\ 1 \end{bmatrix}    (1-32)

where ^c n_r = [^c n_rx, ^c n_ry, ^c n_rz]^T, ^c o_r = [^c o_rx, ^c o_ry, ^c o_rz]^T and ^c a_r = [^c a_rx, ^c a_ry, ^c a_rz]^T are the direction vectors, in the camera coordinate system, of the X, Y and Z axes of the reference coordinate system, and ^c p_r = [^c p_rx, ^c p_ry, ^c p_rz]^T is the corresponding position vector. It also follows from (1-32) and the normalized projection that

x_i = \frac{X_{ci}}{Z_{ci}}    (1-33)

y_i = \frac{Y_{ci}}{Z_{ci}}    (1-34)

Substituting (1-33) and (1-34) into (1-32), and using Z_ri = 0 for the coplanar points, gives for each point

X_{ri}\,{}^{c}n_{rx} + Y_{ri}\,{}^{c}o_{rx} + {}^{c}p_{rx} - x_i\left(X_{ri}\,{}^{c}n_{rz} + Y_{ri}\,{}^{c}o_{rz} + {}^{c}p_{rz}\right) = 0
X_{ri}\,{}^{c}n_{ry} + Y_{ri}\,{}^{c}o_{ry} + {}^{c}p_{ry} - y_i\left(X_{ri}\,{}^{c}n_{rz} + Y_{ri}\,{}^{c}o_{rz} + {}^{c}p_{rz}\right) = 0    (1-35)
For the 4 spatial points, 4 pairs of equations (1-35) are obtained, 8 equations in total, which are rewritten in matrix form as

A_1 H_1 + A_2 H_2 = 0    (1-36)

where

A_1 = \begin{bmatrix} X_{r1} & 0 & -x_1 X_{r1} \\ 0 & X_{r1} & -y_1 X_{r1} \\ \vdots & \vdots & \vdots \\ X_{r4} & 0 & -x_4 X_{r4} \\ 0 & X_{r4} & -y_4 X_{r4} \end{bmatrix}    (1-37)

A_2 = \begin{bmatrix} Y_{r1} & 0 & -x_1 Y_{r1} & 1 & 0 & -x_1 \\ 0 & Y_{r1} & -y_1 Y_{r1} & 0 & 1 & -y_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ Y_{r4} & 0 & -x_4 Y_{r4} & 1 & 0 & -x_4 \\ 0 & Y_{r4} & -y_4 Y_{r4} & 0 & 1 & -y_4 \end{bmatrix}    (1-38)

H_1 = \begin{bmatrix} {}^{c}n_{rx} & {}^{c}n_{ry} & {}^{c}n_{rz} \end{bmatrix}^T    (1-39)

H_2 = \begin{bmatrix} {}^{c}o_{rx} & {}^{c}o_{ry} & {}^{c}o_{rz} & {}^{c}p_{rx} & {}^{c}p_{ry} & {}^{c}p_{rz} \end{bmatrix}^T    (1-40)

Here A_1 is an 8 × 3 matrix and A_2 is an 8 × 6 matrix. Moreover, H_1 is ^c n_r, which is a unit vector, so

\|H_1\| = 1    (1-41)
Construct the index function

F = \|A_1 H_1 + A_2 H_2\|^2 + \lambda\left(1 - \|H_1\|^2\right)    (1-42)

The problem of solving equations (1-36) can then be converted into minimizing the index function F, with λ acting as a multiplier for the unit-norm constraint. Setting the partial derivatives of F with respect to H_2 and H_1 to zero gives the solution

H_2 = -\left(A_2^T A_2\right)^{-1} A_2^T A_1 H_1, \qquad A_1^T\left(I - A_2\left(A_2^T A_2\right)^{-1} A_2^T\right) A_1 H_1 = \lambda H_1    (1-43)

i.e. H_1 is the unit eigenvector of A_1^T (I - A_2 (A_2^T A_2)^{-1} A_2^T) A_1 associated with its smallest eigenvalue, and H_2 follows from H_1.
From H_1 and H_2, the extrinsic parameters ^cM_r of the camera relative to the reference coordinate system are obtained, the third rotation column of ^cM_r being the cross product of the first and second columns. The extrinsic parameters ^cM_w of the camera relative to the world coordinate system can then be computed as

{}^{c}M_w = {}^{c}M_r\,{}^{r}M_w    (1-44)

where ^rM_w is the pose of the world coordinate system in the reference coordinate system, obtained from the pose ^wM_r of the reference coordinate system in the world coordinate system through the relation

{}^{r}M_w = {}^{w}M_r^{-1}    (1-45)

^cM_r can be refined iteratively by recursive least squares so that its rotation part is closer to a unit orthogonal matrix; alternatively, an orthogonal iteration algorithm can be used to optimize ^cM_r, which guarantees that the attitude matrix in ^cM_r is a unit orthogonal matrix. The feature-based optical beacon array identification algorithm of the invention is shown in FIG. 4.
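A NumPy sketch of the coplanar P4P solution of formulas (1-32) to (1-43) follows. The construction of A_1 and A_2 and the sign handling are an illustrative reconstruction of the derivation above, not the patent's reference implementation; an orthonormalization or iterative refinement step is assumed to be applied afterwards.

```python
import numpy as np

def coplanar_p4p(ref_pts, img_pts, K):
    """ref_pts: (4, 2) planar coordinates (X_ri, Y_ri) in the reference frame R.
    img_pts: (4, 2) pixel coordinates of the same points.
    Returns the 3x4 extrinsic matrix cMr = [n o a | p]."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    A1 = np.zeros((8, 3))   # coefficients of H1 = (n_rx, n_ry, n_rz)
    A2 = np.zeros((8, 6))   # coefficients of H2 = (o_rx, o_ry, o_rz, p_rx, p_ry, p_rz)
    for i, ((X, Y), (u, v)) in enumerate(zip(ref_pts, img_pts)):
        x, y = (u - cx) / fx, (v - cy) / fy         # focal-length-normalised image point
        A1[2 * i]     = [X, 0.0, -x * X]
        A2[2 * i]     = [Y, 0.0, -x * Y, 1.0, 0.0, -x]
        A1[2 * i + 1] = [0.0, X, -y * X]
        A2[2 * i + 1] = [0.0, Y, -y * Y, 0.0, 1.0, -y]

    # Minimise ||A1 H1 + A2 H2||^2 subject to ||H1|| = 1, cf. (1-42)/(1-43).
    P = np.eye(8) - A2 @ np.linalg.pinv(A2)         # projector onto the complement of range(A2)
    M = A1.T @ P @ A1
    eigvals, eigvecs = np.linalg.eigh(M)
    H1 = eigvecs[:, 0]                              # eigenvector of the smallest eigenvalue
    H2 = -np.linalg.pinv(A2) @ A1 @ H1

    n, o, p = H1, H2[:3], H2[3:]
    if p[2] < 0:                                    # the target must lie in front of the camera
        n, o, p = -n, -o, -p
    a = np.cross(n, o)                              # third rotation column by cross product
    return np.column_stack([n, o, a, p])
```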
In addition, an improved Ar Marker recognition method can enhance the stability of the Ar Marker output, so that pose information can still be acquired at close range. By exploiting the advantages of Ar-Marker-based plane detection and adding a Marker module to the pilot AUV, the following AUV can further estimate the position and pose of the pilot AUV. FIG. 2 shows the two Ar markers with different IDs selected in the invention.
Step 4-1: first, the acquired image is converted to grayscale, the most prominent contours are robustly extracted under different illumination conditions using a locally adaptive threshold, and polygon approximation is performed with the Douglas-Peucker algorithm. Next, the binary image region inside the Marker is extracted and its information obtained. Specifically, the homography matrix is first computed to remove the perspective projection; the optimal threshold of the bimodally distributed image is then obtained from the gray-level histogram using Otsu's method, and the image is thresholded. The binary image is divided into a regular grid and each cell is assigned 0 or 1 according to the value of the majority of its pixels, checking that an all-zero border is detected.
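A rough OpenCV sketch of this detection stage is given below; the adaptive-threshold block size, the contour filtering and the 6x6 grid size are assumptions made for illustration, not parameters specified by the patent.

```python
import cv2
import numpy as np

def marker_candidates(gray, grid=6, cell=8):
    """Extract convex quadrilateral candidates and sample their binary grids."""
    thr = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY_INV, 15, 7)        # robust local threshold
    contours, _ = cv2.findContours(thr, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)  # Douglas-Peucker
        if len(approx) != 4 or not cv2.isContourConvex(approx):
            continue
        corners = approx.reshape(4, 2).astype(np.float32)
        side = grid * cell
        dst = np.array([[0, 0], [side - 1, 0], [side - 1, side - 1], [0, side - 1]], np.float32)
        H = cv2.getPerspectiveTransform(corners, dst)                # remove the projection
        patch = cv2.warpPerspective(gray, H, (side, side))
        _, binary = cv2.threshold(patch, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu
        cells = binary.reshape(grid, cell, grid, cell).mean(axis=(1, 3)) > 0.5  # majority vote
        candidates.append((corners, cells.astype(np.uint8)))
    return candidates
```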
Step 4-2: determine whether the candidate mark corresponds to a Marker. The obtained binary grid is rotated about its center point four times, the 4 grids obtained by the four rotations are compared with all Markers in the dictionary, and a valid match is marked. If no match is found, a correction is attempted: the distance D(C, \mathcal{D}) from the acquired candidate image C to the dictionary \mathcal{D} is computed, and if this distance is less than half of the minimum distance between any two Markers in the dictionary, the candidate image is considered to be the closest Marker. The distance of the candidate image from the dictionary is

D(C, \mathcal{D}) = \min_{M \in \mathcal{D}} \hat{D}(C, M)

where the distance \hat{D} from the candidate image C to a Marker M is defined as:

\hat{D}(C, M) = \min_{k' \in \{0, 1, 2, 3\}} H\left(C, R'_{k'}(M)\right)

In these formulas, H is the code (Hamming) distance of two Markers, defined as the number of positions at which the two sequences differ, and R'_{k'} is the operator that rotates a Marker clockwise by k' × 90°.
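The rotation-aware distance used above can be written directly with NumPy; the sketch assumes the dictionary is simply a list of binary marker grids.

```python
import numpy as np

def hamming(a, b):
    """Code distance: number of differing cells between two binary marker grids."""
    return int(np.count_nonzero(a != b))

def distance_to_marker(candidate, marker):
    """Minimum over the four clockwise 90-degree rotations R'_k' of the marker."""
    return min(hamming(candidate, np.rot90(marker, -k)) for k in range(4))

def distance_to_dictionary(candidate, dictionary):
    """D(candidate, dictionary): minimum distance over all markers in the dictionary."""
    return min(distance_to_marker(candidate, m) for m in dictionary)
```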
Step 4-3: the corner positions are computed by applying linear regression to the marker edges. The corner reprojection error is minimized iteratively using the Levenberg-Marquardt algorithm. In addition, because the rotation of the Marker can be recovered from its binary region, the order of the four corner points can be uniquely determined. Finally, the pose of the Marker relative to the camera is estimated from the image coordinates of the corner points by solving the P4P problem.
Considering that the background of the underwater environment is dark and is affected by the lighting conditions, the extraction of the marker edges may be unstable, causing the estimated pose to jump. To suppress this phenomenon, median filtering can be added after solving the pose. Specifically, every time a Marker is detected it is first determined whether the currently found Marker was also found last time; for each Marker there are four cases: (a) the Marker found this time was also found last time; (b) the Marker found this time was not found last time; (c) the Marker found last time is not found this time; (d) a Marker not found last time is not found this time either. For (a), the result of this Marker is placed in the median-filter buffer; while the Marker has been found fewer than 5 times the first detected value is output, and from the 5th detection onward the median-filtering result is output. For (b), the result is put into the median-filter buffer and output directly. For (c), the record of the corresponding Marker is deleted. For (d), i.e. no corresponding Marker is detected, nothing is done.
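The four-case buffering logic can be captured in a small helper class; the buffer length of 5 follows the text above, while the stored data type (a pose vector per marker) is an illustrative assumption.

```python
from collections import defaultdict, deque
import numpy as np

class MarkerMedianFilter:
    """Per-marker median filtering of pose outputs, following cases (a)-(d)."""

    def __init__(self, window=5):
        self.window = window
        self.buffers = defaultdict(lambda: deque(maxlen=window))

    def update(self, detections):
        """detections: dict marker_id -> pose vector for the current frame."""
        # Case (c): a marker seen before is missing now -> delete its record.
        for marker_id in list(self.buffers):
            if marker_id not in detections:
                del self.buffers[marker_id]
        # Case (d): a marker absent before and absent now -> nothing happens.
        outputs = {}
        for marker_id, pose in detections.items():
            buf = self.buffers[marker_id]
            newly_seen = len(buf) == 0            # case (b): new marker -> output directly
            buf.append(np.asarray(pose, dtype=float))
            if newly_seen or len(buf) < self.window:
                outputs[marker_id] = buf[0]       # fewer than 5 samples: output the first value
            else:                                 # case (a), 5th detection onward: median filter
                outputs[marker_id] = np.median(np.stack(buf), axis=0)
        return outputs
```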
The experimental hardware platform and the pool experiment are shown in FIG. 5; autonomous tracking of the underwater vehicle was completed in the experiment. The analysis results of the Z-, X- and Y-axis distance and yaw angle experimental data of the optical beacon array are shown in FIG. 6, those of the Ar Marker in FIG. 7, the depth error and yaw angle error between the follower AUV and the pilot AUV in FIG. 8, and the three-axis displacement and yaw angle information extracted by the visual algorithm in FIG. 9. In this example, under the action of the controller, the X, Y and Z three-axis distances converge continuously and finally reach the vicinity of the set formation parameters, and the yaw angle also converges to 0 during the control process. In addition, the identification results of the optical beacon array and of the Ar Marker are highly consistent and can essentially be switched seamlessly, ensuring continuity of control. The results show that the method can accurately estimate the parameters of a moving target, complete autonomous tracking of the underwater vehicle, and lay the foundation for cluster cooperative control.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (6)

1. An autonomous tracking method for an underwater vehicle, characterized in that: the system is used for a multi-underwater vehicle cooperative system, the multi-underwater vehicle cooperative system comprises a piloting underwater vehicle and a following underwater vehicle, a visual identification device is installed at the rear end of the piloting underwater vehicle, the visual identification device is provided with four luminous bodies serving as light beacons, the four luminous bodies are arranged around the rear end of the piloting underwater vehicle in a coplanar manner through a light beacon fixing frame, and the three luminous bodies have the same color and are different from the other luminous body; a camera is arranged at the front end of the underwater vehicle to acquire front-end image information of the underwater vehicle; an azimuth and attitude measurement system is arranged in the underwater vehicle, so that the real-time angular speed and angle information of the underwater vehicle can be obtained; the method comprises the following steps that a Doppler velocimeter is further installed in the underwater vehicle, so that real-time speed information of the underwater vehicle can be obtained;
the following underwater vehicle adopts the following method to autonomously track and pilot the underwater vehicle:
step 1: starting from a camera model, establishing an underwater vehicle visual navigation model, including establishing a camera aperture model and determining internal and external parameters of a camera:
step 2: the method comprises the following steps of preprocessing after a target is obtained by a front-end camera of the underwater vehicle through rectangular target constraint, obtaining an optical beacon array image used for extracting a target pose, and obtaining a real optical beacon position from an underwater environment with water surface reflection, environment stray light and scattered light of an optical beacon:
step 2.1: image preprocessing: firstly, carrying out color space transformation and primary screening filtering on an original image acquired by a camera to obtain a single-channel threshold value image; secondly, extracting edges in the threshold value image; then, adopting a circle with a set diameter, and closing the outline of the highlight part in the threshold value image to the circle;
step 2.2: the following underwater vehicle adopts a recognition algorithm based on the statistical characteristics of the light beacon array to recognize the designed light beacon array under the ambient light condition:
searching for edges in the binarized image preprocessed in step 2.1 by using an edge extraction algorithm: let the set of found edges be A; the k-th edge point set in it is

B_k = \{(x_i, y_i) \mid i = 1, \dots, n\} \in A

and its enclosed area S_k is (with (x_{n+1}, y_{n+1}) = (x_1, y_1))

S_k = \frac{1}{2}\left|\sum_{i=1}^{n}\left(x_i y_{i+1} - x_{i+1} y_i\right)\right|

applying a threshold to the area of the connected domain enclosed by each edge: only edge point sets whose connected-domain area is greater than or equal to a set value are accepted; if fewer than 3 point sets remain, the identification is considered incomplete and the algorithm ends; otherwise, for each accepted edge point set B_k, a circle is fitted by the least-squares method, giving the fitted radius R_k and center (x_c, y_c);

defining the non-fit degree f_k of the edge point set B_k as the variance of the distances from the edge points to the center of the fitted circle:

f_k = \frac{1}{n}\sum_{i=1}^{n}\left(d_i - \bar{d}\right)^2, \qquad d_i = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}, \quad \bar{d} = \frac{1}{n}\sum_{i=1}^{n} d_i

defining the incompleteness w_k of the edge point set B_k as the ratio of the distance between the centroid of the edge point set and the center of the fitted circle to the radius of the fitted circle:

w_k = \frac{\sqrt{(\bar{x} - x_c)^2 + (\bar{y} - y_c)^2}}{R_k}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i

defining the overall brightness v_k of the edge point set B_k as the average luminance of the connected domain D_k it encloses:

v_k = \frac{1}{n_k}\sum_{(q,p)\in D_k} v_{q,p}

wherein v_{q,p} is the luminance of the point with coordinates (q, p) and n_k is the number of points in the connected domain enclosed by B_k;

the non-fit degree f_k with respect to the fitted circle, the incompleteness w_k and the overall brightness v_k of the point set, together with the radius R_k of the fitted circle, form the four features of the edge point set B_k; taking these four features as a point of the 4-dimensional Euclidean space R^4, the feature of the edge point set B_k is defined as

c_k = (f_k, w_k, v_k, R_k)^T
Threshold limits are made for degree of incompatibilities and degree of incompleteness: when one of the characteristics is larger than a threshold value, deleting the corresponding point set from the candidate point set; normalizing the characteristic points of each point set to obtain
Figure FDA0003741504930000024
Wherein m is the number of the remaining candidate point sets;
taking the ratio of the number of edge points in the point set to the total number of point sets as weight, calculating weighted average characteristics
Figure FDA0003741504930000031
Wherein n is k Is B k The number of edge points of (a);
calculating the Euclidean distance l from each feature point to the weighted average feature point k Is provided with
Figure FDA0003741504930000032
And solving the Euclidean distance from the characteristic points of all the point sets to the weighted average characteristic point to obtain the standard deviation:
Figure FDA0003741504930000033
deleting the point set corresponding to the feature point with the farthest distance, and performing iterative computation until the standard deviation result is smaller than a certain set threshold; if the number of the remaining connected domains is less than 4, determining that enough optical beacons are not found, otherwise, selecting four point sets with the minimum Y coordinate of the circle center of the fitting circle from the remaining candidate connected domain sets as real optical beacon array images;
step 2.3: according to the hues of the four point sets, determining the fitted-circle center O_g = (x_g, y_g) of the connected domain of the single-colored light beacon and the fitted-circle centers O_{b1} = (x_{b1}, y_{b1}), O_{b2} = (x_{b2}, y_{b2}), O_{b3} = (x_{b3}, y_{b3}) of the connected domains of the other three light beacons of the same color; the geometric center O_l = (x_o, y_o) of the quadrilateral they form is

x_o = \frac{x_g + x_{b1} + x_{b2} + x_{b3}}{4}, \qquad y_o = \frac{y_g + y_{b1} + y_{b2} + y_{b3}}{4}

the vectors from O_l to O_g, O_{b1}, O_{b2}, O_{b3} are

\vec{v}_g = O_g - O_l, \quad \vec{v}_{b1} = O_{b1} - O_l, \quad \vec{v}_{b2} = O_{b2} - O_l, \quad \vec{v}_{b3} = O_{b3} - O_l

letting f(y, x) denote the four-quadrant arctangent of the vector (x, y), the angle γ_1 between \vec{v}_{b1} and \vec{v}_g can be calculated as

\gamma_1 = f(y_{b1} - y_o, x_{b1} - x_o) - f(y_g - y_o, x_g - x_o)

and the angle γ_2 between \vec{v}_{b2} and \vec{v}_g and the angle γ_3 between \vec{v}_{b3} and \vec{v}_g are obtained correspondingly; if the included angles are sorted by size, the corresponding connected domains are arranged clockwise;
step 3: according to the light beacon information obtained in step 2, estimating the pose of the piloting underwater vehicle in three-dimensional space by solving the coplanar P4P problem.
2. The method for autonomous tracking of an underwater vehicle according to claim 1, characterized in that: starting from a camera model in the step 1, the specific process of establishing the underwater vehicle visual navigation model is as follows:
step 1.1: defining a camera model coordinate system:
first, four coordinate systems are defined, respectively: world coordinate system W: a reference coordinate system outside the camera, wherein the camera and the object exist in a world coordinate system; camera coordinate system C: optical center of camera O c The position and the rotation relation of the camera under a world coordinate system are reflected as an origin; image physical coordinate system M: reflecting an image of a shot object in the camera; image pixel coordinate system U: using the upper right corner of the image as the origin of coordinates O U The coordinate system represents the position of a certain pixel in the pixel set obtained by rasterizing the image in the whole pixel set, and the unit of the coordinate system is a pixel;
let the position of a point P in the world coordinate system be P_w = [X_w, Y_w, Z_w]^T and its position in the camera coordinate system be P_c = [X_c, Y_c, Z_c]^T; after projective transformation, the image P' of P on the imaging plane has coordinates p' = [x, y]^T, and after rasterization the pixel coordinates are p = [u, v]^T; in the conversion from the camera coordinate system C to the image physical coordinate system M, let the coordinates of P' in C be P' = [x, y, z], wherein the focal length satisfies |z| = f; according to the principle of similar triangles,

\frac{x}{f} = \frac{X_c}{Z_c}, \qquad \frac{y}{f} = \frac{Y_c}{Z_c}

obtaining the basic formula of the camera pinhole model:

x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c}
step 1.2: defining internal and external parameters of a camera, including an external parameter matrix T and an internal parameter matrix K of the camera;
with respect to Z_c, the basic formula of the camera pinhole model is nonlinear; to linearize it, the dimension is extended using homogeneous coordinates, defining

\tilde{p}' = [x, y, 1]^T

as the homogeneous coordinate of the image point p', related to its Cartesian coordinate p' = [x, y]^T by

\tilde{p}' = \begin{bmatrix} p' \\ 1 \end{bmatrix}

with 1 as the coordinate of the added dimension; according to the positions of the camera and of the point P in the world coordinate system, a rotation matrix R_{3×3} and a displacement vector t_{3×1} transform P_w into P_c:

P_c = R P_w + t

expanding the above formula and writing it in homogeneous-coordinate form, where T is the extrinsic parameter matrix of the camera:

\begin{bmatrix} P_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix}\begin{bmatrix} P_w \\ 1 \end{bmatrix} = T \begin{bmatrix} P_w \\ 1 \end{bmatrix}
in the image physical coordinate system M, supposing each pixel of the pixel plane has horizontal length dx and vertical length dy, and since the origins are defined differently, a point in the pixel coordinate system is obtained through the translation (c_x, c_y); therefore:

u = \frac{x}{dx} + c_x = \alpha x + c_x, \qquad v = \frac{y}{dy} + c_y = \beta y + c_y

wherein α = 1/dx and β = 1/dy; substituting the above formula into the basic formula of the camera pinhole model to obtain:

u = \alpha f \frac{X_c}{Z_c} + c_x, \qquad v = \beta f \frac{Y_c}{Z_c} + c_y

letting f_x = α·f and f_y = β·f, the above formula is expressed in homogeneous coordinates; since scaling homogeneous coordinates by a constant factor leaves them unchanged, there is:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}

combining with the extrinsic parameters, the added dimension on the right-hand side of the above formula is homogenized:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x & 0 \\ 0 & f_y & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}

wherein

K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}

is called the intrinsic parameter matrix of the camera; the pinhole model of the camera is then expressed as:

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} K & 0 \end{bmatrix} T \begin{bmatrix} P_w \\ 1 \end{bmatrix}
3. the method for autonomous tracking of an underwater vehicle according to claim 1, characterized in that: the process of fitting a circle using the least squares method in step 2.2 is:
let the standard equation of the circle to be fitted be (x - x_c)^2 + (y - y_c)^2 = R_k^2, with fitted radius R_k and center (x_c, y_c); letting

a = -2x_c, \qquad b = -2y_c, \qquad c = x_c^2 + y_c^2 - R_k^2

the fitted circle equation is obtained as:

x^2 + y^2 + ax + by + c = 0

letting

C = n\sum x_i^2 - \Big(\sum x_i\Big)^2, \qquad D = n\sum x_i y_i - \sum x_i \sum y_i, \qquad G = n\sum y_i^2 - \Big(\sum y_i\Big)^2,
E = n\sum x_i^3 + n\sum x_i y_i^2 - \Big(\sum x_i^2 + \sum y_i^2\Big)\sum x_i, \qquad H = n\sum x_i^2 y_i + n\sum y_i^3 - \Big(\sum x_i^2 + \sum y_i^2\Big)\sum y_i

and setting

a = \frac{HD - EG}{CG - D^2}, \qquad b = \frac{HC - ED}{D^2 - CG}, \qquad c = -\frac{\sum\big(x_i^2 + y_i^2\big) + a\sum x_i + b\sum y_i}{n}

the least-squares estimates of x_c, y_c and R_k are obtained as

x_c = -\frac{a}{2}, \qquad y_c = -\frac{b}{2}, \qquad R_k = \frac{1}{2}\sqrt{a^2 + b^2 - 4c}
4. The method for autonomous tracking of an underwater vehicle according to claim 1, characterized in that: the luminous bodies used as light beacons are spherical luminous bodies emitting blue-green light with a wavelength of 400-500 nm; the four luminous bodies are arranged in an X shape.
5. The method for autonomous tracking of an underwater vehicle according to claim 1, characterized in that: two Ar Markers with different IDs are additionally arranged at the rear end of the piloting underwater vehicle, and the following underwater vehicle further estimates the position and pose of the piloting underwater vehicle through an improved Ar Marker identification method:
step a: converting an image acquired by the camera of the following underwater vehicle into a grayscale image, extracting the most prominent contours using a locally adaptive threshold method, performing polygonal approximation using the Douglas-Peucker algorithm, extracting the binary image region inside the Marker, and acquiring the information in that binary region;
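A hedged OpenCV sketch of the kind of processing step a describes; the threshold block size, offset and contour-filter values are illustrative choices, not values from the patent:

import cv2
import numpy as np

def extract_marker_candidates(bgr):
    """Return 4-corner contours that could be Marker borders in a camera frame."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # local adaptive threshold: block size and offset are illustrative values
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 7)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for cnt in contours:
        # Douglas-Peucker polygonal approximation
        approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 100:
            candidates.append(approx.reshape(4, 2))
    return candidates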
step b: rotating the obtained binary image around its center point four times, comparing the 4 images obtained by these rotations with all the markers in the dictionary, and marking any valid match; if no match is found, correcting it: the distance from the acquired candidate image $\hat{m}$ to the dictionary $\mathcal{D}$ is computed, and if this distance is less than half of the distance between any two markers in the dictionary, the candidate image is considered to be the closest Marker;
the distance of the candidate image from the dictionary is

$$D(\hat{m}, \mathcal{D}) = \min_{m_i \in \mathcal{D}} \hat{D}(\hat{m}, m_i)$$

where $\hat{D}(\hat{m}, m_i)$ is the distance from $\hat{m}$ to the dictionary marker $m_i$, defined as

$$\hat{D}(\hat{m}, m_i) = \min_{k'} H\left(\hat{m}, R_{k'}(m_i)\right)$$

where H is the code distance of two markers, defined as the number of positions at which the two sequences differ, and $R_{k'}(m_i)$ is the Marker rotated clockwise by k' rotations of 90 degrees;
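For illustration, a small Python sketch of this rotation-invariant distance test, assuming each Marker is stored as an n×n binary NumPy array and that the minimum inter-marker distance tau of the dictionary is known (names are illustrative):

import numpy as np

def marker_distance(candidate, marker):
    """Minimum Hamming distance between candidate and the four 90-degree rotations of marker."""
    return min(int(np.sum(candidate != np.rot90(marker, -k))) for k in range(4))

def match_to_dictionary(candidate, dictionary, tau):
    """Assign candidate to the closest dictionary Marker if closer than tau / 2.

    dictionary : list of n x n binary arrays
    tau        : minimum distance between any two Markers in the dictionary
    """
    distances = [marker_distance(candidate, m) for m in dictionary]
    best = int(np.argmin(distances))
    if distances[best] < tau / 2.0:
        return best          # index of the closest Marker
    return None              # no acceptable match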
step c: calculating the corner positions by applying linear regression to the marker edges; iteratively minimizing the reprojection error of the corners using the Levenberg-Marquardt algorithm; uniquely determining the order of the four corners; and finally estimating the pose of the Marker relative to the camera from the image coordinates of the corners by solving the coplanar P4P problem.
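One common way to solve the coplanar P4P problem in practice is OpenCV's solvePnP; the sketch below is only an illustration under the assumption that the four refined corner pixels are already ordered and the physical Marker side length is known:

import cv2
import numpy as np

def estimate_marker_pose(corners_px, marker_side, K, dist_coeffs):
    """Estimate Marker pose from its 4 image corners (coplanar P4P).

    corners_px  : (4, 2) ordered pixel corners (top-left, top-right, bottom-right, bottom-left)
    marker_side : physical side length of the square Marker
    """
    s = marker_side / 2.0
    # 3-D corners of the square Marker in its own frame (z = 0 plane)
    object_pts = np.array([[-s,  s, 0.0],
                           [ s,  s, 0.0],
                           [ s, -s, 0.0],
                           [-s, -s, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_px.astype(np.float64),
                                  K, dist_coeffs, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return ok, rvec, tvec   # rotation (Rodrigues vector) and translation of the Marker in the camera frame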
6. The method for autonomous tracking of an underwater vehicle according to claim 5, characterized in that: median filtering is applied after the pose is estimated: each time a Marker is detected, it is first judged whether the currently found Marker is the one found last time; for each Marker there are four cases: (a) the Marker found this time was also found last time; (b) the Marker found this time was not found last time; (c) the Marker found last time is not found this time; (d) the Marker was not found last time and is not found this time; for case (a), the Marker result is placed in the median filter buffer; if the Marker has been found fewer than 5 times, the value detected this time is output, and once it has been found 5 times, the median-filtered result is output; for case (b), the result is placed in the median filter buffer and output directly; for case (c), the record of the corresponding Marker is deleted; for case (d), i.e. no corresponding Marker is detected, no action is taken.
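A minimal Python sketch of this per-Marker buffering and median-filtering logic, with a buffer length of 5 as in the claim (class and method names are illustrative):

from collections import defaultdict, deque
import numpy as np

class MarkerMedianFilter:
    """Per-Marker median filter over the last 5 pose estimates."""

    def __init__(self, window=5):
        self.window = window
        self.buffers = defaultdict(lambda: deque(maxlen=self.window))

    def update(self, detections):
        """detections: dict {marker_id: pose_vector}; returns filtered {marker_id: pose_vector}."""
        outputs = {}
        for marker_id, pose in detections.items():
            buf = self.buffers[marker_id]
            buf.append(np.asarray(pose, float))       # cases (a) and (b): store the result
            if len(buf) < self.window:
                outputs[marker_id] = buf[-1]          # fewer than 5 samples: output this detection
            else:
                outputs[marker_id] = np.median(np.stack(buf), axis=0)  # element-wise median
        # case (c): a Marker seen previously but missing now has its record deleted
        for marker_id in list(self.buffers):
            if marker_id not in detections:
                del self.buffers[marker_id]
        return outputs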
CN202010988752.9A 2020-09-18 2020-09-18 Autonomous tracking method for underwater vehicle Active CN112184765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010988752.9A CN112184765B (en) 2020-09-18 2020-09-18 Autonomous tracking method for underwater vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010988752.9A CN112184765B (en) 2020-09-18 2020-09-18 Autonomous tracking method for underwater vehicle

Publications (2)

Publication Number Publication Date
CN112184765A CN112184765A (en) 2021-01-05
CN112184765B true CN112184765B (en) 2022-08-23

Family

ID=73956475

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010988752.9A Active CN112184765B (en) 2020-09-18 2020-09-18 Autonomous tracking method for underwater vehicle

Country Status (1)

Country Link
CN (1) CN112184765B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112924708B (en) * 2021-01-29 2022-06-03 中国航天空气动力技术研究院 Speed estimation method suitable for underwater near-bottom operation vehicle
CN112836889A (en) * 2021-02-19 2021-05-25 鹏城实验室 Path optimization method, underwater vehicle and computer readable storage medium
CN113285765B (en) * 2021-07-20 2021-10-15 深之蓝海洋科技股份有限公司 Underwater robot communication method, electronic equipment and underwater robot
CN116309799A (en) * 2023-02-10 2023-06-23 四川戎胜兴邦科技股份有限公司 Target visual positioning method, device and system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110595476A (en) * 2019-08-30 2019-12-20 天津航天中为数据系统科技有限公司 Unmanned aerial vehicle landing navigation method and device based on GPS and image visual fusion

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102636771A (en) * 2012-04-25 2012-08-15 西北工业大学 AUV (Autonomous Underwater Vehicle) underwater acoustic locating method based on double mobile beacons
CN102980579B (en) * 2012-11-15 2015-04-08 哈尔滨工程大学 Autonomous underwater vehicle autonomous navigation locating method
CN104457754B (en) * 2014-12-19 2017-04-26 东南大学 SINS/LBL (strapdown inertial navigation systems/long base line) tight combination based AUV (autonomous underwater vehicle) underwater navigation positioning method
EP3384362B1 (en) * 2015-11-30 2021-03-17 Raytheon Company Navigation system for an autonomous vehicle based on cross correlating coherent images
CN105910574B (en) * 2016-04-05 2017-06-23 中国科学院南海海洋研究所 A kind of seabed base observation platform
CN106444838A (en) * 2016-10-25 2017-02-22 西安兰海动力科技有限公司 Precise path tracking control method for autonomous underwater vehicle
CN108444478B (en) * 2018-03-13 2021-08-10 西北工业大学 Moving target visual pose estimation method for underwater vehicle
CN109102525B (en) * 2018-07-19 2021-06-18 浙江工业大学 Mobile robot following control method based on self-adaptive posture estimation
CN110246151B (en) * 2019-06-03 2023-09-15 南京工程学院 Underwater robot target tracking method based on deep learning and monocular vision
CN110332887B (en) * 2019-06-27 2020-12-08 中国地质大学(武汉) Monocular vision pose measurement system and method based on characteristic cursor points
CN110533650B (en) * 2019-08-28 2022-12-13 哈尔滨工程大学 AUV underwater pipeline detection tracking method based on vision

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110595476A (en) * 2019-08-30 2019-12-20 天津航天中为数据系统科技有限公司 Unmanned aerial vehicle landing navigation method and device based on GPS and image visual fusion

Also Published As

Publication number Publication date
CN112184765A (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN112184765B (en) Autonomous tracking method for underwater vehicle
CN112396650B (en) Target ranging system and method based on fusion of image and laser radar
CN108445480B (en) Mobile platform self-adaptive extended target tracking system and method based on laser radar
RU2609434C2 (en) Detection of objects arrangement and location
EP2234064B1 (en) Method for estimating 3D pose of specular objects
Siegemund et al. Curb reconstruction using conditional random fields
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
Zhou et al. T-loam: truncated least squares lidar-only odometry and mapping in real time
US10043279B1 (en) Robust detection and classification of body parts in a depth map
Liu et al. Detection and pose estimation for short-range vision-based underwater docking
CN107063261B (en) Multi-feature information landmark detection method for precise landing of unmanned aerial vehicle
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
Hochdorfer et al. 6 DoF SLAM using a ToF camera: The challenge of a continuously growing number of landmarks
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
Lim et al. A single correspondence is enough: Robust global registration to avoid degeneracy in urban environments
Boroson et al. 3D keypoint repeatability for heterogeneous multi-robot SLAM
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
Kallasi et al. Computer vision in underwater environments: A multiscale graph segmentation approach
CN107765257A (en) A kind of laser acquisition and measuring method based on the calibration of reflected intensity accessory external
Wang et al. A survey of extrinsic calibration of lidar and camera
Lin et al. Lane departure identification on highway with searching the region of interest on hough space
Hoermann et al. Vehicle localization and classification using off-board vision and 3-D models
CN113109762B (en) Optical vision guiding method for AUV (autonomous Underwater vehicle) docking recovery
Hungar et al. GRAIL: A Gradients-of-Intensities-based Local Descriptor for Map-based Localization Using LiDAR Sensors
Wang et al. Accurate Rapid Grasping of Small Industrial Parts from Charging Tray in Clutter Scenes.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant