CN113074725A - Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion


Info

Publication number
CN113074725A
Authority
CN
China
Prior art keywords
robot, positioning, state, equation, vertical distance
Legal status
Granted
Application number
CN202110512081.3A
Other languages
Chinese (zh)
Other versions
CN113074725B (en)
Inventor
邢会明
叶秀芬
刘文智
李海波
梅新奎
王璘
Current Assignee
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date
Filing date
Publication date
Application filed by Harbin Engineering University
Priority to CN202110512081.3A
Publication of CN113074725A
Application granted
Publication of CN113074725B
Status: Active

Classifications

    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/165 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments

Abstract

A small underwater multi-robot cooperative positioning method and system based on multi-source information fusion belongs to the technical field of multi-robot cooperative positioning. It addresses the problem that a small underwater robot, because of its small size and limited energy supply, cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL) or an underwater acoustic positioning system. The invention fuses the vertical distance between two robots, obtained from pressure sensors, with the three-dimensional spatial position of the robot obtained by a surround-view stereo perception device, i.e. binocular vision positioning, to obtain an accurate spatial position of the underwater robot. In the special underwater environment it does not rely on high-power, heavy positioning equipment, so the above problem is solved and the accuracy and robustness of relative cooperative positioning for small underwater multi-robot systems are effectively improved. The invention also provides a theoretical basis for cooperative formation control of small amphibious robots.

Description

Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion
Technical Field
The invention relates to the technical field of multi-robot cooperative positioning, in particular to a small underwater multi-robot cooperative positioning method and system based on multi-source information fusion.
Background
In recent years, inspired by fish schooling and bird flocking in nature, researchers have proposed relative cooperative positioning techniques. Although each robot has a limited perception and communication range, within its effective measurement range it can perceive and localize neighboring robots with sensors such as vision and infrared, and exchange position and attitude information through a multi-hop communication mechanism, so that relative cooperative positioning of small multi-robot systems can be realized.
In 2014, inspired by bird flocking in nature, researchers at ETH Zurich used ARToolkit markers to identify unmanned aerial vehicles and estimated the relative poses of multiple UAVs with onboard visual perception sensors and communication equipment combined with a Kalman filtering algorithm, thereby achieving fully distributed leader-follower formation flight control and carrying out indoor and outdoor formation flight tests. In 2015 and 2016, for environments where UAV GPS positioning is limited, researchers at the University of Pennsylvania proposed a UAV swarm system based on relative positioning between UAVs that requires no additional global positioning system: each UAV localizes neighboring marker-equipped UAVs through onboard monocular vision. In real environments they completed leader-follower formation stability experiments, swarm-system stability experiments and a surveillance-scenario deployment experiment.
In 2015, researchers at Hirana University in Spain proposed a multi-AUV short-range relative cooperative positioning system based on vision and active signal lights: a panoramic vision camera is mounted under the navigator AUV, and four active signal lights are fixed on top of the follower AUV body at fixed, known positions on the robot. The navigator AUV detects the follower's signal lights by visual means and estimates its position and attitude. However, this signal-light-based positioning method needs to detect all of the lights; if a single light is mis-detected or occluded, the follower AUV cannot be localized.
Due to the particular nature of the underwater environment, electromagnetic waves attenuate rapidly in water, a submerged robot cannot receive GPS signals, and the application of satellite positioning and navigation systems is therefore limited. The underwater acoustic communication equipment, inertial navigation equipment, DVL (Doppler velocity log) and sonar equipment on which absolute positioning methods (underwater acoustic positioning, inertial/dead-reckoning, seabed terrain matching, etc.) rely require high power and are heavy, so these methods cannot be applied to small amphibious multi-robot systems.
Disclosure of Invention
In view of the above problems, the present invention provides a small underwater multi-robot cooperative positioning method and system based on multi-source information fusion, so as to solve the problem that a small underwater robot cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL) or an underwater acoustic positioning system because of its small size and limited energy supply.
According to one aspect of the invention, a small underwater multi-robot cooperative positioning method based on multi-source information fusion is provided, and the method comprises the following steps:
Step one, acquiring sensor data, wherein the sensor data comprise the underwater pressure sensor values of the positioning robot and of the positioned robot and an image sequence containing the positioned robot;
Step two, calculating the vertical distance between the positioning robot and the positioned robot from the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot;
Step three, calculating the three-dimensional spatial position coordinates of the positioned robot from the image sequence containing the positioned robot;
Step four, fusing the vertical distance between the positioning robot and the positioned robot with the three-dimensional spatial position coordinates of the positioned robot to obtain the final three-dimensional spatial position of the positioned robot.
Further, the specific process of the second step comprises:
Step 2.1, establishing a linear equation between the pressure difference of the positioning robot and the positioned robot and their vertical distance;
the relation between the pressure difference of the positioning robot and the positioned robot and the vertical distance is:
Z_p = k_p · P_12
wherein Z_p represents the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, P_1 being the pressure sensor value of the positioning robot and P_2 the pressure sensor value of the positioned robot; and k_p is the proportionality coefficient between the pressure difference and the vertical distance;
Step 2.2, determining the state equation and observation equation of the first system according to the linear equation;
the state equation and observation equation of the first system are
x_k = A·x_{k−1} + w_{k−1},  y_k = C·x_k + n_k
and the state vector and observation vector of the first system are
x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)
wherein k denotes the time step; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; w_{k−1} is Gaussian process noise; and n_k is Gaussian observation noise;
Step 2.3, filtering the vertical distance with the Kalman algorithm according to the determined state equation and observation equation of the first system to obtain the filtered vertical distance between the positioning robot and the positioned robot.
Further, the specific process of the third step comprises:
Step 3.1, obtaining the pixel coordinates of the positioned robot through a visual target recognition algorithm from the image sequence containing the positioned robot;
Step 3.2, establishing the vision positioning model equation of the positioned robot;
the vision positioning model equation of the positioned robot relates the pixel coordinates (u_l^i, v_l^i) and (u_r^i, v_r^i) of the positioned robot in the left and right cameras of binocular group i to its coordinates p_b = (x_b, y_b, z_b) in the robot body coordinate system, wherein i denotes the binocular camera group index, l and r denote the left and right camera coordinate systems of the binocular pair, and u and v denote pixel coordinates;
Step 3.3, determining the state equation and observation equation of the second system according to the vision positioning model equation;
the state equation and observation equation of the second system are
x_k = f(x_{k−1}) + w_{k−1},  y_k = h(x_k) + n_k
wherein x_k is the state vector at time k, y_k is the observation vector at time k, w_{k−1} is Gaussian process noise, and n_k is Gaussian observation noise;
the state vector and observation vector of the second system are
x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T;
Step 3.4, filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm according to the determined state equation and observation equation of the second system, to obtain the filtered three-dimensional spatial position coordinates p̂_b of the positioned robot.
Further, the specific process of the step four includes:
Step 4.1, acquiring the attitude angle of the positioned robot;
Step 4.2, according to the attitude angle, transforming the three-dimensional spatial position coordinates p̂_b of the positioned robot in the robot coordinate system into the three-dimensional spatial position coordinates p̂_w = (x_w, y_w, z_w) in the world coordinate system;
Step 4.3, fusing the Z-axis coordinate z_w with the vertical distance Z_p obtained in step two to obtain the fused vertical estimate ẑ_g; the global state estimate and its covariance are
P_g = (P_p^{−1} + P_v^{−1})^{−1},  ẑ_g = P_g (P_p^{−1} Z_p + P_v^{−1} z_w)
wherein P_p is the covariance of the vertical distance and P_v is the covariance of the Z-axis coordinate z_w;
Step 4.4, combining the obtained ẑ_g with the position coordinates x_w and y_w of the positioned robot in the world coordinate system to obtain the final three-dimensional spatial position of the positioned robot.
Further, the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm in step 3.4 comprises:
Step 3.4.1, given the initial state value, obtaining the state-estimate Sigma point set {χ_{i,k−1}}, i = 1, 2, …, 2n, by the UT transform;
Step 3.4.2, performing the time update, i.e. the one-step prediction, and computing the predicted state and predicted covariance:
the Sigma points at time k−1 are substituted into the state equation of the second system via the UT transform,
χ_{i,k|k−1} = f(χ_{i,k−1})
the vectors χ_{i,k|k−1} are merged to obtain the one-step state prediction at time k, and, taking the process noise into account, the predicted covariance of this prediction step is obtained;
Step 3.4.3, performing the measurement update, i.e. correcting the predicted state of the previous step with the measurement: the updated Sigma points are substituted into the observation equation of the second system to obtain the predicted measurements,
γ_{i,k|k−1} = h(χ_{i,k|k−1})
the vectors γ_{i,k|k−1} are merged to obtain the measurement prediction at time k and its covariance, and the cross-covariance between the state prediction and the measurement prediction is then computed;
Step 3.4.4, computing the filter gain and updating the state estimate and variance;
Step 3.4.5, repeatedly iterating steps 3.4.2 to 3.4.4 to obtain the estimation result of the state vector.
According to another aspect of the invention, a small underwater multi-robot cooperative positioning system based on multi-source information fusion is provided; the system comprises a sensor layer and a data fusion layer, wherein:
the sensor layer comprises a surround-view stereo perception device, a pressure sensor and an inertial sensor; the surround-view stereo perception device comprises a plurality of groups of binocular cameras and is used for acquiring an image sequence containing the positioned robot; the pressure sensor is used for acquiring the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot; the inertial sensor is used for acquiring the attitude angle of the positioned robot;
the data fusion layer comprises a sub-filter I, a sub-filter II and a main filter; the sub-filter I is used for calculating and obtaining the vertical distance between the positioning robot and the positioned robot according to the value of the underwater pressure sensor of the positioning robot and the value of the underwater pressure sensor of the positioned robot; the sub-filter II is used for calculating and obtaining the three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot; the main filter is used for carrying out information fusion on the vertical distance between the positioning robot and the positioned robot and the three-dimensional space position coordinate of the positioned robot to obtain the final three-dimensional space position of the positioned robot;
the sensor layer and the data fusion layer communicate wirelessly.
Further, the specific process of obtaining the vertical distance between the positioning robot and the positioned robot in the sub-filter I includes: firstly, establishing a linear equation of the pressure difference and the vertical distance between the positioning robot and the positioned robot; the relation between the pressure difference between the positioning robot and the positioned robot and the vertical distance is as follows:
Z_p = k_p · P_12
wherein Z_p represents the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, P_1 being the pressure sensor value of the positioning robot and P_2 the pressure sensor value of the positioned robot; and k_p is the proportionality coefficient between the pressure difference and the vertical distance;
then, determining the state equation and observation equation of the first system according to the linear equation; the state equation and observation equation of the first system are
x_k = A·x_{k−1} + w_{k−1},  y_k = C·x_k + n_k
and the state vector and observation vector of the first system are
x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)
wherein k denotes the time step; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; w_{k−1} is Gaussian process noise; and n_k is Gaussian observation noise;
and finally, filtering the vertical distance by adopting a Kalman algorithm according to the determined state equation and the observation equation of the first system to obtain the vertical distance between the positioning robot and the positioned robot after filtering.
Further, the specific process of obtaining the three-dimensional spatial position coordinates of the positioned robot in the sub-filter II includes:
Firstly, obtaining the pixel coordinates of the positioned robot through a visual target recognition algorithm from the image sequence containing the positioned robot; then, establishing the vision positioning model equation of the positioned robot; the vision positioning model equation of the positioned robot relates the pixel coordinates (u_l^i, v_l^i) and (u_r^i, v_r^i) of the positioned robot in the left and right cameras of binocular group i to its coordinates p_b = (x_b, y_b, z_b) in the robot body coordinate system, wherein i denotes the binocular camera group index, l and r denote the left and right camera coordinate systems of the binocular pair, and u and v denote pixel coordinates;
then, determining the state equation and observation equation of the second system according to the vision positioning model equation; the state equation and observation equation of the second system are
x_k = f(x_{k−1}) + w_{k−1},  y_k = h(x_k) + n_k
wherein x_k is the state vector at time k, y_k is the observation vector at time k, w_{k−1} is Gaussian process noise, and n_k is Gaussian observation noise;
the state vector and observation vector of the second system are
x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T;
Finally, according to the determined state equation and observation equation of the second system, the coordinates of the positioned robot in the robot body coordinate system are filtered with the unscented Kalman filter algorithm to obtain the filtered three-dimensional spatial position coordinates p̂_b of the positioned robot.
Further, the specific process of obtaining the final three-dimensional spatial position of the positioned robot in the main filter includes:
Firstly, acquiring the attitude angle of the positioned robot; then, according to the attitude angle, transforming the three-dimensional spatial position coordinates p̂_b of the positioned robot in the robot coordinate system into the three-dimensional spatial position coordinates p̂_w = (x_w, y_w, z_w) in the world coordinate system; then, fusing the Z-axis coordinate z_w with the vertical distance Z_p obtained in sub-filter I to obtain the fused vertical estimate ẑ_g; the global state estimate and its covariance are
P_g = (P_p^{−1} + P_v^{−1})^{−1},  ẑ_g = P_g (P_p^{−1} Z_p + P_v^{−1} z_w)
wherein P_p is the covariance of the vertical distance and P_v is the covariance of the Z-axis coordinate z_w;
finally, combining the obtained ẑ_g with the position coordinates x_w and y_w of the positioned robot in the world coordinate system to obtain the final three-dimensional spatial position of the positioned robot.
Further, the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm in sub-filter II includes: first, given the initial state value, obtaining the state-estimate Sigma point set {χ_{i,k−1}}, i = 1, 2, …, 2n, by the UT transform; then, performing the time update, i.e. the one-step prediction, and computing the predicted state and predicted covariance: the Sigma points at time k−1 are substituted into the state equation of the second system via the UT transform,
χ_{i,k|k−1} = f(χ_{i,k−1})
the vectors χ_{i,k|k−1} are merged to obtain the one-step state prediction at time k; meanwhile, taking the process noise into account, the predicted covariance of this prediction step is obtained;
then, the measurement update is performed, i.e. the predicted state of the previous step is corrected with the measurement: the updated Sigma points are substituted into the observation equation of the second system to obtain the predicted measurements,
γ_{i,k|k−1} = h(χ_{i,k|k−1})
the vectors γ_{i,k|k−1} are merged to obtain the measurement prediction at time k and its covariance, and the cross-covariance between the state prediction and the measurement prediction is then computed;
then, calculating a filtering gain and updating the state estimation and the variance;
and repeatedly iterating the process to obtain an estimation result of the state vector.
The beneficial technical effects of the invention are as follows:
The invention fuses the vertical distance between two robots, obtained from pressure sensors, with the three-dimensional spatial position of the robot obtained by the surround-view stereo perception device, i.e. binocular vision positioning, to obtain an accurate spatial position of the underwater robot. In the special underwater environment it does not rely on high-power, heavy positioning equipment, so the problem that a small underwater robot cannot be positioned with a fiber-optic gyroscope, a Doppler velocity log (DVL) or an underwater acoustic positioning system because of its small size and limited energy supply is solved, and the accuracy and robustness of relative cooperative positioning for small underwater multi-robot systems are effectively improved. The invention also provides a theoretical basis for cooperative formation control of small amphibious robots.
Drawings
The invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like reference numerals are used throughout the figures to indicate like or similar parts. The accompanying drawings, which are incorporated in and form a part of this specification, illustrate preferred embodiments of the present invention and, together with the detailed description, serve to further explain the principles and advantages of the invention.
FIG. 1 is a system block diagram of the present invention;
FIG. 2 is a schematic view of a Kalman filtering process of a sub-filter I according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the positioning model of the surround-view stereo perception system in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the internal structure of the surround-view stereo perception system in an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an arrangement of an amphibious robot and a target object in a visual positioning experiment according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a visual positioning experiment in an embodiment of the present invention;
FIG. 7 is a diagram illustrating a positioning result of a sub-filter II in a visual positioning experiment according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the positioning experimental robot position arrangement in the embodiment of the present invention;
FIG. 9 is a schematic diagram of the distribution of the three-dimensional spatial positions of multiple robots in the embodiment of the present invention;
FIG. 10 is a schematic diagram of the distribution of multiple robots in an XY plane according to an embodiment of the present invention;
fig. 11 is a graph of the positioning coordinates of the robot 2 in the embodiment of the present invention;
fig. 12 is a graph showing the positioning coordinates of the robot 3 in the embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It should be noted that, in order to avoid obscuring the present invention by unnecessary details, only the device structures and/or processing steps that are closely related to the scheme according to the present invention are shown in the drawings, and other details that are not so relevant to the present invention are omitted.
In order to realize relative cooperative positioning between onshore and underwater amphibious multi-robots, a multi-source information fusion cooperative positioning method based on vision, depth and an IMU (inertial measurement unit) is proposed. As shown in FIG. 1, the cooperative positioning framework adopts a layered design and is divided into a sensor layer and a data fusion layer. The sensor layer comprises small-size sensors such as the surround-view stereo perception system (surround-view stereo perception device), a pressure sensor and an IMU; the data fusion layer comprises sub-filter I, sub-filter II and a main filter.
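As a minimal illustration of the interface between the two layers, the per-cycle data that the sensor layer might pass to the data fusion layer can be sketched as follows (Python; the structure and all field names are assumptions made for illustration, not identifiers from the patent):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SensorFrame:
    """One cycle of sensor-layer output handed to the data fusion layer (illustrative sketch only)."""
    pressure_self: float        # P_1, pressure sensor value of the positioning robot
    pressure_other: float       # P_2, pressure sensor value of the positioned robot
    pixel_coords: np.ndarray    # (u_l, v_l, u_r, v_r) of the positioned robot in one binocular group
    attitude_rpy: np.ndarray    # roll, yaw, pitch angles from the inertial sensor (IMU)

frame = SensorFrame(112_000.0, 101_500.0,
                    np.array([360.0, 250.0, 340.0, 252.0]),
                    np.array([0.02, 0.10, -0.01]))
```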
The relative depth model based on the pressure sensors is linear, so a Kalman filter is used as sub-filter I. The binocular vision positioning model is nonlinear, so, to improve accuracy while limiting the amount of computation, an unscented Kalman filter is used for position estimation in sub-filter II. To avoid the influence of roll and pitch jitter of the robot attitude on the positioning, the estimated position p_b is transformed into the world coordinate system; in the Z_w direction the main filter fuses the visual estimate z_w with the depth difference Z_p, and the result is combined with the horizontal components x_w and y_w estimated by sub-filter II, giving a three-dimensional position estimate based on vision, depth and the IMU.
The method of the present invention is described in detail below.
1) Sub-filter I is designed to calculate the vertical distance between the two robots (i.e. the positioning robot and the positioned robot) from the pressure sensors.
Firstly, the pressure sensors of the sensor layer acquire the in-water pressure sensor values, i.e. depth values, of the two robots, and the distance between the two robots in the vertical direction is estimated from the difference of the depth values. The relation between the pressure difference of the two robots' pressure sensors and their distance in the vertical direction is
Z_p = k_p · P_12    (1)
wherein the pressure difference P_12 = P_1 − P_2, P_1 is the pressure sensor value of the positioning robot, P_2 is the pressure sensor value of the positioned robot, and k_p is the proportionality coefficient between the pressure difference and the vertical distance.
The state equation and observation equation of the sub-filter I system (i.e. the first system) are, in simplified form,
x_k = A·x_{k−1} + w_{k−1},  y_k = C·x_k + n_k    (2)
with state vector and observation vector
x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)
wherein v_p is the vertical velocity, i.e. the distance moved in the vertical direction per unit time; A is the state transition matrix and C is the observation matrix; w_{k−1} ~ N(0, Q) is Gaussian process noise and n_k ~ N(0, R) is Gaussian observation noise, N(·) denoting a Gaussian distribution.
After the state equation and observation equation of the sub-filter I system have been determined, the depth difference is filtered with the Kalman algorithm. The flow of the Kalman filtering algorithm is shown in FIG. 2: first, the state vector and variance are initialized; then the one-step prediction is carried out; then the filter gain is computed; then the measurement update is performed, i.e. the state vector is estimated from the observation vector; finally the covariance is computed, giving the distance between the two robots in the vertical direction, and the cycle repeats.
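As a concrete illustration of this flow, the following sketch runs a two-state Kalman filter on the pressure-derived depth difference (Python/NumPy). The constant-velocity transition matrix, the sample time dt, the noise levels and k_p are assumptions made for the example, since the patent gives A, C and the noise statistics only as figures:

```python
import numpy as np

def depth_difference_kf(pressure_diffs, k_p=1.0e-4, dt=0.1, q=1e-4, r=1e-3):
    """Sub-filter I sketch: Kalman-filter the vertical distance Z_p = k_p * P_12.
    State x = [Z_p, v_p]; A, C, Q, R, dt and k_p are illustrative assumptions."""
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])        # assumed constant-velocity transition
    C = np.array([[1.0, 0.0]])        # only Z_p is observed
    Q = q * np.eye(2)                 # process noise covariance (assumed)
    R = np.array([[r]])               # observation noise covariance (assumed)

    x = np.zeros(2)                   # initial state [Z_p, v_p]
    P = np.eye(2)                     # initial covariance
    filtered = []
    for p12 in pressure_diffs:        # p12 = P_1 - P_2
        z = k_p * p12                 # measured vertical distance (equation (1))
        x = A @ x                     # one-step prediction
        P = A @ P @ A.T + Q
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)   # filter gain
        x = x + K @ (np.array([z]) - C @ x)            # measurement update
        P = (np.eye(2) - K @ C) @ P                    # covariance update
        filtered.append(x[0])
    return np.array(filtered)

print(depth_difference_kf([10_500.0, 10_520.0, 10_480.0]))  # smoothed Z_p values in metres
```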
2) Sub-filter II is designed to calculate the three-dimensional spatial position of the robot from the surround-view stereo perception system (surround-view stereo perception device).
As shown in FIGS. 3 and 4, the surround-view stereo perception system of the invention has four groups of binocular cameras (SC_i, i = 1, 2, 3, 4). Taking one binocular group SC_1 as an example, O_l and O_r denote the left and right camera coordinate systems of the binocular pair, O_b denotes the robot body coordinate system, and O_w denotes the world coordinate system. An image sequence containing the positioned robot is obtained through the surround-view stereo perception system. Assume that the coordinates of the positioned robot in the robot body coordinate system are p_b = (x_b, y_b, z_b). Through a visual target recognition algorithm, the pixel coordinates of the positioned robot in the binocular group SC_1 are obtained as (u_l, v_l) in the left camera and (u_r, v_r) in the right camera.
the visual target recognition algorithm can adopt deep learning to perform target recognition, and after the positioned robot is detected, the positioned robot is tracked. Specifically, first, the image frame passes through a detector to obtain a robot frame, so as to obtain a central target position of the robot, and the position is transmitted to a followerA tracker that learns and gives a predicted position; after the next frame of image arrives and is detected to obtain the target central position, the tracker gives a predicted position, and meanwhile, the distance between the predicted position and the actual detection position is matched by an iterative Hungarian algorithm, and the target position is finally output.
Then, the pinhole imaging principle gives the projection equations (3) and (4) of the left and right cameras for i = 1, i.e. the first binocular group SC_1, relating the pixel coordinates (u_l, v_l) and (u_r, v_r) to the body-frame coordinates p_b. In these equations, a is the optical-center distance (baseline) between the two cameras of one binocular group; b is the optical-center distance between two opposite binocular groups, e.g. between SC_2 and SC_4 or between SC_1 and SC_3; and d is the vertical distance from the camera optical center to the origin of the robot body coordinate system.
Expanding and simplifying equations (3) and (4) yields equations (5) and (6), and the binocular vision positioning model equation can then be written in the compact form (7), which relates the pixel coordinates (u_l^i, v_l^i) and (u_r^i, v_r^i) to the coordinates of the positioned robot in the robot body coordinate system, wherein i denotes the index of the binocular camera group, l and r denote the left and right camera coordinate systems of the binocular pair, u and v denote pixel coordinates, and p_b = (x_b, y_b, z_b) denotes the coordinates of the positioned robot in the robot body coordinate system.
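For orientation, a generic rectified-stereo triangulation is sketched below (Python). It is not the patent's equations (3)-(7), which additionally account for the mounting offsets b and d of each binocular group relative to the body frame; the intrinsics f, cx, cy and the example values are assumptions:

```python
import numpy as np

def triangulate_rectified(u_l, v_l, u_r, f, cx, cy, a):
    """Recover a 3-D point in the left-camera frame from a rectified stereo pair.
    f: focal length in pixels; (cx, cy): principal point; a: baseline in metres.
    Generic textbook form, shown only to illustrate what equations (3)-(7) encode."""
    disparity = u_l - u_r
    z = f * a / disparity              # depth along the optical axis
    x = (u_l - cx) * z / f
    y = (v_l - cy) * z / f
    return np.array([x, y, z])

# Example with assumed intrinsics: f = 800 px, principal point (320, 240), baseline a = 0.1 m.
print(triangulate_rectified(u_l=360.0, v_l=250.0, u_r=340.0, f=800.0, cx=320.0, cy=240.0, a=0.1))
```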
The positioning models of the other binocular groups are analogous to that of SC_1; the only difference is the relation between the robot body coordinate system and the binocular coordinate system. The state equation and observation equation of the sub-filter II system (i.e. the second system) are, in simplified form,
x_k = f(x_{k−1}) + w_{k−1},  y_k = h(x_k) + n_k    (8)
wherein x_k is the system state vector at time k, y_k is the observation vector at time k, w_{k−1} is Gaussian process noise, and n_k is Gaussian observation noise. According to equation (8), the system state vector and observation vector are defined as
x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T
wherein x_b(k), y_b(k) and z_b(k) are the coordinates of the positioned robot's position, u_l(k) and v_l(k) are the left-camera pixel coordinates at time k, and u_r(k) and v_r(k) are the right-camera pixel coordinates at time k.
After the system equation and the measurement equation have been defined, the UKF-based position estimation, written with the UT (unscented transform) in matrix form, comprises the following steps.
First, given the initial state value x̂ with covariance P, the Sigma points and their weights are generated as
χ_0 = x̂,  χ_i = x̂ + (√((n+λ)P))_i, i = 1, …, n,  χ_i = x̂ − (√((n+λ)P))_{i−n}, i = n+1, …, 2n
W_0^m = λ/(n+λ),  W_0^c = λ/(n+λ) + (1 − α² + β),  W_i^m = W_i^c = 1/(2(n+λ)), i = 1, …, 2n
wherein the constant α determines the spread of the Sigma points around the center point x̂; the influence of higher-order terms can be reduced by adjusting α, and usually 0 ≤ α ≤ 1. λ is a second scale parameter used to characterize the range of the sample points around the mean point and is typically set to 0 or 3 − n. β is a distribution parameter of the state x; for a Gaussian distribution the optimal value is β = 2. The parameter κ is a scaling parameter that controls the distance of each point from the state mean. W_i^m is the weight corresponding to the mean of the sampling points and W_i^c is the variance weight. The accuracy of the estimated mean can therefore be improved by properly adjusting α and λ, and adjusting α improves the accuracy of the variance; here the parameters are set to α = 0.9, β = 2 and κ = 0.
The UT transform yields the state-estimate Sigma point set {χ_{i,k−1}}, i = 1, 2, …, 2n.
Then the time update is performed, i.e. the one-step prediction, and the predicted state and predicted covariance are computed.
The Sigma points at time k−1 are substituted into the state equation via the UT transform:
χ_{i,k|k−1} = f(χ_{i,k−1})    (9)
The vectors χ_{i,k|k−1} are merged to obtain the one-step state prediction at time k:
x̂_k^− = Σ_i W_i^m χ_{i,k|k−1}    (10)
Meanwhile, taking the process noise into account, the covariance of this prediction step is
P_k^− = Σ_i W_i^c (χ_{i,k|k−1} − x̂_k^−)(χ_{i,k|k−1} − x̂_k^−)^T + Q    (11)
Then the measurement update is performed, i.e. the predicted state of the previous step is corrected with the measurement.
The updated Sigma points are substituted into the measurement equation to obtain the predicted measurements:
γ_{i,k|k−1} = h(χ_{i,k|k−1})    (12)
The vectors γ_{i,k|k−1} are merged to obtain the measurement prediction at time k:
ŷ_k^− = Σ_i W_i^m γ_{i,k|k−1}    (13)
The covariance of the measurement prediction is
P_{yy,k} = Σ_i W_i^c (γ_{i,k|k−1} − ŷ_k^−)(γ_{i,k|k−1} − ŷ_k^−)^T + R    (14)
and the cross-covariance of the state prediction and the measurement prediction is
P_{xy,k} = Σ_i W_i^c (χ_{i,k|k−1} − x̂_k^−)(γ_{i,k|k−1} − ŷ_k^−)^T    (15)
Then the filter gain is computed:
K_k = P_{xy,k} P_{yy,k}^{−1}    (16)
Finally, the state estimate and variance are updated:
x̂_k = x̂_k^− + K_k (y_k − ŷ_k^−)    (17)
P_k = P_k^− − K_k P_{yy,k} K_k^T    (18)
This process is iterated repeatedly to obtain the estimation result of the state vector.
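One iteration of this unscented Kalman filter can be written compactly as follows (Python/NumPy). The sigma-point parameterisation λ = α²(n + κ) − n is one common convention and, like the function names, is an assumption, since the patent gives the UT formulas only as figures:

```python
import numpy as np

def ukf_step(x, P, y, f, h, Q, R, alpha=0.9, beta=2.0, kappa=0.0):
    """One UKF iteration following equations (9)-(18); f and h are the state and
    measurement functions of the second system."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n                  # assumed lambda convention
    S = np.linalg.cholesky((n + lam) * P)             # matrix square root
    chi = np.vstack([x, x + S.T, x - S.T])            # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))  # mean weights
    wc = wm.copy()                                    # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + 1.0 - alpha**2 + beta
    # Time update: equations (9)-(11)
    chi_p = np.array([f(c) for c in chi])
    x_pred = wm @ chi_p
    P_pred = Q + sum(w * np.outer(c - x_pred, c - x_pred) for w, c in zip(wc, chi_p))
    # Measurement prediction: equations (12)-(15)
    gam = np.array([h(c) for c in chi_p])
    y_pred = wm @ gam
    P_yy = R + sum(w * np.outer(g - y_pred, g - y_pred) for w, g in zip(wc, gam))
    P_xy = sum(w * np.outer(c - x_pred, g - y_pred) for w, c, g in zip(wc, chi_p, gam))
    # Gain and update: equations (16)-(18)
    K = P_xy @ np.linalg.inv(P_yy)
    return x_pred + K @ (y - y_pred), P_pred - K @ P_yy @ K.T
```

In sub-filter II the state x is the body-frame position p_b and h projects it into the four pixel coordinates of one binocular group.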
During underwater motion the robot is easily disturbed by water flow and tends to pitch or roll. In this case positioning is still performed by the binocular cameras: the three-dimensional position of the robot obtained in the above steps from the surround-view stereo perception system is expressed in the robot coordinate system, whereas the depth difference of the two robots estimated by sub-filter I from the pressure difference of their pressure sensors is expressed in the world coordinate system. To make the coordinate systems of the pressure-sensor data and the vision data consistent, the visual positioning information obtained in the robot coordinate system is transformed into the world coordinate system.
Let the attitude angles φ, ψ and θ denote the roll, yaw and pitch angles of the positioned robot, respectively. The transformation from the robot coordinate system to the world coordinate system is
p_w = R·p_b + t
wherein R and t denote the rotation matrix and the translation vector, respectively, and t_1, t_2 and t_3 are the translation amounts in the X, Y and Z directions, t = [t_1, t_2, t_3]^T.
Writing the state estimate of the second system as p_b = [x_b, y_b, z_b]^T, the coordinates of the positioned robot in the world coordinate system are
p_w = [x_w, y_w, z_w]^T = R·p_b + t.
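A sketch of this transformation (Python) is given below; the Z-Y-X (yaw-pitch-roll) rotation order is an assumption, since the patent shows the rotation matrix only as a figure:

```python
import numpy as np

def body_to_world(p_b, roll, pitch, yaw, t=(0.0, 0.0, 0.0)):
    """Transform a point from the robot body frame to the world frame, p_w = R p_b + t."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    return Rz @ Ry @ Rx @ np.asarray(p_b) + np.asarray(t)

# A point 1 m ahead of a robot pitched by 10 degrees, expressed in the world frame.
print(body_to_world([1.0, 0.0, 0.0], roll=0.0, pitch=np.deg2rad(10.0), yaw=0.0))
```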
3) The main filter is designed to fuse the Z-axis coordinate z_w of the robot position p_w obtained from the surround-view stereo perception system with the depth difference (vertical distance) Z_p, and to combine the result with the horizontal components x_w and y_w of p_w, yielding a three-dimensional position estimate based on the surround-view stereo perception system, the pressure sensors and the IMU.
The function of the main filter is to fuse the visually estimated Z_w-axis distance z_w with the depth difference Z_p of the two robots. In the data fusion layer, the sub-filters perform the optimal estimation of their local states, and the main filter weights the filtering precision according to the covariance matrices of the sub-filters. The global state estimate and its covariance matrix are then
P_g = (P_p^{−1} + P_v^{−1})^{−1},  ẑ_g = P_g (P_p^{−1} Z_p + P_v^{−1} z_w)
wherein P_p is the covariance of the depth-difference variable and P_v is the covariance of the visually measured Z_w-axis distance z_w.
Combining this with the visually estimated position components x_w and y_w, the vision- and pressure-sensor-based position estimate of the positioned robot is finally obtained as p̂_w = [x_w, y_w, ẑ_g]^T.
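The scalar form of this fusion can be sketched as follows (Python); the example variances are purely illustrative:

```python
import numpy as np

def fuse_vertical(z_vision, var_vision, z_pressure, var_pressure):
    """Main-filter fusion of the visual Z_w estimate with the pressure-based depth
    difference by inverse-covariance weighting (scalar case of the global estimate above)."""
    var_g = 1.0 / (1.0 / var_vision + 1.0 / var_pressure)
    z_g = var_g * (z_vision / var_vision + z_pressure / var_pressure)
    return z_g, var_g

# The pressure channel is assumed far more precise, so the fused value stays close to it.
print(fuse_vertical(z_vision=1.20, var_vision=0.05**2, z_pressure=1.05, var_pressure=0.01**2))
```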
Detailed description of the preferred embodiment
The following experiment examines the binocular positioning performance of the surround-view stereo perception system (surround-view stereo perception device), i.e. a vision-based positioning experiment. As shown in FIG. 5, the amphibious robot is placed in a laboratory pool and fixed, and the space around the positioning amphibious robot is divided by a circular angle calibration plate into 12 equal sectors of 30 degrees each, as shown in FIG. 6. With the center of the positioning robot as the origin, circles of radius 0.8 m and 1.5 m are drawn, each with 12 positioning points; a target object is placed on each positioning point for the positioning experiment, each point is measured 5 times, and the average is taken as the experimental result. To display the positioning data more intuitively, the positioning results are drawn in three-dimensional space, as shown in FIG. 7: the marker "●" at the center is the positioning robot, the X, Y and Z axes of the robot body coordinate system are shown in the figure, and the two remaining marker types represent the actual and the measured positions of the target object, respectively (the latter marked "□"). The average positioning errors on the 80 cm circle and on the 150 cm circle are (3.4 cm, 3.0 cm, 2.3 cm) and (6.9 cm, 4.9 cm, 4.5 cm) respectively, and it is clear that the error increases with increasing distance. The root mean square error is likewise used to measure the relationship between the visually measured distance of the binocular camera and the actual distance, i.e. the following formula is used:
RMSE = √( (1/n) Σ_{i=1}^{n} (d_i − d_i′)² )
wherein d_i and d_i′ denote the actual distance and the visually measured distance, respectively. The analysis shows that, for positioning on the circle of radius 80 cm, the root mean square errors of the positioning in the x, y and z directions are 3.67 cm, 3.17 cm and 2.36 cm respectively; for positioning on the circle of radius 150 cm they are 7.07 cm, 5.25 cm and 4.88 cm respectively. Since the diameter of the amphibious spherical robot is 30 cm, the root mean square positioning errors in the x, y and z directions amount to 23.6%, 17.5% and 16.3% of the robot diameter, respectively.
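The evaluation metric can be computed as follows (Python); the sample values are illustrative only, not the experiment's raw data, which the patent does not list:

```python
import numpy as np

def rmse(actual, measured):
    """Root mean square error between actual and visually measured distances."""
    actual = np.asarray(actual, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.sqrt(np.mean((actual - measured) ** 2))

print(rmse([0.80, 0.80, 0.80], [0.83, 0.78, 0.84]))   # ~0.031 m
```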
Detailed description of the invention
In order to verify the effectiveness of the small underwater multi-robot cooperative positioning method and system based on multi-source information fusion, an experiment is carried out with three robots carrying different identification markers. As shown in FIG. 8, robot 1 is equipped with the surround-view stereo perception system, while the other two robots are equipped with ordinary binocular cameras. The placement positions and attitudes of the three robots are shown in FIG. 8: they form an equilateral triangle with sides of 2 m, and the coordinate system of robot 1 is shown in the figure. With the proposed method, the positioning results for robots 2 and 3 are unified into the coordinate system of robot 1. The three-dimensional positioning results are shown in FIG. 9, and FIG. 10 shows their projection onto the XY plane; the raw positioning results are relatively scattered, while the fused results converge much more clearly than the direct positioning results. FIGS. 11 and 12 show the positioning results of amphibious robot 1 for robot 2 and robot 3, respectively; the positioning error of the proposed method is clearly smaller. The longer the positioning distance (i.e. the larger the distance between the two robots), the larger the error; the positioning errors of robot 2 and robot 3 are largest in the X direction, at 18.1 cm and 17.5 cm respectively. After positioning with the proposed method, the maximum positioning errors of the two robots are 10.9 cm and 10.5 cm respectively, an improvement in positioning accuracy of 39.8% and 40%, which can meet the requirements of small amphibious robots for underwater cooperative motion.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A small underwater multi-robot cooperative positioning method based on multi-source information fusion is characterized by comprising the following steps:
step one, acquiring sensor data, wherein the sensor data comprise the underwater pressure sensor values of the positioning robot and of the positioned robot and an image sequence containing the positioned robot;
step two, calculating the vertical distance between the positioning robot and the positioned robot from the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot;
step three, calculating the three-dimensional spatial position coordinates of the positioned robot from the image sequence containing the positioned robot;
step four, fusing the vertical distance between the positioning robot and the positioned robot with the three-dimensional spatial position coordinates of the positioned robot to obtain the final three-dimensional spatial position of the positioned robot.
2. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 1, wherein the specific process of the second step comprises:
step 2.1, establishing a linear equation between the pressure difference of the positioning robot and the positioned robot and their vertical distance;
the relation between the pressure difference of the positioning robot and the positioned robot and the vertical distance is:
Z_p = k_p · P_12
wherein Z_p represents the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, P_1 being the pressure sensor value of the positioning robot and P_2 the pressure sensor value of the positioned robot; and k_p is the proportionality coefficient between the pressure difference and the vertical distance;
step 2.2, determining the state equation and observation equation of the first system according to the linear equation;
the state equation and observation equation of the first system are
x_k = A·x_{k−1} + w_{k−1},  y_k = C·x_k + n_k
and the state vector and observation vector of the first system are
x_k = [Z_p(k), v_p(k)]^T,  y_k = Z_p(k)
wherein k denotes the time step; v_p denotes the vertical velocity of the positioned robot; A denotes the state transition matrix; C denotes the observation matrix; w_{k−1} is Gaussian process noise; and n_k is Gaussian observation noise;
step 2.3, filtering the vertical distance with the Kalman algorithm according to the determined state equation and observation equation of the first system to obtain the filtered vertical distance between the positioning robot and the positioned robot.
3. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 2, wherein the specific process of step three comprises:
step 3.1, obtaining the pixel coordinates of the positioned robot through a visual target recognition algorithm from the image sequence containing the positioned robot;
step 3.2, establishing the vision positioning model equation of the positioned robot;
the vision positioning model equation of the positioned robot relates the pixel coordinates (u_l^i, v_l^i) and (u_r^i, v_r^i) of the positioned robot in the left and right cameras of binocular group i to its coordinates p_b = (x_b, y_b, z_b) in the robot body coordinate system, wherein i denotes the binocular camera group index, l and r denote the left and right camera coordinate systems of the binocular pair, and u and v denote pixel coordinates;
step 3.3, determining the state equation and observation equation of the second system according to the vision positioning model equation;
the state equation and observation equation of the second system are
x_k = f(x_{k−1}) + w_{k−1},  y_k = h(x_k) + n_k
wherein x_k is the state vector at time k, y_k is the observation vector at time k, w_{k−1} is Gaussian process noise, and n_k is Gaussian observation noise;
the state vector and observation vector of the second system are
x_k = [x_b(k), y_b(k), z_b(k)]^T,  y_k = [u_l(k), v_l(k), u_r(k), v_r(k)]^T;
step 3.4, filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm according to the determined state equation and observation equation of the second system, to obtain the filtered three-dimensional spatial position coordinates p̂_b of the positioned robot.
4. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as claimed in claim 3, wherein the specific process of step four comprises:
step 4.1, acquiring the attitude angle of the positioned robot;
step 4.2, according to the attitude angle, transforming the three-dimensional spatial position coordinates p̂_b of the positioned robot in the robot coordinate system into the three-dimensional spatial position coordinates p̂_w = (x_w, y_w, z_w) in the world coordinate system;
step 4.3, fusing the Z-axis coordinate z_w with the vertical distance Z_p obtained in step two to obtain the fused vertical estimate ẑ_g; the global state estimate and its covariance are
P_g = (P_p^{−1} + P_v^{−1})^{−1},  ẑ_g = P_g (P_p^{−1} Z_p + P_v^{−1} z_w)
wherein P_p is the covariance of the vertical distance and P_v is the covariance of the Z-axis coordinate z_w;
step 4.4, combining the obtained ẑ_g with the position coordinates x_w and y_w of the positioned robot in the world coordinate system to obtain the final three-dimensional spatial position of the positioned robot.
5. The small underwater multi-robot cooperative positioning method based on multi-source information fusion as recited in claim 4, wherein the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system with the unscented Kalman filter algorithm in step 3.4 comprises:
step 3.4.1, given the initial state value, obtaining the state-estimate Sigma point set {χ_{i,k−1}}, i = 1, 2, …, 2n, by the UT transform;
step 3.4.2, performing the time update, i.e. the one-step prediction, and computing the predicted state and predicted covariance:
the Sigma points at time k−1 are substituted into the state equation of the second system via the UT transform,
χ_{i,k|k−1} = f(χ_{i,k−1})
the vectors χ_{i,k|k−1} are merged to obtain the one-step state prediction at time k, and, taking the process noise into account, the predicted covariance of this prediction step is obtained;
step 3.4.3, performing the measurement update, i.e. correcting the predicted state of the previous step with the measurement: the updated Sigma points are substituted into the observation equation of the second system to obtain the predicted measurements,
γ_{i,k|k−1} = h(χ_{i,k|k−1})
the vectors γ_{i,k|k−1} are merged to obtain the measurement prediction at time k and its covariance, and the cross-covariance between the state prediction and the measurement prediction is then computed;
step 3.4.4, computing the filter gain and updating the state estimate and variance;
step 3.4.5, repeatedly iterating steps 3.4.2 to 3.4.4 to obtain the estimation result of the state vector.
6. A small underwater multi-robot cooperative positioning system based on multi-source information fusion, characterized by comprising a sensor layer and a data fusion layer, wherein:
the sensor layer comprises a surround-view stereo perception device, a pressure sensor and an inertial sensor; the surround-view stereo perception device comprises a plurality of groups of binocular cameras and is used for acquiring an image sequence containing the positioned robot; the pressure sensor is used for acquiring the underwater pressure sensor value of the positioning robot and the underwater pressure sensor value of the positioned robot; the inertial sensor is used for acquiring the attitude angle of the positioned robot;
the data fusion layer comprises a sub-filter I, a sub-filter II and a main filter; the sub-filter I is used for calculating and obtaining the vertical distance between the positioning robot and the positioned robot according to the value of the underwater pressure sensor of the positioning robot and the value of the underwater pressure sensor of the positioned robot; the sub-filter II is used for calculating and obtaining the three-dimensional space position coordinates of the positioned robot according to the image sequence containing the positioned robot; the main filter is used for carrying out information fusion on the vertical distance between the positioning robot and the positioned robot and the three-dimensional space position coordinate of the positioned robot to obtain the final three-dimensional space position of the positioned robot;
the sensor layer and the data fusion layer communicate wirelessly.
7. The system of claim 6, wherein the specific process of obtaining the vertical distance between the positioning robot and the positioned robot in the sub-filter I comprises: firstly, establishing a linear equation of the pressure difference and the vertical distance between the positioning robot and the positioned robot; the relation between the pressure difference between the positioning robot and the positioned robot and the vertical distance is as follows:
Z_p = k_p · P_12
wherein Z_p represents the vertical distance; P_12 denotes the pressure difference, P_12 = P_1 − P_2, P_1 being the pressure sensor value of the positioning robot and P_2 the pressure sensor value of the positioned robot; and k_p is the proportionality coefficient between the pressure difference and the vertical distance;
then, determining a first system state equation and an observation equation according to the linear equation; the state equation and observation equation for the first system are:
Figure FDA0003060671830000041
the state vector and observation vector of the first system are:
Figure FDA0003060671830000042
wherein k represents the time step; v_p represents the vertical velocity of the positioned robot; A represents the state transition matrix; C represents the observation matrix; w_{k-1} is the Gaussian process noise; v_k is the Gaussian observation noise;
and finally, filtering the vertical distance by adopting the Kalman filtering algorithm according to the determined state equation and observation equation of the first system, to obtain the filtered vertical distance between the positioning robot and the positioned robot.
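A minimal sketch of sub-filter I follows, assuming a constant-velocity model for the vertical distance: the claim only states that the first system is linear with a state transition matrix A, an observation matrix C and Gaussian noises, so the concrete A, C, noise levels and the sampling interval dt below are illustrative assumptions.

```python
import numpy as np


class VerticalDistanceKF:
    """Sub-filter I sketch: track Z_p = k_p * (P1 - P2) with a constant-velocity model."""

    def __init__(self, k_p: float, dt: float, q: float = 1e-3, r: float = 1e-2):
        self.k_p = k_p
        self.A = np.array([[1.0, dt], [0.0, 1.0]])   # assumed state: [Z_p, v_p]
        self.C = np.array([[1.0, 0.0]])              # only Z_p is measured
        self.Q = q * np.eye(2)                       # assumed process noise covariance
        self.R = np.array([[r]])                     # assumed observation noise covariance
        self.x = np.zeros((2, 1))
        self.P = np.eye(2)

    def update(self, p1: float, p2: float):
        z = np.array([[self.k_p * (p1 - p2)]])       # pressure difference -> vertical distance
        # predict
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        # correct
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.C @ self.x)
        self.P = (np.eye(2) - K @ self.C) @ self.P
        return float(self.x[0, 0]), float(self.P[0, 0])
```

Because the observation is already the pressure-derived distance, the correction stays scalar, which keeps this sub-filter cheap enough for a small robot's onboard processor.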
8. The system of claim 7, wherein the specific process of obtaining the three-dimensional spatial position coordinates of the positioned robot in the sub-filter II comprises:
firstly, obtaining the pixel coordinates of a positioned robot through a visual target recognition algorithm according to an image sequence containing the positioned robot; then, establishing a visual positioning model equation of the positioned robot; the vision positioning model equation of the positioned robot is as follows:
Figure FDA0003060671830000046
wherein i represents the serial number of a binocular camera; l and r represent the left and right camera coordinate systems of the binocular camera, respectively; u and v represent pixel coordinates; (x_b, y_b, z_b) represents the coordinates of the positioned robot in the robot body coordinate system;
then, determining a second system state equation and an observation equation according to the visual positioning model equation; the state equation and observation equation of the second system are:
x_k = f(x_{k-1}) + w_{k-1}
z_k = h(x_k) + v_k
wherein x_k is the state vector at time k; z_k is the observation vector at time k; w_{k-1} is the Gaussian process noise; v_k is the Gaussian observation noise;
the state vector and observation vector of the second system are:
Figure FDA0003060671830000056
Figure FDA0003060671830000057
finally, according to the determined state equation and observation equation of the second system, filtering the coordinates of the positioned robot in the robot body coordinate system by adopting the unscented Kalman filter algorithm, to obtain the filtered three-dimensional spatial position coordinates (x_b, y_b, z_b) of the positioned robot.
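The claim's visual positioning model equation is given only as a formula image, so the sketch below uses a simplified stand-in: rectified pinhole-stereo triangulation for each binocular pair, followed by averaging the per-pair results in the robot body frame. The intrinsics, extrinsics and function names are assumptions for illustration, not the patent's own model.

```python
import numpy as np


def triangulate_rectified_pair(u_l, v_l, u_r, fx, fy, cx, cy, baseline):
    """Rectified pinhole-stereo triangulation (simplified stand-in for the claim's model)."""
    disparity = u_l - u_r
    if disparity <= 0:
        raise ValueError("non-positive disparity; target not triangulable")
    z = fx * baseline / disparity          # depth along the optical axis
    x = (u_l - cx) * z / fx
    y = (v_l - cy) * z / fy
    return np.array([x, y, z])


def position_in_body_frame(detections, intrinsics, extrinsics):
    """Average per-pair triangulations after mapping each camera-frame point into the
    robot body frame; the rotation R and translation t per pair are assumed calibrated."""
    points = []
    for (u_l, v_l, u_r), (fx, fy, cx, cy, b), (R, t) in zip(detections, intrinsics, extrinsics):
        p_cam = triangulate_rectified_pair(u_l, v_l, u_r, fx, fy, cx, cy, b)
        points.append(R @ p_cam + t)       # camera frame -> body frame
    return np.mean(points, axis=0)
```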
9. The system of claim 8, wherein the specific process of obtaining the final three-dimensional spatial position of the positioned robot in the main filter comprises:
firstly, acquiring the attitude angle of the positioned robot; then, according to the attitude angle, converting the three-dimensional spatial position coordinates (x_b, y_b, z_b) of the positioned robot in the robot body coordinate system into the three-dimensional spatial position coordinates (x_w, y_w, z_w) in the world coordinate system; then, fusing the Z-axis direction coordinate z_w with the vertical distance Z_p obtained in the sub-filter I to obtain the fused vertical estimate;
The global state estimate and covariance matrix are:
Figure FDA00030606718300000513
wherein the two covariances involved are the covariance of the vertical distance Z_p and the covariance of the Z-axis direction coordinate z_w, respectively;
finally, combining the obtained fused vertical estimate with the position coordinates (x_w, y_w) of the positioned robot in the world coordinate system to obtain the final three-dimensional spatial position of the positioned robot.
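Since the claim's global state estimate and covariance formula is given only as a formula image, the sketch below uses a common variance-weighted (information-filter) fusion of the two vertical estimates as an assumed stand-in, followed by recombination with the world-frame x, y coordinates. The function names and argument layout are illustrative.

```python
import numpy as np


def fuse_vertical(z_p: float, r_p: float, z_w: float, r_z: float):
    """Variance-weighted fusion of the pressure-based vertical distance Z_p and the
    vision-based Z-axis coordinate z_w; this information-filter form is an assumption."""
    info = 1.0 / r_p + 1.0 / r_z
    z_hat = (z_p / r_p + z_w / r_z) / info   # fused vertical estimate
    r_hat = 1.0 / info                       # fused variance
    return z_hat, r_hat


def fuse_position(xy_w: np.ndarray, z_hat: float) -> np.ndarray:
    """Combine the fused vertical estimate with the world-frame x, y from sub-filter II
    to give the final 3-D position of the positioned robot."""
    return np.array([xy_w[0], xy_w[1], z_hat])
```

The design intent matches the claim: the more precise of the two vertical sources dominates the fused estimate through its smaller variance, while the horizontal coordinates come from vision alone.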
10. The small underwater multi-robot cooperative positioning system based on multi-source information fusion of claim 9, characterized in that the specific process of filtering the coordinates of the positioned robot in the robot body coordinate system by adopting the unscented Kalman filter algorithm in the sub-filter II comprises: first, given an initial state value, obtaining the state-estimation Sigma point set {χ_{i,k-1}}, i = 1, 2, ..., 2n, through UT transformation; then, performing the time update, i.e., one-step-ahead prediction, calculating the predicted state and the prediction covariance: substituting the Sigma points at time k-1 into the state equation of the second system:
χ_{i,k|k-1} = f(χ_{i,k-1})
merging the vectors χ_{i,k|k-1} to obtain the one-step-ahead state estimate at time k; meanwhile, taking the process noise into account, the covariance of the one-step-ahead prediction is obtained;
then, performing the prediction update, i.e., updating the one-step-ahead predicted state with the measurement: substituting the updated Sigma points into the observation equation of the second system to obtain the measurement prediction values:
Z_{i,k|k-1} = h(χ_{i,k|k-1})
merging the vectors Z_{i,k|k-1} to obtain the measurement prediction at time k and its covariance, and further calculating the cross-covariance between the state prediction and the measurement prediction;
then, calculating the filter gain and updating the state estimate and its variance;
finally, iterating the above process repeatedly to obtain the estimation result of the state vector.
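To complement the measurement-update sketch given after claim 5, the following sketch covers the remaining pieces of claim 10: generating Sigma points by the unscented transform and performing the time update through the state equation f. A standard 2n+1-point formulation with common default scaling parameters is assumed here; the claim itself enumerates 2n points and does not give parameter values.

```python
import numpy as np


def ut_sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Unscented-transform sigma points and weights for state x with covariance P.
    The scaling parameters are common defaults, not values taken from the claim."""
    n = x.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    chi = np.vstack([x, x + S.T, x - S.T])          # 2n + 1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wc = wm.copy()                                  # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    return chi, wm, wc


def ukf_time_update(x, P, f, Q):
    """Time update of claim 10: push each sigma point chi_{i,k-1} through the state
    equation f, then merge to get the one-step-ahead state and prediction covariance."""
    chi, wm, wc = ut_sigma_points(x, P)
    chi_pred = np.array([f(c) for c in chi])        # chi_{i,k|k-1} = f(chi_{i,k-1})
    x_pred = np.sum(wm[:, None] * chi_pred, axis=0)
    P_pred = Q.copy()                               # account for process noise
    for i in range(chi_pred.shape[0]):
        d = (chi_pred[i] - x_pred)[:, None]
        P_pred += wc[i] * (d @ d.T)
    return x_pred, P_pred, chi_pred
```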
CN202110512081.3A 2021-05-11 2021-05-11 Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion Active CN113074725B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110512081.3A CN113074725B (en) 2021-05-11 2021-05-11 Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion


Publications (2)

Publication Number Publication Date
CN113074725A true CN113074725A (en) 2021-07-06
CN113074725B CN113074725B (en) 2022-07-22

Family

ID=76616465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110512081.3A Active CN113074725B (en) 2021-05-11 2021-05-11 Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion

Country Status (1)

Country Link
CN (1) CN113074725B (en)



Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102052924A (en) * 2010-11-25 2011-05-11 哈尔滨工程大学 Combined navigation and positioning method of small underwater robot
CN102980579A (en) * 2012-11-15 2013-03-20 哈尔滨工程大学 Autonomous underwater vehicle autonomous navigation locating method
CN104280025A (en) * 2013-07-08 2015-01-14 中国科学院沈阳自动化研究所 Adaptive unscented Kalman filter-based deepwater robot short-baseline combined navigation method
CN204228171U (en) * 2014-11-19 2015-03-25 山东华盾科技股份有限公司 A kind of underwater robot guider
CN105775082A (en) * 2016-03-04 2016-07-20 中国科学院自动化研究所 Bionic robotic dolphin for water quality monitoring
CN107677272A (en) * 2017-09-08 2018-02-09 哈尔滨工程大学 A kind of AUV collaborative navigation methods based on nonlinear transformations filtering
CN107585280A (en) * 2017-10-12 2018-01-16 上海遨拓深水装备技术开发有限公司 A kind of quick dynamic positioning systems of ROV for being adapted to vertical oscillation current
US20190127034A1 (en) * 2017-11-01 2019-05-02 Tampa Deep-Sea X-Plorers Llc Autonomous underwater survey apparatus and system
CN108303094A (en) * 2018-01-31 2018-07-20 深圳市拓灵者科技有限公司 The Position Fixing Navigation System and its positioning navigation method of array are merged based on multiple vision sensor
CN108444478A (en) * 2018-03-13 2018-08-24 西北工业大学 A kind of mobile target visual position and orientation estimation method for submarine navigation device
CN108594834A (en) * 2018-03-23 2018-09-28 哈尔滨工程大学 One kind is towards more AUV adaptive targets search and barrier-avoiding method under circumstances not known
CN110764533A (en) * 2019-10-15 2020-02-07 哈尔滨工程大学 Multi-underwater robot cooperative target searching method
CN111542020A (en) * 2020-05-06 2020-08-14 河海大学常州校区 Multi-AUV cooperative data collection method based on region division in underwater acoustic sensor network
CN111638523A (en) * 2020-05-08 2020-09-08 哈尔滨工程大学 System and method for searching and positioning lost person by underwater robot
GB202007680D0 (en) * 2020-05-22 2020-07-08 Equinor Energy As Shuttle loading system
CN111595348A (en) * 2020-06-23 2020-08-28 南京信息工程大学 Master-slave mode cooperative positioning method of autonomous underwater vehicle combined navigation system
CN112432644A (en) * 2020-11-11 2021-03-02 杭州电子科技大学 Unmanned ship integrated navigation method based on robust adaptive unscented Kalman filtering
CN112613640A (en) * 2020-12-07 2021-04-06 清华大学 Heterogeneous AUV (autonomous Underwater vehicle) cooperative underwater information acquisition system and energy optimization method
CN112698273A (en) * 2020-12-15 2021-04-23 哈尔滨工程大学 Multi-AUV single-standard distance measurement cooperative operation method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FENG, WX et al.: "Novel algorithms for coordination of underwater swarm robotics", 2006 International Conference on Mechatronics and Automation *
MUGEN ZHOU et al.: "A Multi-Binocular Camera-based Localization Method for Amphibious Spherical Robots", 2020 IEEE International Conference on Mechatronics and Automation (ICMA) *
YANGYANG WANG et al.: "Pseudo-3D Vision-Inertia Based Underwater Self-Localization for AUVs", IEEE Transactions on Vehicular Technology *
TANG Kun: "Research on the underwater positioning system of an amphibious spherical robot", China Master's Theses Full-text Database (Information Science and Technology) *
SUN Xin: "Research on multi-AUV cooperative navigation based on distance information", China Master's Theses Full-text Database (Engineering Science and Technology II) *
WANG Tian et al.: "Research on an attitude prediction method for a disc-shaped underwater robot based on the SVM algorithm", Transducer and Microsystem Technologies *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114018236A (en) * 2021-09-30 2022-02-08 哈尔滨工程大学 Laser vision strong coupling SLAM method based on adaptive factor graph
CN114018236B (en) * 2021-09-30 2023-11-03 哈尔滨工程大学 Laser vision strong coupling SLAM method based on self-adaptive factor graph
CN115031726A (en) * 2022-03-29 2022-09-09 哈尔滨工程大学 Data fusion navigation positioning method
CN116592896A (en) * 2023-07-17 2023-08-15 山东水发黄水东调工程有限公司 Underwater robot navigation positioning method based on Kalman filtering and infrared thermal imaging
CN116592896B (en) * 2023-07-17 2023-09-29 山东水发黄水东调工程有限公司 Underwater robot navigation positioning method based on Kalman filtering and infrared thermal imaging

Also Published As

Publication number Publication date
CN113074725B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN113074725B (en) Small underwater multi-robot cooperative positioning method and system based on multi-source information fusion
Wu et al. Survey of underwater robot positioning navigation
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
US20220124303A1 (en) Methods and systems for selective sensor fusion
CN107223275B (en) Method and system for fusing multi-channel sensing data
CN109360240B (en) Small unmanned aerial vehicle positioning method based on binocular vision
EP3158293B1 (en) Sensor fusion using inertial and image sensors
EP3158412B1 (en) Sensor fusion using inertial and image sensors
Wang et al. Online high-precision probabilistic localization of robotic fish using visual and inertial cues
CN108089196B (en) Optics is initiative and is fused non-cooperative target position appearance measuring device passively
Shen et al. Optical flow sensor/INS/magnetometer integrated navigation system for MAV in GPS-denied environment
CN111338383B (en) GAAS-based autonomous flight method and system, and storage medium
Siegwart et al. Autonomous mobile robots
CN107727101B (en) Three-dimensional attitude information rapid resolving method based on dual-polarized light vector
Yu et al. Stereo vision based obstacle avoidance strategy for quadcopter UAV
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN113408623B (en) Non-cooperative target flexible attachment multi-node fusion estimation method
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
Zhang et al. An open-source, fiducial-based, underwater stereo visual-inertial localization method with refraction correction
Cao et al. Omni-directional vision localization based on particle filter
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
He et al. A low cost visual positioning system for small scale tracking experiments on underwater vehicles
Xu et al. Probabilistic membrane computing-based SLAM for patrol UAVs in coal mines
CN108344972A (en) Robotic vision system based on grating loss stereoscopic vision and air navigation aid

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant