CN117128961A - Underwater robot positioning method, device, electronic equipment and storage medium

Info

Publication number: CN117128961A
Application number: CN202310633421.7A
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: robot, DVL, frame, frame time, coordinate system
Legal status: Pending
Inventors: 吴正兴, 李朋, 黄雨培, 闫帅铮, 李思捷, 谭民, 喻俊志
Assignee: Institute of Automation of Chinese Academy of Science
Application filed by Institute of Automation of Chinese Academy of Science
Priority to CN202310633421.7A
Publication of CN117128961A

Classifications

    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/165: Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G01S15/88: Sonar systems specially adapted for specific applications
    • G01S7/521: Constructional features (details of systems using the reflection or reradiation of acoustic waves)

Abstract

The invention provides an underwater robot positioning method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a first velocity component of the robot at a first frame time and a DVL error term at a second frame time based on velocity measurement data of the DVL, the first frame time being the frame time immediately preceding the second frame time; determining an initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time; and optimizing the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time, and determining a final pose estimation value of the robot at the second frame time, thereby improving the positioning accuracy of the underwater robot.

Description

Underwater robot positioning method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of visual image technologies, and in particular, to a positioning method and apparatus for an underwater robot, an electronic device, and a storage medium.
Background
Underwater robots play an increasingly important role in marine resource exploration and offshore platform maintenance. In order to successfully complete a task, it is important that an underwater robot can accurately position itself. Conventional positioning methods typically utilize sonar, an inertial measurement unit (Inertial Measurement Unit, IMU), a depth sensor, a Doppler velocity log (Doppler Velocity Log, DVL), and other sensors for navigation and state estimation of the underwater robot. In recent years, since vision sensors can provide rich image information at low cost, visual simultaneous localization and mapping (Visual Simultaneous Localization and Mapping, V-SLAM) systems have received a great deal of attention, and underwater visual SLAM systems have also been validated in many studies.
A DVL can obtain the linear velocity in the X, Y, and Z directions by transmitting acoustic signals and measuring the Doppler shift of the signals as they reflect off the bottom. These measurements contain information on the position change and can therefore be used for position estimation of the robot. Although there have been some attempts to fuse the visual sensor with the DVL sensor, these methods fuse the DVL velocity measurements with a gyroscope and a depth sensor in a dead-reckoning scheme, and the accumulation calculation inevitably leads to error accumulation; as a result, these fusion methods cannot significantly improve the positioning accuracy beyond that of the visual odometer.
Disclosure of Invention
Aiming at the problems existing in the prior art, the invention provides an underwater robot positioning method, an underwater robot positioning device, electronic equipment and a storage medium.
In a first aspect, the present invention provides a positioning method for an underwater robot, including:
acquiring a first velocity component of the robot at a first frame time and a DVL error term at a second frame time based on velocity measurement data of a Doppler velocimeter DVL; the first frame time is the frame time immediately preceding the second frame time;
determining an initial pose estimation value of the robot at the second frame moment based on a first velocity component of the robot at the first frame moment;
And optimizing an initial pose estimation value of the robot at the second frame moment based on a DVL error term of the robot at the second frame moment, and determining a final pose estimation value of the robot at the second frame moment.
Optionally, the acquiring the first velocity component of the robot at the first frame time based on the velocity measurement data of the DVL includes:
a first velocity component of the robot at the first frame time is obtained based on a product of a second velocity component of the robot at the first frame time and a rotational component of the robot transformed from a visual coordinate system to a DVL coordinate system, a difference between a translational component of the robot transformed from the DVL coordinate system to the visual coordinate system and a displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and a translational component of the robot transformed from the visual coordinate system to the DVL coordinate system.
Optionally, the first velocity component of the robot at the first frame instant is determined based on the following formula:
$$\tilde{p}_{k-1} = \tilde{R}_{k-1}\, R_{CD}\left(p_{DC} - d_{k-1}\right) + p_{CD}$$

wherein $\tilde{p}_{k-1}$ represents the first velocity component of the robot at the first frame time, $\tilde{R}_{k-1}$ represents the second velocity component of the robot at the first frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system.
Optionally, the determining, based on the first velocity component of the robot at the first frame time, an initial pose estimation value of the robot at the second frame time includes:
and determining an initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time, the second velocity component of the robot at the first frame time and the final pose estimation value of the robot at the first frame time.
Optionally, the acquiring the DVL error term of the robot at the second frame time based on the DVL speed measurement data includes:
A DVL error term for the robot at the second frame time is obtained based on a difference between displacements of the DVL in a DVL coordinate system between the first frame time and the second frame time, a rotational component of the robot from a visual coordinate system to the DVL coordinate system, a rotational component of the robot at the first frame time, a rotational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot from the visual coordinate system to the DVL coordinate system, a translational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot at the first frame time, a translational component of the robot from the DVL coordinate system to the visual coordinate system.
Optionally, the DVL error term of the robot at the second frame instant is determined based on the following formula:
$$e_{d,k} = d_{k-1} - R_{CD}\, R_{k-1}^{-1}\left(R_{k}\, p_{CD} + p_{k} - p_{k-1}\right) - p_{DC}$$

wherein $e_{d,k}$ represents the DVL error term of the robot at the second frame time, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $R_{k-1}$ represents the rotational component of the robot at the first frame time, $R_{k}$ represents the rotational component of the robot relative to the visual coordinate system at the second frame time, $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{k}$ represents the translational component of the robot relative to the visual coordinate system at the second frame time, $p_{k-1}$ represents the translational component of the robot at the first frame time, and $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system.
Optionally, the optimizing the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time, and determining the final pose estimation value of the robot at the second frame time includes:
based on a DVL error term of the robot at the second frame time and a reprojection error of the robot at the second frame time, carrying out nonlinear least square optimization problem solving by taking the pose of the robot at the second frame time as a target, optimizing an initial pose estimated value of the robot at the second frame time, and determining a final pose estimated value of the robot at the second frame time.
In a second aspect, the present invention also provides an underwater robot positioning device, including:
the acquisition module is used for acquiring a first velocity component of the robot at a first frame moment and a DVL error term at a second frame moment based on the velocity measurement data of the DVL; the first frame time is the frame time immediately preceding the second frame time;
a determining module, configured to determine an initial pose estimation value of the robot at the second frame time based on a first velocity component of the robot at the first frame time;
and the optimization module is used for optimizing the initial pose estimation value of the robot at the second frame moment based on the DVL error term of the robot at the second frame moment and determining the final pose estimation value of the robot at the second frame moment.
In a third aspect, the present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the underwater robot positioning method according to the first aspect as described above when executing the program.
In a fourth aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the underwater robot positioning method according to the first aspect described above.
According to the underwater robot positioning method, the underwater robot positioning device, the electronic equipment and the storage medium provided by the invention, the initial estimation of the pose of the robot in the visual tracking process is improved by using DVL measurement data, the initial pose estimation value of the robot is then optimized by constructing a new DVL error term, and the positioning accuracy of the underwater robot is thereby improved.
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an underwater robot positioning method provided by the invention;
FIG. 2 is a schematic flow chart of a vision-Doppler velocimeter fusion framework provided by the invention;
FIG. 3 is a schematic diagram of the coordinate system relationships of the camera-DVL system provided by the invention;
FIG. 4 is a factor schematic diagram of a visual-DVL fusion system provided by the invention;
FIG. 5 is a schematic diagram of a robot motion trajectory for a binocular vision-DVL fusion mode positioning method provided by the invention;
FIG. 6 is a schematic view of the structure of the underwater robot positioning device provided by the invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of the positioning method of the underwater robot, as shown in fig. 1, the method comprises the following steps:
Step 100, acquiring a first velocity component of a robot at a first frame time and a DVL error term at a second frame time based on velocity measurement data of a Doppler velocimeter DVL; the first frame time is the frame time immediately preceding the second frame time.
Step 101, determining an initial pose estimated value of the robot at a second frame moment based on a first speed component of the robot at the first frame moment.
Step 102, optimizing an initial pose estimation value of the robot at the second frame moment based on a DVL error term of the robot at the second frame moment, and determining a final pose estimation value of the robot at the second frame moment.
Specifically, the DVL may obtain the linear velocity of the target in the X, Y, and Z directions by transmitting an acoustic signal and measuring the Doppler shift of the signal as it reflects off the bottom. In the process of estimating the pose of the robot, the velocity of the robot at a certain frame time can be represented by a velocity in the translational direction and a velocity in the yaw direction. The first velocity component may be used to represent the velocity of the robot in the translational direction. The second frame time may be used to represent the current frame time, and the first frame time may be used to represent the frame time immediately preceding the current frame time. The DVL error term may be used to represent the error between the displacement measurement value and the derived value of the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time.
After acquiring the first velocity component of the robot at the first frame time based on the velocity measurement data of the DVL, an initial pose estimation value of the robot at the second frame time may be determined according to the first velocity component of the robot at the first frame time.
After determining the initial pose estimation value of the robot at the second frame time, the initial pose estimation value of the robot at the second frame time can be optimized according to the DVL error term of the robot at the second frame time to obtain the final pose estimation value of the robot at the second frame time. The final pose estimation value obtained in this way can be used as the final pose estimation value of the first-stage visual tracking thread in the original ORB-SLAM3 system. The operation of the second stage of the visual tracking thread is then performed according to the final pose estimation value of the first-stage visual tracking thread, thereby completing the positioning of the robot. The operation of the second stage of the visual tracking thread may be the same as in the prior art, or may be performed in other ways, and is not specifically limited herein.
According to the underwater robot positioning method provided by the invention, the initial estimation of the pose of the robot in the visual tracking process is improved by using the DVL measurement data, the initial pose estimation value of the robot is then optimized by constructing a new DVL error term, and the positioning accuracy of the underwater robot is thereby improved.
Optionally, acquiring a first velocity component of the robot at a first frame time based on the velocity measurement data of the DVL includes:
The first velocity component of the robot at the first frame time is obtained based on a product of a second velocity component of the robot at the first frame time and a rotational component of the robot transformed from the visual coordinate system to the DVL coordinate system, a difference between a translational component of the robot transformed from the DVL coordinate system to the visual coordinate system and a displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and a translational component of the robot transformed from the visual coordinate system to the DVL coordinate system.
In particular, the second velocity component may represent the velocity of the robot in the yaw direction at a certain frame moment. The relative position of the robot between the visual coordinate system and the DVL coordinate system is constant. The displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time may be represented by a product of the velocity measurement data of the DVL at the first frame time and a time difference between the first frame time and the second frame time.
In one embodiment, the first velocity component of the robot at the first frame instant may be determined based on the following formula:
$$\tilde{p}_{k-1} = \tilde{R}_{k-1}\, R_{CD}\left(p_{DC} - d_{k-1}\right) + p_{CD}$$

wherein $\tilde{p}_{k-1}$ represents the first velocity component of the robot at the first frame time, $\tilde{R}_{k-1}$ represents the second velocity component of the robot at the first frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system.
The displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time may be calculated based on the following formula:
$$d_{k-1} = v^{D}_{k-1}\, dt$$

wherein $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, $v^{D}_{k-1}$ represents the velocity measurement data of the DVL at the first frame time, and $dt$ represents the time difference between the first frame time and the second frame time.
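For illustration only (not part of the patent text), a minimal sketch of the displacement computation and of the reconstructed first-velocity-component formula is given below. All function and variable names, as well as the example values, are hypothetical, and the camera-DVL extrinsic quantities R_CD, p_CD and p_DC are assumed to be known from calibration.

```python
import numpy as np

def dvl_displacement(v_dvl_prev, dt):
    """Displacement of the DVL in its own frame between the two frame times: d = v * dt."""
    return np.asarray(v_dvl_prev) * dt

def first_velocity_component(R_vel_prev, R_CD, p_CD, p_DC, d_prev):
    """Translational (first) velocity component at the first frame time, following the
    formula reconstructed above:
        p_tilde = R_vel_prev @ R_CD @ (p_DC - d_prev) + p_CD
    R_vel_prev : rotational (second) velocity component at the first frame time, 3x3.
    R_CD, p_CD : rotation/translation of the fixed visual-to-DVL transformation.
    p_DC       : translation of the fixed DVL-to-visual transformation.
    d_prev     : DVL displacement in the DVL frame between the two frame times.
    """
    return R_vel_prev @ R_CD @ (p_DC - d_prev) + p_CD

# Example with placeholder values
v_dvl = np.array([0.30, 0.05, -0.02])   # DVL velocity measurement at the first frame time (m/s)
dt = 0.1                                 # time between image frames (s)
d = dvl_displacement(v_dvl, dt)
p_tilde = first_velocity_component(np.eye(3), np.eye(3), np.zeros(3), np.zeros(3), d)
print(d, p_tilde)
```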
Optionally, determining the initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time includes:
an initial pose estimate for the robot at the second frame time is determined based on the first velocity component of the robot at the first frame time, the second velocity component of the robot at the first frame time, and the final pose estimate for the robot at the first frame time.
Specifically, the speed of the robot at the first frame time may be obtained according to the first speed component and the second speed component of the robot at the first frame time, and then the initial pose estimation value of the robot at the second frame time may be determined based on the speed of the robot at the first frame time and the final pose estimation value of the robot at the first frame time.
In one embodiment, the initial pose estimate for the robot at the second frame time may be determined based on the following formula:
$$\tilde{T}_{k} = \tilde{V}_{k-1}\, T_{k-1}$$

wherein $\tilde{T}_{k}$ represents the initial pose estimation value of the robot at the second frame time, $\tilde{V}_{k-1}$ represents the velocity of the robot at the first frame time (assembled from the first and second velocity components), and $T_{k-1}$ represents the final pose estimation value of the robot at the first frame time.
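For illustration only, a minimal sketch of this pose-prediction step, assuming 4x4 homogeneous transforms; the names and example values are hypothetical.

```python
import numpy as np

def make_se3(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def predict_initial_pose(R_vel_prev, p_vel_prev, T_prev):
    """Initial pose estimate at the second frame time: T_tilde = V_prev @ T_prev, where
    V_prev combines the rotational (second) and translational (first) velocity components
    of the robot at the first frame time."""
    V_prev = make_se3(R_vel_prev, p_vel_prev)
    return V_prev @ T_prev

# Example with placeholder values
T_prev = make_se3(np.eye(3), np.array([1.0, 0.0, -2.0]))   # final pose estimate at the first frame time
T_init = predict_initial_pose(np.eye(3), np.array([0.03, 0.0, 0.0]), T_prev)
print(T_init)
```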
Optionally, acquiring a DVL error term of the robot at the second frame time based on the speed measurement data of the DVL includes:
the DVL error term for the robot at the second frame time is obtained based on a difference between displacements of the DVL in the DVL coordinate system between the first frame time and the second frame time, a rotational component of the robot from the visual coordinate system to the DVL coordinate system, a rotational component of the robot at the first frame time, a rotational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot from the visual coordinate system to the DVL coordinate system, a translational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot at the first frame time, a translational component of the robot from the DVL coordinate system to the visual coordinate system.
In particular, a DVL error term of the robot at the second frame instant may be used to represent an error between the displacement measurement value and the derived value of the DVL in the DVL coordinate system between the first frame instant and the second frame instant.
The displacement measurement value of the DVL between the first frame time and the second frame time may be represented by the change in the position of the DVL, in the DVL coordinate system, between the first frame time and the second frame time.
The derived value of the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time may be determined based on a rotational component of the robot from the visual coordinate system to the DVL coordinate system, a rotational component of the robot at the first frame time, a rotational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot from the visual coordinate system to the DVL coordinate system, a translational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot at the first frame time, a translational component of the robot from the DVL coordinate system to the visual coordinate system.
In one embodiment, the DVL error term for the robot at the second frame time may be determined based on the following equation:
$$e_{d,k} = d_{k-1} - R_{CD}\, R_{k-1}^{-1}\left(R_{k}\, p_{CD} + p_{k} - p_{k-1}\right) - p_{DC}$$

wherein $e_{d,k}$ represents the DVL error term of the robot at the second frame time, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $R_{k-1}$ represents the rotational component of the robot at the first frame time, $R_{k}$ represents the rotational component of the robot relative to the visual coordinate system at the second frame time, $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{k}$ represents the translational component of the robot relative to the visual coordinate system at the second frame time, $p_{k-1}$ represents the translational component of the robot at the first frame time, and $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system.
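For illustration only, a sketch of the DVL error term following the formula reconstructed above; the names and example values are hypothetical, and rotation matrices are inverted by transposition.

```python
import numpy as np

def dvl_error_term(d_prev, R_CD, p_CD, p_DC, R_prev, p_prev, R_cur, p_cur):
    """DVL error term at the second frame time:
        e_d = d_prev - R_CD @ R_prev^{-1} @ (R_cur @ p_CD + p_cur - p_prev) - p_DC
    d_prev         : measured DVL displacement between the two frame times (DVL frame).
    R_CD, p_CD     : rotation/translation of the fixed visual-to-DVL transformation.
    p_DC           : translation of the fixed DVL-to-visual transformation.
    R_prev, p_prev : robot pose relative to the visual coordinate system at the first frame time.
    R_cur, p_cur   : robot pose relative to the visual coordinate system at the second frame time.
    """
    d_pred = R_CD @ R_prev.T @ (R_cur @ p_CD + p_cur - p_prev) + p_DC
    return d_prev - d_pred

# Example with placeholder values: identity extrinsics and a pure translation of 0.03 m in x
e = dvl_error_term(np.array([0.03, 0.0, 0.0]),
                   np.eye(3), np.zeros(3), np.zeros(3),
                   np.eye(3), np.zeros(3),
                   np.eye(3), np.array([0.03, 0.0, 0.0]))
print(e)  # close to zero when the measured and pose-derived displacements agree
```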
Optionally, optimizing the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time, and determining the final pose estimation value of the robot at the second frame time includes:
based on a DVL error item of the robot at the second frame time and a reprojection error of the robot at the second frame time, solving a nonlinear least square optimization problem with the pose of the robot at the second frame time as a target, optimizing an initial pose estimation value of the robot at the second frame time, and determining a final pose estimation value of the robot at the second frame time.
Specifically, the reprojection error of the robot at the second frame time can be obtained by the existing visual tracking method.
And taking the pose of the robot at the second frame time as a target, taking a DVL error term of the robot at the second frame time and a reprojection error of the robot at the second frame time as known terms, and constructing an objective function.
In one embodiment, the objective function may be constructed by the following formula:
$$\left\{R_{k}^{*},\, p_{k}^{*}\right\} = \arg\min_{R_{k},\, p_{k}}\left(J_{V} + e_{d,k}^{\top}\, \Sigma_{d}^{-1}\, e_{d,k}\right)$$

wherein $R_{k}$ represents the rotational component and $p_{k}$ the translational component of the robot pose at the second frame time; the final pose estimation value of the robot at the second frame time can be determined from the optimized $R_{k}^{*}$ and $p_{k}^{*}$. $J_{V}$ represents the reprojection error of the robot at the second frame time, $e_{d,k}$ represents the DVL error term of the robot at the second frame time, and $\Sigma_{d}$ represents the covariance of the DVL error term.
The nonlinear least squares optimization problem of the objective function is solved to determine the final pose estimation value of the robot at the second frame time. The solving method may be the Gauss-Newton (GN) or Levenberg-Marquardt (LM) nonlinear optimization method implemented in the g2o library, and is not specifically limited herein.
The underwater robot positioning method provided by the invention is illustrated by a specific application scene.
The invention provides a tightly coupled Visual-Doppler velocimeter (Visual-DVL) fusion method for underwater robot positioning, which aims to integrate velocity measurement data of DVL into Visual Odometer (VO). The present invention integrates DVL measurement data directly into the visual tracking process, considering that fusing it in Dead Reckoning (DR) systems is prone to error accumulation and suboptimal results. Specifically, the initial estimation of the camera pose in the visual tracking process is improved by using the velocity measurement of the DVL, and a better initial value is provided for the subsequent pose optimization. Then, by constructing a new DVL velocity error term, the velocity measurement of the DVL is directly utilized to restrict the position change of the camera between two adjacent frames, and the position change is optimized together with the vision re-projection error term so as to obtain a more accurate camera pose. The visual-Doppler velocimeter tight coupling fusion method provided by the invention is verified on a data set collected by a plurality of scenes in an underwater simulation environment HoloOcean, and experimental results show that compared with a pure visual odometer, the fusion method provided by the invention can effectively improve the positioning precision of an underwater robot. The invention provides theoretical basis and technical guidance for accurate positioning of the underwater robot.
Fig. 2 is a schematic flow chart of a fusion frame of a vision-doppler velocimeter provided by the invention, and as shown in fig. 2, on the basis of an ORB-SLAM3 frame, a fusion module related to a DVL sensor is designed. In the original ORB-SLAM3 system, the visual tracking thread can be split into two phases. The first phase includes three modes: constant velocity motion model tracking, reference key frame tracking, and repositioning tracking. The latter two modes are only used in case of system initialization or tracking failure, while the first mode is used in most cases. Constant velocity motion model tracking is therefore the primary process of affecting visual tracking performance in the first stage. The second stage of the visual tracking thread is called local map tracking, and projects local map points corresponding to the current frame of the camera onto the current frame to obtain more characteristic point matching relations, and then further optimizes the pose of the camera obtained in the first stage to obtain more accurate pose estimation values. The improvement of the invention is concentrated on the tracking thread in the first stage, and the proposed visual-DVL tight coupling fusion method specifically comprises the following steps:
1. The DVL measurements are associated with the image frames. Considering that the data rate of the DVL is lower than the image frame rate of the camera, the invention associates DVL measurement information with each camera image frame according to the "time nearest" principle, i.e. the velocity measurement information attached to each image frame is taken from the DVL data that is nearest in time.
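For illustration only, a sketch of the "time nearest" association between DVL measurements and camera image frames; the names, sensor rates, and shapes are hypothetical.

```python
import numpy as np

def associate_dvl_to_frames(frame_stamps, dvl_stamps, dvl_velocities):
    """Attach to each camera image frame the DVL velocity measurement whose timestamp is
    nearest in time ('time nearest' principle). The DVL rate is assumed lower than the
    camera frame rate."""
    frame_stamps = np.asarray(frame_stamps)
    dvl_stamps = np.asarray(dvl_stamps)
    dvl_velocities = np.asarray(dvl_velocities)
    idx = np.abs(frame_stamps[:, None] - dvl_stamps[None, :]).argmin(axis=1)
    return dvl_velocities[idx]  # one 3-D velocity per image frame

# Example: 20 Hz camera, 5 Hz DVL
frames = np.arange(0.0, 1.0, 0.05)
dvl_t = np.arange(0.0, 1.0, 0.2)
dvl_v = np.tile(np.array([[0.3, 0.0, 0.0]]), (len(dvl_t), 1))
print(associate_dvl_to_frames(frames, dvl_t, dvl_v).shape)  # (20, 3)
```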
2. The initial pose estimation of the camera at the current frame is improved using the DVL measurement data. In the first stage of tracking the thread, the invention utilizes the visual information and DVL measurement information jointly to predict the current camera pose. Specifically, the present invention uses a constant velocity model of the first stage of the visual tracking process to obtain the rotational component of the camera's current frame pose, while using the velocity measurement data of the DVL to obtain the positional component of the camera's current frame pose.
Fig. 3 is a schematic diagram of the coordinate system relationships of the camera-DVL system provided by the invention; as shown in fig. 3, it illustrates the relationship between the coordinate systems of the camera-DVL system moving from time k-2 to time k. The invention uses a transformation matrix $T \in SE(3)$ to represent a three-dimensional pose, where $T$ is composed of a rotation matrix $R \in SO(3)$ and a translation vector $p \in \mathbb{R}^{3}$. Furthermore, the transformation matrix $T^{M_i}_{N_j}$ represents the pose of the coordinate system {N} at time j with respect to the coordinate system {M} at time i. The invention denotes the camera coordinate system and the DVL coordinate system at time k as $\{C_k\}$ and $\{D_k\}$, respectively; the relative pose between the two coordinate systems is fixed and is expressed as $T_{CD}$, with rotation $R_{CD}$ and translation $p_{CD}$ (the translation of the inverse transformation is denoted $p_{DC}$). In addition, the present invention represents the global world coordinate system as {W}, and defines the position of the camera at the first frame as the origin of the world coordinate system {W}.
During the tracking of the original ORB-SLAM3 system, if visual tracking succeeded at the previous image frame, a constant velocity motion model is used to predict the pose of the camera at the current frame, based on the assumption that the camera moves at a constant velocity between adjacent frames. The pose and velocity of the camera at the previous frame are used to predict the pose of the camera at the current frame, as follows:

$$\tilde{T}_{k} = V_{k-1}\, T_{k-1} \qquad (1)$$

where $\tilde{T}_{k}$ is the predicted current-frame pose of the camera and $T_{k-1}$ is the camera pose at the previous frame. $V_{k-1}$ is the velocity of the camera at the previous frame, which is considered to be equal to the velocity of the camera at the penultimate frame, $V_{k-2}$:

$$V_{k-1} = V_{k-2} = T_{k-1}\, T_{k-2}^{-1} \qquad (2)$$
the proposed method relaxes the constant speed motion assumption and uses the velocity measurement data of the DVL sensor to predict the camera's position at the current frame instant. Since the DVL sensor provides only X-Y-Z three-dimensional linear velocity measurements, without attitude information, the present invention derives using the velocity measurement data of the DVLIs not limited by the translation component of (2) while retaining->The rotation component of (2) is consistent with the derivation of the constant velocity model as follows:
according to (3),translation component of->Can be expressed as:
in the method, in the process of the invention,is the DVL sensor in { D k And { D } and k-1 the displacement between } can be calculated as:
in the method, in the process of the invention,is the measurement of the DVL sensor at time k-1 and dt is the time difference between two image frames. The invention assumes that the measurement noise of the DVL sensor follows a Gaussian distribution, i.e., n v ~N(0,σ v ·I)。
Once the velocity of the camera at the previous frame has been obtained from the vision and DVL measurements, the pose of the camera at the current frame can be predicted from the pose of the camera at the previous frame, and a first optimization is then performed based on the reprojection relations of the map points, completing the first-stage tracking.
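For illustration only, a sketch that ties formulas (1)-(5) together: the rotation of the constant-velocity model is retained while the translation is replaced by the DVL-derived value. The names, conventions, and values are hypothetical placeholders.

```python
import numpy as np

def constant_velocity(T_prev, T_prev2):
    """Constant-velocity model, formula (2): V_{k-1} = T_{k-1} @ T_{k-2}^{-1}."""
    return T_prev @ np.linalg.inv(T_prev2)

def dvl_corrected_velocity(V_cv, R_CD, p_CD, p_DC, v_dvl_prev, dt):
    """Keep the rotation of the constant-velocity model, but replace the translation
    with the DVL-derived translation of formulas (3)-(5)."""
    d = v_dvl_prev * dt                      # formula (5): DVL displacement
    R_V = V_cv[:3, :3]                       # rotation retained from the CV model
    p_V = R_V @ R_CD @ (p_DC - d) + p_CD     # formula (4)
    V = np.eye(4)
    V[:3, :3], V[:3, 3] = R_V, p_V
    return V

# Example with placeholder extrinsics and poses (identity rotations for brevity)
T_km2, T_km1 = np.eye(4), np.eye(4)
T_km1[:3, 3] = [0.04, 0.0, 0.0]
V_cv = constant_velocity(T_km1, T_km2)
V_dvl = dvl_corrected_velocity(V_cv, np.eye(3), np.zeros(3), np.zeros(3),
                               np.array([0.5, 0.0, 0.0]), 0.1)
T_pred = V_dvl @ T_km1                       # formula (1): predicted current-frame pose
print(T_pred[:3, 3])
```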
3. And designing a DVL error term according to the measured value of the DVL sensor and the estimated value of the camera pose of the two adjacent frames.
In the second stage of the original ORB-SLAM3 system tracking thread, local map points in a local window are projected to the current frame to find more characteristic point matching relations, and then the re-projection constraint of the matching characteristic points is utilized to secondarily optimize the pose of the current frame of the camera estimated in the first stage. On the basis of the original system, the invention designs a DVL speed constraint between two camera image frames, and the position change of the camera between two adjacent frames is constrained by using the speed measurement value of the DVL sensor so as to improve the precision of pose estimation.
Given the pose of the camera at the previous frame, with rotation $R_{k-1}$ and translation $p_{k-1}$ relative to the world coordinate system {W}, and the pose at the current frame, with rotation $R_{k}$ and translation $p_{k}$, the displacement of the DVL between $\{D_{k-1}\}$ and $\{D_k\}$ can be deduced as:

$$\hat{d}_{k-1} = R_{CD}\, R_{k-1}^{-1}\left(R_{k}\, p_{CD} + p_{k} - p_{k-1}\right) + p_{DC} \qquad (6)$$

Combining (5) and (6), the DVL error term can be defined as:

$$e_{d,k} = d_{k-1} - \hat{d}_{k-1} \qquad (7)$$

According to (7), the covariance of $e_{d,k}$ can be deduced from the measurement noise of the DVL, i.e. $\Sigma_{d} = \sigma_{v}\, dt^{2}\, I$. The weight of the DVL error term can then be obtained from $\Sigma_{d}^{-1}$.
4. Based on the vision reprojection error, the vision-DVL joint optimization is executed by combining the DVL speed error item designed in the step 3.
In the second stage of the tracking thread, namely the local map tracking stage, the method utilizes the visual information and the DVL measurement information to jointly optimize the current-frame pose of the camera, and performs a second optimization of the current-frame camera pose estimate obtained in the first stage through bundle adjustment (Bundle Adjustment, BA), using the visual reprojection constraint and the DVL velocity error term constraint.
In particular, the present invention constructs an objective function to minimize the reprojection error term of the matched local map points and the DVL velocity error term between the two camera image frames. Given the camera pose of the current frame, the matched world-coordinate-system 3D points $X_{i} \in \mathbb{R}^{3}$ and the corresponding 2D image pixel positions $u_{i} \in \mathbb{R}^{2}$, where $i \in \chi$ and $\chi$ is the set of all 3D-2D matches, the visual reprojection error can be expressed as:

$$J_{V} = \sum_{i \in \chi} \rho\!\left(\left\| u_{i} - \pi\!\left(R_{k}^{-1}\left(X_{i} - p_{k}\right)\right) \right\|^{2}_{\Sigma_{v}}\right) \qquad (8)$$

where $\rho$ is a robust kernel function, $\pi$ represents the camera projection model, and $\Sigma_{v}$ is the covariance matrix associated with the keypoint scale. Then, the camera pose optimization problem of the visual-DVL fusion method in the second stage of the tracking process can be expressed as:

$$\left\{R_{k}^{*},\, p_{k}^{*}\right\} = \arg\min_{R_{k},\, p_{k}}\left(J_{V} + e_{d,k}^{\top}\, \Sigma_{d}^{-1}\, e_{d,k}\right) \qquad (9)$$
This is a nonlinear least squares optimization problem that can be solved by methods such as Gauss-Newton (GN) or Levenberg-Marquardt (LM) implemented in the nonlinear optimization library g2o. Fig. 4 is a factor schematic diagram of the vision-DVL fusion system provided by the present invention; as shown in fig. 4, the parameter to be optimized is the pose of the camera at the current frame, and the pose of the camera at the previous frame is fixed. The DVL velocity factor is a residual term proposed by the present invention to constrain the camera's position change between two frames.
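For illustration only, the following simplified sketch solves a problem of the same form as (9) with SciPy's Levenberg-Marquardt solver: it optimizes only the translation of the current frame while the rotation is held fixed, whereas the implementation described above optimizes the full SE(3) pose with g2o. All data, camera intrinsics, and names are hypothetical placeholders, and the robust kernel rho is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def project(R_cw, p_cw, X, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a world point X into a camera with world-to-camera pose (R_cw, p_cw)."""
    Xc = R_cw @ X + p_cw
    return np.array([fx * Xc[0] / Xc[2] + cx, fy * Xc[1] / Xc[2] + cy])

def residuals(p_cur, R_cur, points_3d, pixels, d_meas,
              R_CD, p_CD, p_DC, R_prev, p_prev, sqrt_info_d):
    # Visual reprojection residuals, one 2-D residual per matched map point
    res = [project(R_cur.T, -R_cur.T @ p_cur, X) - u for X, u in zip(points_3d, pixels)]
    # DVL residual of formula (7), whitened with the square-root information of Sigma_d
    d_pred = R_CD @ R_prev.T @ (R_cur @ p_CD + p_cur - p_prev) + p_DC
    res.append(sqrt_info_d @ (d_meas - d_pred))
    return np.concatenate(res)

# Hypothetical data: three matched map points plus one DVL constraint
R_cur = np.eye(3)                                   # rotation held fixed in this sketch
points_3d = [np.array([0.5, 0.2, 3.0]), np.array([-0.4, 0.1, 2.5]), np.array([0.1, -0.3, 4.0])]
true_p = np.array([0.05, 0.0, 0.0])
pixels = [project(R_cur.T, -R_cur.T @ true_p, X) for X in points_3d]
sol = least_squares(
    residuals, x0=np.zeros(3), method="lm",
    args=(R_cur, points_3d, pixels, np.array([0.05, 0.0, 0.0]),
          np.eye(3), np.zeros(3), np.zeros(3), np.eye(3), np.zeros(3), np.eye(3) * 50.0))
print(sol.x)  # converges towards true_p
```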
After the current camera pose is obtained through two stages of the tracking thread, local map maintenance and key frame processing are executed in the local map building thread.
The underwater simulation environment HoloOcean comprises PierHarbor and OpenWater scenes, and is used for simulating underwater inspection and monitoring tasks of the underwater robot in a near-shore port scene and submarine debris searching and inspection tasks in an open water scene respectively.
Fig. 5 is a schematic diagram of a motion trajectory of a robot in the binocular vision-DVL fusion mode positioning method provided by the present invention, as shown in fig. 5, illustrating a motion trajectory obtained by a binocular vision mode of an ORB-SLAM3 system and a binocular vision-DVL fusion mode provided by the present invention in an OpenWater scene, and the result shows that, compared with the binocular vision mode of the original ORB-SLAM3 system, the trajectory estimated by the fusion method provided by the present invention is closer to a real trajectory.
Table 1 shows absolute translational errors between the trajectories and the true trajectories estimated by the binocular vision mode of the ORB-SLAM3 system and the binocular vision-DVL fusion mode proposed by the present invention. The results in table 1 show that compared with the binocular vision mode of the original ORB-SLAM3 system, the vision-DVL fusion method provided by the present invention obtains smaller absolute translational error, and the effectiveness of the proposed fusion method in improving positioning accuracy is verified.
TABLE 1
1 Each sequence was evaluated 11 times, showing the mean and median;
2 DVL measurements are only used to provide error constraints, not initial pose estimates;
3 DVL measurements are used to provide both initial pose estimates and error constraints;
4 the improvement is the percentage of median error reduction of the proposed method compared to the ORB-SLAM3 binocular mode.
Table 2 shows the speed errors of the track and the real track in the X-Y-Z directions, which are estimated by the binocular vision mode of the ORB-SLAM3 system and the binocular vision-DVL fusion mode provided by the invention, respectively. The results in Table 2 show that the speed of the binocular-DVL fusion method is more consistent with the true value compared with the binocular ORB-SLAM3 method, and it is verified that the fusion method provided by the invention effectively utilizes the speed measurement information of the DVL sensor.
TABLE 2
The underwater robot positioning device provided by the invention is described below, and the underwater robot positioning device described below and the underwater robot positioning method described above can be referred to correspondingly.
Fig. 6 is a schematic structural diagram of an underwater robot positioning device provided by the present invention, as shown in fig. 6, the device includes:
an acquisition module 600, configured to acquire a first velocity component of the robot at a first frame time and a DVL error term at a second frame time based on the velocity measurement data of the DVL; the first frame time is the frame time immediately preceding the second frame time;
a determining module 610, configured to determine an initial pose estimation value of the robot at a second frame time based on a first velocity component of the robot at the first frame time;
the optimizing module 620 is configured to optimize an initial pose estimation value of the robot at the second frame time based on a DVL error term of the robot at the second frame time, and determine a final pose estimation value of the robot at the second frame time.
Optionally, acquiring a first velocity component of the robot at a first frame time based on the velocity measurement data of the DVL includes:
the first velocity component of the robot at the first frame time is obtained based on a product of a second velocity component of the robot at the first frame time and a rotational component of the robot transformed from the visual coordinate system to the DVL coordinate system, a difference between a translational component of the robot transformed from the DVL coordinate system to the visual coordinate system and a displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and a translational component of the robot transformed from the visual coordinate system to the DVL coordinate system.
Optionally, the first velocity component of the robot at the first frame instant is determined based on the following formula:
$$\tilde{p}_{k-1} = \tilde{R}_{k-1}\, R_{CD}\left(p_{DC} - d_{k-1}\right) + p_{CD}$$

wherein $\tilde{p}_{k-1}$ represents the first velocity component of the robot at the first frame time, $\tilde{R}_{k-1}$ represents the second velocity component of the robot at the first frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system.
Optionally, determining the initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time includes:
an initial pose estimate for the robot at the second frame time is determined based on the first velocity component of the robot at the first frame time, the second velocity component of the robot at the first frame time, and the final pose estimate for the robot at the first frame time.
Optionally, acquiring a DVL error term of the robot at the second frame time based on the speed measurement data of the DVL includes:
the DVL error term for the robot at the second frame time is obtained based on a difference between displacements of the DVL in the DVL coordinate system between the first frame time and the second frame time, a rotational component of the robot from the visual coordinate system to the DVL coordinate system, a rotational component of the robot at the first frame time, a rotational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot from the visual coordinate system to the DVL coordinate system, a translational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot at the first frame time, a translational component of the robot from the DVL coordinate system to the visual coordinate system.
Optionally, the DVL error term of the robot at the second frame instant is determined based on the following formula:
$$e_{d,k} = d_{k-1} - R_{CD}\, R_{k-1}^{-1}\left(R_{k}\, p_{CD} + p_{k} - p_{k-1}\right) - p_{DC}$$

wherein $e_{d,k}$ represents the DVL error term of the robot at the second frame time, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $R_{k-1}$ represents the rotational component of the robot at the first frame time, $R_{k}$ represents the rotational component of the robot relative to the visual coordinate system at the second frame time, $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{k}$ represents the translational component of the robot relative to the visual coordinate system at the second frame time, $p_{k-1}$ represents the translational component of the robot at the first frame time, and $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system.
Optionally, optimizing the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time, and determining the final pose estimation value of the robot at the second frame time includes:
based on a DVL error item of the robot at the second frame time and a reprojection error of the robot at the second frame time, solving a nonlinear least square optimization problem with the pose of the robot at the second frame time as a target, optimizing an initial pose estimation value of the robot at the second frame time, and determining a final pose estimation value of the robot at the second frame time.
It should be noted that, the device provided by the present invention can implement all the method steps implemented by the method embodiment and achieve the same technical effects, and the parts and beneficial effects that are the same as those of the method embodiment in the present embodiment are not described in detail herein.
Fig. 7 is a schematic structural diagram of an electronic device according to the present invention, as shown in fig. 7, the electronic device may include: processor 710, communication interface (Communications Interface) 720, memory 730, and communication bus 740, wherein processor 710, communication interface 720, memory 730 communicate with each other via communication bus 740. Processor 710 may invoke logic instructions in memory 730 to perform any of the underwater robot positioning methods provided in the various embodiments described above.
Further, the logic instructions in the memory 730 described above may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It should be noted that, the electronic device provided by the present invention can implement all the method steps implemented by the method embodiments and achieve the same technical effects, and the details and beneficial effects of the same parts and advantages as those of the method embodiments in the present embodiment are not described in detail.
In another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform any of the underwater robot positioning methods provided in the above embodiments.
It should be noted that, the non-transitory computer readable storage medium provided by the present invention can implement all the method steps implemented by the method embodiments and achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those of the method embodiments in this embodiment are omitted.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An underwater robot positioning method, comprising:
acquiring a first velocity component of the robot at a first frame time and a DVL error term at a second frame time based on velocity measurement data of a Doppler velocimeter DVL; the first frame time is the frame time immediately preceding the second frame time;
determining an initial pose estimation value of the robot at the second frame moment based on a first velocity component of the robot at the first frame moment;
and optimizing an initial pose estimation value of the robot at the second frame moment based on a DVL error term of the robot at the second frame moment, and determining a final pose estimation value of the robot at the second frame moment.
2. The underwater robot positioning method of claim 1, wherein the acquiring the first velocity component of the robot at the first frame time based on the DVL velocity measurement data comprises:
a first velocity component of the robot at the first frame time is obtained based on a product of a second velocity component of the robot at the first frame time and a rotational component of the robot transformed from a visual coordinate system to a DVL coordinate system, a difference between a translational component of the robot transformed from the DVL coordinate system to the visual coordinate system and a displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and a translational component of the robot transformed from the visual coordinate system to the DVL coordinate system.
3. The underwater robot positioning method as claimed in claim 2, characterized in that the first velocity component of the robot at the first frame instant is determined based on the following formula:
$$\tilde{p}_{k-1} = \tilde{R}_{k-1}\, R_{CD}\left(p_{DC} - d_{k-1}\right) + p_{CD}$$

wherein $\tilde{p}_{k-1}$ represents the first velocity component of the robot at the first frame time, $\tilde{R}_{k-1}$ represents the second velocity component of the robot at the first frame time, $R_{CD}$ represents the rotational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system, $p_{DC}$ represents the translational component of the transformation of the robot from the DVL coordinate system to the visual coordinate system, $d_{k-1}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, and $p_{CD}$ represents the translational component of the transformation of the robot from the visual coordinate system to the DVL coordinate system.
4. The underwater robot positioning method of claim 1, wherein the determining an initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time comprises:
and determining an initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time, the second velocity component of the robot at the first frame time and the final pose estimation value of the robot at the first frame time.
5. The underwater robot positioning method of claim 1, wherein obtaining the DVL error term of the robot at the second frame time based on the DVL velocity measurement data comprises:
obtaining the DVL error term of the robot at the second frame time based on a displacement of the DVL in a DVL coordinate system between the first frame time and the second frame time, a rotational component of the robot transformed from a visual coordinate system to the DVL coordinate system, a rotational component of the robot at the first frame time, a rotational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot transformed from the visual coordinate system to the DVL coordinate system, a translational component of the robot relative to the visual coordinate system at the second frame time, a translational component of the robot at the first frame time, and a translational component of the robot transformed from the DVL coordinate system to the visual coordinate system.
6. The underwater robot positioning method of claim 5, wherein the DVL error term of the robot at the second frame time is determined based on the following formula:
$e_{D} = \Delta p_{d} - \left[ R_{vd}^{\top}\, R_{1}^{\top} \left( R_{2}\, t_{vd} + p_{2} - p_{1} \right) + t_{dv} \right]$

wherein $e_{D}$ represents the DVL error term of the robot at the second frame time, $\Delta p_{d}$ represents the displacement of the DVL in the DVL coordinate system between the first frame time and the second frame time, $R_{vd}$ represents the rotational component of the robot transformed from the visual coordinate system to the DVL coordinate system, $R_{1}$ represents the rotational component of the robot at the first frame time, $R_{2}$ represents the rotational component of the robot relative to the visual coordinate system at the second frame time, $t_{vd}$ represents the translational component of the robot transformed from the visual coordinate system to the DVL coordinate system, $p_{2}$ represents the translational component of the robot relative to the visual coordinate system at the second frame time, $p_{1}$ represents the translational component of the robot at the first frame time, and $t_{dv}$ represents the translational component of the robot transformed from the DVL coordinate system to the visual coordinate system.
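The following sketch evaluates the claim-6 error term as reconstructed above, comparing the measured DVL displacement with the displacement predicted from the two poses. The pose convention (rotation and translation of the robot expressed in the visual/world frame), the symbol names, and all numeric values are assumptions made purely for illustration.

```python
import numpy as np

# Poses of the robot relative to the visual coordinate system at the first and
# second frame times (illustrative values).
R1, p1 = np.eye(3), np.array([0.0, 0.0, 0.0])
R2, p2 = np.eye(3), np.array([0.2, 0.0, 0.0])

# Hypothetical visual/DVL extrinsics, named as in the reconstructed formula above.
R_vd = np.eye(3)
t_vd = np.array([0.10, 0.00, -0.05])
t_dv = -R_vd.T @ t_vd

# Measured DVL displacement between the two frame times, expressed in the DVL frame.
dp_d = np.array([0.20, 0.00, 0.00])

# DVL error term at the second frame time: measurement minus the pose-predicted displacement.
predicted = R_vd.T @ R1.T @ (R2 @ t_vd + p2 - p1) + t_dv
e_dvl = dp_d - predicted
print(e_dvl)
```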
7. The underwater robot positioning method according to claim 1, wherein optimizing the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time and determining the final pose estimation value of the robot at the second frame time comprises:
solving a nonlinear least-squares optimization problem, with the pose of the robot at the second frame time as the optimization objective, based on the DVL error term of the robot at the second frame time and a reprojection error of the robot at the second frame time, so as to optimize the initial pose estimation value of the robot at the second frame time and determine the final pose estimation value of the robot at the second frame time.
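To make the optimization step concrete, here is a small SciPy sketch that refines only the second-frame translation by stacking the DVL error term (as reconstructed above) with the reprojection error of a single landmark. The pinhole camera model, the choice of optimization variable, and every numeric value are illustrative assumptions rather than the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative setup: previous pose, fixed second-frame rotation, visual/DVL extrinsics,
# a DVL displacement measurement, and one landmark observed by a pinhole camera.
R1, p1 = np.eye(3), np.zeros(3)
R2 = np.eye(3)                                   # second-frame rotation kept fixed in this sketch
R_vd, t_vd = np.eye(3), np.array([0.10, 0.00, -0.05])
t_dv = -R_vd.T @ t_vd
dp_d = np.array([0.20, 0.00, 0.00])
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])                  # hypothetical camera intrinsics
landmark_w = np.array([1.0, 0.2, 4.0])           # landmark in the visual (world) frame
obs_uv = np.array([420.0, 262.0])                # its observed pixel in the second frame

def residuals(p2):
    # DVL error term of the second frame (claim 6, as reconstructed above).
    e_dvl = dp_d - (R_vd.T @ R1.T @ (R2 @ t_vd + p2 - p1) + t_dv)
    # Reprojection error of the landmark in the second frame.
    pc = R2.T @ (landmark_w - p2)                # landmark in the second camera frame
    uv = (K @ pc)[:2] / (K @ pc)[2]
    return np.concatenate([e_dvl, uv - obs_uv])

p2_init = np.array([0.18, 0.00, 0.00])           # initial pose estimate (claim 4)
result = least_squares(residuals, p2_init)       # nonlinear least-squares refinement (claim 7)
print(result.x)                                  # refined second-frame translation
```

A complete system would also optimize the rotation and stack many landmark reprojection residuals, but the structure of the combined least-squares problem stays the same.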
8. An underwater robot positioning device, comprising:
an acquisition module, configured to acquire a first velocity component of the robot at a first frame time and a DVL error term of the robot at a second frame time based on velocity measurement data of the DVL, the first frame time being the frame time immediately preceding the second frame time;
a determining module, configured to determine an initial pose estimation value of the robot at the second frame time based on the first velocity component of the robot at the first frame time; and
an optimization module, configured to optimize the initial pose estimation value of the robot at the second frame time based on the DVL error term of the robot at the second frame time and to determine a final pose estimation value of the robot at the second frame time.
9. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the underwater robot positioning method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the underwater robot positioning method according to any one of claims 1 to 7.
CN202310633421.7A 2023-05-31 2023-05-31 Underwater robot positioning method, device, electronic equipment and storage medium Pending CN117128961A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310633421.7A CN117128961A (en) 2023-05-31 2023-05-31 Underwater robot positioning method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117128961A true CN117128961A (en) 2023-11-28

Family

ID=88853384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310633421.7A Pending CN117128961A (en) 2023-05-31 2023-05-31 Underwater robot positioning method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117128961A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination