CN116448100A - Multi-sensor fusion type offshore unmanned ship SLAM method - Google Patents
Multi-sensor fusion type offshore unmanned ship SLAM method
- Publication number: CN116448100A (application CN202310234432.8A)
- Authority: CN (China)
- Prior art keywords: imu, point, frame, coordinate system, point cloud
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06F16/51 — Information retrieval of still image data: indexing; data structures therefor; storage structures
- G01C21/1652 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with ranging devices, e.g. LIDAR or RADAR
- G01C21/1656 — Inertial navigation combined with passive imaging devices, e.g. cameras
- G01S17/86 — Combinations of lidar systems with systems other than lidar, radar or sonar
- G01S17/89 — Lidar systems specially adapted for mapping or imaging
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06V10/26 — Segmentation of patterns in the image field
- G06V10/44 — Local feature extraction, e.g. edges, contours, corners
- G06V10/82 — Image or video recognition using neural networks
- Y02A90/30 — Assessment of water resources (climate-change adaptation tagging)
Abstract
The invention discloses a multi-sensor fusion SLAM method for offshore unmanned ships, in which a LiDAR-inertial odometry (LIO) and a visual-inertial odometry (VIO) are tightly coupled and jointly process data using semantic information about the water surface and the water shoreline. A prior distribution of the unmanned ship's state is obtained by forward propagation of the inertial measurement unit. In the LIO, a posterior distribution of the state is obtained with a point-to-plane ICP method and, together with the prior distribution, is used to update the state and the point cloud map. In the VIO, a posterior distribution is obtained with a two-step tracking method based on non-water LiDAR keypoints and a two-step PnP method based on water-shoreline information, and the state is likewise updated together with the prior distribution. The method finally yields the trajectory of the unmanned ship and a point cloud map of the surrounding environment. Because it does not rely on the grayscale-invariance assumption, this multi-sensor fusion near-shore unmanned ship SLAM method can operate in strongly reflective water-surface environments, is unaffected by water-surface reflections, and improves robustness and accuracy.
Description
Technical Field
The invention relates to the technical field of simultaneous localization and mapping (SLAM), and in particular to a multi-sensor fusion SLAM method for offshore unmanned ships.
Background
Simultaneous localization and mapping (SLAM) is a technique in which a moving platform, without prior information about the environment, uses sensor data to build a map of the environment while simultaneously estimating its own motion. The technique is now widely applied in fields such as unmanned vehicles, unmanned aerial vehicles, robots, and XR.
The two most popular SLAM solutions are LiDAR SLAM and visual SLAM; in addition, an inertial measurement unit (IMU) is often used, and the data it acquires are called IMU data. The LiDAR provides accurate depth information, the camera provides rich texture and color information, and the IMU provides its own acceleration and angular velocity. SLAM systems fusing all three sensors are increasingly being proposed: multi-sensor fusion SLAM has better robustness and provides more accurate localization and mapping.
SLAM also has broad application prospects in the field of offshore unmanned vessels, including unmanned cleaning vessels, self-driving pleasure craft, unmanned survey vessels, and the like. In a water-surface scenario, however, the application of SLAM faces several new challenges: 1) strong reflection of light from the water surface affects camera exposure, so the grayscale-invariance assumption underlying optical-flow and direct methods is difficult to satisfy; 2) feature points may be extracted from water reflections, introducing phantom structures into the map and degrading the accuracy of feature matching and tracking; 3) the LiDAR acquires some noise points above the water surface; 4) rapid fluctuation of the water surface distorts the point clouds acquired by the LiDAR.
In summary, current SLAM methods remain difficult to apply in water-surface environments, and both their robustness and their accuracy need improvement.
Disclosure of Invention
The invention aims to overcome the difficulties of applying the prior art in offshore water-surface environments. It provides a multi-sensor fusion SLAM method for offshore unmanned ships that fuses the information of three sensors — LiDAR, camera, and IMU — overcomes the shortcomings each sensor exhibits when used alone on the water surface, and achieves robust, high-precision localization and mapping of an unmanned ship on near-shore waters.
The aim of the invention can be achieved by adopting the following technical scheme:
a multi-sensor fused offshore unmanned ship SLAM method, the SLAM method comprising the steps of:
s1, sensor data acquisition: the unmanned ship is driven to freely move in a near-shore water environment, point cloud data of the surrounding environment under a laser radar coordinate system are collected through a laser radar, image data of the surrounding environment under the camera coordinate system are collected through a camera, acceleration and angular velocity data of the unmanned ship under an IMU coordinate system are collected through an inertial measurement unit, the inertial measurement unit is called IMU for short, the data collected by the IMU are called IMU data, i is set as an index of the IMU data, and k is an index of the point cloud data and the image data;
s2, initializing: after the IMU data of the first frame is obtained, the IMU coordinate system at the moment is set as the origin of the map coordinate system, the IMU data of each frame is utilized to update the state vector x of the unmanned ship,is defined as:
wherein () T Indicating that the elements in brackets are transposed, G r I a rotation vector representing rotation of the IMU coordinate system to the map coordinate system, G p I representing the position of an IMU in a map coordinate system [ I r C , I p C ]Sum [ I r L , I p L ]Representing the external parameters between the IMU and the camera and lidar respectively, G v denotes the speed of the IMU in the map coordinate system, b a And b g Indicating the bias of the accelerometer and gyroscope, respectively, gg indicates the gravitational acceleration, I t C representing the time offset between the camera and IMU, phi= [ f x ,f y ,c x ,c y ] T Representing the internal parameters of the camera, (f) x ,f y ) For the horizontal and vertical focal lengths of the camera, (c) x ,c y ) Horizontal and vertical offsets of the image origin relative to the camera optical center;
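For concreteness, the layout of the state vector above can be sketched as named slices of a flat array. The component order follows the definition in the text; the slice names and the example gravity value are assumptions made for this sketch, not taken from the patent.

```python
import numpy as np

# Illustrative layout of the state vector x defined above. The component
# order follows the text; names and the nominal gravity are assumptions.
_LAYOUT = [
    ("G_r_I", 3),   # rotation vector, IMU frame -> map frame
    ("G_p_I", 3),   # IMU position in the map frame
    ("I_r_C", 3), ("I_p_C", 3),  # camera-IMU extrinsics
    ("I_r_L", 3), ("I_p_L", 3),  # lidar-IMU extrinsics
    ("G_v", 3),     # IMU velocity in the map frame
    ("b_a", 3), ("b_g", 3),      # accelerometer / gyroscope biases
    ("G_g", 3),     # gravity in the map frame
    ("I_t_C", 1),   # camera-IMU time offset
    ("phi", 4),     # intrinsics fx, fy, cx, cy
]

SLICES = {}
_off = 0
for _name, _dim in _LAYOUT:
    SLICES[_name] = slice(_off, _off + _dim)
    _off += _dim
STATE_DIM = _off  # 10 * 3 + 1 + 4 = 35

def make_initial_state():
    """Zero state with gravity initialised to a nominal [0, 0, -9.81]."""
    x = np.zeros(STATE_DIM)
    x[SLICES["G_g"]] = [0.0, 0.0, -9.81]
    return x
```

The flat-array-plus-slices layout mirrors how filter states are commonly stored, so each named component can be read or updated without copying.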
After the first frame of point cloud data is obtained, the distortion of that frame of point cloud is compensated using IMU back propagation, and the frame is registered in the map coordinate system using the pose estimated by IMU forward propagation, forming the initial map.
After the first frame of image data is acquired, the water-surface part of the image is segmented with the image segmentation network DeepLabV3+ and the image is binarized: the pixel value of water-surface pixels is set to 255 and that of non-water pixels to 0. The water-shoreline part of the image is then extracted with Canny edge detection. The first frame of point cloud registered in the map is projected into the image to serve as keypoints to be tracked, and the points whose projections fall on the water shoreline are stored in a queue of size M.

DeepLabV3+ is the image semantic segmentation network proposed by Liang-Chieh Chen et al. in 2018 in "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation".
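The binarize-then-extract-shoreline step can be sketched without the actual DeepLabV3+/Canny stack: given a binarized water mask, shoreline pixels can be taken as water pixels that border non-water pixels. This simple boundary operator is a stand-in for Canny edge detection, an assumption made for this sketch.

```python
import numpy as np

def extract_shoreline(water_mask):
    """Given a binarised mask (255 = water surface, 0 = non-water),
    return a boolean map marking shoreline pixels: water pixels with at
    least one 4-connected non-water neighbour (stand-in for Canny)."""
    w = water_mask == 255
    shore = np.zeros_like(w)
    # compare each pixel to its 4 neighbours via shifted array views
    shore[1:, :] |= w[1:, :] & ~w[:-1, :]    # neighbour above is land
    shore[:-1, :] |= w[:-1, :] & ~w[1:, :]   # neighbour below is land
    shore[:, 1:] |= w[:, 1:] & ~w[:, :-1]    # neighbour left is land
    shore[:, :-1] |= w[:, :-1] & ~w[:, 1:]   # neighbour right is land
    return shore
```

On a mask where the lower part of the image is water, only the first water row (the water-land boundary) is marked.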
S3, updating the pose of the unmanned ship: when IMU data are received, $x$ is updated by IMU forward propagation — the pose information is $[{}^{G}r_I, {}^{G}p_I]$ in $x$ — and the prior distribution of $x$ is computed.

When point cloud data are received, the distortion of the point cloud is first compensated using IMU back propagation, and the de-distorted point cloud is projected into the most recent image frame. Let ${}^{L}P_j$ be a point of the de-distorted frame, where $L$ indicates the LiDAR coordinate system and $j$ is the index of the point, and let ${}^{C}p_j$ be the projection of ${}^{L}P_j$ in the most recent image. If ${}^{C}p_j$ falls on the water-surface part of the image, ${}^{C}p_j$ and ${}^{L}P_j$ are deleted; if ${}^{C}p_j$ falls on a non-water part, the 5 points in the map nearest to ${}^{L}P_j$ are found and used to fit a plane $\alpha_j$, and the distance from ${}^{L}P_j$ to $\alpha_j$ is taken as the residual to obtain the posterior distribution of $x$. After every point of the point cloud has been processed, $x$ is updated by error-state iterated Kalman filtering, combining this posterior with the prior distribution obtained from IMU forward propagation.

When image data are received, the water-surface part of the frame is segmented with the image segmentation network DeepLabV3+ and the frame is binarized (water-surface pixels set to 255, non-water pixels to 0), and the water shoreline is extracted by Canny edge detection. The keypoints to be tracked from the previous frame are tracked with the two-step non-water LiDAR keypoint tracking method. Let ${}^{G}P_s$ be one of the three-dimensional points corresponding to the keypoints to be tracked in the previous frame, where $G$ indicates the map coordinate system and $s$ is the index of the point, and let ${}^{C}p_s$ be the pixel to which ${}^{G}P_s$ is tracked in the current frame. The reprojection error between ${}^{C}p_s$ and ${}^{G}P_s$ is taken as the residual to obtain the posterior distribution of $x$; after every keypoint has been processed, $x$ is updated by error-state iterated Kalman filtering together with the prior distribution from IMU forward propagation. Next, using the updated pose in $x$, the points in the queue are projected into the current frame. Let ${}^{G}P_m$ be a point in the queue and ${}^{C}p_m$ the water-shoreline pixel in the current frame nearest to the projection of ${}^{G}P_m$; the reprojection error between ${}^{C}p_m$ and ${}^{G}P_m$ is taken as the residual to obtain the posterior distribution of $x$, and after every point in the queue has been processed, $x$ is again updated by error-state iterated Kalman filtering together with the prior distribution from IMU forward propagation. Finally, the latest frame of point cloud registered in the map is projected into the current image to serve as keypoints to be tracked, and the points whose projections fall on the water shoreline are stored in the queue.
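Each of the updates in S3 combines a prior from IMU forward propagation with measurement residuals through an error-state iterated Kalman filter. A minimal numeric sketch of such an iterated update follows; it is generic (not the patent's exact error-state parametrization), and `residual_fn` / `H_fn` are hypothetical callbacks supplied by the caller.

```python
import numpy as np

def iekf_update(x_prior, P_prior, residual_fn, H_fn, R, iters=5):
    """Iterated Kalman measurement update: starting from the prior,
    repeatedly relinearise the residual and apply the Kalman gain.
    residual_fn(x) returns z - h(x); H_fn(x) returns the Jacobian of h."""
    x = x_prior.copy()
    for _ in range(iters):
        r = residual_fn(x)
        H = H_fn(x)
        S = H @ P_prior @ H.T + R
        K = P_prior @ H.T @ np.linalg.inv(S)
        # classic iterated-EKF step: x <- x_prior + K (r + H (x - x_prior))
        x = x_prior + K @ (r + H @ (x - x_prior))
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x, P_post
```

For a linear measurement the iteration converges to the ordinary Kalman update in one step; the iterations matter when the residual (point-to-plane distance or reprojection error) is nonlinear in the state.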
s4, updating the map: after each time point cloud data is received and the pose is updated by using the frame point cloud data, each point in the frame point cloud is updated by using the updated pose L P j Registering under the map coordinate system to finish updating the map.
The multi-sensor fusion offshore unmanned ship SLAM method uses the IMU frame as the body coordinate system, giving the following continuous kinematic model, in which $a_m$ and $\omega_m$ are the raw accelerometer and gyroscope data, $n_a$ and $n_g$ are the white noise of the IMU data, and $b_a$ and $b_g$ are modeled as random walks whose derivatives $n_{b_a}$ and $n_{b_g}$ are Gaussian white noise.
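The display equation for this continuous model did not survive extraction; a standard reconstruction consistent with the state defined in S2 (an assumption, following common LiDAR-inertial formulations) is:

```latex
\begin{aligned}
{}^{G}\dot{R}_I &= {}^{G}R_I\,\lfloor \omega_m - b_g - n_g \rfloor_{\times}, &
{}^{G}\dot{p}_I &= {}^{G}v, \\
{}^{G}\dot{v} &= {}^{G}R_I\,(a_m - b_a - n_a) + {}^{G}g, &
\dot{b}_a &= n_{b_a}, \qquad \dot{b}_g = n_{b_g}, \\
{}^{I}\dot{r}_C &= 0, \quad {}^{I}\dot{p}_C = 0, \quad
{}^{I}\dot{r}_L = 0, \quad {}^{I}\dot{p}_L = 0, &
{}^{G}\dot{g} &= 0, \quad {}^{I}\dot{t}_C = 0, \quad \dot{\phi} = 0,
\end{aligned}
```

where ${}^{G}R_I$ is the rotation matrix corresponding to ${}^{G}r_I$ and $\lfloor\cdot\rfloor_{\times}$ is the skew-symmetric matrix operator.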
Further, the IMU forward propagation in the SLAM method proceeds as follows. Define the prior estimate of $x$ as $\hat{x}$. The IMU propagates $\hat{x}$ forward with a zero-order holder, i.e. the IMU measurement is assumed constant over one sampling period and the process noise is set to zero:

$$\hat{x}_{i+1}=\hat{x}_{i}\boxplus\left(\Delta t\,f(\hat{x}_{i},u_{i},0)\right)$$

where $\hat{x}_i$ and $\hat{x}_{i+1}$ are the prior estimates of $x$ at the $i$-th and $(i{+}1)$-th IMU measurements, $\Delta t$ is the time interval between the two measurements, and $u_i=[a_{m_i}^{T},\,\omega_{m_i}^{T}]^{T}$ is the raw data at the $i$-th IMU measurement, $a_{m_i}$ and $\omega_{m_i}$ being the accelerometer and gyroscope readings. Written per component, the operator $\boxplus$ gives

$$\begin{aligned}
{}^{G}\hat{r}_{I,i+1} &= \mathrm{Log}\!\left(\mathrm{Exp}({}^{G}\hat{r}_{I,i})\,\mathrm{Exp}\!\big((\omega_{m_i}-\hat{b}_{g,i})\,\Delta t\big)\right)\\
{}^{G}\hat{p}_{I,i+1} &= {}^{G}\hat{p}_{I,i}+{}^{G}\hat{v}_{i}\,\Delta t\\
{}^{G}\hat{v}_{i+1} &= {}^{G}\hat{v}_{i}+\big(\mathrm{Exp}({}^{G}\hat{r}_{I,i})\,(a_{m_i}-\hat{b}_{a,i})+{}^{G}\hat{g}_{i}\big)\,\Delta t
\end{aligned}$$

while the prior estimates of the extrinsic parameters $[{}^{I}\hat{r}_{C},{}^{I}\hat{p}_{C}]$ and $[{}^{I}\hat{r}_{L},{}^{I}\hat{p}_{L}]$, the biases $\hat{b}_{a}$ and $\hat{b}_{g}$, the gravity ${}^{G}\hat{g}$, the time offset ${}^{I}\hat{t}_{C}$, and the camera intrinsics $\hat{\phi}$ are held constant (their increments are $0_{n\times m}$ zero blocks). Here $\mathrm{Exp}(\cdot)$ and $\mathrm{Log}(\cdot)$ are the Rodrigues transforms between a rotation vector and a rotation matrix.
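A numeric sketch of the zero-order-hold forward propagation for the rotation, position, and velocity components (the bias, extrinsic, and intrinsic components are simply held constant). This is an illustration under the assumptions above, not the patent's exact discretization.

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues transform: rotation vector -> rotation matrix (Exp)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def imu_forward_propagate(R, p, v, b_a, b_g, g, a_m, w_m, dt):
    """One zero-order-hold step: the IMU measurement (a_m, w_m) is held
    constant over dt and the process noise is set to zero, as in the text."""
    R_next = R @ so3_exp((w_m - b_g) * dt)
    acc = R @ (a_m - b_a) + g          # acceleration in the map frame
    p_next = p + v * dt + 0.5 * acc * dt * dt
    v_next = v + acc * dt
    return R_next, p_next, v_next
```

A stationary ship whose accelerometer reads exactly $-g$ stays put, and a pure yaw-rate input integrates to the expected rotation.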
Further, since each point of a scan is collected at a different time while the unmanned ship keeps moving, the coordinates of the points are obtained in different coordinate systems; that is, the point cloud data are distorted and the point cloud must be de-distorted. In the SLAM method, the distortion is compensated using IMU back propagation, as follows:

T1, IMU back propagation: suppose the $(k{+}1)$-th frame of point cloud data is being processed. The IMU propagates the state vector backward with a zero-order holder and the process noise set to zero:

$$\check{x}_{j-1}=\check{x}_{j}\boxplus\left(-\Delta t\,f(\check{x}_{j},u_{j},0)\right)$$

where $\check{x}_{j-1}$ and $\check{x}_{j}$ are the estimates of $x$ at the $(j{-}1)$-th and $j$-th LiDAR measurements within the $(k{+}1)$-th frame, and $u_j$ contains the acceleration and angular velocity of the unmanned ship at the $j$-th LiDAR measurement. At the start of the back propagation the pose is set to zero, while the velocity and the IMU biases take the values of $\hat{x}_{k+1}$, the prior estimate of $x$ at the time the $(k{+}1)$-th frame of point cloud was acquired. In addition, for any point of a frame, the IMU datum whose acquisition time is earlier than and closest to that of the point is taken as the acceleration and angular-velocity input during back propagation.

T2, point cloud de-distortion: back propagation yields the relative pose between the time $\rho_j$ of the $j$-th LiDAR measurement and the scan-end time $t_{k+1}$, namely ${}^{I_{k+1}}\check{T}_{I_j}=\big({}^{I_{k+1}}\check{R}_{I_j},\,{}^{I_{k+1}}\check{p}_{I_j}\big)$, the prior estimate of the transformation from the IMU coordinate system at the $j$-th LiDAR measurement to the IMU coordinate system at the time the $(k{+}1)$-th frame was acquired, with $\check{R}$ its rotation-matrix part and $\check{p}$ its translation part. Each local measurement ${}^{L_j}p_j$ is then switched into the LiDAR coordinate system at the end of the scan:

$${}^{L_{k+1}}\check{p}_{j}=\left({}^{I}T_{L}\right)^{-1}\,{}^{I_{k+1}}\check{T}_{I_j}\,{}^{I}T_{L}\,{}^{L_j}p_{j}$$

where ${}^{I}T_{L}$ is the maximum a posteriori estimate at time $t_k$ of the extrinsic transform between the LiDAR and the IMU, $L_j$ denotes the LiDAR coordinate system at the $j$-th LiDAR measurement in the current frame, and $L_{k+1}$ the LiDAR coordinate system at the time the $(k{+}1)$-th frame of point cloud was acquired.
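The T2 conversion can be sketched as a per-point change of frames: `R_rel`/`t_rel` stand for the back-propagated relative IMU pose and `R_IL`/`t_IL` for the LiDAR-IMU extrinsic (the parameter names are illustrative).

```python
import numpy as np

def deskew_point(p_Lj, R_rel, t_rel, R_IL, t_IL):
    """Move a raw lidar point, measured mid-scan in frame L_j, into the
    lidar frame at scan end L_{k+1}:
        p' = T_IL^{-1} * T_rel * T_IL * p
    where T_rel = (R_rel, t_rel) is the back-propagated IMU pose at the
    point's timestamp relative to scan end and T_IL = (R_IL, t_IL) is
    the lidar-IMU extrinsic."""
    p_I = R_IL @ p_Lj + t_IL          # lidar frame -> IMU frame at time j
    p_I_end = R_rel @ p_I + t_rel     # IMU at time j -> IMU at scan end
    return R_IL.T @ (p_I_end - t_IL)  # IMU frame -> lidar frame at scan end
```

With an identity relative pose the point is unchanged; a nonzero relative pose moves it by exactly the motion the ship underwent between the point's timestamp and the end of the scan.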
Furthermore, considering that a LiDAR with a small field of view lacks feature information, the multi-sensor fusion offshore unmanned ship SLAM method performs no feature extraction on the point cloud data and uses the raw points directly: in theory, every acquired point lies on a small plane of the point cloud map. The point-to-plane distance is therefore used as the residual, as follows:

Let $\bar{x}_{k+1}$ be the maximum a posteriori estimate of $x$ computed with the $(k{+}1)$-th frame of point cloud. Convert the rotation vectors ${}^{G}\bar{r}_{I}$ and ${}^{I}\bar{r}_{L}$ in $\bar{x}_{k+1}$ into rotation matrices ${}^{G}\bar{R}_{I}$ and ${}^{I}\bar{R}_{L}$, and transform ${}^{L}P_j$ from the LiDAR coordinate system to the map coordinate system:

$${}^{G}\bar{P}_{j}={}^{G}\bar{R}_{I}\left({}^{I}\bar{R}_{L}\,{}^{L}P_{j}+{}^{I}\bar{p}_{L}\right)+{}^{G}\bar{p}_{I}$$

where ${}^{G}\bar{R}_{I}$, ${}^{G}\bar{p}_{I}$ are the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system at the time the $(k{+}1)$-th frame was acquired, and ${}^{I}\bar{R}_{L}$, ${}^{I}\bar{p}_{L}$ those of the LiDAR coordinate system relative to the IMU coordinate system. If the plane $\alpha_j$ in the map nearest to ${}^{L}P_j$ has unit normal vector $u_j$ and in-plane point $q_j$, the measurement residual is

$$r_{j}=u_{j}^{T}\left({}^{G}\bar{P}_{j}-q_{j}\right).$$
Further, the multi-sensor fusion offshore unmanned ship SLAM method adopts a two-step non-water LiDAR keypoint tracking method. When selecting keypoints in an image, it uses the projections of the non-water part of the LiDAR point cloud onto the image, so the keypoints carry accurate depth information and pixels produced by water reflections are avoided. When tracking keypoints, it computes keypoint descriptors and combines the acceleration and angular-velocity information of the IMU to find the matching pixel in the next image frame; the method therefore does not depend on the grayscale-invariance assumption and works better in strongly reflective water-surface environments. The two-step non-water LiDAR keypoint tracking method in the SLAM method comprises the following steps:

R1, descriptor computation: compute the descriptors of the keypoints to be tracked in the previous image frame;

R2, coarse pose estimation: integrate the acceleration and angular-velocity information of the IMU to roughly estimate the pose of the unmanned ship at the next image frame, and project the three-dimensional points corresponding to the keypoints into the next frame according to this pose, giving a preliminary prediction of the keypoints' positions;

R3, fine keypoint tracking: in the next frame, take an N×N square window centered on each preliminarily predicted keypoint, compute the descriptors of all pixels in the window, and select the pixel whose descriptor is closest in vector space to the descriptor computed in R1; this pixel is the finally tracked point.
Further, the multi-sensor fusion offshore unmanned ship SLAM method adopts a two-step PnP method based on the water shoreline: in theory, the projection of a tracked three-dimensional point in the current image frame coincides with the coordinates of the tracked pixel. The reprojection error is therefore used as the residual, as follows:

Let $\bar{x}_{k+1}$ be the maximum a posteriori estimate of $x$ computed with the $(k{+}1)$-th image frame, and convert the rotation vectors ${}^{G}\bar{r}_{I}$ and ${}^{I}\bar{r}_{C}$ in $\bar{x}_{k+1}$ into rotation matrices ${}^{G}\bar{R}_{I}$ and ${}^{I}\bar{R}_{C}$. For ${}^{C}p_s$ and its corresponding point ${}^{G}P_s$ in the three-dimensional map, the measurement residual is

$$r_{s}={}^{C}p_{s}-\pi\!\left({}^{C}\bar{P}_{s}\right),\qquad {}^{C}\bar{P}_{s}={}^{I}\bar{R}_{C}^{T}\left({}^{G}\bar{R}_{I}^{T}\left({}^{G}P_{s}-{}^{G}\bar{p}_{I}\right)-{}^{I}\bar{p}_{C}\right)$$

where ${}^{C}\bar{P}_{s}$ is the three-dimensional coordinate of ${}^{G}P_s$ in the camera coordinate system, $\pi(\cdot)$ is the pinhole projection model, and ${}^{G}\bar{R}_{I}$, ${}^{G}\bar{p}_{I}$ and ${}^{I}\bar{R}_{C}$, ${}^{I}\bar{p}_{C}$ are the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system, and of the camera coordinate system relative to the IMU coordinate system, at the time the $(k{+}1)$-th image frame was acquired.

Similarly, for ${}^{C}p_m$ and its corresponding point ${}^{G}P_m$ in the three-dimensional map, the measurement residual is

$$r_{m}={}^{C}p_{m}-\pi\!\left({}^{C}\bar{P}_{m}\right)$$

where ${}^{C}\bar{P}_{m}$ is the three-dimensional coordinate of ${}^{G}P_m$ in the camera coordinate system.
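The reprojection residual used in both PnP steps can be sketched directly from the frame chain map → IMU → camera → pixel; the symbols follow the definitions above, and the function names are illustrative.

```python
import numpy as np

def pinhole_project(P_C, fx, fy, cx, cy):
    """Pinhole model pi(.): camera-frame 3-D point -> pixel coordinates."""
    return np.array([fx * P_C[0] / P_C[2] + cx,
                     fy * P_C[1] / P_C[2] + cy])

def reprojection_residual(p_obs, P_G, R_GI, t_GI, R_IC, t_IC, K):
    """Residual between a tracked pixel p_obs and the projection of its
    map point P_G, given the IMU-in-map pose (R_GI, t_GI), the camera-IMU
    extrinsics (R_IC, t_IC), and intrinsics K = (fx, fy, cx, cy)."""
    P_I = R_GI.T @ (P_G - t_GI)          # map frame -> IMU frame
    P_C = R_IC.T @ (P_I - t_IC)          # IMU frame -> camera frame
    return p_obs - pinhole_project(P_C, *K)
```

With identity poses, a point on the optical axis projects to the principal point and yields a zero residual; an off-axis point yields exactly the pixel offset between observation and projection.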
Compared with the prior art, the invention has the following advantages and effects:
1. In the non-water LiDAR key point two-step tracking method provided by the invention, the projection points of the laser radar point cloud in the image are used as the key points to be tracked, so the key points carry accurate depth information; combined with IMU forward propagation and feature matching, the tracking process does not rely on the gray-level invariance assumption, so the method works better in strongly reflective water surface environments.
2. The invention extracts the water shoreline information in the image by means of an image segmentation network and an edge detection algorithm, and during key point tracking this water shoreline information is transferred to the laser radar point cloud, forming a new constraint between the point cloud data and the image data; this further fuses the laser radar and camera information and improves the robustness and accuracy of the whole SLAM method.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
FIG. 1 is an overall frame diagram of a multi-sensor fused offshore unmanned ship SLAM method disclosed in the present invention;
FIG. 2 is a flow chart of steps of a SLAM method for a multi-sensor fused offshore unmanned ship disclosed in the present invention;
FIG. 3 is a schematic representation of the index relationship of an IMU frame and a lidar/vision frame in the present invention;
FIG. 4 is an application block diagram of SLAM method construction of a multi-sensor fusion offshore unmanned ship in an embodiment of the invention;
FIG. 5 is a schematic diagram of the coordinate system of the components of the system constructed in an embodiment of the invention;
fig. 6 is an effect diagram of image water surface segmentation and water shoreline extraction in the embodiment of the present invention, wherein fig. 6 (a) shows the binarized segmentation map, fig. 6 (b) shows the water shoreline extracted by Canny edge detection, and fig. 6 (c) shows the water shoreline drawn on the original image;
fig. 7 is an effect diagram of the point cloud map established by the system in the embodiment of the present invention, wherein an obvious water shoreline can be seen in the circled portion.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
The embodiment discloses a specific implementation process of a traditional multi-sensor fusion SLAM method. A system consisting of an unmanned ship, a Livox Mid-40 laser radar, an MT9V034 camera, a HiPNUC CH100 IMU and a personal computer is built. The components of the system work cooperatively through ROS, with the coordinate systems as shown in fig. 5.
The unmanned ship is driven to move freely in the West Lake of the Wushan campus of South China University of Technology; the frequency of the laser radar collecting point cloud data is set to 10 Hz, the frequency of the camera collecting image data is set to 30 Hz, and the frequency of the IMU collecting IMU data is set to 400 Hz. The collected multi-sensor data are all published in the form of ROS topics for subscription and use by the nodes of the system.
In the initialization stage, when the first frame of IMU data is acquired, G r I and G p I in the state vector x are set to zero vectors, and the covariance matrix Σ x of the state vector x is set to the identity matrix; when the first frame of point cloud data is acquired, the distortion of the frame point cloud is compensated by IMU back propagation, and the frame point cloud is registered into the map coordinate system using the pose estimated by IMU forward propagation to form the initial map; when the first frame of image data is acquired, the point cloud of the first frame registered in the map is projected into the frame image as the key points to be tracked;
after initialization, when IMU data is received, the state vector is propagated forward with a zero-order holder, i.e. assuming that the IMU measurement is constant over one sampling period and setting the process noise to zero; assuming that the i-th IMU measurement data is currently processed, there is x̌ i+1 = x̌ i ⊞ ( Δt f( x̌ i , u i , 0 ) )
wherein x̌ i and x̌ i+1 represent the a priori estimates of x at the i-th and (i+1)-th IMU measurements respectively, Δt represents the time interval between the i-th and (i+1)-th IMU measurements, u i represents the raw data at the i-th IMU measurement, a m,i and ω m,i represent the raw accelerometer and gyroscope data of the i-th IMU measurement respectively, and, at the i-th IMU measurement, the remaining symbols denote the a priori estimates of: the rotation vector rotating the IMU coordinate system to the map coordinate system; the position of the IMU in the map coordinate system; the external parameters between the IMU and the camera and laser radar; the velocity of the IMU in the map coordinate system; the accelerometer and gyroscope biases; the gravitational acceleration; the time offset between the camera and the IMU; and the camera internal parameters. 0 n×m represents an n×m zero matrix, and Exp(·) and Log(·) are the Rodrigues transforms between a rotation vector and a rotation matrix.
Forward propagation thus updates the pose information in x, i.e. [ G r I , G p I ], and the a priori distribution of x is calculated.
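A toy illustration of one zero-order-hold forward-propagation step, restricted to the translational states (position and velocity) with the attitude taken as identity for brevity — the full state x in the document also propagates the rotation, biases and extrinsics, which are omitted here:

```python
def imu_forward_step(p, v, a_meas, b_a, g, dt):
    # Zero-order hold: the bias-corrected accelerometer reading plus gravity
    # is held constant over the sampling interval dt (process noise = 0).
    # Attitude is assumed identity, so a_meas is already in the map frame.
    a = [a_meas[i] - b_a[i] + g[i] for i in range(3)]
    p_next = [p[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(3)]
    v_next = [v[i] + a[i] * dt for i in range(3)]
    return p_next, v_next
```

Calling this once per IMU sample between two image or point-cloud frames accumulates the rough pose used as the prior for the filter update.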
When point cloud data is received, the IMU back-propagates the state vector with a zero-order holder and with the process noise set to zero; there is
wherein, assuming that the (k+1)-th frame point cloud data is currently processed, x̌ j−1 and x̌ j represent the a priori estimates of x at the (j−1)-th and j-th laser radar measurements in the (k+1)-th frame point cloud respectively, and the remaining inputs express the acceleration and angular velocity of the unmanned ship at the j-th laser radar measurement in the (k+1)-th frame point cloud.
The pose is set to zero at the beginning of back propagation, and the velocity and IMU bias take their values in x̌ k+1 , the a priori estimate of x when the (k+1)-th frame point cloud is acquired. For any point in a frame of point cloud, the IMU data whose acquisition time is earlier than and closest to that point is taken as the input acceleration and angular velocity of the unmanned ship for that point's back propagation.
Back propagation produces the relative pose between the IMU coordinate system at time ρ j and the IMU coordinate system at time t k+1 , i.e. the a priori estimate of the transformation matrix from the IMU coordinate system at the j-th laser radar measurement in the (k+1)-th frame point cloud to the IMU coordinate system at the moment the (k+1)-th frame point cloud is acquired; its rotation matrix part and translation vector part are used to switch the local measurement into a measurement in the laser radar coordinate system at the end of the scan:
wherein the maximum a posteriori estimate at time t k of the external parameters [ I r L , I p L ] between the laser radar and the IMU is used, L j represents the laser radar coordinate system at the j-th laser radar measurement in the current frame point cloud, and L k+1 represents the laser radar coordinate system when the (k+1)-th frame point cloud is acquired.
Let L P j be a point in the de-distorted point cloud of this frame, where L indicates that the point is represented in the laser radar coordinate system and j is the index of the point. The 5 points nearest to L P j are found in the map and used to fit a plane α j , which has a normal vector u j and an in-plane point q j . Using the Rodrigues transform, the rotation vectors G r I and I r L in x̂ k+1 are converted into rotation matrices G R I and I R L , the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system when the (k+1)-th frame point cloud is acquired, and of the laser radar coordinate system relative to the IMU coordinate system. L P j is converted from the laser radar coordinate system to the map coordinate system, and the distance from L P j to α j is taken as the residual to obtain the posterior distribution of x. After each point in the de-distorted frame point cloud is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation.
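The point-to-plane residual can be illustrated as follows; for brevity the plane normal here is taken from three of the neighbouring map points via a cross product, a simplification of the least-squares fit over all 5 nearest points used in the method:

```python
import math

def point_to_plane_residual(P_g, neighbours):
    # Fit a plane alpha_j through neighbouring map points (here from the
    # first three; the real system least-squares-fits the 5 nearest) and
    # return the signed point-to-plane distance used as the LiDAR residual.
    q = neighbours[0]                      # in-plane point q_j
    e1 = [neighbours[1][i] - q[i] for i in range(3)]
    e2 = [neighbours[2][i] - q[i] for i in range(3)]
    n = [e1[1] * e2[2] - e1[2] * e2[1],    # normal u_j = e1 x e2
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    norm = math.sqrt(sum(x * x for x in n))
    u = [x / norm for x in n]
    return sum(u[i] * (P_g[i] - q[i]) for i in range(3))
```

`P_g` is the de-distorted LiDAR point already transformed into the map frame; a residual near zero means the point lies on the fitted map plane.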
When image data is received, the descriptor of each key point to be tracked is calculated in the previous frame image; the acceleration and angular velocity information of the IMU is integrated to roughly estimate the pose of the unmanned ship at the current image frame, and the three-dimensional point cloud corresponding to the key points is projected into the current frame image according to the estimated pose so as to preliminarily predict the positions of the key points in the current image. In the current frame image, taking the preliminarily predicted key point as the center, a 5×5 square window is taken, the descriptors of all pixel points in the window are calculated, and among these pixel points the one whose descriptor is closest in vector space to the previously calculated descriptor is found; this pixel point is the finally tracked point.
Assuming that the (k+1)-th frame image is currently processed, let G P s be one of the three-dimensional points corresponding to the key points to be tracked in the previous frame, where G indicates that the point is represented in the map coordinate system and s is the index of the point, and let C p s be the pixel point tracked in this frame image for G P s . Using the Rodrigues transform, the rotation vectors G r I and I r C in x̂ k+1 are converted into rotation matrices G R I and I R C , the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system when the (k+1)-th frame image is acquired, and of the camera coordinate system relative to the IMU coordinate system. G P s is converted into the current camera coordinate system, and the re-projection error between C p s and G P s is taken as the residual to obtain the posterior distribution of x. After each key point is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation, and finally the point cloud of the latest frame registered in the map is projected into this frame image as the key points to be tracked.
Each time point cloud data is received and the pose has been updated with that frame of point cloud data, each point L P j in the frame point cloud is registered into the map coordinate system using the updated pose, completing the map update. The updated map is subscribed to and displayed in RVIZ in real time.
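Registering one de-distorted point into the map, as in the update above, is two rigid transforms (LiDAR→IMU via the extrinsics, then IMU→map via the updated pose); a minimal sketch with rotations as 3×3 row-major lists:

```python
def mat_vec(R, x):
    # multiply a 3x3 rotation matrix by a 3-vector
    return [sum(R[r][c] * x[c] for c in range(3)) for r in range(3)]

def register_point(p_L, R_IL, t_IL, R_GI, t_GI):
    # LiDAR point -> IMU frame via the extrinsics [I r_L, I p_L],
    # then -> map frame via the updated pose [G r_I, G p_I].
    p_I = [a + b for a, b in zip(mat_vec(R_IL, p_L), t_IL)]
    return [a + b for a, b in zip(mat_vec(R_GI, p_I), t_GI)]
```

Running this over every point of a de-distorted frame with the filter's posterior pose yields the registered scan that RVIZ displays.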
In this embodiment, the conventional SLAM method cannot work in the water surface environment: the trajectory drifts severely and no map can be constructed.
Example 2
The embodiment discloses a specific implementation process of the offshore unmanned ship SLAM method based on multi-sensor fusion. A system consisting of an unmanned ship, a Livox Mid-40 laser radar, an MT9V034 camera, a HiPNUC CH100 IMU and a personal computer is built. The proposed SLAM method is deployed in the built system, and the components of the system work cooperatively through ROS, with the coordinate systems as shown in fig. 5.
1000 images of the offshore environment with the water surface portion annotated were used to train the semantic segmentation network DeepLabV3+. During training, MobileNetV2 is selected as the backbone network, and the SGD optimizer is used with the learning rate set to 7e-3. A total of 150 epochs were trained: the backbone of the model was frozen for the first 50 epochs and unfrozen for the last 100 epochs. Images are resized to 512×512 before being input to the network. The trained model is converted to ONNX format and deployed in the SLAM method.
The unmanned ship moved freely in the West Lake of the Wushan campus of South China University of Technology; data were collected at the same frequencies as in Embodiment 1, and the collected multi-sensor data were all published in the form of ROS topics for subscription and use by the nodes of the system.
In the initialization stage, when the first frame of IMU data and the first frame of point cloud data are acquired, the operations are the same as in Embodiment 1. When the first frame of image data is acquired, the water surface part of the frame image is segmented by the trained semantic segmentation network DeepLabV3+ and the frame image is binarized: the pixel value of water surface pixels is set to 255 and the pixel value of non-water pixels is set to 0; the water shoreline part of the frame image is then extracted by Canny edge detection, with the effect shown in fig. 6. The point cloud of the first frame registered in the map is projected into the frame image as the key points to be tracked, and the points in the point cloud whose projection points fall on the water shoreline are stored in a queue whose size is set to 20.
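The binarisation and shoreline-extraction step can be mimicked in pure Python as below; marking water pixels that border non-water pixels is a simplified stand-in for the Canny edge detector used in the embodiment, adequate on an already-binary mask:

```python
def extract_shoreline(water_mask):
    # Binarise the segmentation mask (water -> 255, non-water -> 0) and
    # collect water pixels that have a non-water 4-neighbour: these form
    # the water shoreline.
    h, w = len(water_mask), len(water_mask[0])
    binary = [[255 if px else 0 for px in row] for row in water_mask]
    shoreline = set()
    for y in range(h):
        for x in range(w):
            if binary[y][x] != 255:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 0:
                    shoreline.add((y, x))
    return binary, shoreline
```

On the real system the mask comes from the DeepLabV3+ inference on the 512×512 input, and the shoreline set is what projected LiDAR points are tested against before entering the queue.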
After initialization is completed, when IMU data is received, the operation is the same as in Embodiment 1. When point cloud data is received, the point cloud de-distortion operation is the same as in Embodiment 1, and the de-distorted point cloud is projected into the nearest frame image. Let L P j be a point in the de-distorted point cloud of this frame, where L indicates that the point is represented in the laser radar coordinate system and j is the index of the point, and let C p j be the projection point of L P j in the nearest frame image. If C p j falls on the water surface part of the image, C p j and L P j are deleted; if C p j falls on the non-water part of the image, the posterior distribution of x is obtained by the same operation as in Embodiment 1, and after each non-water point in the de-distorted frame point cloud is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation;
when image data is received, the water surface part of the frame image is segmented by the trained semantic segmentation network DeepLabV3+, the frame image is binarized after segmentation with the pixel value of water surface pixels set to 255 and that of non-water pixels set to 0, and the water shoreline part of the frame image is then extracted by Canny edge detection. Each key point to be tracked is tracked by the same operation as in Embodiment 1.
Assuming that the (k+1)-th frame image is currently processed, x is updated by the same operation as in Embodiment 1. Let G P m be a point in the queue, and let C p m be the water shoreline pixel point in this frame image closest to the projection point of G P m . Using the updated rotation matrices and translation vectors, G P m is converted into the current camera coordinate system, and its projection point in the current frame image is obtained by the projection function π(·); C p m is found by the BFS algorithm, and the re-projection error between C p m and G P m is taken as the residual to obtain the posterior distribution of x. After each point in the queue is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation. Finally, the point cloud of the latest frame registered in the map is projected into this frame image as the key points to be tracked, and the points whose projection points fall on the water shoreline are stored in the queue.
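The BFS search for the shoreline pixel nearest a projection point can be sketched as follows; `shoreline` is the set of shoreline pixel coordinates and `h`, `w` the image size (a sketch of the matching step, not the patent's exact implementation):

```python
from collections import deque

def nearest_shoreline_pixel(start, shoreline, h, w):
    # Breadth-first search outward from the projection point of G P_m until
    # the first water-shoreline pixel is reached; that pixel is C p_m.
    seen, queue = {start}, deque([start])
    while queue:
        y, x = queue.popleft()
        if (y, x) in shoreline:
            return (y, x)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return None  # no shoreline pixel in the image
```

BFS expands in rings of increasing Manhattan radius, so the first shoreline pixel reached is among the closest, giving the match used in the shoreline re-projection residual.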
The update operation of the point cloud map is the same as in Embodiment 1. The updated map is subscribed to and displayed in RVIZ in real time, with the effect shown in fig. 7.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to them; any other change, modification, substitution, combination or simplification that does not depart from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included in the protection scope of the present invention.
Claims (7)
1. A multi-sensor fused offshore unmanned ship SLAM method, characterized in that the SLAM method comprises the steps of:
s1, sensor data acquisition: the unmanned ship is driven to freely move in a near-shore water environment, point cloud data of the surrounding environment under a laser radar coordinate system are collected through a laser radar, image data of the surrounding environment under the camera coordinate system are collected through a camera, acceleration and angular velocity data of the unmanned ship under an IMU coordinate system are collected through an inertial measurement unit, the inertial measurement unit is called IMU for short, the data collected by the IMU are called IMU data, i is set as an index of the IMU data, and k is an index of the point cloud data and the image data;
s2, initializing: after the first frame of IMU data is obtained, the IMU coordinate system at that moment is set as the origin of the map coordinate system, and each frame of IMU data is utilized to update the state vector x of the unmanned ship; x is defined as:
wherein (·) T indicates that the element in brackets is transposed, G r I is the rotation vector rotating the IMU coordinate system to the map coordinate system, G p I is the position of the IMU in the map coordinate system, [ I r C , I p C ] and [ I r L , I p L ] are the external parameters between the IMU and the camera and laser radar respectively, G v is the velocity of the IMU in the map coordinate system, b a and b g are the biases of the accelerometer and gyroscope respectively, G g is the gravitational acceleration, I t C is the time offset between the camera and the IMU, φ = [ f x , f y , c x , c y ] T are the internal parameters of the camera, ( f x , f y ) are the horizontal and vertical focal lengths of the camera, and ( c x , c y ) are the horizontal and vertical offsets of the image origin relative to the camera optical center;
after the first frame point cloud data is obtained, the distortion of the frame point cloud is compensated by utilizing IMU back propagation, and the frame point cloud is registered under a map coordinate system by utilizing the pose estimated by IMU forward propagation to form an initial map;
after the first frame of image data is acquired, the water surface part of the frame image is segmented by the image segmentation network DeepLabV3+ and the frame image is binarized, the pixel value of water surface pixels is set to 255 and that of non-water pixels is set to 0, the water shoreline part of the frame image is extracted by Canny edge detection, the point cloud of the first frame registered in the map is projected into the frame image as the key points to be tracked, and the points in the point cloud whose projection points fall on the water shoreline are stored in a queue of size M;
s3, updating the pose of the unmanned ship: when IMU data is received, x is updated by IMU forward propagation, the pose information in x, i.e. [ G r I , G p I ], is propagated, and the a priori distribution of x is calculated; when point cloud data is received, the distortion of the point cloud is compensated by IMU back propagation, and the de-distorted point cloud is projected into the nearest frame image; let L P j be a point in the de-distorted point cloud of this frame, where L indicates that the point is represented in the laser radar coordinate system and j is the index of the point, and let C p j be the projection point of L P j in the nearest frame image; if C p j falls on the water surface part of the image, C p j and L P j are deleted; if C p j falls on the non-water part of the image, the 5 points nearest to L P j are found in the map and used to fit a plane α j , the distance from L P j to α j is taken as the residual to obtain the posterior distribution of x, and after each point in the point cloud is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation; when image data is received, the water surface part of the frame image is segmented by the image segmentation network DeepLabV3+, the frame image is binarized after segmentation, the water shoreline part of the frame image is extracted by Canny edge detection, and the key points to be tracked of the previous frame are tracked by the non-water LiDAR key point two-step tracking method; let G P s be one of the three-dimensional points corresponding to the key points to be tracked in the previous frame, where G indicates that the point is represented in the map coordinate system and s is the index of the point, and let C p s be the pixel point tracked in this frame image for G P s ; the re-projection error between C p s and G P s is taken as the residual to obtain the posterior distribution of x, and after each key point is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation; using the pose information in the updated x, the points in the queue are projected into the frame image; let G P m be a point in the queue, and let C p m be the water shoreline pixel point in this frame image closest to the projection point of G P m ; the re-projection error between C p m and G P m is taken as the residual to obtain the posterior distribution of x, and after each point in the queue is processed, x is updated by error-state iterative Kalman filtering in cooperation with the prior distribution of x obtained by IMU forward propagation; finally, the point cloud of the latest frame registered in the map is projected into the frame image as the key points to be tracked, and the points in the point cloud whose projection points fall on the water shoreline are stored in the queue;
s4, updating the map: each time point cloud data is received and the pose has been updated with that frame of point cloud data, each point L P j in the frame point cloud is registered into the map coordinate system using the updated pose, completing the map update.
2. The multi-sensor fusion offshore unmanned ship SLAM method of claim 1, wherein the IMU forward propagation process in the SLAM method is as follows:
The a priori estimate of x is denoted x̌; the IMU propagates forward with a zero-order holder, i.e. assuming that the IMU measurement is constant over a sampling period and setting the process noise to zero, there is
wherein x̌ i and x̌ i+1 represent the a priori estimates of x at the i-th and (i+1)-th IMU measurements respectively, Δt represents the time interval between the i-th and (i+1)-th IMU measurements, and u i represents the raw data at the i-th IMU measurement,
wherein the symbol ⊞ is defined as
wherein a m,i and ω m,i represent the raw accelerometer and gyroscope data of the i-th IMU measurement respectively, and, at the i-th IMU measurement, the remaining symbols denote the a priori estimates of: the rotation vector rotating the IMU coordinate system to the map coordinate system; the position of the IMU in the map coordinate system; the external parameters between the IMU and the camera and laser radar; the velocity of the IMU in the map coordinate system; the accelerometer and gyroscope biases; the gravitational acceleration; the time offset between the camera and the IMU; and the camera internal parameters. 0 n×m represents an n×m zero matrix, and Exp(·) and Log(·) are the Rodrigues transforms between a rotation vector and a rotation matrix.
3. The multi-sensor fusion offshore unmanned ship SLAM method according to claim 2, wherein the SLAM method uses IMU back propagation to compensate the distortion of point cloud, and the process is as follows:
t1, IMU back propagation: assuming that the (k+1)-th frame point cloud data is currently processed, the IMU back-propagates the state vector with a zero-order holder and with the process noise set to zero; then there is
wherein x̌ j−1 and x̌ j represent the a priori estimates of x at the (j−1)-th and j-th laser radar measurements in the (k+1)-th frame point cloud respectively, and the remaining inputs express the acceleration and angular velocity of the unmanned ship at the j-th laser radar measurement in the (k+1)-th frame point cloud;
the pose is set to zero at the beginning of back propagation, and the velocity and IMU bias take their values in x̌ k+1 , the a priori estimate of x when the (k+1)-th frame point cloud is acquired; in addition, for any point in a frame of point cloud, the IMU data whose acquisition time is earlier than and closest to that point is taken as the input acceleration and angular velocity of the unmanned ship for that point's back propagation;
t2, point cloud de-distortion: back propagation produces the relative pose between the IMU coordinate system at time ρ j and the IMU coordinate system at time t k+1 , i.e. the a priori estimate of the transformation matrix from the IMU coordinate system at the j-th laser radar measurement in the (k+1)-th frame point cloud to the IMU coordinate system at the moment the (k+1)-th frame point cloud is acquired; its rotation matrix part and translation vector part are used to switch the local measurement into a measurement in the laser radar coordinate system at the end of the scan:
wherein the maximum a posteriori estimate at time t k of the external parameters [ I r L , I p L ] between the laser radar and the IMU is used, L j represents the laser radar coordinate system at the j-th laser radar measurement in the current frame point cloud, and L k+1 represents the laser radar coordinate system when the (k+1)-th frame point cloud is acquired.
4. The multi-sensor fused offshore unmanned ship SLAM method according to claim 3, wherein the SLAM method uses the point-to-plane distance as the residual, and the calculation process is as follows:
Let x̂ k+1 be the maximum a posteriori estimate of x calculated using the (k+1)-th frame point cloud; the rotation vectors G r I and I r L in x̂ k+1 are converted into rotation matrices G R I and I R L by the Rodrigues transform, these being respectively the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system and of the laser radar coordinate system relative to the IMU coordinate system when the (k+1)-th frame point cloud is acquired; L P j is converted from the laser radar coordinate system to the map coordinate system accordingly; the plane α j nearest to L P j in the map has a normal vector u j and an in-plane point q j , and the measurement residual is the point-to-plane distance from the converted point to α j .
5. the multi-sensor fusion offshore unmanned ship SLAM method of claim 1, wherein the non-water LiDAR key point two-step tracking method in the SLAM method comprises the following steps:
r1, calculating descriptors: calculating descriptors of key points to be tracked in the previous frame of image;
r2, roughly estimating pose: integrating by utilizing the acceleration and angular velocity information of the IMU, roughly estimating the pose of the unmanned ship when the next image frame is reached, and projecting the three-dimensional point cloud corresponding to the key points into the next image frame according to the estimated pose, so as to preliminarily predict the positions of the key points in the next image frame;
r3, fine tracking key points: in the next frame of image, taking the preliminarily predicted key point as the center, an N×N square window is taken, the descriptors of all pixel points in the window are calculated, and among these pixel points the one whose descriptor is closest in vector space to the descriptor calculated in R1 is found; this pixel point is the finally tracked point.
6. The multi-sensor fusion offshore unmanned ship SLAM method according to claim 1, wherein the SLAM method uses the re-projection error as a residual error, and the calculation process is as follows:
Let x̂ k+1 be the maximum a posteriori estimate of x calculated with the (k+1)-th frame image; the rotation vectors G r I and I r C in x̂ k+1 are converted into rotation matrices G R I and I R C by the Rodrigues transform; for C p s and its corresponding point G P s in the three-dimensional map, the measurement residual is the difference C p s − π( C P s );
wherein C P s represents the three-dimensional coordinates of G P s in the camera coordinate system, π(·) is the pinhole projection model, and G R I , G p I and I R C , I p C are respectively the maximum a posteriori estimates of the rotation matrix and translation vector of the IMU coordinate system relative to the map coordinate system and of the camera coordinate system relative to the IMU coordinate system when the (k+1)-th frame image is acquired;
similarly, for C p m And its corresponding point in three-dimensional map G P m The measurement residual is:
wherein, the liquid crystal display device comprises a liquid crystal display device,representation of G P m Three-dimensional coordinates in the camera coordinate system.
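The residual computation in claim 6 amounts to transforming a map point into camera coordinates through the estimated IMU and extrinsic poses and comparing its projection with the observed key point. A minimal numeric sketch, assuming the map-to-IMU / IMU-to-camera transform convention described in the claim (the exact convention in the patent may differ):

```python
import numpy as np

def pi(P):
    """Pinhole projection model π(·): normalized image coordinates."""
    return P[:2] / P[2]

def reprojection_residual(p_C, P_G, R_GI, t_GI, R_IC, t_IC):
    """Measurement residual between an observed key point p_C and the
    projection of its corresponding map point P_G.

    R_GI, t_GI : MAP estimate of the IMU pose relative to the map frame
    R_IC, t_IC : MAP estimate of the camera pose relative to the IMU frame
    """
    P_I = R_GI.T @ (P_G - t_GI)    # map  -> IMU  coordinates
    P_C = R_IC.T @ (P_I - t_IC)    # IMU  -> camera coordinates
    return p_C - pi(P_C)           # re-projection error r
```

With identity rotations and zero translations the camera frame coincides with the map frame, so a point observed exactly where the model projects it yields a zero residual.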
7. The multi-sensor fusion offshore unmanned ship SLAM method according to claim 1, wherein in the unmanned ship pose update of step S3, the water-surface part of the frame image is segmented using the DeepLabv3+ image segmentation network; after segmentation the frame image is binarized, with the pixel value of water-surface pixels set to 255 and the pixel value of non-water-surface pixels set to 0.
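The binarization step described in claim 7 is a simple thresholding of the segmentation network's per-pixel class map. A small sketch (the class label used for water here is an assumption; the actual label depends on how the network was trained):

```python
import numpy as np

def binarize_water_mask(class_map, water_label=1):
    """Convert a per-pixel class map from the segmentation network into the
    binary image used by the SLAM method: water pixels -> 255, others -> 0."""
    return np.where(class_map == water_label, 255, 0).astype(np.uint8)
```

The resulting mask lets later stages discard key points and LiDAR returns that fall on the water surface.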
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310234432.8A CN116448100A (en) | 2023-03-10 | 2023-03-10 | Multi-sensor fusion type offshore unmanned ship SLAM method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116448100A true CN116448100A (en) | 2023-07-18 |
Family
ID=87131084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310234432.8A Pending CN116448100A (en) | 2023-03-10 | 2023-03-10 | Multi-sensor fusion type offshore unmanned ship SLAM method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116448100A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
CN117387598A * | 2023-10-08 | 2024-01-12 | 北京理工大学 | Tightly-coupled lightweight real-time positioning and mapping method |
CN117148373A * | 2023-10-30 | 2023-12-01 | 浙江华是科技股份有限公司 | Ship identification method and system based on laser radar and AIS global matching |
CN117148373B * | 2023-10-30 | 2024-01-26 | 浙江华是科技股份有限公司 | Ship identification method and system based on laser radar and AIS global matching |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110261870B (en) | Synchronous positioning and mapping method for vision-inertia-laser fusion | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN112649016B (en) | Visual inertial odometer method based on dotted line initialization | |
CN111275763B (en) | Closed loop detection system, multi-sensor fusion SLAM system and robot | |
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN116448100A (en) | Multi-sensor fusion type offshore unmanned ship SLAM method | |
CN108682027A (en) | VSLAM realization method and systems based on point, line Fusion Features | |
CN112749665B (en) | Visual inertia SLAM method based on image edge characteristics | |
CN113837277B (en) | Multisource fusion SLAM system based on visual point-line feature optimization | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN111665512B (en) | Ranging and mapping based on fusion of 3D lidar and inertial measurement unit | |
CN111623773B (en) | Target positioning method and device based on fisheye vision and inertial measurement | |
CN113739795B (en) | Underwater synchronous positioning and mapping method based on polarized light/inertia/vision integrated navigation | |
CN114526745A (en) | Drawing establishing method and system for tightly-coupled laser radar and inertial odometer | |
CN114323033B (en) | Positioning method and equipment based on lane lines and feature points and automatic driving vehicle | |
CN115407357A (en) | Low-beam laser radar-IMU-RTK positioning mapping algorithm based on large scene | |
CN111609868A (en) | Visual inertial odometer method based on improved optical flow method | |
CN112781582A (en) | Multi-sensor fusion high-precision pose estimation algorithm under satellite weak observation condition | |
CN116242374A (en) | Direct method-based multi-sensor fusion SLAM positioning method | |
Khoshelham et al. | Vehicle positioning in the absence of GNSS signals: Potential of visual-inertial odometry | |
CN112611376B (en) | RGI-Lidar/SINS tightly-coupled AUV underwater navigation positioning method and system | |
CN112945233B (en) | Global drift-free autonomous robot simultaneous positioning and map construction method | |
CN110211148B (en) | Underwater image pre-segmentation method based on target state estimation | |
CN114440877B (en) | Asynchronous multi-camera visual inertial odometer positioning method | |
CN112837374B (en) | Space positioning method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||