CN114608554A - Handheld SLAM equipment and robot instant positioning and mapping method - Google Patents
- Publication number
- CN114608554A (application CN202210160734.0A)
- Authority
- CN
- China
- Prior art keywords
- laser
- inertia
- camera
- point
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01C21/3841: Electronic maps specially adapted for navigation; creation or updating of map data from two or more sources, e.g. probe vehicles
- G01C21/165: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
- G01C21/1652: Inertial navigation combined with ranging devices, e.g. LIDAR or RADAR
- G01C21/1656: Inertial navigation combined with passive imaging devices, e.g. cameras
- G06T17/05: Three-dimensional [3D] modelling; geographic models
Abstract
The invention discloses a handheld SLAM device and a robot simultaneous localization and mapping method. The method combines the strengths of a camera, an inertial measurement unit and a lidar: the laser-inertial subsystem provides depth information for the visual-inertial subsystem, eliminating the scale uncertainty of the monocular camera, while the visual-inertial subsystem provides prior estimates and loop-closure detections for the laser-inertial subsystem, yielding more accurate pose estimation. The two subsystems complement each other, greatly improving the accuracy and robustness of the simultaneous localization and mapping system. At the same time, the designed handheld SLAM device is convenient to carry and lays a foundation for accurate positioning.
Description
Technical Field
The invention relates to the technical field of robot perception and navigation, and in particular to a handheld SLAM device and a robot simultaneous localization and mapping method.
Background
Simultaneous Localization and Mapping (SLAM) is currently a popular research direction in the field of autonomous robot navigation and positioning. Its main characteristic is that the robot does not rely on external measurements to determine its position; instead, pose increments are produced by the sensors the robot carries, from which the robot's position in the environment is determined, while an environmental point-cloud map is built from the localization result and the sensor data returned at the current moment. The field divides into two main directions, laser SLAM and visual SLAM. Laser SLAM adapts well to complex environments, but is weak at extracting environmental features and at loop-closure detection; visual SLAM captures environmental feature points and recognizes repetitive environments easily, but is affected by factors such as illumination and field of view, so it places high demands on the environment. Researchers have proposed different solutions addressing the strengths and weaknesses listed above; the existing laser-SLAM and visual-SLAM solutions are as follows:
Scheme 1: the LOAM algorithm, proposed in the literature (Zhang J, Kaess M, Singh S. On degeneracy of optimization-based state estimation problems [C]// 2016 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2016). LOAM's contribution is a planar-point and edge-point feature-extraction and point-cloud-matching method; point-cloud distortion caused by motion is removed under a constant-velocity assumption, which preserves the real-time performance of the algorithm. However, the LOAM algorithm performs no loop-closure detection at the back end and uses only loose coupling with the inertial measurement unit.
Scheme 2: the VINS algorithm, proposed in the literature (Qin T, Li P, Shen S. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator [J]. IEEE Transactions on Robotics, 2018, 34(4): 1004-1020). VINS-Mono's contribution is an algorithm framework in which the front end extracts environmental feature points from the images and pre-processes the data of the inertial measurement unit, while the back end performs joint optimization, global pose-graph optimization and loop-closure detection. In actual tests, however, the VINS algorithm places high demands on the environment and often fails to initialize, so its operating conditions are restrictive.
Scheme 3: the R2LIVE algorithm, proposed in the literature (Lin J, Zheng C, Xu W, et al. R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual Tightly-coupled State Estimator and Mapping [J]. IEEE Robotics and Automation Letters, 2021, 6(4): 7469-7476). R2LIVE's main contribution is a simultaneous localization and mapping framework that tightly couples three sensors, a lidar, a camera and an inertial measurement unit: the measurements of the high-frequency inertial measurement unit serve as the prior estimate, and the lidar and camera measurements are refined by error-state iterated Kalman filtering in separate threads, which gives strong robustness. However, the algorithm is not suited to processing large data volumes; when the point-cloud data volume is large, its real-time performance cannot be guaranteed.
Disclosure of Invention
In view of this, the invention provides a handheld SLAM device and a robot simultaneous localization and mapping method, which let the laser-inertial subsystem and the visual-inertial subsystem complement each other and greatly improve the accuracy and robustness of the simultaneous localization and mapping system.

The robot simultaneous localization and mapping method disclosed by the invention uses a lidar, a camera and an inertial measurement unit, and comprises the following steps:

performing, based on visual SLAM, optimal estimation of the constructed state variable using the camera and the inertial measurement unit, to obtain the pose information and the visual loop closures of the visual-inertial odometer; the constructed state variable comprises the pose information and measurement biases provided by the inertial measurement unit and the depths of feature points provided by the lidar;

obtaining, based on laser SLAM, the pose information of the laser-inertial odometer using the lidar and the inertial measurement unit, with the pose information and the visual loop closures of the visual-inertial odometer as prior information; and building a point-cloud map of the environment from the pose information of the laser-inertial odometer.
Preferably, the feature points are obtained and tracked with the KLT sparse optical-flow algorithm.
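As an illustration of the KLT idea rather than the patent's implementation, the following is a minimal single-window Lucas-Kanade step in Python, estimating the translation of an image patch between two frames; the function name and windowing scheme are assumptions:

```python
import numpy as np

def lucas_kanade_translation(img0, img1, window):
    """Estimate the 2-D translation of one patch between two frames by
    solving the Lucas-Kanade normal equations over a single window.
    img0, img1: 2-D float arrays; window: (row_slice, col_slice)."""
    Iy, Ix = np.gradient(img0)            # spatial gradients (rows, cols)
    It = img1 - img0                      # temporal gradient
    ix = Ix[window].ravel()
    iy = Iy[window].ravel()
    it = It[window].ravel()
    # Normal equations of the brightness-constancy constraint Ix*dx + Iy*dy = -It
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    return np.linalg.solve(A, b)          # (dx, dy) in pixels
```

A pyramidal KLT tracker repeats this step per feature and per pyramid level, but the window-level normal equations above are its core.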
Preferably, in the visual-SLAM part, the DBoW2 bag-of-words model is used for loop-closure detection.
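DBoW2 itself is a C++ library; as a hedged sketch of the underlying idea only, the following Python fragment scores image similarity with tf-idf-weighted, L1-normalised bag-of-words vectors, the scoring scheme DBoW-style systems use. All names are illustrative:

```python
import numpy as np
from collections import Counter

def bow_vector(word_ids, vocab_size, idf):
    """tf-idf weighted, L1-normalised bag-of-words vector for one image,
    given the visual-word ids assigned to its descriptors."""
    v = np.zeros(vocab_size)
    for w, c in Counter(word_ids).items():
        v[w] = c * idf[w]
    s = v.sum()
    return v / s if s > 0 else v

def bow_score(v1, v2):
    """Similarity between two L1-normalised BoW vectors, in [0, 1]:
    s(v1, v2) = 1 - 0.5 * ||v1 - v2||_1."""
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()
```

A loop-closure candidate is an earlier keyframe whose score against the current frame exceeds a threshold; geometric verification then confirms or rejects it.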
Preferably, in step 1, the constructed state variable Y is:

Y = [y_0, y_1, …, y_n, q^I_C, p^I_C, d_1, d_2, …, d_m]
y_i = [p^W_{I_i}, v^W_{I_i}, q^W_{I_i}, b_a, b_w], i ∈ [0, n]

where the mark W denotes the world coordinate frame, I the inertial-measurement-unit coordinate frame and C the camera coordinate frame; p^W_{I_i}, v^W_{I_i}, q^W_{I_i} are the pose information measured by the inertial measurement unit and b_a, b_w are its biases; n is the number of keyframes; q^I_C, p^I_C is the pose transformation from the camera to the inertial measurement unit; and d_l is the depth of the l-th feature point given by the lidar, with l ∈ [1, m] and m the number of feature points.
Preferably, the optimal estimation uses a maximum a posteriori probability model.
Preferably, in the visual-SLAM part, the optimal solution based on the maximum a posteriori probability model is:

min_Y { ||r_p − H_p Y||² + Σ_{i∈B} ||r_B(ẑ^{I_{i+1}}_{I_i}, Y)||² + Σ_{(l,j)∈C} ρ(||r_C(ẑ^{C_j}_l, Y)||²) + Σ_{(l,V)∈L} ρ(||r_C(ẑ^{V}_l, Y)||²) }

where B is the set of all inertial-measurement-unit measurements; C is the set of all feature points observed at least twice within the current sliding window; L is the set of all keyframes producing loop-closure detections; (l, j) denotes observing the l-th feature point in the j-th image; and (l, V) denotes observing the l-th feature point in the camera frame V at the time of the loop closure.

||r_p − H_p Y||² is the marginalization (prior) residual term; r_B and r_C are the inertial-measurement-unit residual term and the visual-feature residual term, respectively:

r_B(ẑ^{I_{i+1}}_{I_i}, Y) = [ R^{I_i}_W (p^W_{I_{i+1}} − p^W_{I_i} + ½ g^W Δt_i² − v^W_{I_i} Δt_i) − α̂^{I_i}_{I_{i+1}};
R^{I_i}_W (v^W_{I_{i+1}} + g^W Δt_i − v^W_{I_i}) − β̂^{I_i}_{I_{i+1}};
2 [ (q^W_{I_i})^{−1} ⊗ q^W_{I_{i+1}} ⊗ (γ̂^{I_i}_{I_{i+1}})^{−1} ]_{xyz};
b_{a,i+1} − b_{a,i};
b_{w,i+1} − b_{w,i} ]

r_C(ẑ^{C_j}_l, Y) = [b_1, b_2]^T · ( P̂^{C_j}_l − P^{C_j}_l / ||P^{C_j}_l|| )

where R^{I_i}_W is the rotation from frame W to frame I_i, the mark i denoting time i; Δt_i is the time difference between the two inertial measurements; g^W is the gravitational acceleration; [q]_{xyz} extracts the vector part of the quaternion q to represent the error state; ⊗ is quaternion multiplication; α̂^{I_i}_{I_{i+1}}, β̂^{I_i}_{I_{i+1}}, γ̂^{I_i}_{I_{i+1}} are the pre-integrated IMU measurement terms, measured using only the noisy accelerometer and gyroscope in the time interval between two consecutive image frames; ẑ^{C_i}_l is the observation of the l-th feature point first observed in the i-th image, and ẑ^{C_j}_l the observation of the same feature in the j-th image; P̂^{C_j}_l = π_c^{−1}(ẑ^{C_j}_l), where π_c^{−1} is the back-projection function that converts a pixel location into a unit vector using the intrinsic parameters of the camera; P^{C_j}_l is the same feature predicted in frame C_j by transforming its first observation π_c^{−1}(ẑ^{C_i}_l)/λ_l through the device poses at times i and j, λ_l being the inverse depth of the first observation; b_1, b_2 are two arbitrarily chosen orthogonal bases spanning the tangent plane corresponding to the residual vector, in which a standard covariance of fixed length is used.

The function ρ(·) is the Huber norm, expressed as:

ρ(s) = s, for s ≤ 1
ρ(s) = 2√s − 1, for s > 1
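The robust kernel ρ(·) used here is, in VINS-Mono-style estimators, the Huber norm applied to a squared residual; a minimal illustrative sketch (not the patent's code):

```python
def huber_rho(s):
    """Huber robust kernel applied to a squared residual s = ||r||^2:
    identity near zero, linear in ||r|| for large errors, which caps the
    influence of outlier feature matches on the optimization."""
    return s if s <= 1.0 else 2.0 * s ** 0.5 - 1.0
```

The kernel leaves inlier residuals untouched while down-weighting gross outliers, which is why it wraps the visual terms but not the IMU terms.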
preferably, the method for acquiring the attitude information of the laser-inertial odometer comprises the following steps:
based on the attitude information measured by the inertia measuring device, carrying out distortion removal processing on the laser cloud picture;
carrying out characteristic classification on the point cloud data in the laser cloud image subjected to distortion removal processing, and dividing the point cloud data into plane points and edge points; respectively calculating the characteristic residual errors of the plane points and the edge points;
and optimizing the characteristic residual sum of the plane points and the edge points by using the attitude information and the visual loop of the visual-inertial odometer as prior information to obtain the attitude information of the optimal laser-inertial odometer.
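The de-skew step can be sketched as follows; this toy version assumes planar motion and linear interpolation of the IMU-predicted pose across the sweep (the real system interpolates full 3-D poses, and all names here are illustrative):

```python
import numpy as np

def deskew_scan(points, times, pose_start, pose_end):
    """Motion-compensate one lidar sweep: each point, captured at a
    fractional time t in [0, 1] within the sweep, is re-expressed in the
    sweep-start frame by interpolating the IMU-predicted motion.
    points: (N, 3); times: (N,) in [0, 1];
    pose_*: (yaw, tx, ty) planar poses, for illustration only."""
    yaw0, tx0, ty0 = pose_start
    yaw1, tx1, ty1 = pose_end
    out = np.empty_like(points)
    for k, (p, t) in enumerate(zip(points, times)):
        yaw = yaw0 + t * (yaw1 - yaw0)            # interpolated heading
        dx = tx0 + t * (tx1 - tx0)                # interpolated translation
        dy = ty0 + t * (ty1 - ty0)
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        out[k] = R @ p + np.array([dx, dy, 0.0])
    return out
```

Without this compensation, points captured late in a fast sweep would be smeared relative to points captured early, corrupting the feature residuals.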
Preferably, during the optimization of the feature residual sum of the planar and edge points, a visual loop closure of the visual-inertial odometer is regarded as reliable only when it is consistent with the laser loop closure of the laser-inertial odometer; otherwise it is not a reliable loop closure.
Preferably, the laser loop closure of the laser-inertial odometer is computed based on the ICP algorithm.
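The core of each ICP iteration, once correspondences are fixed, is a closed-form rigid alignment; a minimal SVD (Kabsch) sketch in Python, illustrative rather than the patent's implementation:

```python
import numpy as np

def best_fit_transform(src, dst):
    """Closed-form rigid transform (R, t) minimising
    sum ||R src_i + t - dst_i||^2 over matched point pairs, via the SVD
    (Kabsch) solution. This is the inner step of every ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)       # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Full ICP alternates this step with nearest-neighbour matching until the residual stops decreasing; the converged transform scores the loop-closure candidate.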
Preferably, during feature classification, the smoothness of every point on each lidar scan line is first computed according to the curvature formula:

c = ( 1 / (|S| · ||X^L_{(k,i)}||) ) · || Σ_{j∈S, j≠i} ( X^L_{(k,i)} − X^L_{(k,j)} ) ||

where X^L_{(k,i)} is the i-th point of the k-th scan in the laser coordinate frame L, and S is the set formed by all points in the neighbourhood of point i within that scan. The points with the largest smoothness are extracted as edge points, and those with the smallest smoothness as planar points.
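A small illustrative implementation of this smoothness measure over one ordered scan line (the function name and window size are assumptions):

```python
import numpy as np

def smoothness(scan, half_window=5):
    """LOAM-style smoothness for each point of one scan line:
    c_i = || sum_{j in S} (p_i - p_j) || / (|S| * ||p_i||),
    where S is the neighbourhood of i on the same ring. Large c marks an
    edge candidate, small c a planar candidate. scan: (N, 3) ordered."""
    n = len(scan)
    c = np.full(n, np.nan)              # NaN where the window is incomplete
    for i in range(half_window, n - half_window):
        nbr = np.r_[scan[i - half_window:i], scan[i + 1:i + 1 + half_window]]
        diff = (scan[i] - nbr).sum(axis=0)
        c[i] = np.linalg.norm(diff) / (len(nbr) * np.linalg.norm(scan[i]))
    return c
```

On a straight wall the neighbourhood differences cancel and c is near zero; at a corner they reinforce, which is exactly the edge/planar separation the text describes.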
Preferably, the feature residual of a planar point is its point-to-plane distance to the nearest planar points X_j, X_l, X_m in the map:

d_H = | (X̃ − X_j) · ( (X_j − X_l) × (X_j − X_m) ) | / | (X_j − X_l) × (X_j − X_m) |

and the feature residual of an edge point is its point-to-line distance to the line through the nearest edge points X_j, X_l:

d_E = | (X̃ − X_j) × (X̃ − X_l) | / | X_j − X_l |

where X̃ is the feature point transformed by the current pose estimate.
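The two residuals are the familiar point-to-plane and point-to-line distances; a minimal numpy sketch, for illustration only:

```python
import numpy as np

def edge_residual(p, a, b):
    """Point-to-line distance: p against the line through edge points a, b."""
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

def plane_residual(p, a, b, c):
    """Point-to-plane distance: p against the plane through points a, b, c."""
    n = np.cross(b - a, c - a)          # plane normal (unnormalised)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)
```

During optimization these scalar distances, one per matched feature, are stacked and minimised over the pose; their Jacobians with respect to the pose drive the update.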
the present invention also provides a handheld SLAM device, comprising: the device comprises a laser radar, an inertia measuring device, a camera, an onboard computer, a mobile power supply, a display screen, a support and a handle; the bracket is of an upper-lower double-layer structure, and the upper part of the bracket is narrow and the lower part of the bracket is wide; the laser radar is fixed on the upper layer of the bracket; the camera is fixed on the lower layer of the bracket and faces forwards; the inertia measuring device is fixed on the lower layer of the bracket, is positioned right below the laser radar and is positioned on the same horizontal line with the camera; the airborne computer and the mobile power supply are positioned on the lower layer of the bracket and are respectively positioned on the left side and the right side of the inertia measuring device; the display screen is fixed at the rear side of the bracket; the airborne computer is connected with the camera, the inertia measuring device, the laser radar and the display screen; the inertial measurement unit, the display screen and the camera are powered by an onboard computer.
Preferably, the device further comprises a power distribution board and a constant-voltage module. The distribution board splits the output current of the mobile power supply into two paths: one feeds the lidar, the other feeds the onboard computer through the constant-voltage module.
Preferably, the handle is detachable.
Preferably, the onboard computer synchronizes the camera, the inertial measurement unit and the lidar based on ROS timestamps.
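As a plain-Python analogue of that timestamp-based synchronization (not the actual ROS API), nearest-stamp matching between two sorted message streams might look like:

```python
def match_by_timestamp(primary, secondary, max_dt=0.01):
    """Pair each primary-stream stamp with the nearest secondary-stream
    stamp, accepting only pairs closer than max_dt seconds. Both input
    lists must be sorted ascending. A toy analogue of approximate time
    synchronisation on ROS message header stamps."""
    pairs = []
    j = 0
    for t in primary:
        # advance j while the next secondary stamp is at least as close to t
        while j + 1 < len(secondary) and \
                abs(secondary[j + 1] - t) <= abs(secondary[j] - t):
            j += 1
        if abs(secondary[j] - t) <= max_dt:
            pairs.append((t, secondary[j]))
    return pairs
```

In ROS itself this role is played by approximate-time message synchronization; the sketch only shows the stamp-matching logic.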
Preferably, the onboard computer performs simultaneous localization and mapping using the method described above.
Beneficial effects:

The invention combines the strengths of a camera, an inertial measurement unit and a lidar, and provides a factor-graph-optimization-based simultaneous localization and mapping software framework fusing the lidar, a monocular camera and inertial navigation: the lidar provides depth and initialization information for the camera vision, the camera vision provides feature points and loop-closure detection for the lidar, and the back end estimates the pose through factor-graph optimization.

At the same time, considering the accuracy and portability of the test equipment, the invention designs and completes a handheld SLAM device. The combination of software and hardware forms a complete system consisting of the handheld SLAM hardware and the multi-sensor-fusion software framework it carries, and the system guarantees accurate and robust measurement in complex environments.
Drawings
FIG. 1 is a circuit diagram of the handheld device of the present invention.
FIG. 2 is a diagram of the hardware design of the handheld device of the present invention.
FIG. 3 is a multi-sensor fusion algorithm framework of the present invention.
FIG. 4 is a visual-inertial subsystem factor map model of the present invention.
FIG. 5 is a graphical model of the laser-inertial subsystem factor of the present invention.
FIG. 6 is a diagram showing the test results of the simultaneous localization and mapping system of the present invention.
Reference numerals: 1, lidar; 2, inertial measurement unit; 3, monocular camera; 4, onboard computer; 5, mobile power supply; 6, display screen.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention designs and builds a set of handheld SLAM hardware and proposes a multi-sensor-fusion simultaneous localization and mapping method based on factor-graph optimization, thereby forming a complete handheld simultaneous localization and mapping system and solving the portability problem of the hardware and the robustness problem of the software framework.

The hardware uses a 32-line mechanical lidar, a monocular camera and an Inertial Measurement Unit (IMU), with embedded development on an onboard computer and a matching display screen and peripheral interfaces. The hardware was modelled in SolidWorks to design the placement of its sensors, and the circuit was designed around the power requirements of the different sensors so that every module of the device works normally during actual operation. Finally, because the mounting positions leave the sensor coordinate frames inconsistent with each other and with the conventional frames, the invention calibrates the sensors to solve this problem.

The software consists mainly of a laser-inertial subsystem (LiDAR-IMU Subsystem, LIS) and a visual-inertial subsystem (Visual-Inertial Subsystem, VIS). The laser-inertial subsystem provides depth information for the visual-inertial subsystem; the visual-inertial subsystem provides feature-point prior information and loop-closure detections for the laser-inertial subsystem. The two subsystems run in two independent threads to guarantee the real-time performance and robustness of the algorithm, while fusion optimization guarantees the accuracy of the system. Because the laser-inertial and visual-inertial subsystems each accumulate their own drift, the invention proposes a factor-graph-optimization software framework to fuse the multi-sensor data: a factor-graph model is built for each subsystem, and the corresponding maximum a posteriori (MAP) problem is solved subject to the corresponding constraint terms, giving a more accurate pose estimate.
The following is a detailed description of the above two parts, respectively, as follows:
Hardware design of the handheld device
The circuit structure of the handheld device is shown in FIG. 1; the device is powered by a 6S lithium battery. Because the lidar draws relatively high power, the distribution-board design gives it its own supply path, which avoids overloading the constant-voltage module; the other path powers the onboard computer through the constant-voltage module, and the onboard computer connects to and powers the monocular camera, the inertial measurement unit and the display screen through USB interfaces.

The hardware design of the handheld device is shown in FIG. 2. The main difficulties in the hardware design are reasonable sensor placement and the portability and balance of the device. The handheld design addresses these difficulties in the following points:
1. The lidar measures 360° horizontally and 45° vertically. To keep it unobstructed, a two-layer mounting is designed: the lidar sits alone on the upper layer of the bracket, guaranteeing that the horizontal measurement is unobstructed, while the narrow upper plane and wide lower plane of the bracket guarantee that the vertical measurement is unobstructed.

2. The two-layer bracket design keeps the Z axes of the lidar, the monocular camera and the inertial measurement unit aligned and as close together as possible: the lidar sits directly on top of the device, the monocular camera on the lower layer of the bracket facing forward, and the inertial measurement unit directly below the lidar on the same horizontal line as the camera, which supports the accuracy of the subsequent software calibration;

3. To keep the centre of gravity near the middle of the device, the onboard computer and the lithium battery are placed on the two sides of the device to preserve its balance;

4. The display screen sits at the rear of the device for convenient human-computer interaction, and a detachable handle is fitted, so that the device can be transplanted onto a mobile-robot platform when needed. Operation is simple and convenient, guaranteeing the portability of the handheld device and its transferability across platforms.
In particular, because the mounting positions of the lidar, the monocular camera and the inertial measurement unit leave their frames different from the conventional coordinate systems, the sensors must be calibrated to realize the interconversion between coordinate frames. At the same time, ROS-based timestamps are used to achieve software time synchronization.
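Once calibrated, converting measurements between sensor frames is a single rigid transform; a minimal sketch, with hypothetical names, of applying and inverting a set of extrinsics:

```python
import numpy as np

def to_lidar_frame(points_cam, R_lc, t_lc):
    """Re-express camera-frame points in the lidar frame using the
    calibrated extrinsics: p_l = R_lc @ p_c + t_lc. points_cam: (N, 3)."""
    return points_cam @ R_lc.T + t_lc

def invert_extrinsics(R, t):
    """Inverse transform: if p_b = R p_a + t, then p_a = R.T (p_b - t)."""
    return R.T, -R.T @ t
```

The same two helpers chain frame conversions, e.g. camera to IMU to lidar, by composing the calibrated pairs.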
Software design of the simultaneous localization and mapping framework
The invention designs a multi-sensor-fusion simultaneous localization and mapping method, which is explained in detail below in terms of the overall algorithm framework design, the laser-inertial subsystem design and the visual-inertial subsystem design.
(1) Overall algorithm framework design
The overall algorithm framework of the invention is shown in FIG. 3: a tightly coupled laser-visual-inertial odometer, operated as two independent subsystems, a laser-inertial subsystem and a visual-inertial subsystem.

For the laser-inertial subsystem, the point cloud obtained by the lidar is first de-skewed using the measurement data given by the inertial measurement unit. During laser point-cloud feature extraction, points are classified by their smoothness relative to the other points in their neighbourhood, and the corresponding residual term is built according to the type of each feature point; meanwhile, pose information from the visual-inertial subsystem is received as prior information for the optimization. The laser-odometry part also receives loop-closure detection information from the visual-inertial subsystem, which improves pose estimation and map construction.

For the visual-inertial subsystem, depth information given by the laser-inertial subsystem is used during feature extraction for better matching.
The above is a general overview of the system. To discuss the two subsystems in more detail in a modular way, this embodiment first states the maximum a posteriori probability problem mathematically and defines the state variables of the whole system. The maximum a posteriori problem is defined as follows: given the observations Z = {z_i | i = 1, 2, …, k} and the states X = {x_i | i = 1, 2, …, k}, solve for the optimal state X* that maximizes the probability P(X | Z):

X* = argmax_X P(X | Z)

From the Bayes formula, one can obtain:

P(X | Z) = P(Z | X) P(X) / P(Z)

For the simultaneous localization and mapping algorithm, the probability distribution P(Z) of the observations is determined by the parameters of the sensors, is independent of the algorithm, and can be regarded as a normalization constant, so the maximum a posteriori problem can be expressed as:

X* = argmax_X P(Z | X) P(X)
the problem thus translates into: constructing corresponding optimization indexes and solving the optimal state estimation X*. Based on the maximum a posteriori probability problem and state variables defined above, the present invention elaborates the vision-inertia subsystem and the laser-inertia subsystem, respectively.
(2) Visual-inertial subsystem
The factor graph model of the visual-inertial subsystem is shown in fig. 4, and the main process is divided into two steps: extraction and tracking of visual features and visual-inertial tight coupling optimization.
For visual feature extraction and tracking, frame image feature points are first extracted and tracked with the KLT sparse optical flow algorithm, yielding the rotation and translation between two adjacent frames; the parallax and tracking quality between the two frames are then evaluated, and the result is used to select visual keyframes.
For the visual-inertial tight coupling optimization, the main algorithm flow is as follows:
(a) Optimization index design
According to the characteristics of the visual-inertial subsystem, the optimization index is divided into three parts: the visual feature residual r_C, the inertial measurement device residual r_I, and the loop detection residual r_S. The world coordinate system is denoted by W, the inertial measurement device coordinate system by I, and the monocular camera coordinate system by C.
To realize joint optimization, the state variables must be expanded to achieve tight coupling. Under a sliding window, the new state variable Y is defined as:

Y = [y_0, y_1, …, y_n, T_C^I, d_0, d_1, …, d_m]
y_i = [p_{I_i}^W, v_{I_i}^W, q_{I_i}^W, b_a, b_w], i ∈ [0, n]

where p, v, q are the attitude information measured by the inertial measurement device: p is displacement, v is velocity, and q is rotation expressed as a quaternion; b denotes the inertial measurement device biases: b_a the accelerometer bias and b_w the gyroscope bias; i indexes the i-th image acquired by the camera, and y_i is the device state at time i; n is the number of keyframes; m is the number of feature points; T_C^I is the transform from the camera to the inertial measurement device; d_l is the depth of the l-th feature point, given by the laser-inertial subsystem to reduce the scale uncertainty of the monocular camera.
(b) Optimization index of inertia measurement device
Consider the measurements of the inertial measurement device between two consecutive frames I_i and I_{i+1} under the sliding window; the pre-integration residual of the inertial measurement device is obtained as:

r_I(ẑ_{I_{i+1}}^{I_i}, Y) = [ R_W^{I_i}(p_{I_{i+1}}^W − p_{I_i}^W + ½ g^W Δt_i² − v_{I_i}^W Δt_i) − α̂_{I_{i+1}}^{I_i} ;
                              R_W^{I_i}(v_{I_{i+1}}^W + g^W Δt_i − v_{I_i}^W) − β̂_{I_{i+1}}^{I_i} ;
                              2[(q_{I_i}^W)^{-1} ⊗ q_{I_{i+1}}^W ⊗ (γ̂_{I_{i+1}}^{I_i})^{-1}]_{xyz} ;
                              b_{a,i+1} − b_{a,i} ;
                              b_{w,i+1} − b_{w,i} ]
where [·]_{xyz} extracts the vector part of the quaternion q to represent the error state, giving a three-dimensional error-state representation of a quaternion; α̂_{I_{i+1}}^{I_i}, β̂_{I_{i+1}}^{I_i}, γ̂_{I_{i+1}}^{I_i} are the pre-integrated IMU measurement terms, computed using only the noisy accelerometer and gyroscope measurements in the time interval between the two consecutive image frames. The accelerometer and gyroscope biases are also included in the residual for online correction.
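As an illustration of what the pre-integrated terms accumulate, the following is a minimal sketch (simple Euler integration; the function names, sample layout, and bias handling are assumptions, not the patent's implementation) that integrates raw gyro/accel samples between two image times into increments Δq, Δv, Δp:

```python
import math

# Minimal IMU pre-integration sketch: accumulate rotation, velocity and
# position increments in the frame of image time i from raw samples.

def quat_mul(a, b):
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # rotate vector v by unit quaternion q:  q * (0, v) * q^-1
    qv = (0.0, *v)
    w, x, y, z = quat_mul(quat_mul(q, qv), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

def preintegrate(samples, dt, ba=(0, 0, 0), bw=(0, 0, 0)):
    """samples: list of (accel, gyro) tuples; returns (dq, dv, dp)."""
    dq, dv, dp = (1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
    for acc, gyr in samples:
        w = tuple(g - b for g, b in zip(gyr, bw))          # bias-corrected gyro
        a = rotate(dq, tuple(ai - b for ai, b in zip(acc, ba)))
        dp = tuple(p + v*dt + 0.5*ai*dt*dt for p, v, ai in zip(dp, dv, a))
        dv = tuple(v + ai*dt for v, ai in zip(dv, a))
        half = tuple(0.5*wi*dt for wi in w)
        dq = quat_mul(dq, (1.0, *half))                    # small-angle update
        n = math.sqrt(sum(c*c for c in dq))
        dq = tuple(c/n for c in dq)                        # renormalize
    return dq, dv, dp
```

Because the increments depend only on the raw samples and the biases, they can be computed once per image interval and reused while the window states are re-optimized, which is the point of pre-integration.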
(c) Visual feature optimization index
Unlike the traditional pinhole camera model, which defines the reprojection error on a generalized image plane, the present invention defines the camera measurement residual on the unit sphere. Considering the l-th feature point, first observed in the i-th frame image, the residual of the same feature observed in the j-th frame image is defined as:

r_C(ẑ_l^{c_j}, Y) = [b_1 b_2]^T · (û_l^{c_j} − P_l^{c_j}/||P_l^{c_j}||)

where û_l^{c_j} is the observed unit bearing vector and P_l^{c_j} is the feature position predicted from the state in frame c_j.
Here ẑ_l^{c_i} = π_c^{-1}([û_l^{c_i}, v̂_l^{c_i}]^T) is the observation of the l-th feature point when it is first observed in the i-th frame image; ẑ_l^{c_j} is the observation of the same feature in the j-th frame image; π_c^{-1} is the back-projection function, which uses the camera intrinsics to convert a pixel position into a unit vector. Since the visual residual has two degrees of freedom, the residual vector is projected onto the tangent plane; b_1, b_2 are two arbitrarily chosen orthogonal bases of that tangent plane, within which the standard covariance of fixed length is defined.
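The back-projection and tangent-plane projection can be sketched as follows (an illustrative sketch assuming simple pinhole intrinsics fx, fy, cx, cy; the function names are not from the patent):

```python
import math

# Camera residual on the unit sphere: back-project pixels to unit
# bearing vectors, then project their difference onto the tangent
# plane of the predicted bearing, spanned by orthogonal bases b1, b2.

def backproject(u, v, fx, fy, cx, cy):
    x, y, z = (u - cx) / fx, (v - cy) / fy, 1.0
    n = math.sqrt(x*x + y*y + z*z)
    return (x/n, y/n, z/n)

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def tangent_basis(n):
    # any two orthonormal vectors spanning the plane orthogonal to n
    ref = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (1.0, 0.0, 0.0)
    b1 = cross(n, ref)
    m = math.sqrt(dot(b1, b1))
    b1 = tuple(c/m for c in b1)
    return b1, cross(n, b1)

def sphere_residual(pred, obs):
    """2-DoF residual: tangent-plane projection of (obs - pred)."""
    b1, b2 = tangent_basis(pred)
    diff = tuple(o - p for o, p in zip(obs, pred))
    return (dot(b1, diff), dot(b2, diff))
```

The projection onto (b_1, b_2) is what reduces the 3-vector difference to the two degrees of freedom the residual actually has.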
(d) Loop detection optimization index
Assuming relocalization is performed when the camera detects a previously seen environment, the camera coordinate system at the loop moment is denoted V. The loop detection optimization index is established in coordinate system V with the same method used for the visual features, expressed analogously as:

r_S(ẑ_l^V, Y) = [b_1 b_2]^T · (û_l^V − P_l^V/||P_l^V||)
where the pose at the loop moment, output by the odometer at the previous time, is treated as a constant.
Finally, a marginalization optimization index ||r_p − H_p Y||² is established as one of the final optimization indices. From the above derivation, the maximum a posteriori probability problem of the visual-inertial subsystem is finally established and solved as:

min_Y { ||r_p − H_p Y||² + Σ_{k∈B} ||r_I(ẑ_{I_{k+1}}^{I_k}, Y)||² + Σ_{(l,j)∈C} ρ(||r_C(ẑ_l^{c_j}, Y)||²) + Σ_{(l,V)∈V} ρ(||r_S(ẑ_l^V, Y)||²) }
where r_I and r_C are the inertial measurement device residual term and the visual feature residual term respectively; B is the set of all inertial measurement device measurements; C is the set of all feature points observed at least twice under the current sliding window; V is the set of all keyframes generating loop detection; (l, j) denotes the l-th feature point observed in the j-th frame image; (l, V) denotes the l-th feature point observed in coordinate system V; the robust kernel ρ(·) is expressed as:

ρ(s) = s,        s ≤ 1
ρ(s) = 2√s − 1,  s > 1
This nonlinear optimization problem can be solved with the Gauss-Newton iteration method in the C++ Ceres library; the solution is the optimal attitude estimate obtained by the visual-inertial odometer under the current sliding window.
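The Gauss-Newton step that Ceres performs internally can be illustrated on a toy one-parameter least-squares problem (this is a generic sketch, not the subsystem's actual cost):

```python
import math

# Gauss-Newton sketch: fit y = exp(a * x) to noiseless data generated
# with a = 0.5, starting from a = 0.  Each step solves the normal
# equations (J^T J) delta = J^T r for the linearized residual.

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]

a = 0.0                                  # initial guess
for _ in range(50):
    r = [y - math.exp(a * x) for x, y in zip(xs, ys)]   # residuals
    J = [x * math.exp(a * x) for x in xs]               # d f / d a
    JtJ = sum(j * j for j in J)
    Jtr = sum(j * ri for j, ri in zip(J, r))
    delta = Jtr / JtJ                                   # scalar normal eqs
    a += delta
    if abs(delta) < 1e-12:
        break
```

In the sliding-window problem the state is high-dimensional and the normal equations are a sparse linear system, but the linearize-solve-update loop is the same.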
(3) Laser-inertial subsystem
The factor graph model of the laser-inertial subsystem is shown in fig. 5. The system first de-distorts the point cloud data acquired by the laser based on the inertial measurement device, then extracts features from the de-distorted point cloud, finds associated points, and constructs residuals for optimization. The main algorithm flow is as follows:
(a) Point cloud feature extraction
The invention follows the LOAM algorithm for feature point extraction. Specifically, an OUSTER 32-line lidar is used, and the smoothness of each point on each scan line is calculated with the following curvature formula, from which plane points and edge points are extracted from the point cloud:

c = (1 / (|S| · ||P_{k,i}^L||)) · || Σ_{j∈S, j≠i} (P_{k,i}^L − P_{k,j}^L) ||
where P_{k,i}^L is the i-th point of the k-th scan in the laser coordinate system L, and S is the set of all points in the neighborhood of point i within the scan. The points with the largest smoothness values are extracted as the edge point set ε_k, and the points with the smallest smoothness values as the plane point set H_k. Within the current scan, the set of all edge points is ε and the set of all plane points is H.
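The curvature formula can be exercised on a synthetic scan line (a minimal sketch; the window size and scene geometry are illustrative assumptions):

```python
import math

# LOAM-style smoothness: for each point p_i on a scan line,
# c_i = || sum_{j in S, j != i} (p_i - p_j) || / (|S| * ||p_i||).
# High c -> edge point, low c -> planar point.

def curvature(scan, i, half_window=5):
    lo, hi = max(0, i - half_window), min(len(scan), i + half_window + 1)
    neigh = [scan[j] for j in range(lo, hi) if j != i]
    diff = [sum(scan[i][k] - p[k] for p in neigh) for k in range(3)]
    norm_pi = math.sqrt(sum(c * c for c in scan[i]))
    return math.sqrt(sum(d * d for d in diff)) / (len(neigh) * norm_pi)

# Synthetic scan line: a straight wall segment, then a 90-degree corner.
scan = [(5.0, i * 0.1, 0.0) for i in range(20)]                 # wall 1
scan += [(5.0 - i * 0.1, 2.0, 0.0) for i in range(1, 20)]       # wall 2

cs = [curvature(scan, i) for i in range(5, len(scan) - 5)]
corner = max(range(len(cs)), key=lambda i: cs[i]) + 5            # index of max c
```

On the straight segments the neighborhood differences cancel and c is near zero, while the corner point accumulates a large unbalanced sum, which is exactly why sorting by c separates edge points from plane points.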
(b) Feature residual calculation
Edge point residual error: for a certain point i ∈ in the current framek+1Finding the nearest point j epsilonkFinding the nearest point l ∈ epsilon of the adjacent scanning line of the j pointkThen (j, l) for its associated edge line, the feature residual can be represented using the point-to-line distance as follows:
Plane point residual: for a point i ∈ H_{k+1} in the current frame, similarly find the nearest point j ∈ H_k, then find the nearest point l ∈ H_k on the same scan line as j and the nearest point m ∈ H_k on the scan line adjacent to that of j; (j, l, m) then defines the associated plane, and the feature residual can be expressed with the point-to-plane distance:

d_H = |(P_i − P_j) · ((P_j − P_l) × (P_j − P_m))| / |(P_j − P_l) × (P_j − P_m)|
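The two distances above are plain vector geometry and can be sketched directly (helper names are illustrative):

```python
import math

# Point-to-line and point-to-plane distances used for the edge and
# planar feature residuals.

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def point_to_line(p, j, l):
    """Distance from p to the edge line through associated points (j, l)."""
    return norm(cross(sub(p, j), sub(p, l))) / norm(sub(j, l))

def point_to_plane(p, j, l, m):
    """Distance from p to the plane through associated points (j, l, m)."""
    n = cross(sub(l, j), sub(m, j))            # plane normal
    return abs(sum(a * b for a, b in zip(sub(p, j), n))) / norm(n)
```

In the scan-matching optimization these distances, parameterized by the current pose estimate, are the residuals being driven to zero.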
(c) Optimization index construction
For the lidar, its own pose estimation is of interest, so for the maximum a posteriori probability problem T = [R p] is chosen as the state variable; let the state variable corresponding to the i-th scan be T_i = [R_i p_i]. The optimization problem is then constructed as:

min_{T_i} { Σ_{i∈ε} d_ε + Σ_{i∈H} d_H }

Because the optimization problem contains rotations on SO(3), it is strongly nonlinear and may admit local optima, so the accuracy of the prior strongly affects the quality of the solution. A more accurate prior is therefore needed: the result from the visual-inertial odometer is used as the prior estimate for the optimization problem, reducing the time complexity of solving it and making the result more accurate. The visual-inertial odometer part yields the optimal state estimate Y*; when the laser-inertial subsystem performs attitude optimization, the state variable at the latest moment is extracted as the prior estimate T̃:

T̃ = [R(q_{I_n}^W)  p_{I_n}^W]

where the prior estimate T̃ is given by the latest state in the optimal state estimate Y* of the visual-inertial subsystem: R(q_{I_n}^W) is the rotation matrix corresponding to the quaternion q_{I_n}^W, and p_{I_n}^W is the translation. After the laser-inertial subsystem refines the coarse odometry obtained from the visual-inertial subsystem by matching, a more accurate pose estimate is obtained, and the de-distorted point cloud is stitched into the point cloud map accordingly, establishing the point cloud map of the environment.
For large-scale mapping and positioning, loop detection is needed to ensure the global consistency of the point cloud map, and since a wrong loop seriously degrades the mapping result, a two-stage loop detection method is adopted to ensure validity. First, based on the visual-inertial subsystem, the DBoW2 bag-of-words model is used for loop detection; when the images at times t_n and t_m satisfy the loop constraint, the coordinate transformation ΔT_{nm}^C between the two image frames is obtained from the positioning result and the constraint relation, where the superscript C denotes the camera. Second, based on the laser-inertial subsystem, the laser point clouds at times t_n and t_m are found, and the corresponding transformation ΔT_{nm}^L is computed with the ICP algorithm, where the superscript L denotes the laser. If the two transformations ΔT_{nm}^C and ΔT_{nm}^L are substantially consistent, t_n and t_m are considered a reliable loop; otherwise they are not.
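The consistency check between the visual and laser loop transforms can be sketched as follows (an illustrative sketch; the threshold values and function names are assumptions, not from the patent):

```python
import math

# Two-stage loop verification sketch: accept a loop candidate only if
# the visual transform (Rc, tc) and the ICP-derived laser transform
# (Rl, tl) agree within rotation and translation thresholds.

def rotation_angle(R):
    # angle of a 3x3 rotation matrix, recovered from its trace
    tr = R[0][0] + R[1][1] + R[2][2]
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def mat_t(R):
    return [[R[j][i] for j in range(3)] for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def consistent(Rc, tc, Rl, tl, max_angle=0.05, max_trans=0.1):
    """True if camera and laser loop transforms substantially agree."""
    dR = mat_mul(mat_t(Rc), Rl)                    # relative rotation
    dt = math.sqrt(sum((a - b) ** 2 for a, b in zip(tc, tl)))
    return rotation_angle(dR) < max_angle and dt < max_trans

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

Requiring agreement from two independent sensing modalities is what filters out the perceptual-aliasing false positives a bag-of-words detector can produce.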
In the loop optimization process, to reduce the computational cost, the feature points are not optimized; only the poses are, forming a pose graph model. From the historical positioning information, there are n − m + 1 poses T_i = [R_i p_i], i = m, m+1, …, n between t_m and t_n, from which the pose increments at adjacent times can be computed as ΔT_i = T_i^{-1} T_{i+1}. Adding the loop constraint ΔT_{nm}^L obtained from the laser to the pose graph model yields the pose graph optimization problem over the re-estimated poses T̂_i:

min_{T̂_m,…,T̂_n} { Σ_{i=m}^{n−1} ||log(ΔT_i^{-1} T̂_i^{-1} T̂_{i+1})||² + ||log((ΔT_{nm}^L)^{-1} T̂_m^{-1} T̂_n)||² }
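The building blocks of this pose graph, relative increments between consecutive poses and their composition, can be sketched with homogeneous matrices (a minimal sketch; the planar test trajectory is an illustrative assumption):

```python
import math

# Pose-graph building blocks: relative increments dT_i = inv(T_i) * T_{i+1}
# between consecutive 4x4 homogeneous poses; composing all increments
# from T_m recovers T_n, which the loop constraint is checked against.

def mat_mul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_se3(T):
    # inverse of [R t; 0 1] is [R^T  -R^T t; 0 1]
    R = [[T[j][i] for j in range(3)] for i in range(3)]      # R^T
    t = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [t[0]], R[1] + [t[1]], R[2] + [t[2]], [0, 0, 0, 1]]

def se2_pose(x, y, theta):
    # planar pose embedded in SE(3), for a simple test trajectory
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, x], [s, c, 0, y], [0, 0, 1, 0], [0, 0, 0, 1]]

poses = [se2_pose(0, 0, 0), se2_pose(1, 0, 0.1),
         se2_pose(2, 0.2, 0.3), se2_pose(2.5, 1.0, 0.7)]

increments = [mat_mul4(inv_se3(poses[i]), poses[i + 1])
              for i in range(len(poses) - 1)]

# Re-compose from the first pose; should reproduce the last pose.
T = poses[0]
for dT in increments:
    T = mat_mul4(T, dT)
```

In the actual optimization the increments act as constraints while the poses themselves are perturbed, so the residuals measure how much each re-estimated chain deviates from its odometry and loop measurements.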
by solving the optimization problem, the optimal solution of all poses in the looping moment can be obtained.
Finally, the present invention tests the instant positioning and mapping method in a complex environment; the result, shown in fig. 6, demonstrates that the method achieves considerable effect. In general, the laser-inertial subsystem provides depth information to the visual-inertial subsystem, eliminating the scale uncertainty of the monocular camera, while the visual-inertial subsystem provides prior estimates and loop detection information to the laser-inertial subsystem, yielding more accurate pose estimates. The advantages of the two subsystems are complementary, greatly improving the accuracy and robustness of the instant positioning and mapping system. Compared with the method in document [3], the two independent subsystems of the present method run in parallel in different computer processes instead of updating data in the same frame; this improves feature extraction efficiency while reducing the complexity of solving the optimization problem, ensuring real-time, efficient processing of visual and laser data simultaneously and meeting the system's real-time requirement under large data volumes.
It should be noted that the instant positioning and mapping method is not limited to the hardware device designed by the present invention; it only requires that the robot be equipped with a lidar, a camera, and an inertial measurement device simultaneously, and that the positional offsets among them be eliminated by coordinate system calibration. The hardware of the handheld SLAM device designed by the present invention is convenient to carry and lays a foundation for positioning accuracy, and its software need not be limited to the instant positioning and mapping method designed here. Certainly, adopting both the hardware device and the software framework provided by the present invention forms a complete handheld instant positioning and mapping system, which solves the accuracy and portability problems of traditional SLAM equipment and operates markedly better than traditional SLAM equipment.
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (16)
1. A robot instant positioning and mapping method adopts a laser radar, a camera and an inertia measuring device to carry out instant positioning and mapping, and is characterized by comprising the following steps:
optimally estimating the constructed state variable by adopting a camera and an inertia measurement device based on a visual SLAM technology to obtain attitude information and a visual loop of a visual-inertial odometer; wherein the constructed state variables comprise attitude information and measurement deviation provided by an inertial measurement device and depth information of characteristic points provided by a laser radar;
acquiring attitude information of the laser-inertia odometer by adopting a laser radar and an inertia measuring device and taking the attitude information and the visual loop of the visual-inertia odometer as prior information based on a laser SLAM technology; and establishing a point cloud map of the environment based on the attitude information of the laser-inertia odometer.
2. The method of claim 1, wherein the feature points are obtained using KLT sparse optical flow algorithm.
3. The method as claimed in claim 1, wherein in the visual SLAM technique, a DBoW2 bag-of-words model is used for loop detection.
4. A robot instant positioning and mapping method as claimed in claim 1, 2 or 3, wherein in step 1, the constructed state variable Y is:

Y = [y_0, y_1, …, y_n, T_C^I, d_0, d_1, …, d_m]
y_i = [p_{I_i}^W, v_{I_i}^W, q_{I_i}^W, b_a, b_w], i ∈ [0, n]

wherein the corner mark W denotes the world coordinate system, I the inertial measurement device coordinate system, and C the camera coordinate system; p_{I_i}^W, v_{I_i}^W, q_{I_i}^W are the attitude information measured by the inertial measurement device, and b_a, b_w are the inertial measurement device biases; n is the number of keyframes; T_C^I is the transform from the camera to the inertial measurement device; d_l is the depth of the l-th feature point given by the lidar, l ∈ [1, m], where m is the number of feature points.
5. The method of claim 4, wherein the optimal estimation uses a maximum a posteriori probability model.
6. The method as claimed in claim 5, wherein in the visual-SLAM-based technique, the optimization solution based on the maximum a posteriori probability model is:

min_Y { ||r_p − H_p Y||² + Σ_{k∈B} ||r_I(ẑ_{I_{k+1}}^{I_k}, Y)||² + Σ_{(l,j)∈C} ρ(||r_C(ẑ_l^{c_j}, Y)||²) + Σ_{(l,V)∈V} ρ(||r_S(ẑ_l^V, Y)||²) }
wherein B is the set of all inertial measurement device measurements; C is the set of all feature points observed at least twice under the current sliding window; V is the set of all keyframes generating loop detection; (l, j) denotes observing the l-th feature point in the j-th frame image; (l, V) denotes observing the l-th feature point in the camera coordinate system V at the loop moment;

||r_p − H_p Y||² is the residual term corresponding to marginalization; r_I and r_C are the inertial measurement device residual term and the visual feature residual term respectively;

wherein R_W^{I_i} denotes the rotation from coordinate system W to coordinate system I_i, the corner mark i being time i; Δt_i is the time difference between two measurements of the inertial measurement device; g^W is the gravitational acceleration; [·]_{xyz} extracts the vector part of the quaternion q to represent the error state; ⊗ denotes quaternion multiplication; α̂_{I_{i+1}}^{I_i}, β̂_{I_{i+1}}^{I_i}, γ̂_{I_{i+1}}^{I_i} are the pre-integrated IMU measurement terms, computed using only the noisy accelerometer and gyroscope in the time interval between two consecutive image frames; ẑ_l^{c_i} is the observation of the l-th feature point when first observed in the i-th frame image; ẑ_l^{c_j} is the observation of the same feature in the j-th frame image; π_c^{-1} is the back-projection function, which uses the camera intrinsics to convert pixel positions into unit vectors; λ_l is the inverse depth of the l-th observed feature point; b_1, b_2 are two arbitrarily chosen orthogonal bases of the tangent plane corresponding to the residual vector, within which the standard covariance of fixed length is defined;

the function ρ(·) is expressed as:

ρ(s) = s,        s ≤ 1
ρ(s) = 2√s − 1,  s > 1
7. the method for instantly positioning and mapping the robot as claimed in claim 1, wherein the method for obtaining the attitude information of the laser-inertial odometer comprises:
de-distorting the laser point cloud based on the attitude information measured by the inertial measurement device;

classifying the features of the de-distorted point cloud data into plane points and edge points, and calculating the feature residuals of the plane points and the edge points respectively;

and optimizing the sum of the feature residuals of the plane points and the edge points with the attitude information and visual loop of the visual-inertial odometer as prior information, to obtain the optimal attitude information of the laser-inertial odometer.
8. The method for real-time robot localization and mapping according to claim 1 or 7, wherein in the optimization of the feature residual sums of the plane points and the edge points, a loop is considered reliable when the visual loop of the visual-inertial odometer and the laser loop of the laser-inertial odometer are consistent, and is otherwise not considered reliable.
9. The method for real-time robot positioning and mapping as claimed in claim 1 or 7, wherein the laser loop of the laser-inertial odometer is calculated based on an ICP algorithm.
10. The method as claimed in claim 7, wherein in the feature classification of the point cloud data, the smoothness of the point cloud data on each lidar scan line is calculated with the following curvature formula:

c = (1 / (|S| · ||P_{k,i}^L||)) · || Σ_{j∈S, j≠i} (P_{k,i}^L − P_{k,j}^L) ||

wherein P_{k,i}^L is the i-th point of the k-th scan in the laser coordinate system L, and S is the set of all points in the neighborhood of point i within the scan; the points with the largest smoothness are extracted as edge points, and the points with the smallest smoothness as plane points.
12. a handheld SLAM device, comprising: the device comprises a laser radar (1), an inertia measuring device (2), a camera (3), an airborne computer (4), a mobile power supply (5), a display screen (6), a support and a handle; the bracket is of an upper-lower double-layer structure, and the upper part of the bracket is narrow and the lower part of the bracket is wide; the laser radar (1) is fixed on the upper layer of the bracket; the camera (3) is fixed on the lower layer of the bracket and faces to the front; the inertia measuring device (2) is fixed on the lower layer of the bracket, is positioned right below the laser radar and is positioned on the same horizontal line with the camera (3); the airborne computer (4) and the mobile power supply (5) are positioned on the lower layer of the bracket and are respectively positioned on the left side and the right side of the inertia measuring device (2); the display screen (6) is fixed at the rear side of the bracket; the airborne computer (4) is connected with the camera (3), the inertia measuring device (2), the laser radar (1) and the display screen (6); the inertia measuring device (2), the display screen (6) and the camera (3) are powered by the onboard computer (4).
13. The handheld SLAM device of claim 12, further comprising a distributor plate and a constant voltage module; the power distribution board divides the output current of the mobile power supply (5) into two paths, one path is connected with the laser radar (1), and the other path is connected with the airborne computer (4) through the constant voltage module.
14. The handheld SLAM device of claim 12, wherein the handle is removable.
15. The handheld SLAM device of claim 12, characterized in that the onboard computer (4) achieves synchronization of the camera (3), the inertial measurement device (2), and the laser radar (1) based on ROS timestamps.
16. The hand-held SLAM device according to any of claims 12 to 15, wherein the onboard computer (4) performs the real-time localization and mapping using the method according to any of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210160734.0A CN114608554B (en) | 2022-02-22 | Handheld SLAM equipment and robot instant positioning and mapping method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114608554A true CN114608554A (en) | 2022-06-10 |
CN114608554B CN114608554B (en) | 2024-05-03 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115290084A (en) * | 2022-08-04 | 2022-11-04 | 中国人民解放军国防科技大学 | Visual inertia combined positioning method and device based on weak scale supervision |
CN116184430A (en) * | 2023-02-21 | 2023-05-30 | 合肥泰瑞数创科技有限公司 | Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit |
TWI822423B (en) * | 2022-07-22 | 2023-11-11 | 杜宇威 | Computing apparatus and model generation method |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107179086A (en) * | 2017-05-24 | 2017-09-19 | 北京数字绿土科技有限公司 | A kind of drafting method based on laser radar, apparatus and system |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A kind of localization method and system of the fusion of view-based access control model inertial navigation information |
CN108827306A (en) * | 2018-05-31 | 2018-11-16 | 北京林业大学 | A kind of unmanned plane SLAM navigation methods and systems based on Multi-sensor Fusion |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN111595333A (en) * | 2020-04-26 | 2020-08-28 | 武汉理工大学 | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN111932674A (en) * | 2020-06-30 | 2020-11-13 | 博雅工道(北京)机器人科技有限公司 | Optimization method of line laser vision inertial system |
CN112347840A (en) * | 2020-08-25 | 2021-02-09 | 天津大学 | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method |
CN112525202A (en) * | 2020-12-21 | 2021-03-19 | 北京工商大学 | SLAM positioning and navigation method and system based on multi-sensor fusion |
CN113432600A (en) * | 2021-06-09 | 2021-09-24 | 北京科技大学 | Robot instant positioning and map construction method and system based on multiple information sources |
WO2021249387A1 (en) * | 2020-06-08 | 2021-12-16 | 杭州海康机器人技术有限公司 | Visual map construction method and mobile robot |
Non-Patent Citations (2)
Title |
---|
Wei Wenle et al.: "Real-time SLAM algorithm using an inertial measurement unit to determine keyframes", Journal of Computer Applications, vol. 40, no. 4 *
Zhang Weiwei et al.: "Localization and mapping method fusing laser and visual point cloud information", Computer Applications and Software, vol. 37, no. 7 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI822423B (en) * | 2022-07-22 | 2023-11-11 | 杜宇威 | Computing apparatus and model generation method |
CN115290084A (en) * | 2022-08-04 | 2022-11-04 | 中国人民解放军国防科技大学 | Visual inertia combined positioning method and device based on weak scale supervision |
CN115290084B (en) * | 2022-08-04 | 2024-04-19 | 中国人民解放军国防科技大学 | Visual inertial combined positioning method and device based on weak scale supervision |
CN116184430A (en) * | 2023-02-21 | 2023-05-30 | 合肥泰瑞数创科技有限公司 | Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit |
CN116184430B (en) * | 2023-02-21 | 2023-09-29 | 合肥泰瑞数创科技有限公司 | Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111156998B (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN106017463B (en) | A kind of Aerial vehicle position method based on orientation sensing device | |
US8761439B1 (en) | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit | |
CN110044354A (en) | A kind of binocular vision indoor positioning and build drawing method and device | |
US9355453B2 (en) | Three-dimensional measurement apparatus, model generation apparatus, processing method thereof, and non-transitory computer-readable storage medium | |
US20160260250A1 (en) | Method and system for 3d capture based on structure from motion with pose detection tool | |
CN111707261A (en) | High-speed sensing and positioning method for micro unmanned aerial vehicle | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
WO2015134795A2 (en) | Method and system for 3d capture based on structure from motion with pose detection tool | |
CN113052908B (en) | Mobile robot pose estimation algorithm based on multi-sensor data fusion | |
CN115371665B (en) | Mobile robot positioning method based on depth camera and inertial fusion | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN110929402A (en) | Probabilistic terrain estimation method based on uncertain analysis | |
CN110675455A (en) | Self-calibration method and system for car body all-around camera based on natural scene | |
CN111307146A (en) | Virtual reality wears display device positioning system based on binocular camera and IMU | |
Karam et al. | Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping | |
CN116772844A (en) | Navigation method of visual inertial indoor robot based on dynamic environment | |
CN112179373A (en) | Measuring method of visual odometer and visual odometer | |
Hinzmann et al. | Flexible stereo: constrained, non-rigid, wide-baseline stereo vision for fixed-wing aerial platforms | |
CN112762929B (en) | Intelligent navigation method, device and equipment | |
CN113701750A (en) | Fusion positioning system of underground multi-sensor | |
CN112580683A (en) | Multi-sensor data time alignment system and method based on cross correlation | |
Schill et al. | Estimating ego-motion in panoramic image sequences with inertial measurements | |
CN111145267A (en) | IMU (inertial measurement unit) assistance-based 360-degree panoramic view multi-camera calibration method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||