CN114608554B - Handheld SLAM equipment and robot instant positioning and mapping method - Google Patents

Handheld SLAM equipment and robot instant positioning and mapping method

Info

Publication number
CN114608554B
CN114608554B (Application No. CN202210160734.0A)
Authority
CN
China
Prior art keywords
laser
inertial
measurement device
inertial measurement
camera
Prior art date
Legal status
Active
Application number
CN202210160734.0A
Other languages
Chinese (zh)
Other versions
CN114608554A (en)
Inventor
徐伯辰
班超
郑可凡
吴德龙
魏韶谆
虞睿
曾宪琳
杨庆凯
方浩
陈杰
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210160734.0A priority Critical patent/CN114608554B/en
Publication of CN114608554A publication Critical patent/CN114608554A/en
Application granted granted Critical
Publication of CN114608554B publication Critical patent/CN114608554B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • G01C21/10 Navigation by using measurements of speed or acceleration
    • G01C21/12 Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652 Inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656 Inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a handheld SLAM device and a method for instant positioning and mapping of a robot. The invention combines the advantages of a camera, an inertial measurement device and a laser radar: the laser-inertial subsystem provides depth information for the vision-inertial subsystem, thereby eliminating the scale uncertainty of the monocular camera, while the vision-inertial subsystem provides prior estimates and loop detection information for the laser-inertial subsystem, so that more accurate pose estimation is obtained. The advantages of the two subsystems are complementary, and the accuracy and robustness of the instant positioning and mapping system are greatly improved. Meanwhile, the designed handheld SLAM device is convenient to carry and lays a foundation for positioning accuracy.

Description

Handheld SLAM equipment and robot instant positioning and mapping method
Technical Field
The invention relates to the technical field of robot perception and navigation, and in particular to a handheld SLAM device and a method for instant positioning and mapping of a robot.
Background
Instant positioning and mapping (Simultaneous Localization and Mapping, SLAM) is currently a popular research direction in the field of autonomous robot navigation and localization. Its main characteristic is that the robot does not rely on externally measured information to determine its position; instead, the sensors carried by the robot provide position increments, from which the robot's position in the environment is determined, while an environment point cloud map is built from the localization result and the sensor data returned at the current moment. The field is mainly divided into two directions, laser SLAM and visual SLAM. Laser SLAM has the advantage of adapting to more complex environments, but it is weak at extracting environmental features and at loop detection; visual SLAM captures environmental feature points and detects repeated environments easily, but it is affected by factors such as illumination and field of view, so its requirements on the environment are high. Researchers have proposed different solutions in view of the advantages and disadvantages listed above; for laser SLAM and visual SLAM, the following schemes exist:
Scheme 1: literature (Zhang J,Kaess M,Singh S.On degeneracy of optimization-based state estimation problems.[C]//2016 IEEE International Conferernce on Robotics and Automation(ICRA).IEEE,2016.) literature proposes a LOAM algorithm. The LOAM algorithm is contributed to providing a method for extracting and matching point clouds based on characteristics of plane points and edge points, and eliminating point cloud distortion caused by motion by assuming uniform motion, so that the real-time performance of the algorithm is ensured. The LOAM algorithm does not handle loop back detection at the back end and uses a loose coupling approach when using inertial measurement devices.
Scheme 2: the VINS algorithm is proposed in document (Tong,Qin,Peiliang,et al.VINS-Mono:A Robust and Versatile Monocular Visual-Inertial State Estimator[J]//IEEE Transactions on Robotics,2018,34(4):1004-20.). The VINS algorithm is contributed to providing an algorithm framework that the front end receives characteristic points in an image extraction environment, pre-processes data obtained by an inertia measurement device, and performs joint optimization, global graph optimization and loop detection on the rear end. However, in actual testing, the VINS algorithm has high requirements on the environment, and the initialization failure phenomenon often occurs, so that the implementation conditions are severe.
Scheme 3: the R2LIVE algorithm is presented in document (Lin J,Zheng C,Xu W,et al.R2LIVE:A Robust,Real-time,LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping[J]//IEEE Robotics and Automation Letters,2021,6(4):7469-76.). The R2LIVE mainly contributes to providing an immediate positioning and mapping framework for tightly coupling three sensors of a laser radar, a camera and an inertial measurement device, and the scheme uses the measurement result of the high-frequency inertial measurement device as prior estimation, optimizes the measurement results of the laser radar and the camera by error state iterative Kalman filtering through multithreading respectively, and has stronger robustness. However, the algorithm is not suitable for processing a large amount of data, and when the amount of point cloud data is large, the real-time performance of the algorithm cannot be ensured.
Disclosure of Invention
In view of the above, the invention provides a handheld SLAM device and a method for instant positioning and mapping of a robot, which realize the complementary advantages of a laser-inertial subsystem and a vision-inertial subsystem and greatly improve the accuracy and robustness of the instant positioning and mapping system.
The method for instant positioning and mapping of a robot according to the invention adopts a laser radar, a camera and an inertial measurement device, and comprises the following steps:
Step 1: using the camera and the inertial measurement device, optimally estimating the constructed state variable based on a visual SLAM technique to obtain the attitude information and the visual loop of the visual-inertial odometer; the constructed state variable comprises the attitude information and measurement bias provided by the inertial measurement device and the depth information of the feature points provided by the laser radar;
Step 2: using the laser radar and the inertial measurement device, and taking the attitude information and the visual loop of the visual-inertial odometer as prior information, obtaining the attitude information of the laser-inertial odometer based on a laser SLAM technique; and establishing a point cloud map of the environment based on the attitude information of the laser-inertial odometer.
Preferably, the feature points are obtained by using a KLT sparse optical flow algorithm.
Preferably, in the visual SLAM technique, a DBoW bag-of-words model is used for loop detection.
Preferably, in step 1, the constructed state variable Y is:

Y = [y_0, y_1, …, y_n, x_C^I, d_1, …, d_m], y_i = [p_{I_i}^W, v_{I_i}^W, q_{I_i}^W, b_a, b_w], i ∈ [0, n]

where the superscript W denotes the world coordinate system, I denotes the inertial measurement device coordinate system, and C denotes the camera coordinate system; p_{I_i}^W, v_{I_i}^W and q_{I_i}^W are the attitude information measured by the inertial measurement device, and b_a, b_w are the biases of the inertial measurement device; n is the number of key frames; x_C^I is the pose transformation from the camera to the inertial measurement device; d_l is the depth information of the l-th feature point, given by the laser radar, l ∈ [1, m], where m is the number of feature points.
Preferably, the optimal estimation adopts a maximum posterior probability model.
Preferably, in the visual SLAM technique, the optimization solution based on the maximum posterior probability model is:

min_Y { ||r_p - H_p Y||^2 + Σ_{i∈B} ||r_I(ẑ_{I_{i+1}}^{I_i}, Y)||^2 + Σ_{(l,j)∈C} ρ(||r_C(ẑ_l^{C_j}, Y)||^2) + Σ_{(l,V)∈S} ρ(||r_C(ẑ_l^{V}, Y)||^2) }

where B is the set of all inertial measurement device measurements; C is the set of feature points observed at least twice under the current sliding window; S is the set of key frames for which loop detection occurs; (l, j) denotes that the l-th feature point is observed in the j-th frame image; and (l, V) denotes that the l-th feature point is observed in the camera coordinate system V at the loop moment;

||r_p - H_p Y||^2 is the residual term corresponding to image marginalization, and r_I(·) and r_C(·) are the inertial measurement device residual term and the visual feature residual term respectively:

r_I(ẑ_{I_{i+1}}^{I_i}, Y) = [ R_W^{I_i}(p_{I_{i+1}}^W - p_{I_i}^W + (1/2) g^W Δt_i^2 - v_{I_i}^W Δt_i) - α̂_{I_{i+1}}^{I_i},  R_W^{I_i}(v_{I_{i+1}}^W + g^W Δt_i - v_{I_i}^W) - β̂_{I_{i+1}}^{I_i},  2[(q_{I_i}^W)^{-1} ⊗ q_{I_{i+1}}^W ⊗ (γ̂_{I_{i+1}}^{I_i})^{-1}]_xyz,  b_{a,i+1} - b_{a,i},  b_{w,i+1} - b_{w,i} ]^T

r_C(ẑ_l^{C_j}, Y) = [b_1, b_2]^T ( û_l^{C_j} - P_l^{C_j} / ||P_l^{C_j}|| )

where R_W^{I_i} is the rotation matrix from the coordinate system W to the coordinate system I_i, the subscript i denoting time i; Δt_i is the time difference between two inertial measurement device measurements; g^W is the gravitational acceleration; [·]_xyz extracts the vector part of the quaternion q to represent the error state; ⊗ is quaternion multiplication; α̂_{I_{i+1}}^{I_i}, β̂_{I_{i+1}}^{I_i} and γ̂_{I_{i+1}}^{I_i} are the pre-integrated IMU measurement terms computed using only the noisy accelerometer and gyroscope measurements in the time interval between two consecutive image frames; ẑ_l^{C_i} is the observed quantity corresponding to the l-th feature point first observed in the i-th frame image, and ẑ_l^{C_j} is the observation of the same feature in the j-th frame image; û_l^{C_j} is obtained from ẑ_l^{C_j} by the back-projection function, which uses the camera's intrinsic parameters to convert a pixel location into a unit vector; λ_l is the inverse depth of the l-th feature point at its first observation, and P_l^{C_j} is the position of the feature predicted in the j-th camera frame from that first observation and λ_l; the residual vector is projected onto the tangent plane, b_1, b_2 being two arbitrarily selected orthogonal bases of the tangent plane, and the covariance in the tangent plane has a fixed value;

the function ρ(·) is expressed as:

ρ(s) = s when s ≤ 1, and ρ(s) = 2√s - 1 when s > 1.
Preferably, the method for acquiring the attitude information of the laser-inertial odometer comprises the following steps:
de-distorting the laser point cloud based on the attitude information measured by the inertial measurement device;
classifying the point cloud data in the de-distorted laser point cloud by feature into plane points and edge points, and calculating the feature residuals of the plane points and the edge points respectively;
optimizing the feature residuals of the plane points and the edge points, with the attitude information and the visual loop of the visual-inertial odometer as prior information, to obtain the optimal attitude information of the laser-inertial odometer.
Preferably, in the optimization of the feature residuals of the plane points and the edge points, when the visual loop of the visual-inertial odometer is consistent with the laser loop of the laser-inertial odometer, the visual loop is considered a reliable loop; otherwise, it is not a reliable loop.
Preferably, the laser loop of the laser-inertial odometer is calculated based on an ICP algorithm.
Preferably, when the point cloud data is classified by feature, the smoothness of the point cloud data on each laser radar line is calculated according to the following curvature formula:

c = ( 1 / (|S| · ||X_{(k,i)}^L||) ) · || Σ_{j∈S, j≠i} ( X_{(k,i)}^L - X_{(k,j)}^L ) ||

where X_{(k,i)}^L is the i-th point of the k-th scan in the laser coordinate system L, and S is the set formed by all points in the neighborhood of point i in that scan; the points with the largest smoothness are extracted as edge points, and the points with the smallest smoothness are extracted as plane points.
Preferably, the feature residual of the plane points is:

d_H = | (X_{(k+1,i)}^L - X_{(k,j)}^L) · ( (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) ) | / | (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) |

and the feature residual of the edge points is:

d_ε = | (X_{(k+1,i)}^L - X_{(k,j)}^L) × (X_{(k+1,i)}^L - X_{(k,l)}^L) | / | X_{(k,j)}^L - X_{(k,l)}^L |

where i is the feature point in the current frame and j, l, m are its associated points found in the previous scan.
The invention also provides a handheld SLAM device, comprising: a laser radar, an inertial measurement device, a camera, an onboard computer, a mobile power supply, a display screen, a bracket and a handle; wherein the bracket has an upper-lower double-layer structure, narrow at the top and wide at the bottom; the laser radar is fixed on the upper layer of the bracket; the camera is fixed on the lower layer of the bracket and faces forward; the inertial measurement device is fixed on the lower layer of the bracket, located directly below the laser radar and on the same horizontal line as the camera; the onboard computer and the mobile power supply are located on the lower layer of the bracket, on the left and right sides of the inertial measurement device respectively; the display screen is fixed at the rear side of the bracket; the onboard computer is connected with the camera, the inertial measurement device, the laser radar and the display screen; and the inertial measurement device, the display screen and the camera are powered by the onboard computer.
Preferably, the device further comprises a power distribution board and a constant voltage module; the power distribution board divides the output current of the mobile power supply into two paths, one path connected to the laser radar and the other connected to the onboard computer through the constant voltage module.
Preferably, the handle is detachable.
Preferably, the on-board computer realizes synchronization of the camera, the inertial measurement device and the laser radar based on the ROS time stamp.
Preferably, the onboard computer adopts the method to perform instant positioning and mapping.
The beneficial effects are that:
The invention combines the advantages of a camera, an inertial measurement device and a laser radar, and provides an instant positioning and mapping software framework based on factor graph optimization that fuses the laser radar, the monocular camera and the inertial measurement device: the laser radar provides depth information and initialization information for the camera vision, the camera vision provides feature points and loop detection for the laser radar, and the back end realizes pose estimation through factor graph optimization.
Meanwhile, considering the accuracy and portability of the test equipment, the invention designs and completes a set of handheld SLAM equipment. Through the combination of software and hardware, a complete system consisting of the handheld SLAM hardware and the multi-sensor-fusion software framework it carries is formed, and this system can ensure the accuracy and robustness of measurement in complex environments.
Drawings
FIG. 1 is a circuit diagram of a handheld device of the present invention.
FIG. 2 is a hardware design diagram of the handheld device of the present invention.
FIG. 3 is a diagram of a multi-sensor fusion algorithm framework of the present invention.
FIG. 4 is a diagram of a vision-inertial subsystem factor graph model of the present invention.
FIG. 5 is a factor graph model of a laser-inertial subsystem of the present invention.
FIG. 6 is a diagram showing the test effect of the real-time positioning and mapping system according to the present invention.
Reference numerals: 1: laser radar; 2: inertial measurement device; 3: monocular camera; 4: onboard computer; 5: mobile power supply; 6: display screen.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
The invention designs and builds a set of handheld SLAM hardware and proposes a multi-sensor-fusion instant positioning and mapping method based on factor graph optimization, thereby forming a complete handheld instant positioning and mapping system and solving the portability problem of the hardware and the robustness problem of the software framework.
The hardware uses a 32-line mechanical laser radar, a monocular camera and an inertial measurement unit (IMU), realizes embedded development through an onboard computer, and is provided with a corresponding display screen and peripheral interfaces. The hardware is modeled in SolidWorks, the placement of each sensor is designed, and the circuit is designed according to the power supply requirements of the different sensors so that every module of the device works normally during actual operation. Finally, the invention solves the problem of non-uniform coordinate systems, caused by the different placements and native coordinate systems of the sensors, by calibrating the sensors.
The software part is mainly divided into a laser-inertial subsystem (LIS) and a visual-inertial subsystem (VIS). The laser-inertial subsystem provides depth information for the visual-inertial subsystem, and the visual-inertial subsystem provides feature point prior information and loop detection information about the environment for the laser-inertial subsystem. The two subsystems run as two independent threads to ensure the real-time performance and robustness of the algorithm, and the accuracy of the system is ensured through fused optimization. Because the laser-inertial subsystem and the vision-inertial subsystem each accumulate their own errors, the invention proposes a software framework based on factor graph optimization to fuse the data of the multiple sensors. The invention establishes a factor graph model for each of the laser-inertial subsystem and the vision-inertial subsystem, and solves the corresponding maximum a posteriori (MAP) problem according to the corresponding constraint terms to obtain a more accurate pose estimate.
The following details are given for the above two parts, respectively, as follows:
1. Hardware design of handheld device
The circuit structure of the handheld device is shown in fig. 1; the device is powered by a 6S lithium battery. Because the power consumption of the laser radar is relatively large, a power distribution board is used so that the laser radar is powered on a dedicated path, which avoids overloading the constant voltage module; the other path supplies the onboard computer through the constant voltage module, and the onboard computer connects to and powers the monocular camera, the inertial measurement device and the display screen through USB interfaces.
The hardware design of the handheld device is shown in fig. 2. The difficulties faced in the hardware design are mainly the rational placement of the sensors and the portability and balance of the device. In response to these difficulties, the design of the handheld device has the following features:
1. The measuring range of the laser radar is 360 degrees horizontally and 45 degrees vertically. To ensure that the laser radar is not occluded, a double-layer mounting structure is designed: the laser radar is mounted alone on the upper layer of the bracket so that its horizontal measurements are unobstructed, while the small upper plane and wide lower plane of the bracket keep the vertical measurements of the laser radar unobstructed as well.
2. The double-layer bracket design keeps the Z-axis directions of the laser radar, the monocular camera and the inertial measurement device identical and as close together as possible: the laser radar is located directly above the device, the monocular camera is located on the lower layer of the bracket facing forward, and the inertial measurement device is located directly below the laser radar on the same horizontal line as the camera, which ensures the accuracy of the subsequent software calibration;
3. To keep the center of gravity of the device as close to its middle as possible, the onboard computer and the lithium battery are arranged on the two sides of the device, maintaining its balance;
4. The display screen is located at the rear of the device for convenient human-computer interaction, and a detachable handle is provided, so that the device can be transplanted to a mobile robot platform when needed; operation is simple and convenient, ensuring both the portability of handheld operation and portability across multiple platforms.
In particular, since the placements of the laser radar, the monocular camera and the inertial measurement device differ from their conventional coordinate systems, the sensors must be calibrated to realize the inter-conversion between coordinate systems. At the same time, software time synchronization is achieved using ROS-based timestamps.
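For illustration only, approximate software time synchronization of the three sensor topics can be done in ROS with message_filters; the topic names and tolerance below are assumptions, not values specified by the invention:

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu, PointCloud2

def synced_callback(image_msg, imu_msg, cloud_msg):
    # All three messages arrive with timestamps within the configured slop.
    rospy.loginfo("synced stamps: %s %s %s",
                  image_msg.header.stamp, imu_msg.header.stamp, cloud_msg.header.stamp)

rospy.init_node("handheld_slam_sync")
image_sub = message_filters.Subscriber("/camera/image_raw", Image)
imu_sub = message_filters.Subscriber("/imu/data", Imu)
cloud_sub = message_filters.Subscriber("/ouster/points", PointCloud2)

# Approximate-time policy: groups messages whose stamps differ by less than `slop` seconds.
sync = message_filters.ApproximateTimeSynchronizer([image_sub, imu_sub, cloud_sub],
                                                   queue_size=20, slop=0.02)
sync.registerCallback(synced_callback)
rospy.spin()
```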
2. Software design for instant positioning and drawing frame
The invention designs a multi-sensor-fusion instant positioning and mapping method; the overall algorithm framework design, the laser-inertial subsystem design and the vision-inertial subsystem design are described in detail below.
(1) Overall algorithm framework design
The overall algorithm framework of the invention is shown in fig. 3. The framework is a tightly coupled laser, visual and inertial odometer that operates with two independent subsystems, namely a laser-inertial subsystem and a vision-inertial subsystem.
For the laser-inertial subsystem, the point cloud data obtained by the laser is first de-distorted according to the measurements given by the inertial measurement device. When laser point cloud features are extracted, they are classified according to the smoothness of each candidate point relative to the other points in its neighborhood, and the corresponding residual terms are established according to the feature type; at the same time, the attitude information from the vision-inertial subsystem is received as prior information for the optimization. The laser odometer part also receives the loop detection information from the vision-inertial subsystem, so that pose estimation and map construction can be performed better.
For the vision-inertial subsystem, the depth information from the laser-inertial subsystem is received so that better matching can be achieved during feature extraction.
The above is a general introduction to the system. To facilitate the modular discussion of the two subsystems below, this embodiment first gives a mathematical representation of the maximum posterior probability problem to be solved and defines the state variables of the whole system. First, the maximum posterior probability problem is defined as follows: given the observed quantities Z = {z_i | i = 1, 2, …, k} and the state quantities X = {x_i | i = 1, 2, …, k}, solve for the optimal state quantity X* that maximizes the probability P(X|Z), expressed by the formula:

X* = argmax_X P(X|Z)

From the Bayes formula one obtains:

P(X|Z) = P(Z|X) P(X) / P(Z)

For the instant positioning and mapping algorithm, the probability distribution P(Z) of the observed quantities is determined by the parameters of the sensors themselves and is independent of the algorithm, so it can be regarded as a normalization constant; the maximum posterior probability problem can therefore be expressed as:

X* = argmax_X P(Z|X) P(X)
The problem is thus translated into constructing the corresponding optimization indexes and solving for the optimal state estimate X*. Based on this maximum posterior probability problem and the state variables defined for each subsystem, the invention details the vision-inertial subsystem and the laser-inertial subsystem respectively below.
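Under the common additional assumption that the measurement noise is zero-mean Gaussian and the measurements are mutually independent, this maximum posterior probability problem reduces to a nonlinear least-squares problem over weighted residuals, which is the form both subsystems below actually optimize; a brief sketch of this step, with h_i denoting the measurement model and Σ_i the noise covariance of z_i, is:

```latex
X^{*} = \arg\max_{X} P(X)\prod_{i} P(z_i \mid X)
      = \arg\min_{X} \Big( -\log P(X) - \sum_{i} \log P(z_i \mid X) \Big)
      = \arg\min_{X} \Big( \lVert r_p \rVert^{2} + \sum_{i} \lVert z_i - h_i(X) \rVert_{\Sigma_i}^{2} \Big)
```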
(2) Vision-inertia subsystem
The factor graph model of the vision-inertial subsystem is shown in fig. 4; its main flow is divided into two steps: the extraction and tracking of visual features, and the tightly coupled visual-inertial optimization.
For visual feature extraction and tracking, feature points are first extracted from each image frame using the KLT sparse optical flow algorithm, and the rotation and translation between two adjacent frames are obtained; visual key frames are then selected from this result by judging the parallax and the tracking quality between the two image frames.
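As an illustration of this step, the following minimal sketch tracks feature points between two frames with OpenCV's pyramidal KLT implementation and uses the mean parallax as a simple key-frame test; the threshold values and feature counts are assumptions for illustration, not values specified by the invention.

```python
import cv2
import numpy as np

def track_and_check_keyframe(prev_gray, curr_gray, prev_pts, parallax_thresh_px=10.0):
    """Track feature points with pyramidal KLT and decide whether curr is a key frame."""
    if prev_pts is None or len(prev_pts) == 0:
        prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                           qualityLevel=0.01, minDistance=20)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None,
                                                   winSize=(21, 21), maxLevel=3)
    good_prev = prev_pts[status.ravel() == 1]
    good_curr = curr_pts[status.ravel() == 1]
    # Mean pixel parallax between tracked correspondences.
    parallax = np.mean(np.linalg.norm(good_curr - good_prev, axis=-1)) if len(good_curr) else 0.0
    is_keyframe = parallax > parallax_thresh_px or len(good_curr) < 50
    return good_curr.reshape(-1, 1, 2), is_keyframe
```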
For the tight coupling optimization of vision-inertia, the main algorithm flow is as follows:
(a) Optimizing index design
Aiming at the characteristics of the vision-inertial subsystem, the optimization index is divided into three parts, namely: the visual feature point optimization index r_C, the inertial measurement unit optimization index r_I, and the loop detection optimization index r_S. The world coordinate system is denoted by W, the inertial measurement unit coordinate system by I, and the monocular camera coordinate system by C.
To achieve joint optimization, the state variables need to be extended to achieve tight coupling. Under a sliding window, a new state variable Y is defined as follows:

Y = [y_0, y_1, …, y_n, x_C^I, d_1, …, d_m]
y_i = [p_{I_i}^W, v_{I_i}^W, q_{I_i}^W, b_a, b_w], i ∈ [0, n]

where p, v and q are the attitude information measured by the inertial measurement device: p is the displacement, v is the velocity, and q is the rotation represented by a quaternion; b represents the bias of the inertial measurement device: b_a is the accelerometer bias and b_w is the gyroscope bias; i denotes the i-th image acquired by the camera, and y_i represents the state of the device at time i; n represents the number of key frames; m represents the number of feature points; x_C^I is the pose transformation from the camera to the inertial measurement device; d_l represents the depth information of the l-th feature point, given by the laser-inertial subsystem to reduce the scale uncertainty of the monocular camera.
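Purely as an illustration of how this sliding-window state can be organized in software (the names below are assumptions, not part of the invention), the state variable Y might be held as:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FrameState:
    """Per-key-frame state y_i: pose, velocity and IMU biases in the world frame."""
    p: np.ndarray = field(default_factory=lambda: np.zeros(3))    # position p_{I_i}^W
    v: np.ndarray = field(default_factory=lambda: np.zeros(3))    # velocity v_{I_i}^W
    q: np.ndarray = field(default_factory=lambda: np.array([1.0, 0, 0, 0]))  # quaternion q_{I_i}^W (w, x, y, z)
    b_a: np.ndarray = field(default_factory=lambda: np.zeros(3))  # accelerometer bias
    b_w: np.ndarray = field(default_factory=lambda: np.zeros(3))  # gyroscope bias

@dataclass
class SlidingWindowState:
    """Full state variable Y over the sliding window."""
    frames: list              # [y_0, ..., y_n]
    T_cam_to_imu: np.ndarray  # x_C^I, 4x4 camera-to-IMU extrinsic transform
    feature_depths: dict      # d_l: feature id -> depth supplied by the laser-inertial subsystem
```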
(B) Inertial measurement unit optimization index
Consider the measurements of the inertial measurement unit between two consecutive frames I_i and I_{i+1} under the sliding window, and thereby obtain the residual term of the inertial measurement unit pre-integration:

r_I(ẑ_{I_{i+1}}^{I_i}, Y) = [ R_W^{I_i}(p_{I_{i+1}}^W - p_{I_i}^W + (1/2) g^W Δt_i^2 - v_{I_i}^W Δt_i) - α̂_{I_{i+1}}^{I_i},  R_W^{I_i}(v_{I_{i+1}}^W + g^W Δt_i - v_{I_i}^W) - β̂_{I_{i+1}}^{I_i},  2[(q_{I_i}^W)^{-1} ⊗ q_{I_{i+1}}^W ⊗ (γ̂_{I_{i+1}}^{I_i})^{-1}]_xyz,  b_{a,i+1} - b_{a,i},  b_{w,i+1} - b_{w,i} ]^T

where R_W^{I_i} is the rotation matrix from the world coordinate system W to the coordinate system I_i; Δt_i is the time difference between the two inertial measurement device measurements; g^W is the gravitational acceleration; [·]_xyz extracts the vector part of the quaternion to give a three-dimensional error-state representation of the rotation error; ⊗ is quaternion multiplication; and α̂_{I_{i+1}}^{I_i}, β̂_{I_{i+1}}^{I_i}, γ̂_{I_{i+1}}^{I_i} are the pre-integrated IMU measurement terms computed using only the noisy accelerometer and gyroscope measurements in the time interval between the two consecutive image frames. The accelerometer and gyroscope biases are also included in the residual term for on-line correction.
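The pre-integrated terms α̂, β̂ and γ̂ are accumulated from the raw IMU samples between the two image frames. A minimal sketch of that accumulation (simple Euler integration, ignoring noise covariance propagation, with quaternion order (w, x, y, z) assumed) is:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def preintegrate(imu_samples, b_a, b_w):
    """imu_samples: list of (dt, accel[3], gyro[3]) between frames i and i+1."""
    alpha = np.zeros(3)               # pre-integrated position term
    beta = np.zeros(3)                # pre-integrated velocity term
    gamma = np.array([1.0, 0, 0, 0])  # pre-integrated rotation (quaternion)
    for dt, a_m, w_m in imu_samples:
        a = a_m - b_a                 # remove current bias estimates
        w = w_m - b_w
        R = quat_to_rot(gamma)
        alpha += beta * dt + 0.5 * (R @ a) * dt * dt
        beta += (R @ a) * dt
        dq = np.concatenate(([1.0], 0.5 * w * dt))  # small-angle quaternion increment
        gamma = quat_mult(gamma, dq)
        gamma /= np.linalg.norm(gamma)
    return alpha, beta, gamma
```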
(C) Visual characteristic optimization index
Unlike the conventional pinhole camera model, which defines the re-projection error on a generalized image plane, the invention defines the camera measurement residual on the unit sphere. Considering the l-th feature point, first observed in the i-th frame image, the residual in the j-th frame image in which the same feature is observed is defined as:

r_C(ẑ_l^{C_j}, Y) = [b_1, b_2]^T · ( û_l^{C_j} - P_l^{C_j} / ||P_l^{C_j}|| )

where ẑ_l^{C_i} is the observed quantity corresponding to the l-th feature point first observed in the i-th frame image, and ẑ_l^{C_j} is the observation of the same feature in the j-th frame image; û_l^{C_j} = π_c^{-1}(ẑ_l^{C_j}), where π_c^{-1} is the back-projection function that uses the camera's intrinsic parameters to convert a pixel location into a unit vector; P_l^{C_j} is the position of the feature predicted in the j-th camera frame from its first observation in the i-th frame, its inverse depth λ_l and the estimated poses. Since the visual residual has two degrees of freedom, the residual vector is projected onto the tangent plane of the unit sphere; b_1, b_2 are two arbitrarily selected orthogonal bases spanning that tangent plane, and the covariance, which has a fixed value, is likewise propagated onto the tangent plane.
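To make the geometry of this residual concrete, the following sketch (with a simple pinhole back-projection assumed for π_c^{-1}) computes the tangent-plane residual between an observed feature direction and a predicted one:

```python
import numpy as np

def back_project(pixel, K):
    """pi_c^{-1}: convert a pixel location into a unit bearing vector using intrinsics K."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def tangent_basis(u):
    """Two arbitrary orthonormal vectors b1, b2 spanning the tangent plane of unit vector u."""
    tmp = np.array([0.0, 0.0, 1.0]) if abs(u[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    b1 = np.cross(u, tmp); b1 /= np.linalg.norm(b1)
    b2 = np.cross(u, b1)
    return b1, b2

def unit_sphere_residual(observed_pixel, predicted_point_cam, K):
    """2-DoF visual residual r_C on the unit sphere."""
    u_obs = back_project(observed_pixel, K)                             # observed bearing in frame j
    u_pred = predicted_point_cam / np.linalg.norm(predicted_point_cam)  # predicted bearing P / ||P||
    b1, b2 = tangent_basis(u_obs)
    return np.array([b1 @ (u_obs - u_pred), b2 @ (u_obs - u_pred)])
```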
(D) Loop detection optimization index
Assuming that relocalization is performed when the camera detects a repeated environment, the camera coordinate system at the loop moment is denoted as V. The loop detection optimization index is established on the coordinate system V in the same way as the visual feature residual, and is specifically expressed as:

r_S(ẑ_l^{V}, Y) = [b_1, b_2]^T · ( û_l^{V} - P_l^{V} / ||P_l^{V}|| )

where the pose of the loop frame V is the pose output by the odometer at the previous moment and is treated as constant.
Finally, in view of the scale uncertainty of the monocular camera, a marginalization information optimization index ||r_p - H_p Y||^2 is established as one of the final optimization indexes. From the above derivation, the maximum posterior probability problem of the vision-inertial subsystem is finally constructed and solved as:

min_Y { ||r_p - H_p Y||^2 + Σ_{i∈B} ||r_I(ẑ_{I_{i+1}}^{I_i}, Y)||^2 + Σ_{(l,j)∈C} ρ(||r_C(ẑ_l^{C_j}, Y)||^2) + Σ_{(l,V)∈S} ρ(||r_S(ẑ_l^{V}, Y)||^2) }

where r_I(·), r_C(·) and r_S(·) are the inertial measurement device residual term, the visual feature residual term and the loop detection residual term defined above; B is the set of all inertial measurement device measurements; C is the set of all feature points observed at least twice under the current sliding window; S is the set of key frames for which loop detection occurs; (l, j) denotes that the l-th feature point is observed in the j-th frame image; (l, V) denotes that the l-th feature point is observed in the coordinate system V; and the function ρ(·) is expressed as:

ρ(s) = s when s ≤ 1, and ρ(s) = 2√s - 1 when s > 1.
The optimal solution of this nonlinear optimization problem can be obtained with the Gauss-Newton iteration method in the C++ Ceres library; this solution is the optimal estimate of the pose given by the visual-inertial odometer under the current sliding window.
(3) Laser-inertial subsystem
The factor graph model of the laser-inertial subsystem is shown in fig. 5. The system first de-distorts the point cloud data acquired by the laser based on the inertial measurement device, then performs feature extraction on the de-distorted point cloud, finds the associated points and constructs the residuals for optimization. The main algorithm flow is as follows:
(a) Point cloud feature extraction
The invention follows the LOAM algorithm for feature point extraction. Specifically, the invention uses an OUSTER 32-line laser radar, calculates the smoothness of the points on each line according to the following curvature formula, and extracts the plane points and edge points in the point cloud data:

c = ( 1 / (|S| · ||X_{(k,i)}^L||) ) · || Σ_{j∈S, j≠i} ( X_{(k,i)}^L - X_{(k,j)}^L ) ||

where X_{(k,i)}^L is the i-th point of the k-th scan in the laser coordinate system L, and S is the set formed by all points in the neighborhood of point i in that scan. The points with the largest smoothness are extracted as the edge point set ε_k, and the points with the smallest smoothness are extracted as the plane point set H_k. The set of all edge points in the current scan is ε, and the set of all plane points is H.
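A minimal sketch of this smoothness computation for one scan line is given below; the neighborhood size is an assumption made for illustration, not a value fixed by the invention.

```python
import numpy as np

def compute_smoothness(scan_line_points, half_window=5):
    """Curvature-like smoothness c for every point of one lidar scan line.

    scan_line_points: (N, 3) array of points X_{(k,i)}^L ordered along the scan line.
    """
    n = len(scan_line_points)
    smoothness = np.zeros(n)
    for i in range(half_window, n - half_window):
        neighbors = np.r_[scan_line_points[i - half_window:i],
                          scan_line_points[i + 1:i + 1 + half_window]]
        diff_sum = np.sum(scan_line_points[i] - neighbors, axis=0)
        norm_i = np.linalg.norm(scan_line_points[i])
        smoothness[i] = np.linalg.norm(diff_sum) / (len(neighbors) * norm_i + 1e-9)
    return smoothness

# Points with the largest smoothness values are taken as edge points,
# points with the smallest values as plane points.
```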
(B) Feature residual calculation
Edge point residual: for a point i ∈ ε_{k+1} in the current frame, find the nearest point j ∈ ε_k, and then find the nearest point l ∈ ε_k on the scan line adjacent to point j; the feature residual is expressed as the distance from the point to the straight line through (j, l):

d_ε = | (X_{(k+1,i)}^L - X_{(k,j)}^L) × (X_{(k+1,i)}^L - X_{(k,l)}^L) | / | X_{(k,j)}^L - X_{(k,l)}^L |
Plane point residual: for a point i ∈ H_{k+1} in the current frame, find the nearest point j ∈ H_k, find the nearest point l ∈ H_k on the same scan line as point j, and then find the nearest point m ∈ H_k on a scan line adjacent to point j; (j, l, m) is the associated plane, and the feature residual can be expressed as the distance from the point to the plane:

d_H = | (X_{(k+1,i)}^L - X_{(k,j)}^L) · ( (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) ) | / | (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) |
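The two distances can be evaluated directly with vector algebra; a brief sketch (assuming the associated points have already been found by nearest-neighbor search) is:

```python
import numpy as np

def edge_residual(p_i, p_j, p_l):
    """Point-to-line distance d_epsilon from p_i to the line through p_j and p_l."""
    cross = np.cross(p_i - p_j, p_i - p_l)
    return np.linalg.norm(cross) / np.linalg.norm(p_j - p_l)

def plane_residual(p_i, p_j, p_l, p_m):
    """Point-to-plane distance d_H from p_i to the plane through p_j, p_l, p_m."""
    normal = np.cross(p_j - p_l, p_j - p_m)
    return abs(np.dot(p_i - p_j, normal)) / np.linalg.norm(normal)
```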
(c) Optimization index construction
For the laser radar, the focus is on pose estimation, so for the maximum posterior probability problem the transform T = [R p] is selected as the state variable, and the state variable corresponding to the i-th scan is denoted T_i = [R_i p_i]. The optimization problem is constructed as:

min_{T_{k+1}} { Σ_{i∈ε_{k+1}} d_ε(i, T_{k+1}) + Σ_{i∈H_{k+1}} d_H(i, T_{k+1}) }

Because this optimization problem involves rotation on SO(3), it is strongly nonlinear and may have local optima, and the accuracy of the prior has a large influence on the quality of the solution. A more accurate prior is therefore needed: the result obtained from the visual-inertial odometer is used as the prior estimate of the optimization problem, which reduces the time complexity of solving it and makes the obtained result more accurate. In the visual-inertial odometer part an optimal state estimate Y* is obtained, and when the laser-inertial subsystem performs pose optimization, the most recent state variable is extracted as the prior estimate:

T̃_{k+1} = [R̃_{k+1} p̃_{k+1}]

where the prior estimate is given by the optimal state variable estimate Y* of the vision-inertial subsystem: R̃_{k+1} is the rotation matrix corresponding to the most recent quaternion q in Y*, and p̃_{k+1} is the corresponding translation p. After the coarse odometry of the vision-inertial subsystem is refined by the scan matching of the laser-inertial subsystem, a more accurate pose estimate is obtained, and the undistorted point cloud data are stitched into the point cloud map according to this result, thereby establishing the point cloud map of the environment.
For large-scale mapping and localization, loop detection is needed to ensure the global consistency of the point cloud map built by the algorithm. Since a false loop seriously degrades the mapping result, a two-stage loop detection method is adopted to ensure the validity of loop detection. First, based on the vision-inertial subsystem, loop detection is performed with the DBoW bag-of-words model; when the images at times t_n and t_m are detected to satisfy the loop constraint, the coordinate transformation T^c_{t_m t_n} corresponding to the two image frames is obtained from the localization result and the constraint relation, where the superscript c denotes the camera. Second, based on the laser-inertial subsystem, the laser point clouds at times t_n and t_m are retrieved, and the corresponding coordinate transformation T^L_{t_m t_n} is calculated with the ICP algorithm, where the superscript L denotes the laser. If the two coordinate transformations T^c_{t_m t_n} and T^L_{t_m t_n} are substantially identical, then (t_n, t_m) is considered a reliable loop; otherwise, it is not considered a reliable loop.
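One way to implement the "substantially identical" test is to compare the rotation angle and the translation difference between the two transforms against thresholds; the thresholds below are assumptions for illustration only:

```python
import numpy as np

def loops_consistent(T_cam, T_laser, rot_thresh_deg=5.0, trans_thresh_m=0.3):
    """Check whether the visual and laser loop transforms (4x4 SE(3) matrices) agree."""
    T_err = np.linalg.inv(T_cam) @ T_laser
    # Rotation error: angle of the relative rotation matrix.
    cos_angle = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    # Translation error: norm of the relative translation.
    trans_err = np.linalg.norm(T_err[:3, 3])
    return rot_err_deg < rot_thresh_deg and trans_err < trans_thresh_m
```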
In the loop optimization process, to reduce the computational cost, the invention no longer optimizes the feature points and only optimizes the poses, forming a pose graph model. According to the historical localization information, there are n-m+1 poses T_i = [R_i p_i], i = m, m+1, …, n, between t_m and t_n, and the pose increment between adjacent times, ΔT_i = T_i^{-1} T_{i+1}, can be calculated from this localization information. The loop constraint T^L_{t_m t_n} obtained from the laser is added to the pose graph optimization model, and the pose graph optimization problem is constructed as:

min_{T_m, …, T_n} { Σ_{i=m}^{n-1} || log( ΔT_i^{-1} (T_i^{-1} T_{i+1}) ) ||^2 + || log( (T^L_{t_m t_n})^{-1} (T_m^{-1} T_n) ) ||^2 }
By solving this optimization problem, the optimal solution of all the poses within the loop interval can be obtained.
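As an illustration of this pose-graph step (not the specific solver used by the invention), a small GTSAM-based sketch that adds the odometry increments and the single laser loop constraint and then re-optimizes the poses could look as follows; the key indices and noise values are assumptions:

```python
import numpy as np
import gtsam

def optimize_loop_pose_graph(poses, loop_T_mn):
    """poses: list of 4x4 odometry poses T_m..T_n; loop_T_mn: laser loop constraint from T_m to T_n."""
    graph = gtsam.NonlinearFactorGraph()
    noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.05] * 3))

    # Anchor the first pose so the problem is well constrained.
    graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(poses[0]), noise))

    # Odometry edges: relative increments between consecutive poses.
    for i in range(len(poses) - 1):
        delta = np.linalg.inv(poses[i]) @ poses[i + 1]
        graph.add(gtsam.BetweenFactorPose3(i, i + 1, gtsam.Pose3(delta), noise))

    # Laser loop-closure edge between the first and last pose of the segment.
    graph.add(gtsam.BetweenFactorPose3(0, len(poses) - 1, gtsam.Pose3(loop_T_mn), noise))

    initial = gtsam.Values()
    for i, T in enumerate(poses):
        initial.insert(i, gtsam.Pose3(T))

    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    return [result.atPose3(i).matrix() for i in range(len(poses))]
```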
Finally, the invention tests the above instant positioning and mapping method in a complex environment; the results are shown in fig. 6. It can be seen that the instant positioning and mapping method of the invention achieves a considerable effect. In general, the laser-inertial subsystem provides depth information to the vision-inertial subsystem, thereby eliminating the scale uncertainty of the monocular camera; the vision-inertial subsystem provides the prior estimate and the loop detection information to the laser-inertial subsystem, so that a more accurate pose estimate is obtained. The advantages of the two subsystems are complementary, and the accuracy and robustness of the instant positioning and mapping system are greatly improved. Compared with the method of Scheme 3 above, in the method of the invention the two independent subsystems run in parallel in different computer processes and do not update data within the same frame, which improves the efficiency of feature extraction, reduces the complexity of solving the optimization problem, ensures the real-time performance and efficiency of processing the visual and laser data simultaneously, and meets the real-time requirement of the system when a large amount of data is present.
It should be noted that the above instant positioning and mapping method is not limited to the hardware device designed by the invention: it only requires that the robot be equipped with a laser radar, a camera and an inertial measurement device, and that the positional offsets between the laser radar, the camera and the inertial measurement device be eliminated by coordinate system calibration. The handheld SLAM hardware designed by the invention is convenient to carry and lays a foundation for positioning accuracy, and its software part need not be limited to the instant positioning and mapping method designed by the invention. Of course, when the hardware device and the software framework provided by the invention are used together, they form a complete handheld instant positioning and mapping system, which solves the accuracy and portability problems of conventional SLAM devices and significantly improves operation and running compared with conventional SLAM devices.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method for instant positioning and mapping of a robot, which adopts a laser radar, a camera and an inertial measurement device for instant positioning and mapping, characterized by comprising the following steps:
using the camera and the inertial measurement device, optimally estimating a constructed state variable based on a visual SLAM technique to obtain attitude information and a visual loop of a visual-inertial odometer, wherein the constructed state variable comprises the attitude information and measurement bias provided by the inertial measurement device and the depth information of feature points provided by the laser radar; the constructed state variable Y is:

Y = [y_0, y_1, …, y_n, x_C^I, d_1, …, d_m], y_i = [p_{I_i}^W, v_{I_i}^W, q_{I_i}^W, b_a, b_w], i ∈ [0, n]

where the superscript W denotes a world coordinate system, I denotes an inertial measurement device coordinate system, and C denotes a camera coordinate system; p_{I_i}^W, v_{I_i}^W and q_{I_i}^W are the attitude information measured by the inertial measurement device, and b_a, b_w are the biases of the inertial measurement device; n is the number of key frames; x_C^I is the pose transformation from the camera to the inertial measurement device; d_l is the depth information of the l-th feature point, given by the laser radar, l ∈ [1, m], where m is the number of feature points;
the optimal estimation adopts a maximum posterior probability model; in the visual SLAM technique, the optimization solution based on the maximum posterior probability model is:

min_Y { ||r_p - H_p Y||^2 + Σ_{i∈B} ||r_I(ẑ_{I_{i+1}}^{I_i}, Y)||^2 + Σ_{(l,j)∈C} ρ(||r_C(ẑ_l^{C_j}, Y)||^2) + Σ_{(l,V)∈S} ρ(||r_C(ẑ_l^{V}, Y)||^2) }

where B is the set of all inertial measurement device measurements; C is the set of feature points observed at least twice under the current sliding window; S is the set of key frames for which loop detection occurs; (l, j) denotes that the l-th feature point is observed in the j-th frame image; (l, V) denotes that the l-th feature point is observed in the camera coordinate system V at the loop moment;

||r_p - H_p Y||^2 is the residual term corresponding to image marginalization, and r_I(·) and r_C(·) are the inertial measurement device residual term and the visual feature residual term respectively:

r_I(ẑ_{I_{i+1}}^{I_i}, Y) = [ R_W^{I_i}(p_{I_{i+1}}^W - p_{I_i}^W + (1/2) g^W Δt_i^2 - v_{I_i}^W Δt_i) - α̂_{I_{i+1}}^{I_i},  R_W^{I_i}(v_{I_{i+1}}^W + g^W Δt_i - v_{I_i}^W) - β̂_{I_{i+1}}^{I_i},  2[(q_{I_i}^W)^{-1} ⊗ q_{I_{i+1}}^W ⊗ (γ̂_{I_{i+1}}^{I_i})^{-1}]_xyz,  b_{a,i+1} - b_{a,i},  b_{w,i+1} - b_{w,i} ]^T

r_C(ẑ_l^{C_j}, Y) = [b_1, b_2]^T ( û_l^{C_j} - P_l^{C_j} / ||P_l^{C_j}|| )

where R_W^{I_i} is the rotation matrix from the coordinate system W to the coordinate system I_i, the subscript i denoting time i; Δt_i is the time difference between two inertial measurement device measurements; g^W is the gravitational acceleration; [·]_xyz extracts the vector part of the quaternion q to represent the error state; ⊗ is quaternion multiplication; α̂_{I_{i+1}}^{I_i}, β̂_{I_{i+1}}^{I_i} and γ̂_{I_{i+1}}^{I_i} are the pre-integrated IMU measurement terms computed using only the noisy accelerometer and gyroscope measurements in the time interval between two consecutive image frames; ẑ_l^{C_i} is the observed quantity corresponding to the l-th feature point first observed in the i-th frame image, and ẑ_l^{C_j} is the observation of the same feature in the j-th frame image; û_l^{C_j} is obtained from ẑ_l^{C_j} by the back-projection function, which uses the camera's intrinsic parameters to convert a pixel location into a unit vector; λ_l is the inverse depth of the l-th feature point at its first observation, and P_l^{C_j} is the position of the feature predicted in the j-th camera frame from that first observation and λ_l; the residual vector is projected onto the tangent plane, b_1, b_2 being two arbitrarily selected orthogonal bases of the tangent plane, and the covariance in the tangent plane has a fixed value;

the function ρ(·) is expressed as:

ρ(s) = s when s ≤ 1, and ρ(s) = 2√s - 1 when s > 1;
using the laser radar and the inertial measurement device, and taking the attitude information and the visual loop of the visual-inertial odometer as prior information, obtaining attitude information of a laser-inertial odometer based on a laser SLAM technique; and establishing a point cloud map of the environment based on the attitude information of the laser-inertial odometer.
2. The method for instant positioning and mapping of a robot according to claim 1, wherein the feature points are obtained by using a KLT sparse optical flow algorithm.
3. The method for instant positioning and mapping of a robot according to claim 1, wherein, in the visual SLAM technique, a DBoW bag-of-words model is used for loop detection.
4. The method for instant positioning and mapping of a robot according to claim 1, wherein the method for acquiring the attitude information of the laser-inertial odometer comprises the following steps:
de-distorting the laser point cloud based on the attitude information measured by the inertial measurement device;
classifying the point cloud data in the de-distorted laser point cloud by feature into plane points and edge points, and calculating the feature residuals of the plane points and the edge points respectively;
optimizing the feature residuals of the plane points and the edge points, with the attitude information and the visual loop of the visual-inertial odometer as prior information, to obtain the optimal attitude information of the laser-inertial odometer.
5. The method for instant positioning and mapping of a robot according to claim 1 or 4, wherein, in the optimization of the feature residuals of the plane points and the edge points, when the visual loop of the visual-inertial odometer is consistent with the laser loop of the laser-inertial odometer, the visual loop is considered a reliable loop; otherwise, it is not a reliable loop.
6. The method for instant positioning and mapping of a robot according to claim 1 or 4, wherein the laser loop of the laser-inertial odometer is calculated based on an ICP algorithm.
7. The method for instant positioning and mapping of a robot according to claim 4, wherein, when the point cloud data is classified by feature, the smoothness of the point cloud data on each laser radar line is calculated according to the following curvature formula:

c = ( 1 / (|S| · ||X_{(k,i)}^L||) ) · || Σ_{j∈S, j≠i} ( X_{(k,i)}^L - X_{(k,j)}^L ) ||

where X_{(k,i)}^L is the i-th point of the k-th scan in the laser coordinate system L, and S is the set formed by all points in the neighborhood of point i in that scan; the points with the largest smoothness are extracted as edge points, and the points with the smallest smoothness are extracted as plane points.
8. The method for instant positioning and mapping of a robot according to claim 7, wherein the feature residual of the plane points is:

d_H = | (X_{(k+1,i)}^L - X_{(k,j)}^L) · ( (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) ) | / | (X_{(k,j)}^L - X_{(k,l)}^L) × (X_{(k,j)}^L - X_{(k,m)}^L) |

and the feature residual of the edge points is:

d_ε = | (X_{(k+1,i)}^L - X_{(k,j)}^L) × (X_{(k+1,i)}^L - X_{(k,l)}^L) | / | X_{(k,j)}^L - X_{(k,l)}^L |

where i is the feature point in the current frame and j, l, m are its associated points found in the previous scan.
9. A handheld SLAM device, comprising: the device comprises a laser radar (1), an inertial measurement device (2), a camera (3), an onboard computer (4), a mobile power supply (5), a display screen (6), a bracket and a handle; wherein the bracket has an upper-lower double-layer structure, and is narrow at the upper part and wide at the lower part; the laser radar (1) is fixed on the upper layer of the bracket; the camera (3) is fixed at the lower layer of the bracket and faces forward; the inertial measurement device (2) is fixed at the lower layer of the bracket, is positioned right below the laser radar and is positioned on the same horizontal line with the camera (3); the airborne computer (4) and the mobile power supply (5) are positioned at the lower layer of the bracket and are respectively positioned at the left side and the right side of the inertial measurement device (2); the display screen (6) is fixed at the rear side of the bracket; the airborne computer (4) is connected with the camera (3), the inertia measuring device (2), the laser radar (1) and the display screen (6); the inertial measurement device (2), the display screen (6) and the camera (3) are powered by an onboard computer (4);
The on-board computer (4) performs real-time positioning and mapping by adopting the method as set forth in any one of claims 1 to 8.
10. The handheld SLAM device of claim 9, further comprising a distributor plate and a constant voltage module; the power distribution board divides the output current of the mobile power supply (5) into two paths, one path is connected with the laser radar (1), and the other path is connected with the airborne computer (4) through the constant voltage module.
11. The handheld SLAM device of claim 9, wherein the handle is detachable.
12. The hand-held SLAM device of claim 9, wherein the onboard computer (4) enables synchronization of the camera (3), inertial measurement unit (2), lidar (1) based on ROS time stamps.
CN202210160734.0A 2022-02-22 2022-02-22 Handheld SLAM equipment and robot instant positioning and mapping method Active CN114608554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210160734.0A CN114608554B (en) 2022-02-22 2022-02-22 Handheld SLAM equipment and robot instant positioning and mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210160734.0A CN114608554B (en) 2022-02-22 2022-02-22 Handheld SLAM equipment and robot instant positioning and mapping method

Publications (2)

Publication Number Publication Date
CN114608554A CN114608554A (en) 2022-06-10
CN114608554B (en) 2024-05-03

Family

ID=81859728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210160734.0A Active CN114608554B (en) 2022-02-22 2022-02-22 Handheld SLAM equipment and robot instant positioning and mapping method

Country Status (1)

Country Link
CN (1) CN114608554B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI822423B (en) * 2022-07-22 2023-11-11 杜宇威 Computing apparatus and model generation method
CN115290084B (en) * 2022-08-04 2024-04-19 中国人民解放军国防科技大学 Visual inertial combined positioning method and device based on weak scale supervision
CN116184430B (en) * 2023-02-21 2023-09-29 合肥泰瑞数创科技有限公司 Pose estimation algorithm fused by laser radar, visible light camera and inertial measurement unit

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107179086A (en) * 2017-05-24 2017-09-19 北京数字绿土科技有限公司 A kind of drafting method based on laser radar, apparatus and system
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN108827306A (en) * 2018-05-31 2018-11-16 北京林业大学 A kind of unmanned plane SLAM navigation methods and systems based on Multi-sensor Fusion
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN111932674A (en) * 2020-06-30 2020-11-13 博雅工道(北京)机器人科技有限公司 Optimization method of line laser vision inertial system
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources
WO2021249387A1 (en) * 2020-06-08 2021-12-16 杭州海康机器人技术有限公司 Visual map construction method and mobile robot

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107179086A (en) * 2017-05-24 2017-09-19 北京数字绿土科技有限公司 A kind of drafting method based on laser radar, apparatus and system
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
CN108827306A (en) * 2018-05-31 2018-11-16 北京林业大学 A kind of unmanned plane SLAM navigation methods and systems based on Multi-sensor Fusion
CN110261870A (en) * 2019-04-15 2019-09-20 浙江工业大学 It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
WO2021249387A1 (en) * 2020-06-08 2021-12-16 杭州海康机器人技术有限公司 Visual map construction method and mobile robot
CN111932674A (en) * 2020-06-30 2020-11-13 博雅工道(北京)机器人科技有限公司 Optimization method of line laser vision inertial system
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112525202A (en) * 2020-12-21 2021-03-19 北京工商大学 SLAM positioning and navigation method and system based on multi-sensor fusion
CN113432600A (en) * 2021-06-09 2021-09-24 北京科技大学 Robot instant positioning and map construction method and system based on multiple information sources

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-time SLAM algorithm using an inertial measurement unit to determine key frames; 卫文乐 et al.; 计算机应用 (Journal of Computer Applications); Vol. 40, No. 4; full text *
Localization and mapping method fusing laser and visual point cloud information; 张伟伟 et al.; 计算机应用与软件 (Computer Applications and Software); Vol. 37, No. 7; full text *

Also Published As

Publication number Publication date
CN114608554A (en) 2022-06-10

Similar Documents

Publication Publication Date Title
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN114608554B (en) Handheld SLAM equipment and robot instant positioning and mapping method
CN106017463B (en) A kind of Aerial vehicle position method based on orientation sensing device
Peng et al. Pose measurement and motion estimation of space non-cooperative targets based on laser radar and stereo-vision fusion
CN108731672B (en) Coal mining machine attitude detection system and method based on binocular vision and inertial navigation
Johnson et al. Precise image-based motion estimation for autonomous small body exploration
CN101598556B (en) Unmanned aerial vehicle vision/inertia integrated navigation method in unknown environment
Tzschichholz et al. Relative pose estimation of satellites using PMD-/CCD-sensor data fusion
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN111707261A (en) High-speed sensing and positioning method for micro unmanned aerial vehicle
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN113052908A (en) Mobile robot pose estimation method based on multi-sensor data fusion
CN113551665B (en) High-dynamic motion state sensing system and sensing method for motion carrier
CN114234967B (en) Six-foot robot positioning method based on multi-sensor fusion
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
CN111665512A (en) Range finding and mapping based on fusion of 3D lidar and inertial measurement unit
CN110929402A (en) Probabilistic terrain estimation method based on uncertain analysis
CN115371665A (en) Mobile robot positioning method based on depth camera and inertia fusion
CN115574816A (en) Bionic vision multi-source information intelligent perception unmanned platform
Hinzmann et al. Flexible stereo: constrained, non-rigid, wide-baseline stereo vision for fixed-wing aerial platforms
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
CN115218889A (en) Multi-sensor indoor positioning method based on dotted line feature fusion
CN112580683A (en) Multi-sensor data time alignment system and method based on cross correlation
CN118189931A (en) Split SLAM equipment and multi-sensor fusion positioning method
CN117710476A (en) Monocular vision-based unmanned aerial vehicle pose estimation and dense mapping method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant