CN114723920A - Point cloud map-based visual positioning method - Google Patents
Point cloud map-based visual positioning method
- Publication number: CN114723920A
- Application number: CN202210455895.2A
- Authority
- CN
- China
- Prior art keywords
- pose
- visual
- imu
- point cloud
- map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T17/20 — Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tesselation
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10028 — Range image; depth image; 3D point clouds
- G06T2207/30244 — Camera pose
Abstract
The invention provides a visual positioning method based on a point cloud map, comprising a point cloud map generation module, a visual-inertial odometry construction module, and a map-based visual matching positioning module. The point cloud map module builds a high-precision point cloud map by fusing laser, IMU (inertial measurement unit), and GPS (global positioning system) information. The visual-inertial odometry construction module first extracts visual feature points, tracks them between consecutive frames with an optical flow method, pre-integrates the IMU measurements, and fuses the pre-integration with the visual feature points to construct a visual-inertial odometry that outputs an initial pose for each frame and recovers the depth of the feature points. The map-based visual matching positioning module extracts a sub-map from the point cloud map according to the initial position, projects the visually recovered feature points into the 3D map coordinate system, and queries the closest points in the current sub-map. For the closest-point matches between vision and map, the pose of the current frame is optimized with a RANSAC algorithm based on dual quaternions.
Description
Technical Field
The invention relates to the field of robot positioning, in particular to a visual positioning method based on a point cloud map.
Background
With the rapid development of intelligent chips, 5G communication, and artificial intelligence, intelligent robots have come into wide use in many environments, such as industrial transport robots, commercial food-delivery robots, and household sweeping robots. Positioning is the core of robot technology: accurate positioning is the most basic and important capability of a robot, and the positioning module is the basis for subsequent tasks such as perception, planning, and grasping.
Robot positioning refers to computing the robot's current position in the environment from its on-board sensors, such as cameras, laser radar (lidar), IMUs, wheel odometers, and millimeter-wave radar. Sensors such as wheel odometers and IMUs cannot provide accurate standalone positioning because of device limitations. Two positioning schemes are currently in common use: laser-based and vision-based. Laser-based positioning is mature and generally more accurate than vision, but lidar is expensive and has a limited number of scan lines in the vertical direction, so it obtains less information, which limits positioning accuracy. Vision-based positioning is less stable than laser-based positioning and less robust in dynamic environments with sparse feature texture and large scene changes.
From the above, the following problems remain in the field of robot positioning: 1) vision-based schemes are sensitive to scene changes, have limited positioning accuracy, and cannot recover metric scale from a monocular camera alone; 2) laser-based schemes are costly and hard to deploy at scale, and fusing vision and laser, while improving accuracy, has disadvantages in cost and system resource usage. To address these problems, visual-inertial positioning based on a point cloud map is proposed: given a known point cloud map, only a camera and an IMU are used, which improves positioning accuracy while reducing positioning cost.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to solve the problems of insufficient accuracy and high cost in current positioning schemes by providing a visual-inertial positioning method based on a known map. A high-precision map is built by fusing laser, IMU (inertial measurement unit), and GPS (global positioning system); the map is matched against the feature points recovered by the visual-inertial odometry, and the robot pose is optimized, which reduces hardware cost, improves positioning accuracy, and ensures the robustness of the system.
In order to solve the technical problem, the invention discloses a visual inertial positioning method based on a point cloud map. The point cloud map building module provides a map required by positioning, and the visual inertial odometer building module provides an initial pose for the positioning module.
The method specifically comprises the following steps: establishing a point cloud map construction module, a visual inertial odometer construction module and a visual matching positioning module;
the point cloud map building module is used for fusing an Inertial Measurement Unit (IMU), laser and a Global Positioning System (GPS) to realize multi-sensor fusion laser map building;
the visual inertial odometer construction module provides an initial pose based on visual positioning of a point cloud map, and solves the pose and feature point depth of each frame of image by fusing a monocular vision and Inertial Measurement Unit (IMU);
the visual matching positioning module constructs a kd-tree (k-dimensional tree, where k is the dimensionality of the data) over the map points, based on the point cloud and visual information of different modalities; after the map points are matched with the feature points projected into the map coordinate system, positioning is performed with a particle filter based on dual quaternions.
The point cloud map building module specifically executes the following steps:
step 1-1, taking the laser as the time reference, align the inertial measurement unit IMU and GPS data to the laser, and de-skew each frame of laser point cloud using a 100 Hz inertial measurement unit IMU;
step 1-2, carrying out inertial solution on an inertial measurement unit IMU, and taking a solution result as prediction update of Kalman filtering;
step 1-3, for the laser point clouds of two adjacent frames, adopt the iterative closest point (ICP) algorithm to calculate the relative pose as the initial pose for the subsequent observation update of the Kalman filter;
1-4, performing Kalman filtering prediction updating;
step 1-5, when a new laser point cloud frame exists, integrating robot motion constraints and carrying out observation updating;
step 1-6, taking the current posterior pose as the next prior, and continuously and iteratively solving a new posterior pose until the pose changes of the last two times do not exceed a first threshold value, namely the pose is converged;
step 1-7, one keyframe is taken every three frames; when the distance between the pose of the current frame and the previous keyframe exceeds a second threshold, the current frame is set as a keyframe and added to graph optimization as an optimization node. The graph optimization takes the poses of the most recent N1 (typically 10) keyframes as optimization variables, the pixel feature points observed in the keyframes as edges of the graph, and the reprojection error of the feature points as the cost of the graph, which is minimized by adjusting the poses of the nodes in the graph; graph optimization itself is prior art;
steps 1-8, when any of the following conditions is met — the current accumulated number of keyframes exceeds X1 (typically 50) frames, the accumulated GPS data exceeds X2 (typically 100) frames, or the number of loop-closure detections reaches X3 (typically 10) — perform one round of graph optimization;
and steps 1-9, for each keyframe, convert the keyframe point cloud into the map coordinate system, take the pose of the first point cloud frame as the map origin, and filter the map to obtain the point cloud map. (A point cloud is a cluster of three-dimensional points; the point cloud map is composed of the accumulated frames of laser point clouds.)
In step 1-2, the inertial solution is performed with the following formulas:

$\hat{R}_k^- = \hat{R}_{k-1}^+\exp\big((\omega_k\,\delta t)^{\wedge}\big)$

$\hat{v}_k^- = \hat{v}_{k-1}^+ + \big(\hat{R}_{k-1}^+(a_k - b_k) - g\big)\,\delta t$

$\hat{p}_k^- = \hat{p}_{k-1}^+ + \hat{v}_{k-1}^+\,\delta t + \tfrac{1}{2}\big(\hat{R}_{k-1}^+(a_k - b_k) - g\big)\,\delta t^2$

where $\hat{R}_k^-$ is the prior rotation of the inertial measurement unit IMU at time k; $\hat{R}_{k-1}^+$ is the posterior rotation at time k-1, the posterior rotation at time 0 being the identity matrix; $\phi = \omega_k\,\delta t$ is the rotation vector of the IMU; the superscript $\wedge$ denotes the skew-symmetric matrix of a vector; $\hat{v}_k^-$ is the prior velocity at time k; $\hat{v}_{k-1}^+$ is the posterior velocity at time k-1, the posterior velocity at time 0 being 0; $a_k$ is the acceleration measured by the IMU at time k; $b_k$ is the IMU bias at time k; $g$ is the gravity vector; $\delta t$ is the IMU interval time; $\hat{p}_k^-$ is the prior position of the IMU at time k; $\hat{p}_{k-1}^+$ is the posterior position of the IMU at time k-1; and $\omega_k$ is the IMU angular velocity. These formulas compute the prior values of rotation, velocity, and displacement from time k-1 to time k.
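As a concrete illustration of the propagation step above, the following sketch (a minimal example under stated assumptions, not the patent's implementation; the helper names and numerical values are illustrative) propagates rotation, velocity, and position through one IMU sample:

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix v^ such that skew(v) @ u == np.cross(v, u)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    # Rodrigues' formula: rotation matrix from a rotation vector phi
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    a = phi / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def imu_propagate(R, v, p, omega, acc, bias, g, dt):
    # One prediction step: prior rotation, velocity, and position at time k
    # from the posteriors at time k-1 and the IMU sample (omega, acc).
    a_world = R @ (acc - bias) - g          # bias-corrected, gravity-compensated
    R_new = R @ so3_exp(omega * dt)         # phi = omega * dt
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt * dt
    return R_new, v_new, p_new
```

A stationary IMU that measures only the gravity reaction stays put, which is a quick sanity check of the sign conventions.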
The steps 1-4 comprise calculating the prior pose with the following formulas:

$\delta\hat{x}_k^- = F_{k-1}\,\delta\hat{x}_{k-1}^+$

$\hat{P}_k^- = F_{k-1}\hat{P}_{k-1}^+F_{k-1}^{T} + B_{k-1}Q_kB_{k-1}^{T}$

where $\hat{x}_k$ is the state at time k, comprising velocity, position, and IMU bias; $\hat{P}_k^-$ is the prior covariance of the state; $\hat{P}_{k-1}^+$ is the posterior covariance of the state at time k-1; $Q_k$ is the system noise at time k; $F_{k-1}$ and $B_{k-1}$ are coefficient (Jacobian) matrices, whose blocks are composed of $I_3$, the three-dimensional identity matrix, the IMU integration period $T$, and the rotation matrix $R_{k-1}$ at time k-1; and $F_{k-1}^{T}$ is the transpose of $F_{k-1}$.
The steps 1-5 comprise:

The observed quantity $y$ includes the position deviation $\delta p$, the velocity deviations in the y and z directions $[\delta v_b]_{yz}$, and the misalignment angle $\delta\theta$; the observation equation is

$y = G_k\,\delta x + C_k\,n$

where $n$ is the IMU observation noise and $\delta x$ is the current error-state quantity, including the velocity deviation, position deviation, and IMU bias; $G_k$ and $C_k$ are equation coefficient matrices, in which $[R_{bw}]_{yz}$ denotes the rotation rows for the y- and z-axis directions, $[v]_{yz}$ the velocities in the y- and z-axis directions, and $I_2$ the second-order identity matrix; the components of $n$ are the position noise in the x, y, and z directions, the velocity noise in the y and z directions, and the angle noise in the x, y, and z directions. The measurement update is then added to calculate the posterior pose:

$K_k = \hat{P}_k^-G_k^{T}\big(G_k\hat{P}_k^-G_k^{T} + C_kR_nC_k^{T}\big)^{-1}$

$\delta\hat{x}_k^+ = \delta\hat{x}_k^- + K_k\big(y_k - G_k\,\delta\hat{x}_k^-\big)$

$\hat{P}_k^+ = (I - K_kG_k)\hat{P}_k^-$

where $K_k$ is the Kalman gain, $\hat{P}_k^+$ is the posterior covariance, $I$ is the identity matrix, $\delta\hat{x}_k^+$ is the posterior error state, $y_k$ is the observed quantity at time k, and $R_n$ is the covariance of the observation noise $n$. The posterior pose is then updated from the posterior state quantities:

$\hat{p}_k^+ = \hat{p}_k^- - \delta p,\quad \hat{v}_k^+ = \hat{v}_k^- - \delta v,\quad \hat{R}_k^+ = \hat{R}_k^-\exp\big(-\delta\theta^{\wedge}\big),\quad \hat{b}_{a,k}^+ = \hat{b}_{a,k}^- - \delta b_a,\quad \hat{b}_{g,k}^+ = \hat{b}_{g,k}^- - \delta b_g$

where $\hat{p}^+$, $\hat{v}^+$, $\hat{R}^+$, $\hat{b}_a^+$, and $\hat{b}_g^+$ are the posterior position, velocity, rotation, accelerometer bias, and gyroscope bias; $\hat{p}^-$, $\hat{v}^-$, $\hat{R}^-$, $\hat{b}_a^-$, and $\hat{b}_g^-$ are the corresponding priors; and $\delta p$, $\delta v$, $\delta\theta$, $\delta b_a$, and $\delta b_g$ are the estimated position, velocity, angle, accelerometer-bias, and gyroscope-bias errors.
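A minimal sketch of the measurement update in the standard Kalman-filter form used above (the helper name `ekf_update` and the noise covariance `Rn` are assumptions for illustration):

```python
import numpy as np

def ekf_update(dx_prior, P_prior, y, G, C, Rn):
    # Error-state measurement update for the observation model y = G dx + C n.
    S = G @ P_prior @ G.T + C @ Rn @ C.T     # innovation covariance
    K = P_prior @ G.T @ np.linalg.inv(S)     # Kalman gain
    dx_post = dx_prior + K @ (y - G @ dx_prior)
    P_post = (np.eye(P_prior.shape[0]) - K @ G) @ P_prior
    return dx_post, P_post
```

With a near-noiseless direct observation, the posterior state is pulled almost entirely to the measurement and the covariance shrinks, which is the expected behavior of the update.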
The visual inertial odometer construction module specifically executes the following steps:
step 2-1, preprocessing stage: extract Harris corner points (a visual feature detection method named after Harris) from each camera frame, track the feature points between adjacent frames with Lucas-Kanade optical flow (a feature tracking method named after Lucas and Kanade), remove abnormal tracking points with RANSAC (random sample consensus), and at the same time pre-integrate the IMU (inertial measurement unit) data to obtain the position, velocity, and rotation at the current moment;
step 2-2, initialization stage: use vision alone to recover the poses and feature-point depths of the first several frames (typically 5-10), and finally align them with the IMU pre-integration to resolve the metric scale;
step 2-3, back-end optimization stage: jointly perform nonlinear optimization on the inertial measurement unit IMU error and the visual error, where the IMU error is the deviation between the actual measurements and the quantities to be optimized (position, velocity, rotation, and IMU parameters) and the visual error is the feature-point reprojection error. Optimize the position, velocity, rotation, IMU parameters, and feature-point positions of each frame, output the initial value of the current pose, and output continuous poses and visual features as the visual-inertial odometry.
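The IMU pre-integration mentioned in step 2-1 accumulates relative motion between two camera frames independently of the absolute start state. A simplified Euler-integration sketch (ignoring noise-covariance propagation and bias Jacobians, which a full implementation such as VINS-Mono would also track):

```python
import numpy as np

def skew(v):
    # Skew-symmetric matrix of a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    # Rodrigues' formula
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3) + skew(phi)
    a = phi / theta
    A = skew(a)
    return np.eye(3) + np.sin(theta) * A + (1.0 - np.cos(theta)) * (A @ A)

def preintegrate(imu_samples, bias_a, bias_g, dt):
    # Accumulate the relative rotation/velocity/position deltas between two
    # camera frames from raw IMU samples; gravity is compensated later, when
    # the deltas are fused with the visual estimate.
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for omega, acc in imu_samples:
        a = dR @ (acc - bias_a)
        dp = dp + dv * dt + 0.5 * a * dt * dt
        dv = dv + a * dt
        dR = dR @ so3_exp((omega - bias_g) * dt)
    return dR, dv, dp
```

For a constant 1 m/s² acceleration over 1 s, the deltas reduce to the familiar kinematics dv = at and dp = ½at².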
The visual matching positioning module specifically executes the following steps:
step 3-1, preprocess the point cloud map: before matching, perform breadth-first-search (BFS) clustering on the point cloud map and filter out clusters containing fewer than X4 points; divide the whole point cloud map into rectangular grids of 40 m × 40 m, and construct a sub-map of 5 × 3 grids centered on the grid containing the current position;
step 3-2, estimating the depth of the feature points of the current camera visual frame in a visual inertial odometer, projecting the feature points to a map coordinate system according to the pose of the current frame, and searching the nearest map point in the current sub-image by using a kd-tree algorithm as a matching corresponding point;
step 3-3, let N be the number of correspondence pairs between the visual feature points and the map points from step 3-2; randomly permute the N pairs and divide them into N/8 groups of eight pairs each;
step 3-4, represent the pose of the current frame with the dual quaternion $\hat{q} = q_r + \epsilon\,q_t$, where $q_r$ is the rotation quaternion, $q_t$ is the quaternion representing the translation, and $\epsilon$ is the dual unit, satisfying $\epsilon^2 = 0$;
let X5 be the proportion of correct matching point pairs from step 3-2; taking the groups of 8 point pairs from step 3-3 as inlier sets, estimate the current pose with the RANSAC algorithm;
and step 3-5, project the visual feature points into the map coordinate system with the current pose, query the nearest point pairs, and repeat step 3-4 until the rotation of the newly estimated pose changes by no more than a third threshold, the translation by no more than a fourth threshold, or the number of iterations exceeds a fifth threshold; output the final pose as the positioning result.
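The dual-quaternion pose representation of step 3-4 can be illustrated as follows (a pure-Python sketch; quaternions are (w, x, y, z) tuples, and the real/dual split follows the standard construction $q_t = \tfrac{1}{2}\,t \otimes q_r$, which the patent does not spell out):

```python
def qmul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z) tuples
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def dual_quaternion(q_r, t):
    # Pose as dual quaternion q_r + eps * q_t with q_t = 0.5 * t (x) q_r,
    # where the translation t is embedded as the pure quaternion (0, tx, ty, tz).
    q_t = tuple(0.5 * c for c in qmul((0.0, *t), q_r))
    return q_r, q_t

def dq_translation(q_r, q_t):
    # Recover the translation: t = 2 * q_t (x) conj(q_r)
    w, x, y, z = q_r
    conj = (w, -x, -y, -z)
    return qmul(tuple(2.0 * c for c in q_t), conj)[1:]
```

The round trip recovers the translation exactly for any unit rotation quaternion, since $2\cdot\tfrac{1}{2}\,t\,q_r\,q_r^{*} = t$.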
Beneficial effects: 1. The invention provides a high-precision point cloud map construction method, providing a foundation for robot positioning and three-dimensional reconstruction. 2. The invention provides a visual-inertial positioning method based on a point cloud map, which alleviates the pose-drift problem of visual and laser positioning schemes. 3. The invention provides a low-cost positioning scheme based only on a point cloud map, a camera, and an IMU, and the map can be reused. 4. The invention provides a method for matching vision against a point cloud map, which improves positioning accuracy.
Drawings
The foregoing and/or other advantages of the invention will become further apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic overall flow chart of a visual inertial positioning system based on a point cloud map.
FIG. 2 is a schematic diagram of a process of building a point cloud map by multi-sensor fusion.
Fig. 3 is a schematic flow diagram of a monocular visual inertial odometer.
Fig. 4 is a schematic diagram of a positioning process based on a known point cloud map.
Fig. 5 is a schematic diagram of the experimental result of visual inertial positioning based on a point cloud map.
Detailed Description
Fig. 1 shows the overall process of the visual-inertial positioning system based on a point cloud map, and fig. 4 shows the positioning process based on a known point cloud map. The system comprises a point cloud map generation module, a visual-inertial odometry construction module, and a visual matching positioning module based on the existing point cloud map.
FIG. 2 is a functional schematic diagram of a point cloud map generation module, which integrates IMU, laser and GPS to realize multi-sensor fusion laser mapping, and the specific implementation steps include:
Step 1-1: because the laser, IMU, and GPS data carry different timestamps, the IMU and GPS data are aligned to the laser, with the laser as the time reference. For each frame of laser point cloud, a high-frequency IMU is used for distortion removal.
Step 1-2: perform the inertial solution on the IMU and use the result as the prediction update of the Kalman filter. The inertial solution computes the prior values of rotation, velocity, and displacement from time k-1 to time k, using the formulas given in step 1-2 of the Disclosure above.
Step 1-3: for two adjacent frames of laser point clouds, compute the relative pose with ICP-SVD (iterative closest point solved in closed form via singular value decomposition) as the initial pose for the subsequent observation update of the Kalman filter.
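The closed-form core of ICP-SVD is the well-known Kabsch/Umeyama alignment; a sketch for already-matched point pairs (a real ICP loop alternates this with nearest-neighbor association):

```python
import numpy as np

def icp_svd_align(src, dst):
    # Closed-form rigid transform (R, t) mapping matched source points onto
    # destination points, via SVD of the cross-covariance matrix.
    mu_s = src.mean(axis=0)
    mu_d = dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Applied to a point set and a rigidly transformed copy of it, the function recovers the transform exactly.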
Step 1-4: perform the Kalman-filter prediction update and compute the prior pose, using the prediction formulas given in steps 1-4 of the Disclosure above.
and 1-5, fusing robot motion constraints to perform observation updating when a new laser point cloud frame exists. Observations were position, offset angle, and velocity in the y-and z-axes:
the observation equation:
y=Gtδx+Ctn
wherein the content of the first and second substances,
adding measurement update, and calculating the posterior pose:
and finally, updating the posterior pose according to the posterior state quantity:
and 1-6, taking the current posterior pose as the next prior, and continuously and iteratively solving a new posterior pose until the pose changes of the last two times do not exceed the threshold values of 0.1 degree and 0.05m, namely the pose is converged.
Step 1-7: when the distance between the pose of the current frame and the previous keyframe exceeds 2.5 m, set the current frame as a keyframe and add it to graph optimization as an optimization node.
Step 1-8: when any of the following conditions is met — the current accumulated number of keyframes exceeds 50, the accumulated GPS data exceeds 100 frames, or the number of loop-closure detections reaches 10 — the system performs one round of graph optimization.
Step 1-9: for each keyframe, convert the frame point cloud into the map coordinate system, take the pose of the first laser point cloud frame as the map origin, and filter the map.
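The map filtering in step 1-9 is typically a voxel-grid downsampling; a minimal sketch (the 0.2 m voxel size is an illustrative assumption, not a value from the patent):

```python
from collections import defaultdict

def voxel_filter(points, voxel=0.2):
    # Voxel-grid downsampling: bucket points by voxel index and keep the
    # centroid of each occupied voxel, thinning dense regions uniformly.
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        cells[key].append(p)
    return [tuple(sum(c) / len(ps) for c in zip(*ps)) for ps in cells.values()]
```

Points that fall into the same voxel collapse to a single centroid, so the filtered map size is bounded by the number of occupied voxels rather than the number of raw laser returns.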
FIG. 3 is a functional schematic diagram of the visual-inertial odometry construction module, which provides the initial pose for the point-cloud-map-based visual positioning and solves the pose and feature-point depth of each image frame by fusing monocular vision and the IMU. The specific implementation steps are as follows:
and 2-1, in a preprocessing stage, extracting Harris angular points in each frame, tracking feature points of adjacent frames by using LK optical flow, and removing abnormal tracking points by using RANSAC. And meanwhile, performing pre-integration on IMU data to obtain the position, the speed and the rotation amount at the current moment.
Step 2-2, initialization stage: recover the poses and feature-point depths of the first several frames using vision, and finally align them with the IMU pre-integration to resolve the metric scale.
Step 2-3, back-end optimization stage: jointly perform nonlinear optimization on the IMU constraints and the visual constraints, optimizing the position, velocity, rotation, and IMU parameters of each frame, and output the initial value of the current pose.
FIG. 4 is a functional schematic diagram of the visual matching positioning module based on the point cloud map. The module constructs a kd-tree over the map points based on the point cloud and visual information of different modalities, and after matching with the feature points projected into the map coordinate system, performs positioning with a particle filter based on dual quaternions. The process comprises:
and 3-1, preprocessing the point cloud map, wherein discrete points in the map can influence the characteristic point matching effect, so that BFS clustering is carried out on the point cloud map before matching, point cloud clusters with the number less than 30 are screened out, and the distance threshold value of the same type is 0.3 m. And dividing the whole point cloud map into a plurality of grids of cuboids, wherein the size of each cuboid is 40m × 40m, and constructing a subgraph by taking the grid where the current position is located as the center, wherein the subgraph is 5 × 3.
Step 3-2: estimate the depths of the feature points of the current frame in the visual odometry, project the feature points into the map coordinate system according to the pose of the current frame, and search the nearest map point in the current sub-map as the corresponding point using a kd-tree.
Step 3-3: assuming the number of correspondences between visual feature points and map points from step 3-2 is N, randomly permute the N pairs and divide them into N/8 groups of eight pairs each.
Step 3-4: represent the pose of the current frame with the dual quaternion $q_r + \epsilon\,q_t$, where $q_r$ is the rotation quaternion and $q_t$ is the quaternion representing the translation. Assuming the proportion of correct matching point pairs from step 3-2 is 0.6, the groups of 8 point pairs from step 3-3 are taken as inlier sets, and the RANSAC algorithm is used to estimate the current pose.
Step 3-5: project the visual feature points into the map coordinate system with the current pose, query the nearest point pairs, and repeat steps 3-3 and 3-4, increasing the assumed proportion of correct matches by 0.05 at each repetition, until the rotation of the newly estimated pose changes by no more than 0.5°, the translation by no more than 0.05 m, or the number of iterations exceeds 10. The final pose is output as the positioning result. The positioning result is shown in Fig. 5: purple is the positioning trajectory, red the visual feature points, light green the current sub-map point cloud, and yellow the surrounding global point cloud map (the figure is reproduced in grayscale, so the colors are described here).
The present invention provides a visual positioning method based on a point cloud map; there are many methods and ways to implement this technical solution, and the above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make a number of improvements and refinements without departing from the principle of the present invention, and these should also be regarded as within the protection scope of the present invention. All components not specified in this embodiment can be realized with the prior art.
Claims (7)
1. A visual positioning method based on a point cloud map is characterized by comprising the following steps: establishing a point cloud map construction module, a visual inertial odometer construction module and a visual matching positioning module;
the point cloud map building module is used for fusing an inertial measurement unit IMU, laser and a GPS to realize multi-sensor fusion laser map building;
the visual inertial odometer construction module provides an initial pose based on visual positioning of a point cloud map, and solves the pose and feature point depth of each frame of image by fusing a monocular vision and Inertial Measurement Unit (IMU);
the visual matching positioning module constructs a kd-tree over the map points based on the point cloud and visual information of different modalities, and after matching with the feature points projected into the map coordinate system, performs positioning with a particle filter based on dual quaternions.
2. The method of claim 1, wherein the point cloud map construction module performs the following steps:
step 1-1, aligning inertial measurement unit IMU and GPS data relative to laser by taking the laser as a reference, and performing distortion removal processing on each frame of laser point cloud by using the inertial measurement unit IMU;
step 1-2, carrying out inertial solution on an Inertial Measurement Unit (IMU), wherein a solution result is used as prediction update of Kalman filtering;
step 1-3, for the laser point clouds of two adjacent frames, adopting the iterative closest point (ICP) algorithm to calculate the relative pose as the initial pose for the subsequent observation update of the Kalman filter;
1-4, performing Kalman filtering prediction updating;
step 1-5, when a new laser point cloud frame exists, fusing robot motion constraints and carrying out observation updating;
step 1-6, taking the current posterior pose as the next prior, and continuously and iteratively solving a new posterior pose until the pose changes of the last two times do not exceed a first threshold value, namely the pose is converged;
step 1-7, one keyframe is taken every three frames; when the distance between the pose of the current frame and the previous keyframe exceeds a second threshold, the current frame is set as a keyframe and added to graph optimization as an optimization node, the graph optimization taking the poses of the most recent N1 keyframes as optimization variables, the pixel feature points observed in the keyframes as edges of the graph, and the reprojection error of the feature points as the cost of the graph, which is minimized by adjusting the poses of the nodes in the graph;
1-8, when the following conditions are met: the current cumulative number of key frames exceeds X1Frames or accumulated GPS data exceeding X2The number of frame, or loop, detections reaches X3Secondly, carrying out primary graph optimization;
and 1-9, for the key frame, converting the key frame point cloud into a map coordinate system, taking the pose of the first frame point cloud as a map origin, and filtering the map to obtain a point cloud map.
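The key-frame rule of steps 1-7 can be sketched as follows; this is a minimal illustration in which `dist_thresh` and `every_n` are hypothetical placeholder values, not the claimed second threshold or frame period:

```python
import numpy as np

def select_keyframes(poses, dist_thresh=1.0, every_n=3):
    """Key-frame selection sketch: a frame becomes a key frame either
    periodically (every_n) or when its position has moved more than
    dist_thresh from the last key frame. Thresholds are illustrative."""
    keys = [0]  # the first frame is always a key frame
    for i in range(1, len(poses)):
        moved = np.linalg.norm(poses[i] - poses[keys[-1]])
        if i % every_n == 0 or moved > dist_thresh:
            keys.append(i)
    return keys
```

In a full system each selected index would become a graph-optimization node; here only the selection logic is shown.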
3. The method according to claim 2, wherein in step 1-2, the following formula is used for the inertial solution:
R_k⁻ = R_{k-1}⁺ · exp(φ^∧),  φ = ω_k · δt
v_k⁻ = v_{k-1}⁺ + (R_{k-1}⁺ (a_k − b_k) + g) · δt
p_k⁻ = p_{k-1}⁺ + v_{k-1}⁺ · δt + ½ (R_{k-1}⁺ (a_k − b_k) + g) · δt²
wherein R_k⁻ is the prior rotation of the inertial measurement unit IMU at time k, R_{k-1}⁺ is the posterior rotation at time k−1 (the posterior rotation at time 0 is the identity matrix), φ is the rotation vector of the IMU, the superscript ∧ denotes the skew-symmetric matrix of a vector, v_k⁻ is the prior velocity at time k, v_{k-1}⁺ is the posterior velocity at time k−1 (the posterior velocity at time 0 is 0), a_k is the acceleration measured by the IMU at time k, b_k is the IMU bias at time k, g is the gravity vector, δt is the interval time of the IMU, p_k⁻ is the prior position of the IMU at time k, p_{k-1}⁺ is the posterior position of the IMU at time k−1, and ω_k is the IMU angular velocity.
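The inertial solution of claim 3 amounts to one strapdown propagation step. A minimal sketch, assuming rotation matrices and the Rodrigues formula for exp(φ^∧) (function names are illustrative):

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix exp(phi^)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])  # skew-symmetric matrix of unit axis
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def imu_predict(R, v, p, a_k, omega_k, b_k, g, dt):
    """One prior-state step: propagate rotation, velocity, position."""
    acc = R @ (a_k - b_k) + g                # world-frame acceleration
    p_new = p + v * dt + 0.5 * acc * dt**2   # prior position
    v_new = v + acc * dt                     # prior velocity
    R_new = R @ so3_exp(omega_k * dt)        # prior rotation
    return R_new, v_new, p_new
```

For a stationary IMU whose accelerometer exactly cancels gravity, the propagated state stays at rest, which is a quick sanity check on the sign conventions.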
4. The method of claim 3, wherein steps 1-4 comprise: calculating the prior pose by the following formula:
x_k⁻ = F_{k-1} · x_{k-1}⁺
P_k⁻ = F_{k-1} P_{k-1}⁺ F_{k-1}ᵀ + B_{k-1} Q_k B_{k-1}ᵀ
wherein x_k⁻ is the prior state at time k, comprising velocity, position and IMU bias; P_k⁻ is the prior covariance of the state; P_{k-1}⁺ is the posterior covariance of the state at time k−1; Q_k is the system noise at time k; F_{k-1} and B_{k-1} are coefficient matrices composed of the three-dimensional identity matrix I3, the IMU integration period T and the rotation matrix R_{k-1} at time k−1; and F_{k-1}ᵀ is the transpose of F_{k-1}.
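The prediction of claim 4 reduces to the standard Kalman propagation. A minimal sketch, with the coefficient matrices F and B and the system noise Q assumed given:

```python
import numpy as np

def kf_predict(x, P, F, B, Q):
    """Kalman prediction step: propagate the state and its covariance."""
    x_prior = F @ x                       # prior state at time k
    P_prior = F @ P @ F.T + B @ Q @ B.T   # prior covariance at time k
    return x_prior, P_prior
```

With identity coefficient matrices the state is unchanged and the covariance simply grows by Q, which matches the intuition that uncertainty accumulates between observations.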
5. The method of claim 4, wherein steps 1-5 comprise:
the observed quantity y comprises the position deviation δp, the velocity deviations [δv_b]_yz in the y and z directions, and the misalignment angle δθ, and the observation equation is:
y = G_k δx + C_k n
wherein n is the IMU observation noise; δx is the current state quantity, comprising the velocity deviation, position deviation and IMU bias; G_k and C_k are the equation coefficient matrices, in which [R_bw]_yz denotes the rotation in the y-axis and z-axis directions, [v]_yz denotes the velocity in the y-axis and z-axis directions, I2 is the second-order identity matrix, and the components of n are, respectively, the position noise in the x, y and z directions, the velocity noise in the y and z directions, and the angle noise in the x, y and z directions;
adding the measurement update and calculating the posterior pose:
K_k = P_k⁻ G_kᵀ (G_k P_k⁻ G_kᵀ + C_k R_n C_kᵀ)⁻¹
δx_k⁺ = δx_k⁻ + K_k (y_k − G_k δx_k⁻)
P_k⁺ = (I − K_k G_k) P_k⁻
wherein K_k is the Kalman gain, R_n is the covariance of the observation noise n, P_k⁺ is the posterior covariance, I is the identity matrix, δx_k⁺ is the posterior state, and y_k is the observed quantity at time k; finally, the posterior pose is updated according to the posterior state:
p_k⁺ = p_k⁻ − δp_k
v_k⁺ = v_k⁻ − δv_k
R_k⁺ = R_k⁻ · exp(−δθ_k^∧)
b_{a,k}⁺ = b_{a,k}⁻ − δb_{a,k}
b_{g,k}⁺ = b_{g,k}⁻ − δb_{g,k}
wherein p_k⁺, v_k⁺ and R_k⁺ are the posterior position, velocity and rotation; b_{a,k}⁺ and b_{g,k}⁺ are the posterior IMU accelerometer and gyroscope biases; p_k⁻, v_k⁻ and R_k⁻ are the prior position, velocity and rotation; b_{a,k}⁻ and b_{g,k}⁻ are the prior IMU accelerometer and gyroscope biases; and δp_k, δv_k, δθ_k, δb_{a,k} and δb_{g,k} are the posterior position, velocity, angle, accelerometer-bias and gyroscope-bias errors.
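The measurement update of claim 5 follows the standard error-state Kalman form. A minimal sketch, where `Rn`, a covariance for the observation noise n, is an assumed input:

```python
import numpy as np

def kf_update(dx, P, G, C, Rn, y):
    """Error-state Kalman measurement update: gain, posterior error
    state and posterior covariance. Rn is the observation-noise
    covariance (an assumption; the claim names only the noise n)."""
    S = G @ P @ G.T + C @ Rn @ C.T              # innovation covariance
    K = P @ G.T @ np.linalg.inv(S)              # Kalman gain
    dx_post = dx + K @ (y - G @ dx)             # posterior error state
    P_post = (np.eye(P.shape[0]) - K @ G) @ P   # posterior covariance
    return dx_post, P_post
```

In the scalar case with unit prior covariance and unit noise, the gain is 0.5 and the posterior covariance halves, the textbook behaviour of a single Kalman correction.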
6. The method according to claim 5, characterized in that the visual odometry building module performs in particular the following steps:
step 2-1, preprocessing stage: extracting Harris corner points from each visual camera frame as visual feature points, tracking the feature points across adjacent frames with Lucas-Kanade optical flow, removing abnormal tracking points with RANSAC, and pre-integrating the inertial measurement unit IMU data to obtain the position, velocity and rotation at the current time;
step 2-2, initialization stage: recovering, using vision only, the poses of the first several frames and the feature-point depths, and finally aligning them with the IMU pre-integration to solve for the scale;
step 2-3, back-end optimization stage: performing nonlinear optimization on the IMU error and the visual error, optimizing the position, velocity and rotation of each frame, the IMU parameters and the feature-point positions, outputting an initial value of the current pose, and outputting the continuous poses and visual features as the visual-inertial odometer.
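The pre-integration of step 2-1 can be sketched as accumulating relative rotation, velocity and position increments between two camera frames; this is a minimal illustration without the covariance and bias-Jacobian tracking a full implementation would carry:

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(accs, gyros, dt, ba, bg):
    """Integrate raw IMU samples between two camera frames into
    relative increments dR, dv, dp, independent of the absolute
    initial pose and velocity (gravity handled outside)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a_m, w_m in zip(accs, gyros):
        a = dR @ (a_m - ba)                  # bias-corrected, frame-rotated
        dp = dp + dv * dt + 0.5 * a * dt**2  # position increment
        dv = dv + a * dt                     # velocity increment
        dR = dR @ so3_exp((w_m - bg) * dt)   # rotation increment
    return dR, dv, dp
```

Because the increments are relative, they need not be recomputed when the optimizer later adjusts the absolute state of the first frame, which is the point of pre-integration.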
7. The method according to claim 6, wherein the visual match location module performs in particular the steps of:
step 3-1, preprocessing the point cloud map: before matching, performing breadth-first-search (BFS) clustering on the point cloud map and discarding clusters with fewer than X4 points; dividing the whole point cloud map into cuboid grids of size 40 m × 40 m and constructing a submap centred on the grid at the current position, the submap being a 5 × 3 block of grids;
step 3-2, estimating the depth of the feature points of the current camera frame in the visual-inertial odometer, projecting the feature points into the map coordinate system according to the pose of the current frame, and searching the current submap for the nearest map point with a kd-tree as the matching corresponding point;
step 3-3, letting N be the number of corresponding point pairs between the visual feature points and the map from step 3-2, randomly permuting the N pairs, and dividing them, eight pairs per group, into N/8 groups of corresponding point sets;
step 3-4, representing the pose of the current frame with the dual quaternion q̂ = q_r + ε q_t, where q_r is the rotation quaternion, q_t is the quaternion expressing the translation, and ε is the dual unit, satisfying ε² = 0; letting X5 be the proportion of correct matching point pairs in step 3-2, taking the eight-pair groups of step 3-3 as inlier sets, and estimating the current pose with the RANSAC algorithm;
and step 3-5, projecting the visual feature points into the map coordinate system with the current pose, querying the nearest point pairs, and repeating step 3-4 until the rotation-angle change of the newly estimated pose does not exceed a third threshold and the translation does not exceed a fourth threshold, or the number of iterations exceeds a fifth threshold, then outputting the final pose as the positioning result.
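Each RANSAC hypothesis in step 3-4 fits a rigid pose to one group of corresponding point pairs. This sketch substitutes the closed-form SVD (Kabsch) solution for the claimed dual-quaternion solver, since the two recover the same rigid transform from the same correspondences:

```python
import numpy as np

def estimate_pose(src, dst):
    """Closed-form rigid transform R, t with dst ~ R @ src + t,
    fitted to matched point pairs by the Kabsch/SVD method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)   # centroids
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In the full method this solver would be run on each eight-pair group, with the hypothesis scored by how many of the remaining correspondences it projects within tolerance.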
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210455895.2A CN114723920A (en) | 2022-04-24 | 2022-04-24 | Point cloud map-based visual positioning method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210455895.2A CN114723920A (en) | 2022-04-24 | 2022-04-24 | Point cloud map-based visual positioning method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114723920A true CN114723920A (en) | 2022-07-08 |
Family
ID=82245164
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210455895.2A Pending CN114723920A (en) | 2022-04-24 | 2022-04-24 | Point cloud map-based visual positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114723920A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115797490A (en) * | 2022-12-19 | 2023-03-14 | 广州宸境科技有限公司 | Drawing construction method and system based on laser vision fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112634451B (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
CN112347840B (en) | Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method | |
CN106017463B (en) | A kind of Aerial vehicle position method based on orientation sensing device | |
CN107516326B (en) | Robot positioning method and system fusing monocular vision and encoder information | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
CN103954283A (en) | Scene matching/visual odometry-based inertial integrated navigation method | |
CN114018248B (en) | Mileage metering method and image building method integrating code wheel and laser radar | |
CN114323033B (en) | Positioning method and equipment based on lane lines and feature points and automatic driving vehicle | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN115451948A (en) | Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion | |
Hinzmann et al. | Flexible stereo: constrained, non-rigid, wide-baseline stereo vision for fixed-wing aerial platforms | |
CN117367427A (en) | Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment | |
CN115574816A (en) | Bionic vision multi-source information intelligent perception unmanned platform | |
Peng et al. | Vehicle odometry with camera-lidar-IMU information fusion and factor-graph optimization | |
CN115218889A (en) | Multi-sensor indoor positioning method based on dotted line feature fusion | |
CN112945233B (en) | Global drift-free autonomous robot simultaneous positioning and map construction method | |
Wang et al. | Micro aerial vehicle navigation with visual-inertial integration aided by structured light | |
CN116380079A (en) | Underwater SLAM method for fusing front-view sonar and ORB-SLAM3 | |
CN116608873A (en) | Multi-sensor fusion positioning mapping method for automatic driving vehicle | |
CN115930948A (en) | Orchard robot fusion positioning method | |
Liu et al. | A multi-sensor fusion with automatic vision-LiDAR calibration based on Factor graph joint optimization for SLAM | |
CN114723920A (en) | Point cloud map-based visual positioning method | |
Hu et al. | Efficient Visual-Inertial navigation with point-plane map | |
Wu et al. | Integrated navigation algorithm based on vision-inertial extended Kalman filter for low-cost unmanned aerial vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||