CN113587933A - Indoor mobile robot positioning method based on branch-and-bound algorithm - Google Patents
- Publication number: CN113587933A
- Application: CN202110863303.6A
- Authority: CN (China)
- Legal status: Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides an indoor mobile robot positioning method based on a branch-and-bound algorithm. The method fuses the data collected by a depth camera and a lidar, searches an off-line map for a global initial pose using the branch-and-bound method, optimizes that pose with the iterative closest point method, and finally uses the optimized global pose as the initial pose for lidar scan matching, continuously calculating the robot's current pose by correlative scan matching. In addition, while the lidar scan-matching algorithm runs, loop detection is accelerated by the branch-and-bound method for global optimization; when the loop-detection algorithm cannot correct the accumulated error in time, a relocalization judgment method effectively detects the error-accumulation state and a global positioning algorithm is invoked to correct the accumulated error, improving the accuracy of the positioning process.
Description
Technical Field
The invention belongs to the technical field of mobile robot positioning, and particularly relates to an indoor mobile robot positioning method based on a branch-and-bound algorithm.
Background
Existing positioning technology can be divided into two categories: absolute positioning and relative positioning. Absolute positioning requires an environment model determined in advance, or external position information supplied to the robot directly through sensors, and calculates the robot's position in a global coordinate system; commonly used methods are probabilistic positioning, map matching, beacon positioning, and GPS positioning. Relative positioning assumes that the robot's initial pose is known and tracks and estimates the robot's position using sensor information from adjacent moments; it divides into odometry and inertial navigation methods. In this process the error accumulates continuously as the robot's travel distance grows, so it must be corrected by an absolute positioning method.
The branch-and-bound algorithm is a common method for solving integer programming problems by search and iteration. Its main flow divides into two steps: branching and bounding. Branching partitions the whole feasible solution space into smaller and smaller subsets; bounding computes an objective lower bound (for a minimization problem) for the solutions within each subset. The most important part of the branch-and-bound algorithm is pruning: after each branching step, any subset whose bound cannot improve on the objective value of the best known feasible solution is not branched further. In the robot positioning process, the method accelerates the scan-matching process and quickly finds the best match for the robot pose.
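The branch/bound/prune cycle described above can be illustrated with a minimal depth-first branch and bound for the classic 0/1 knapsack problem. This is an illustrative sketch only, not the patent's pose search; the fractional-relaxation bound is an assumption of this example:

```python
def knapsack_bnb(values, weights, capacity):
    """Depth-first branch and bound for 0/1 knapsack (maximization).
    Items are sorted by value density so the fractional bound is tight."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        # Optimistic upper bound: fill the remaining room fractionally.
        b = value
        while i < len(vals) and wts[i] <= room:
            b += vals[i]
            room -= wts[i]
            i += 1
        if i < len(vals):
            b += vals[i] * room / wts[i]
        return b

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(vals) or bound(i, value, room) <= best:
            return  # prune: even the optimistic bound cannot beat the incumbent
        if wts[i] <= room:
            dfs(i + 1, value + vals[i], room - wts[i])  # branch 1: take item i
        dfs(i + 1, value, room)                          # branch 2: skip item i

    dfs(0, 0, capacity)
    return best
```

The same skeleton (depth-first descent, an optimistic bound per subset, pruning against the incumbent) is what the patent applies to pose search, with the precomputed coarse grid maps playing the role of the bound.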
In robot positioning research, lidar scan matching has gradually been adopted to correct the robot's accumulated error. This approach determines the robot's pose in the environment by matching the scan of the current frame against the previous frame or several frames, which effectively reduces the accumulated error of the positioning process. However, the result depends strongly on the precision of the lidar; moreover, the environment must contain enough features to be matched accurately, while the more features participate in matching, the higher the time complexity of the algorithm.
Disclosure of Invention
In view of the above problems, the invention aims to provide an indoor mobile robot positioning method based on a branch-and-bound algorithm, which overcomes the limited positioning precision of a single sensor and improves the global positioning capability and relative positioning accuracy of an indoor mobile robot.
In order to achieve the above object, the present invention provides a method for positioning an indoor mobile robot based on a branch-and-bound algorithm, comprising:
step 1: collecting point cloud data of the surrounding environment of the robot, and converting the point cloud data into a robot body coordinate system;
step 2: converting a pre-constructed off-line map of the whole environment into a plurality of layers of grid maps with different resolutions;
Step 3: the fused point cloud data is used as the input of the branch-and-bound algorithm, and the leaf node with the highest score is searched for in the search tree by depth-first search and taken as the global initial pose of the robot;
Step 4: taking the point cloud data obtained at the current moment as the input point cloud, the off-line map as the target point cloud, and the global initial pose of the robot as the initial pose of the iterative closest point algorithm, an accurate global pose of the robot is obtained through the iterative closest point algorithm;
Step 5: acquiring the pose information of the robot at the current moment as a pose estimate, and obtaining the relative pose of the robot while it travels using the correlative scan matching algorithm;
step 6: accelerating the speed of loop detection in a depth-first search mode through a branch-and-bound algorithm, searching for similar matching of the current robot in all constructed sub-graphs, and optimizing a pose graph by adopting a nonlinear optimization library;
Step 7: judging the size of the robot's accumulated error at fixed intervals; if it exceeds a set threshold, returning to step 3 to correct the accumulated error and outputting the corrected robot pose.
The step 1 comprises the following steps:
step 1.1: collecting two groups of point cloud data of the surrounding environment of the position of the robot at the current moment through a depth camera and a laser radar respectively;
step 1.2: denoising point cloud data acquired by a depth camera;
step 1.3: intercepting the data points at a certain height from the denoised depth-camera point cloud data, and projecting these points and the point cloud data acquired by the lidar into the robot body coordinate system respectively, to realize the fusion of the point cloud data.
The step 2 is specifically expressed as follows: starting from the grid map at the original resolution, each layer corresponds to a height value h, the original-resolution map corresponding to h = 0 and the height increasing layer by layer; the probability value of each grid cell in the grid map of height h is the maximum of the probabilities of the 2^h × 2^h grid cells around the corresponding cell of the original-resolution map.
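A minimal sketch of this multi-resolution construction, assuming the occupancy probabilities are held in a NumPy array; anchoring the 2^h × 2^h window at the corresponding cell (rather than centring it) is an assumption of this sketch:

```python
import numpy as np

def precompute_grids(prob, max_height):
    """Build coarse score-bound maps: at height h every cell holds the max
    occupancy probability over the 2^h x 2^h window anchored at that cell.
    grids[0] is the original-resolution map (h = 0)."""
    grids = [prob]
    for h in range(1, max_height + 1):
        w = 2 ** h
        # Pad with zeros so windows near the border stay well defined.
        padded = np.pad(prob, ((0, w - 1), (0, w - 1)), constant_values=0.0)
        out = np.empty_like(prob)
        for i in range(prob.shape[0]):
            for j in range(prob.shape[1]):
                out[i, j] = padded[i:i + w, j:j + w].max()
        grids.append(out)
    return grids
```

Because each coarse cell upper-bounds every fine cell in its window, a scan score computed on a height-h map can never be lower than the true score of any pose inside that block, which is what makes it usable as a branch-and-bound upper bound.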
The step 3 comprises the following steps:
step 3.1: determining the node selection strategy: a depth-first search mode is adopted;
step 3.2: determining the branching rule: each node in the tree is described by an integer array c = (c_x, c_y, c_θ, c_h), where c_h denotes the height of the tree layer in which the node lies; every node in the tree satisfying c_h > 1 is branched into 4 child nodes of height c_h − 1, covering the four quadrants of its (c_x, c_y) block. The boundary search ranges in the translation and rotation directions are

s_x = ⌈S_x / r⌉, s_y = ⌈S_y / r⌉, s_θ = ⌈S_θ / ε_θ⌉

where s_x, s_y, s_θ respectively denote the boundary search ranges in the translation and rotation directions, S_x, S_y, S_θ respectively denote the x-direction linear search window, the y-direction linear search window, and the angular search window, r denotes the resolution of the grid map, and ε_θ denotes the angle step factor, expressed as

ε_θ = arccos(1 − r² / (2·h_max²))

where h_max represents the maximum distance measured among all hit points of one frame of lidar data;
step 3.3: calculating the upper bound of the branch-and-bound algorithm:

score(c) = Σ_{i=1}^{K} M_{c_h}(T_{ξ_c} · s_i)

where M_{c_h} represents the grid map of the corresponding height c_h constructed in advance from the off-line map; score(c) is the sum of the probability values obtained by projecting the current lidar data onto the grid map M_{c_h}; ξ_c is the pose corresponding to node c; s_i is the i-th hit point; and K is the number of hit points in one frame of lidar data;
step 3.4: recursively searching the upper bound of each node in each layer in a depth-first search mode, and taking the leaf node with the best score as the best match of the robot pose to obtain the global initial pose of the robot:
ξ_global = ξ_0 + (r·c_x, r·c_y, ε_θ·c_θ)

where ξ_0 denotes the initial pose at the start of the search and ξ_global denotes the global initial pose obtained when the search finishes.
The step 4 comprises the following steps:
step 4.1: converting the fused point cloud data into a world coordinate system;
step 4.2: calculating the centroids of the two groups of point cloud data in the world coordinate system:

μ_p = (1/N_p) Σ_{i=1}^{N_p} p_i,  μ_q = (1/N_q) Σ_{j=1}^{N_q} q_j

where μ_p, μ_q respectively represent the centroid of the fused point cloud data P and the centroid of the off-line-map point cloud data Q; N_p, N_q respectively represent the numbers of points in P and Q; and p_i, q_j respectively represent the coordinates of the i-th point of P and the j-th point of Q;
step 4.3: designing the error function E(R, t) of the iterative closest point algorithm:

E(R, t) = (1/N) Σ_{i=1}^{N} ‖q_i − (R·p_i + t)‖²

where R is the rotation matrix and t the translation matrix of the transformation between the two point-cloud sets, and N is the number of points participating in the point-cloud transformation;
step 4.4: and (5) iterative solution of R and t is carried out by using an iterative closest point algorithm, iterative operation is stopped when a convergence condition is reached, and the output optimal solution is used as an accurate global pose of the robot.
The step 5 comprises the following steps:
step 5.1: acquiring the linear acceleration and the angular velocity of the robot through an inertial sensor, acquiring the velocity of the robot through a wheel type odometer, and taking data obtained by fusing two groups of data through extended Kalman filtering as a pose estimation value of the current position of the robot;
step 5.2: converting laser radar data at the current moment into a currently constructed sub-graph according to the current pose estimation value of the robot;
step 5.3: designing the error function of the correlative scan matching algorithm:

E(ξ) = Σ_{i′=1}^{K} [1 − H_smooth(T_ξ · s_{i′})]²

where H_smooth is a bicubic interpolation function used to smooth the grid probability values in the sub-map, T_ξ represents the transformation matrix corresponding to the pose estimate ξ, s_{i′} represents the i′-th hit point of the lidar frame collected at the current moment, and K is the number of points contained in that frame;
step 5.4: iterative optimization of an error function is carried out by utilizing a correlation scanning matching algorithm, iterative calculation is stopped when a convergence condition is met, and an output value is used as an optimal solution of current pose estimation;
step 5.5: updating probability values of grids in the subgraph:
H_new(x) = clamp(F⁻¹(F(H_old(x)) · F(p)))

where clamp is a range-limiting function, p is the probability that the grid cell is occupied, F(p) = p / (1 − p) is the ratio of the cell's occupancy probability to its free probability, H_old(x) is the grid probability value before the update, and H_new(x) is the grid probability value after the update.
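Step 5.5's update rule can be sketched directly; the clamp bounds 0.1 and 0.9 are assumptions of this example, not values given by the patent:

```python
def odds(p):
    """F(p): ratio of occupancy probability to free probability."""
    return p / (1.0 - p)

def inv_odds(o):
    """F^{-1}: recover a probability from an odds ratio."""
    return o / (1.0 + o)

def clamp(p, lo=0.1, hi=0.9):
    """Range-limiting function; the bounds here are illustrative."""
    return max(lo, min(hi, p))

def update_cell(h_old, p_hit):
    """H_new(x) = clamp(F^{-1}(F(H_old(x)) * F(p_hit)))."""
    return clamp(inv_odds(odds(h_old) * odds(p_hit)))
```

Working in odds space makes repeated hits multiplicative, and the clamp keeps cells from saturating so that later contradictory observations can still move the estimate.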
The step 6 comprises the following steps:
step 6.1: taking the optimal solution of the current pose estimation as the input of a branch-and-bound algorithm, searching and matching with all constructed sub-images, and obtaining similar matching of the robot in all sub-images in a depth-first searching mode;
step 6.2: similar matches of the current robot pose are added to the pose graph as loop-closure constraints, and the error function of the global pose-graph optimization is designed as

E = ½ Σ_{ij} ρ( E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij) )

where ξ_i^m is the pose corresponding to the i-th sub-map, ξ_j^s is the pose corresponding to the j-th frame of lidar data, Σ_ij is the covariance between the i-th sub-map pose and the j-th lidar-frame pose, ξ_ij is the relative pose between the i-th sub-map and the j-th lidar frame, and ρ is a robust kernel function.
The step 7 comprises the following steps:
step 7.1: converting the fused point cloud data into a map coordinate system according to the optimized relative pose of the robot;
step 7.2: calculating the index of the fused point cloud data on the original-resolution grid map:

I_x = (x_trans − x_origin − 0.5·r) / r + 1
I_y = (y_trans − y_origin − 0.5·r) / r + 1

where I_x is the index along the x direction of the map coordinate system, I_y is the index along the y direction, x_trans and y_trans are the x- and y-direction translations of the transformed fused point cloud, x_origin and y_origin are the x and y coordinates of the grid-map origin, and r is the resolution of the grid map;
step 7.3: computing the probability P_occupy that the fused point cloud data matches the grid map:

P_occupy = i_c / (i_c + i_e)

where i_c is the number of correctly matched points and i_e is the number of incorrectly matched points among all the point cloud data;

step 7.4: judging whether to relocalize according to a set threshold: when P_occupy is less than or equal to the threshold, the robot is considered to have accumulated a certain amount of error at the current moment, so step 3 must be executed to correct the accumulated error;
step 7.5: repeating steps 7.1 to 7.4 at fixed intervals to judge the error-accumulation state, thereby ensuring the positioning accuracy of the robot as it travels.
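A sketch of the relocalization judgment of steps 7.2 to 7.4. The index formula is taken from step 7.2; the boolean occupancy map, the per-point "correct match" test, and the 0.6 threshold are assumptions of this example:

```python
import numpy as np

def grid_index(pt, origin, r):
    """Index of a transformed point on the original-resolution grid map:
    I = (t - origin - 0.5*r)/r + 1, truncated to an integer cell."""
    ix = int((pt[0] - origin[0] - 0.5 * r) / r + 1)
    iy = int((pt[1] - origin[1] - 0.5 * r) / r + 1)
    return ix, iy

def needs_relocalization(points, occupancy, origin, r, threshold=0.6):
    """P_occupy = i_c / (i_c + i_e); trigger global positioning when it
    drops to or below the threshold. The 0.6 default is an assumption."""
    correct = wrong = 0
    for pt in points:
        ix, iy = grid_index(pt, origin, r)
        inside = 0 <= ix < occupancy.shape[0] and 0 <= iy < occupancy.shape[1]
        if inside and occupancy[ix, iy]:
            correct += 1
        else:
            wrong += 1
    p_occupy = correct / max(correct + wrong, 1)
    return p_occupy <= threshold, p_occupy
```

Running this check only at fixed intervals (step 7.5) keeps the cost negligible compared with the scan-matching loop it supervises.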
The invention has the beneficial effects that:
the invention provides an indoor mobile robot positioning method based on a branch-and-bound algorithm, which comprises the following steps that 1) the convergence speed of global positioning is accelerated by the branch-and-bound method, the accurate estimation of the global pose of a robot is realized by an iterative closest point algorithm, the real-time performance of the global positioning is realized by a two-stage matching algorithm in the whole scheme, and the positioning precision of the algorithm can be ensured; 2) the method has the advantages that the multiple sensors are fused for positioning, the problem of positioning failure of a single sensor is solved, the complementary characteristics among the multiple sensors are utilized, the accumulation of positioning errors is reduced, and the positioning accuracy of the algorithm is improved; 3) a robot repositioning judgment method is added in a relative robot positioning algorithm, so that accumulated errors of the robot can be corrected in time, and the positioning accuracy of the robot is improved; 4) the method combines absolute positioning and relative positioning to form a complete indoor mobile robot positioning scheme, so that the initial positioning and position tracking of the robot in the environment can be realized, and the overall systematicness of the algorithm is improved.
Drawings
FIG. 1 is a flow chart of an indoor mobile robot positioning method based on branch-and-bound algorithm according to the present invention;
FIG. 2 is a schematic diagram of an indoor mobile robot positioning method based on a branch-and-bound algorithm in the invention;
FIG. 3 is a flow chart of a branch and bound algorithm employed in the present invention;
FIG. 4 is a flow chart of relative positioning in the present invention;
FIG. 5 is a flow chart of global positioning in the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific examples. The invention provides an indoor mobile robot positioning method based on a branch-and-bound algorithm. The sensors installed on the robot include a depth camera, a 2D lidar, an inertial sensor, and a wheel odometer. The depth camera is mounted directly in front of the robot and collects the image of the environment ahead; the 2D lidar is mounted directly above the robot and collects point cloud data of the surrounding environment; the inertial sensor is mounted directly above the robot and acquires the robot's linear acceleration and angular velocity; the wheel odometer is mounted on a wheel of the robot and acquires the robot's velocity. The method fuses the data collected by the depth camera and the lidar, searches an off-line map for the global initial pose by the branch-and-bound method, and optimizes the global initial pose by the iterative closest point method. The optimized global pose is then used as the global initial pose for lidar scan matching, after which the current pose of the robot is continuously calculated by correlative scan matching, which maintains the positioning precision of the robot while it travels and reduces its accumulated error. In addition, while the lidar scan-matching algorithm runs, loop detection is accelerated by the branch-and-bound method for global optimization; when the loop-detection algorithm cannot correct the accumulated error in time, an error judgment method effectively detects the error-accumulation state, and timely relocalization reduces the error and improves the accuracy of the positioning process.
As shown in fig. 1-2, a method for positioning an indoor mobile robot based on a branch-and-bound algorithm includes:
acquiring point cloud data of a depth camera, intercepting the point cloud data with the height z being 0, projecting the point cloud data to a robot body coordinate system, acquiring laser radar scanning data, converting the scanning data into point cloud data, projecting the point cloud data to the robot body coordinate system, and fusing the point cloud data on the two robot body coordinate systems;
selecting point cloud data with a certain height, intercepting the point cloud in the height plane, obtaining a transformation matrix from a camera coordinate system to a robot body coordinate system according to static coordinate transformation between a depth camera and the robot body coordinate system, and converting the intercepted point cloud into the robot body coordinate system through the transformation matrix;
the 2D lidar data is usually represented by polar coordinates, and the point cloud data is three-dimensional data, so that the lidar data needs to be converted into coordinate representation under a spatial rectangular coordinate system, the z-axis coordinate is set to 0, and the converted point cloud data is projected to the robot body coordinate system.
Step 1: collecting point cloud data of the surrounding environment of the robot, and converting the point cloud data into a robot body coordinate system; the method comprises the following steps:
step 1.1: collecting two groups of point cloud data of the surrounding environment of the position of the robot at the current moment through a depth camera and a laser radar respectively;
the method includes the steps that point cloud obtained through a depth camera needs to be denoised, due to the fact that original point cloud data obtained through the depth camera contains partial noise and outliers, the problems of irregular point cloud density and the like exist, voxel filtering needs to be conducted on the original point cloud data, a three-dimensional voxel grid can be created for input point cloud through the voxel filtering, the center of gravity of all points in voxels is used for approximately representing other points in the voxels in each voxel, all the points in the voxels are finally represented by a center of gravity point, and the shape of the original point cloud can be kept in the process of down-sampling.
Step 1.2: denoising point cloud data (depth camera point cloud data for short) acquired by a depth camera; specifically, a voxel filtering mode is adopted for denoising;
step 1.3: intercepting the data points at a certain height (z = 0) from the denoised depth-camera point cloud, and projecting these points and the point cloud data acquired by the lidar into the robot body coordinate system respectively to realize the fusion of the point cloud data;
obtaining a transformation matrix from a camera coordinate system to a robot coordinate system according to static coordinate transformation between a depth camera coordinate system and the robot coordinate system, and converting the intercepted point cloud into a robot body coordinate system through the transformation matrix; the formula for the conversion is as follows:
P_b = R_bc · P_c + t_bc

where P_b is the point cloud transformed into the robot coordinate system, R_bc is the rotation matrix from the camera coordinate system to the robot coordinate system, P_c is the point cloud in the camera coordinate system, and t_bc is the translation matrix from the camera coordinate system to the robot coordinate system.
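The transformation P_b = R_bc · P_c + t_bc can be sketched for a whole point cloud at once; the rot_z helper and the sample values are illustrative assumptions, not calibration values from the patent:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z axis (the planar case relevant here)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_body_frame(points, R, t):
    """Apply P_b = R * P_c + t to every row of an (N, 3) point array."""
    return points @ np.asarray(R).T + np.asarray(t)
```

The same helper serves for the lidar-to-body transform later in the text, with R_bl and t_bl substituted for the camera extrinsics.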
Then, converting data (laser radar data for short) obtained by a laser radar into point cloud data, and projecting the point cloud data to a robot coordinate system to be fused with the point cloud data obtained by a depth camera;
data obtained by the 2D lidar is usually expressed in polar coordinates, while point cloud data is three-dimensional data, so that the data obtained by the 2D lidar needs to be converted into coordinate representation in a spatial rectangular coordinate system, and a z-axis coordinate is set to 0, and the conversion formula is as follows:
θ_cur = θ_min + i·Δθ
x = R·cos θ_cur
y = R·sin θ_cur
z = 0

where θ_min is the angle at which the lidar starts scanning, Δθ is the angular resolution of the lidar's rotation (Δθ = 0.33° in this example), i denotes the i-th point counted from the start of the scan to the current position, R denotes the distance value returned by the lidar's scan point at the current position, θ_cur denotes the angle between the current position and the position at which the scan started, and x, y, z respectively denote the three coordinates of the converted point cloud data.
And projecting the converted point cloud data to a robot coordinate system, wherein the projection formula is as follows:
P_b = R_bl · P_l + t_bl

where P_b is the point cloud transformed into the robot coordinate system, R_bl is the rotation matrix from the lidar coordinate system to the robot coordinate system, P_l is the point cloud in the lidar coordinate system, and t_bl is the translation matrix from the lidar coordinate system to the robot coordinate system.
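The polar-to-Cartesian conversion and z = 0 projection above can be sketched as follows; the sample ranges and angles in the test are illustrative:

```python
import numpy as np

def scan_to_points(ranges, theta_min, dtheta):
    """Convert a 2D lidar scan to (N, 3) points:
    theta_cur = theta_min + i*dtheta; x = R cos(theta_cur),
    y = R sin(theta_cur), z = 0."""
    r = np.asarray(ranges, dtype=float)
    angles = theta_min + np.arange(len(r)) * dtheta
    return np.column_stack([r * np.cos(angles),
                            r * np.sin(angles),
                            np.zeros_like(r)])
```

The resulting array can then be passed through the rigid transform P_b = R_bl · P_l + t_bl to land in the robot body frame alongside the depth-camera points.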
Step 2: converting the pre-constructed off-line map of the whole environment into several layers of grid maps with different resolutions. The concrete expression is as follows: starting from the grid map at the original resolution, each layer corresponds to a height value h, the original-resolution map corresponding to h = 0 and the height increasing layer by layer; the probability value of each grid cell in the grid map of height h is the maximum of the probabilities of the 2^h × 2^h grid cells around the corresponding cell of the original-resolution map.
The fused point cloud data is taken as the input of the branch-and-bound algorithm and matched against the map built off line; the branch-and-bound algorithm converts the off-line map into six layers of grid maps with different resolutions, and the global initial pose of the robot is obtained by depth-first search. As shown in fig. 3, the principle of the branch-and-bound algorithm is to represent all possible solution sets of the problem as nodes of a tree: the root node represents all possible solutions, the child nodes of each node form a partition of their parent's solution subset, and the leaf nodes are single solutions, each representing one feasible solution of the problem. In the invention, many leaf nodes are evaluated quickly by depth-first search, and nodes whose scores are lower than the best score are pruned, so that the best match is found quickly.
Step 3: the fused point cloud data is used as the input of the branch-and-bound algorithm; the node with the highest score in each layer is searched in the search tree by depth-first search, the leaf node with the highest score is taken as the best matching result, and the global initial pose of the robot is obtained from it; the method comprises the following steps:
step 3.1: determining the node selection strategy: a depth-first search mode is adopted, which can evaluate many nodes quickly, prune the nodes that do not satisfy the conditions quickly, and improve the running efficiency of the branch-and-bound algorithm.
Step 3.2: determining a branching rule using an integer array c ═ x, y, θ, c for each node in the treeh) Indicating that the height of the tree in which the node is located is chIndicating that c is satisfied in the treehNode c > 1 is divided into 4 nodes with height ch-1The child node of (2):
wherein,representing a finite set formed by setting a certain search window, and taking Sx=30Sy=30Sθ=180°:
Wherein s isx、sy、sθBoundary search ranges, S, representing the translation and rotation directions, respectivelyx、Sy、SθRespectively representing an x-direction linear search window, a y-direction linear search window and an angle search window, wherein r represents the resolution of the grid map, r is 0.05, and epsilonθDenotes the angle step factor, εθExpressed as:
wherein h ismaxRepresenting the maximum value of the distance measured in all hit points of a frame of laser radar data;
step 3.3: calculating the upper bound of the branch-and-bound algorithm:

score(c) = Σ_{i=1}^{K} M_{c_h}(T_{ξ_c} · s_i)

where M_{c_h} represents the grid map of the corresponding height c_h constructed in advance from the off-line map; score(c) is the sum of the probability values obtained by projecting the current lidar data onto M_{c_h}; ξ_c is the pose corresponding to node c; s_i is a hit point obtained by the lidar scan; and K is the number of hit points in one frame of lidar data. Each cell of the precomputed grid M_{c_h} stores the maximum grid probability within a 2^{c_h} × 2^{c_h} pixel box, so the score of a laser-scan match can be calculated quickly from the precomputed grids.
Step 3.4: recursively searching the upper bound of each node in each layer in a depth-first search mode, and taking the leaf node with the best score as the best match of the robot pose to obtain the global initial pose of the robot:
ξ_global = ξ_0 + (r·c_x, r·c_y, ε_θ·c_θ)

where ξ_0 denotes the initial pose at the start of the search and ξ_global denotes the global initial pose obtained when the search finishes.
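A compact sketch of the depth-first branch-and-bound search of steps 3.1 to 3.4, restricted to integer x-y offsets for brevity. The omission of the rotation dimension, the forward anchoring of the precomputed windows, and all sample values are assumptions of this sketch, not the patent's implementation:

```python
import numpy as np

def precomputed_max(grid, h):
    """Height-h map: each cell holds the max over the 2^h x 2^h window
    anchored at that cell (out-of-range cells count as 0)."""
    w = 2 ** h
    padded = np.pad(grid, ((0, w - 1), (0, w - 1)), constant_values=0.0)
    out = np.empty_like(grid)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            out[i, j] = padded[i:i + w, j:j + w].max()
    return out

def score(grid, hits, dx, dy):
    """score(c): sum of grid probabilities at the hit cells shifted by (dx, dy)."""
    s = 0.0
    for ix, iy in hits:
        x, y = ix + dx, iy + dy
        if 0 <= x < grid.shape[0] and 0 <= y < grid.shape[1]:
            s += grid[x, y]
    return s

def bnb_search(grid, hits, window, h_max):
    """Depth-first branch and bound over integer (dx, dy) offsets.
    A node at height h covers the 2^h x 2^h block of offsets starting at
    (dx, dy); its score on the height-h precomputed grid upper-bounds every
    leaf score inside the block, so losing blocks are pruned."""
    grids = [grid] + [precomputed_max(grid, h) for h in range(1, h_max + 1)]
    best = [-1.0, None]  # [best score, best offset]

    def dfs(h, dx, dy):
        s = score(grids[h], hits, dx, dy)
        if s <= best[0]:
            return  # prune: the optimistic bound cannot beat the incumbent
        if h == 0:
            best[0], best[1] = s, (dx, dy)
            return
        step = 2 ** (h - 1)
        for ddx in (0, step):      # branch: split the block into quadrants
            for ddy in (0, step):
                dfs(h - 1, dx + ddx, dy + ddy)

    for dx in range(-window, window + 1, 2 ** h_max):
        for dy in range(-window, window + 1, 2 ** h_max):
            dfs(h_max, dx, dy)
    return best[1], best[0]
```

Depth-first order matters: a good leaf found early raises the incumbent score, so large coarse blocks elsewhere can be discarded after a single bound evaluation.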
Step 4: taking the point cloud data obtained at the current moment as the input point cloud, the off-line map as the target point cloud, and the global initial pose of the robot as the initial pose of the iterative closest point algorithm, the initial pose is optimized by the iterative closest point algorithm to obtain the accurate global pose of the robot; the method comprises the following steps:
step 4.1: converting the fused point cloud data into a world coordinate system, wherein a conversion formula is as follows:
P_w = R_wb · P_b + t_wb

where P_w is the point cloud transformed into the world coordinate system, R_wb is the rotation matrix from the robot coordinate system to the world coordinate system, P_b is the point cloud in the robot coordinate system, and t_wb is the translation matrix from the robot coordinate system to the world coordinate system.
step 4.2: calculating the centroids of the two sets of point cloud data in the world coordinate system.

The point cloud to be converted forms the point set:

P = {p_1, p_2, …, p_M}

The target point cloud forms the point set:

Q = {q_1, q_2, …, q_N}

The centroids of the two point sets are then:

μ_p = (1/N_p) Σ_{i=1}^{N_p} p_i,  μ_q = (1/N_q) Σ_{j=1}^{N_q} q_j

where μ_p and μ_q respectively denote the centroid of the fused point cloud data P and the centroid of the off-line-map point cloud data Q, N_p and N_q respectively denote the number of points in P and in Q, and p_i, q_j respectively denote the coordinates of the ith point in P and the jth point in Q;
step 4.3: designing the error function E(R, t) of the iterative closest point algorithm:

E(R, t) = (1/N) Σ_{i=1}^{N} || q_i − (R·p_i + t) ||²

where R is the rotation matrix of the transformation between the two point sets, t is the translation matrix, and N is the number of points participating in the point cloud transformation;
step 4.4: iteratively solving for R and t with the iterative closest point algorithm, stopping the iteration when a convergence condition is reached, and taking the output optimal solution as the accurate global pose of the robot. The iteration stops when one of the following convergence conditions is met:

1) the number of iterations reaches the maximum iteration count;

2) the difference between the current transformation matrix and the previous transformation matrix is smaller than a set threshold;

3) the sum of the mean square errors of the Euclidean distances between corresponding point pairs in two consecutive iterations is smaller than a set threshold.
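Steps 4.1-4.4 can be condensed into a small 2D point-to-point ICP sketch (Python/NumPy). The brute-force nearest-neighbour search, identity initial pose, and tolerances are our simplifications; a real implementation against an off-line map would use a k-d tree and the branch-and-bound pose of step 3 as the initial guess:

```python
import numpy as np

def icp_2d(P, Q, max_iter=50, tol=1e-10):
    """Point-to-point ICP: align input cloud P (N x 2) to target cloud Q."""
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(max_iter):                      # criterion 1: iteration cap
        P2 = P @ R.T + t
        idx = ((P2[:, None] - Q[None]) ** 2).sum(-1).argmin(1)
        Qm = Q[idx]                                # closest target points
        mu_p, mu_q = P.mean(0), Qm.mean(0)         # centroids, as in step 4.2
        U, _, Vt = np.linalg.svd((Qm - mu_q).T @ (P - mu_p))
        D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
        R_new = U @ D @ Vt                         # minimizes E(R, t) of step 4.3
        t_new = mu_q - R_new @ mu_p
        delta = np.linalg.norm(R_new - R) + np.linalg.norm(t_new - t)
        R, t = R_new, t_new
        err = (((P @ R.T + t) - Qm) ** 2).sum(1).mean()
        if delta < tol or abs(prev_err - err) < tol:  # criteria 2 and 3
            break
        prev_err = err
    return R, t

# synthetic check: a cloud rotated by 0.05 rad and shifted is recovered
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.1, -0.05])
P = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.], [3., 1.]])
Q = P @ R_true.T + t_true
R_est, t_est = icp_2d(P, Q)
```

The SVD of the cross-covariance between the centred clouds is the closed-form solution of the least-squares alignment at fixed correspondences, which is why each iteration only needs one matrix decomposition.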
The obtained rotation matrix and translation matrix serve as the accurate global initial pose of the robot; when the robot performs continuous position tracking, they can also be used as the initial pose for relative positioning. The whole global positioning algorithm comprises step 3 and step 4, as shown in fig. 5.

The absolute position of the robot in the environment is determined from the obtained global pose. Data collected by a wheel odometer and an inertial sensor (IMU) are fused by extended Kalman filtering to give the current pose estimate of the robot, and the relative pose of the robot during travel is obtained by a correlation scan matching algorithm.
Step 5: acquiring the pose information of the robot at the current moment as a pose estimate, and obtaining the relative pose of the robot during travel using a correlation scan matching algorithm; the method comprises the following steps:
step 5.1: acquiring the linear acceleration and the angular velocity of the robot through an inertial sensor, acquiring the velocity of the robot through a wheel type odometer, and taking data obtained by fusing two groups of data through extended Kalman filtering as a pose estimation value of the current position of the robot;
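The fusion in step 5.1 can be illustrated with a deliberately reduced one-dimensional filter: IMU acceleration drives the prediction and the wheel-odometer velocity is the measurement. With this linear model the EKF coincides with a standard Kalman filter; the state layout, noise values, and function names are our assumptions, not the patent's:

```python
import numpy as np

def ekf_predict(x, P, a, dt, q=1e-3):
    """Propagate state [position, velocity] with measured acceleration a."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    x = F @ x + B * a
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def ekf_update(x, P, v_meas, r=1e-2):
    """Correct with the wheel-odometer velocity measurement."""
    H = np.array([[0.0, 1.0]])
    y = v_meas - (H @ x)[0]                 # innovation
    S = (H @ P @ H.T)[0, 0] + r             # innovation covariance
    K = (P @ H.T)[:, 0] / S                 # Kalman gain
    x = x + K * y
    P = (np.eye(2) - np.outer(K, H[0])) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for k in range(10):                         # constant 1 m/s^2 acceleration
    x, P = ekf_predict(x, P, a=1.0, dt=0.1)
    x, P = ekf_update(x, P, v_meas=0.1 * (k + 1))
```

After one second of noiseless data the filter tracks the true velocity (1 m/s) and position (0.5 m) exactly; in the patent's setting the same predict/update cycle runs over full robot poses.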
step 5.2: converting data acquired by a laser radar at the current moment into a currently constructed sub-graph according to a pose estimation value of the current position of the robot;
the conversion formula is as follows:
wherein p ismRepresenting lidar data converted into subgraphs, psThe laser radar data at the current moment is represented, theta represents the rotation angle of the pose estimation, and x and y represent the x-direction offset and the y-direction offset of the pose estimation, respectively.
step 5.3: designing the error function of the correlation scan matching algorithm as:

E(T_ε) = Σ_{i′=1}^{K} (1 − H_smooth(T_ε · s_{i′}))²

where H_smooth is a bicubic interpolation function used to smooth the grid probability values in the submap, T_ε is the transformation matrix corresponding to the pose estimate, s_{i′} is the i′th point of the current lidar frame, and K is the number of points contained in the lidar frame;
step 5.4: iteratively optimizing the error function with the correlation scan matching algorithm, stopping the iterative calculation when a convergence condition is met, and taking the output value as the optimal solution of the current pose estimate. The iteration stops when one of the following conditions is reached:

1) the set maximum number of iterations is reached;

2) the difference between the current transformation matrix and the previous transformation matrix is smaller than a set threshold.
step 5.5: updating the probability values of the grids in the subgraph. When a frame of lidar data is inserted into the subgraph, a probability value is assigned to grid points not observed before, and the probability values of previously observed grid points are updated:

H_new(x) = clamp(F^{−1}(F(H_old(x)) · F(p)))

where clamp is a range-limiting function, p is the probability of the grid cell being occupied, F(p) = p/(1 − p) is the ratio of the occupation probability to the free probability, H_old(x) is the grid probability value before the update, and H_new(x) is the updated grid probability value.
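The odds-based update can be written directly from the formula above (the clamp bounds 0.12 and 0.971 are our assumption of typical limits; the patent does not state them):

```python
def clamp(p, lo=0.12, hi=0.971):
    """Range-limiting function keeping probabilities away from 0 and 1."""
    return max(lo, min(hi, p))

def odds(p):
    """F(p): ratio of occupation probability to free probability."""
    return p / (1.0 - p)

def odds_inv(o):
    """F^-1: back from odds to a probability."""
    return o / (1.0 + o)

def update_cell(h_old, p_hit):
    """H_new = clamp(F^-1(F(H_old) * F(p_hit)))."""
    return clamp(odds_inv(odds(h_old) * odds(p_hit)))

p = update_cell(0.5, 0.7)   # a cell at 0.5 observed occupied with p = 0.7
```

Multiplying odds makes repeated observations commutative, and the clamp prevents any cell from saturating so far that later contradicting scans cannot recover it.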
Step 6: accelerating loop detection through a branch-and-bound algorithm in a depth-first search manner, searching all constructed subgraphs for a similar match to the current robot pose, adding it as a closed-loop constraint, and optimizing the pose graph with a nonlinear optimization library; the method comprises the following steps:
step 6.1: taking the optimal solution of the current pose estimation as the input of a branch-and-bound algorithm, searching and matching with all constructed sub-images, and obtaining similar matching of the robot in all sub-images in a depth-first searching mode;
step 6.2: adding the similar match of the current robot pose to the pose graph as a loop constraint for optimization, the error function of the global pose graph optimization being designed as:

argmin_{ξ^m, ξ^s} (1/2) Σ_{ij} δ(E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij))

where ξ_i^m is the pose corresponding to the ith subgraph, ξ_j^s is the pose corresponding to the jth frame of laser data, Σ_ij is the covariance between the ith subgraph pose and the jth lidar frame pose, ξ_ij is the relative pose between the ith subgraph and the jth lidar frame, δ is a robust kernel function, and E(·) is the residual function, calculated as:

E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij) = e(ξ_i^m, ξ_j^s; ξ_ij)^T Σ_ij^{−1} e(ξ_i^m, ξ_j^s; ξ_ij)
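A minimal evaluation of one loop-closure term can be sketched as follows (Python; the 2D pose convention and the residual form follow common pose-graph practice and are our assumption, and the robust kernel is omitted):

```python
import numpy as np

def residual(xi_i, xi_j, xi_ij):
    """e = xi_ij - (pose of lidar frame j expressed in subgraph frame i);
    poses are 2D tuples (x, y, theta)."""
    xi, yi, ti = xi_i
    xj, yj, tj = xi_j
    c, s = np.cos(ti), np.sin(ti)
    dx, dy = xj - xi, yj - yi
    rel = np.array([c * dx + s * dy, -s * dx + c * dy, tj - ti])
    e = np.asarray(xi_ij, dtype=float) - rel
    e[2] = (e[2] + np.pi) % (2.0 * np.pi) - np.pi   # wrap angle to (-pi, pi]
    return e

def chi2(e, Sigma):
    """E^2 = e^T Sigma^{-1} e, the value fed to the robust kernel."""
    return float(e @ np.linalg.inv(Sigma) @ e)

# a perfectly consistent constraint has zero residual and zero cost
e = residual((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
cost = chi2(e, 0.1 * np.eye(3))
```

The optimizer sums such terms over all odometry and loop constraints and adjusts every ξ_i^m and ξ_j^s jointly; the robust kernel δ down-weights terms whose E² is large, limiting the damage of a wrong loop match.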
and 7: judging the size of the accumulated error of the robot at certain intervals, if the accumulated error exceeds a set threshold, returning to the step 3 to correct the accumulated error, and outputting the corrected pose of the robot; the method comprises the following steps of judging the process of robot accumulated errors, wherein the main principle is that fusion point cloud data is transformed to a world coordinate system through coordinates, indexes of each point in the transformed laser radar data corresponding to specific grids are calculated, the probability value of the corresponding position of an off-line map is searched, if the probability value of the position in the off-line map indicates that the position is unknown or unoccupied, the number of unmatched points is updated, and if not, the number of matched points is updated; the method specifically comprises the following steps:
step 7.1: converting the fused point cloud data into the map coordinate system according to the optimized relative pose of the robot, with the conversion formula:

P_w = R_wb · P_b + t_wb

where P_w is the point cloud transformed into the world coordinate system, R_wb is the rotation matrix from the robot coordinate system to the world coordinate system, P_b is the point cloud in the robot coordinate system, and t_wb is the translation matrix from the robot coordinate system to the world coordinate system;
step 7.2: calculating the index of the fused point cloud data on the grid map at the original resolution. When the off-line map is loaded, the index of each map position and the corresponding probability value are stored in an array for lookup during matching. The index of the fused point cloud on the grid map is calculated as:

I_x = (x_trans − x_origin − 0.5·r)/r + 1

I_y = (y_trans − y_origin − 0.5·r)/r + 1

where I_x is the index in the x direction of the map coordinate system, I_y is the index in the y direction, x_trans and y_trans are the x- and y-direction translations of the transformed fused point cloud, x_origin and y_origin are the x and y coordinates of the grid map origin, and r is the grid map resolution, r = 0.05;
step 7.3: calculating the matching probability P_occupy between the fused point cloud data and the grid map. When a fused point lies at the correct position of the grid map, the correct-match count is incremented; when the occupancy of the fused point disagrees with the original grid of the grid map, the wrong-match count is incremented. The final matching probability is:

P_occupy = i_c / (i_c + i_e)

where i_c is the number of correctly matched points and i_e is the number of wrongly matched points among all point cloud data;
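Steps 7.2-7.3 amount to an index computation and two counters. A sketch follows (truncation to an integer index is our assumption, since the patent does not state the rounding rule, and the occupied-cell set is a stand-in for the off-line map lookup):

```python
def grid_index(x_trans, y_trans, x_origin, y_origin, r=0.05):
    """I_x, I_y from the formulas of step 7.2, truncating to integers."""
    ix = int((x_trans - x_origin - 0.5 * r) / r) + 1
    iy = int((y_trans - y_origin - 0.5 * r) / r) + 1
    return ix, iy

def match_probability(points, occupied):
    """P_occupy = i_c / (i_c + i_e) over the fused point cloud."""
    ic = ie = 0
    for x, y in points:
        if grid_index(x, y, 0.0, 0.0) in occupied:
            ic += 1     # point falls on an occupied cell of the off-line map
        else:
            ie += 1     # unknown or free cell: count as a mismatch
    return ic / (ic + ie)

# one point lands on the single occupied cell, the other does not
p_occupy = match_probability([(0.1, 0.1), (0.3, 0.3)], occupied={(2, 2)})
```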
step 7.4: judging whether to relocalize according to a set threshold: when P_occupy is less than or equal to the set threshold, the pose of the robot at the current moment is considered to have accumulated a certain amount of error, so step 3 must be executed to correct the accumulated error;
step 7.5: repeating step 7.1 to step 7.4 at fixed intervals (ΔT = 5 s) to judge the error-accumulation state, thereby ensuring the positioning accuracy of the robot during travel. Step 5 to step 7 constitute the relative-pose positioning process during robot travel; the design diagram of the positioning process is shown in fig. 4.
To verify the effectiveness of the invention, the positioning performance was evaluated in both a simulation experiment and an indoor real-environment test, covering the two parts of global positioning and relative positioning. The tests mainly evaluate the robot's ability to accurately estimate its pose at an arbitrary position in the map given a prior map. The test environments are divided into a simulation environment and a real environment, each tested 20 times under the same conditions, evaluating the running efficiency and accuracy of the algorithm. The test results are as follows; in the tables, trans_x, trans_y, and trans_z respectively denote the translation errors in the x, y, and z directions, and ang_x, ang_y, and ang_z respectively denote the rotation angle errors about the x, y, and z directions:
Table 1: Global positioning accuracy comparison

Table 2: Global positioning efficiency comparison

Table 3: Relative positioning accuracy comparison
The experimental data show that the indoor robot positioning method based on the branch-and-bound algorithm proposed by the invention achieves good accuracy in both global positioning and relative positioning. Because practical engineering requires the robot positioning algorithm to localize continuously, real-time performance is a necessary condition; the running efficiency of the relative positioning algorithm is therefore not evaluated separately, and all algorithms involved in the experiment are capable of relative local positioning in real time.
Claims (8)
1. A method for positioning an indoor mobile robot based on a branch-and-bound algorithm is characterized by comprising the following steps:
step 1: collecting point cloud data of the surrounding environment of the robot, and converting the point cloud data into a robot body coordinate system;
step 2: converting a pre-constructed off-line map of the whole environment into a plurality of layers of grid maps with different resolutions;
step 3: taking the fused point cloud data as the input of a branch-and-bound algorithm, and searching the search tree in a depth-first manner for the leaf node with the highest score as the global initial pose of the robot;

step 4: taking the point cloud data obtained at the current moment as the input point cloud, the off-line map as the target point cloud, and the global initial pose of the robot as the initial pose of an iterative closest point algorithm, and obtaining the accurate global pose of the robot through the iterative closest point algorithm;

step 5: acquiring the pose information of the robot at the current moment as a pose estimate, and obtaining the relative pose of the robot during travel using a correlation scan matching algorithm;

step 6: accelerating loop detection through a branch-and-bound algorithm in a depth-first search manner, searching all constructed subgraphs for a similar match to the current robot pose, and optimizing the pose graph with a nonlinear optimization library;

step 7: judging the magnitude of the accumulated error of the robot at fixed intervals, returning to step 3 to correct the accumulated error if it exceeds a set threshold, and outputting the corrected robot pose.
2. The indoor mobile robot positioning method based on branch-and-bound algorithm according to claim 1, wherein the step 1 comprises:
step 1.1: collecting two groups of point cloud data of the surrounding environment of the position of the robot at the current moment through a depth camera and a laser radar respectively;
step 1.2: denoising point cloud data acquired by a depth camera;
step 1.3: and intercepting data points with a certain height from the denoised depth camera point cloud data, and projecting the point cloud data and the point cloud data acquired by the laser radar to a robot body coordinate system respectively to realize the fusion of the point cloud data.
3. The indoor mobile robot positioning method based on branch-and-bound algorithm according to claim 1, wherein the step 2 is specifically expressed as: starting from the grid map at the original resolution, each layer corresponds to a height value h, the height h of the original-resolution map is 0, and the height value increases layer by layer; the probability value of each grid in the grid map of height h is the maximum of the probabilities of the 2^h × 2^h grids around the corresponding grid of the original-resolution map.
4. The indoor mobile robot positioning method based on branch-and-bound algorithm as claimed in claim 1, wherein the step 3 comprises:
step 3.1: determining a node selection strategy, adopting a depth-first search manner;

step 3.2: determining a branching rule: each node in the tree is denoted by an integer array c = (x, y, θ, c_h), where c_h is the height of the tree layer in which the node is located; each node c satisfying c_h > 1 is split into 4 child nodes of height c_h − 1:

C_c = {(x + j_x·2^{c_h−1}, y + j_y·2^{c_h−1}, θ, c_h − 1) : j_x, j_y ∈ {0, 1}}

restricted to the boundary search ranges, which are given by:

s_x = ⌈S_x / r⌉, s_y = ⌈S_y / r⌉, s_θ = ⌈S_θ / ε_θ⌉

where s_x, s_y, s_θ represent the boundary search ranges in the translation and rotation directions respectively, S_x, S_y, S_θ represent the x-direction linear search window, the y-direction linear search window and the angular search window respectively, r represents the grid map resolution, and ε_θ denotes the angular step factor, expressed as:

ε_θ = arccos(1 − r² / (2·h_max²))

where h_max represents the maximum measured distance among all hit points of one frame of lidar data;
step 3.3: calculating the upper bound of the branch-and-bound algorithm:

score(c) = Σ_{i=1}^{K} M^{c_h}(T_{ξ_c}·s_i)

where M^{c_h} represents the grid map of the corresponding height c_h, constructed in advance from the off-line map; score(c) is the sum of the probability values of the current lidar data projected onto the grid M^{c_h}; ξ_c is the pose corresponding to node c, s_i are the hit points, and K is the number of hit points in one frame of lidar data;

step 3.4: recursively searching the upper bound of each node in each layer in a depth-first manner, and taking the leaf node with the best score as the best match of the robot pose to obtain the global initial pose of the robot:

ξ_global = ξ_0 + (r·x, r·y, ε_θ·θ)

where ξ_0 denotes the initial pose at the start of the search, ξ_global denotes the global initial pose obtained when the search finishes, and (x, y, θ) are the components of the best leaf node.
5. The indoor mobile robot positioning method based on branch-and-bound algorithm as claimed in claim 1, wherein the step 4 comprises:
step 4.1: converting the fused point cloud data into a world coordinate system;
step 4.2: calculating the centroids of the two sets of point cloud data in the world coordinate system:

μ_p = (1/N_p) Σ_{i=1}^{N_p} p_i,  μ_q = (1/N_q) Σ_{j=1}^{N_q} q_j

where μ_p and μ_q respectively denote the centroid of the fused point cloud data P and the centroid of the off-line-map point cloud data Q, N_p and N_q respectively denote the number of points in P and in Q, and p_i, q_j respectively denote the coordinates of the ith point in P and the jth point in Q;
step 4.3: designing the error function E(R, t) of the iterative closest point algorithm:

E(R, t) = (1/N) Σ_{i=1}^{N} || q_i − (R·p_i + t) ||²

where R is the rotation matrix of the transformation between the two point sets, t is the translation matrix, and N is the number of points participating in the point cloud transformation;
step 4.4: and (5) iterative solution of R and t is carried out by using an iterative closest point algorithm, iterative operation is stopped when a convergence condition is reached, and the output optimal solution is used as an accurate global pose of the robot.
6. The indoor mobile robot positioning method based on branch-and-bound algorithm as claimed in claim 1, wherein the step 5 comprises:
step 5.1: acquiring the linear acceleration and the angular velocity of the robot through an inertial sensor, acquiring the velocity of the robot through a wheel type odometer, and taking data obtained by fusing two groups of data through extended Kalman filtering as a pose estimation value of the current position of the robot;
step 5.2: converting data acquired by a laser radar at the current moment into a currently constructed sub-graph according to a pose estimation value of the current position of the robot;
step 5.3: designing the error function of the correlation scan matching algorithm as:

E(T_ε) = Σ_{i′=1}^{K} (1 − H_smooth(T_ε · s_{i′}))²

where H_smooth is a bicubic interpolation function used to smooth the grid probability values in the submap, T_ε is the transformation matrix corresponding to the pose estimate, s_{i′} is the i′th point of the current lidar frame, and K is the number of points contained in the lidar frame;
step 5.4: iterative optimization of an error function is carried out by utilizing a correlation scanning matching algorithm, iterative calculation is stopped when a convergence condition is met, and an output value is used as an optimal solution of current pose estimation;
step 5.5: updating probability values of grids in the subgraph;
H_new(x) = clamp(F^{−1}(F(H_old(x)) · F(p)))

where clamp is a range-limiting function, p is the probability of the grid cell being occupied, F(p) = p/(1 − p) is the ratio of the occupation probability to the free probability, H_old(x) is the grid probability value before the update, and H_new(x) is the updated grid probability value.
7. The indoor mobile robot positioning method based on branch-and-bound algorithm as claimed in claim 1, wherein the step 6 comprises:
step 6.1: taking the optimal solution of the current pose estimation as the input of a branch-and-bound algorithm, searching and matching with all constructed sub-images, and obtaining similar matching of the robot in all sub-images in a depth-first searching mode;
step 6.2: adding the similar match of the current robot pose to the pose graph as a loop constraint for optimization, the error function of the global pose graph optimization being designed as:

argmin_{ξ^m, ξ^s} (1/2) Σ_{ij} δ(E²(ξ_i^m, ξ_j^s; Σ_ij, ξ_ij))

where ξ_i^m is the pose corresponding to the ith subgraph, ξ_j^s is the pose corresponding to the jth frame of laser data, Σ_ij is the covariance between the ith subgraph pose and the jth lidar frame pose, ξ_ij is the relative pose between the ith subgraph and the jth lidar frame, δ is a robust kernel function, and E(·) denotes the residual function.
8. The indoor mobile robot positioning method based on branch-and-bound algorithm as claimed in claim 1, wherein the step 7 comprises:
step 7.1: converting the fused point cloud data into a map coordinate system according to the optimized relative pose of the robot;
step 7.2: calculating the index of the fused point cloud data on the grid map corresponding to the original resolution:
I_x = (x_trans − x_origin − 0.5·r)/r + 1

I_y = (y_trans − y_origin − 0.5·r)/r + 1

where I_x is the index in the x direction of the map coordinate system, I_y is the index in the y direction, x_trans and y_trans are the x- and y-direction translations of the transformed fused point cloud, x_origin and y_origin are the x and y coordinates of the grid map origin, and r is the grid map resolution;
step 7.3: calculating the matching probability P_occupy between the fused point cloud data and the grid map:

P_occupy = i_c / (i_c + i_e)

where i_c is the number of correctly matched points and i_e is the number of wrongly matched points among all point cloud data;

step 7.4: judging whether to relocalize according to a set threshold: when P_occupy is less than or equal to the set threshold, the pose of the robot at the current moment is considered to have accumulated a certain amount of error, so step 3 must be executed to correct the accumulated error;
step 7.5: repeating step 7.1 to step 7.4 at fixed intervals to judge the error-accumulation state, thereby ensuring the positioning accuracy of the robot during travel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110863303.6A CN113587933B (en) | 2021-07-29 | 2021-07-29 | Indoor mobile robot positioning method based on branch-and-bound algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113587933A true CN113587933A (en) | 2021-11-02 |
CN113587933B CN113587933B (en) | 2024-02-02 |
Family
ID=78251753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110863303.6A Active CN113587933B (en) | 2021-07-29 | 2021-07-29 | Indoor mobile robot positioning method based on branch-and-bound algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113587933B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114279434A (en) * | 2021-12-27 | 2022-04-05 | 驭势科技(北京)有限公司 | Picture construction method and device, electronic equipment and storage medium |
CN114593737A (en) * | 2022-03-11 | 2022-06-07 | 美智纵横科技有限责任公司 | Control method, control device, robot and storage medium |
CN115451942A (en) * | 2022-09-26 | 2022-12-09 | 西南交通大学 | Vehicle bottom inspection robot SLAM loop detection method based on wheel characteristics |
CN115880673A (en) * | 2023-02-22 | 2023-03-31 | 西南石油大学 | Obstacle avoidance method and system based on computer vision |
CN115950427A (en) * | 2022-12-23 | 2023-04-11 | 合肥中科深谷科技发展有限公司 | Robot navigation scanning matching algorithm based on combination of ICP (inductively coupled plasma) and genetic optimization |
CN117031481A (en) * | 2023-08-14 | 2023-11-10 | 北京数字绿土科技股份有限公司 | Mobile robot repositioning method and system based on projection 3D laser point cloud |
CN117330083A (en) * | 2023-12-01 | 2024-01-02 | 深圳市好奇心探索科技有限公司 | Robot positioning method, robot, and readable storage medium |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080059056A1 (en) * | 2006-08-31 | 2008-03-06 | Raytheon Company | Method for realtime scaling of the vehicle routing problem |
US20120209652A1 (en) * | 2011-02-14 | 2012-08-16 | Deepak Khosla | System and method for resource allocation and management |
CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information |
CN110163968A (en) * | 2019-05-28 | 2019-08-23 | 山东大学 | RGBD camera large-scale three dimensional scenario building method and system |
KR102029850B1 (en) * | 2019-03-28 | 2019-10-08 | 세종대학교 산학협력단 | Object detecting apparatus using camera and lidar sensor and method thereof |
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN111540005A (en) * | 2020-04-21 | 2020-08-14 | 南京理工大学 | Loop detection method based on two-dimensional grid map |
CN111693047A (en) * | 2020-05-08 | 2020-09-22 | 中国航空工业集团公司西安航空计算技术研究所 | Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene |
AU2020101833A4 (en) * | 2019-12-03 | 2020-09-24 | Wuhan University Of Science And Technology | Laser slam method based on subgraph merging and pose optimization |
WO2020237693A1 (en) * | 2019-05-31 | 2020-12-03 | 华南理工大学 | Multi-source sensing method and system for water surface unmanned equipment |
US20210010824A1 (en) * | 2018-03-19 | 2021-01-14 | Tomtom Traffic B.V. | Methods and Systems for Generating Parking Routes |
CN112258600A (en) * | 2020-10-19 | 2021-01-22 | 浙江大学 | Simultaneous positioning and map construction method based on vision and laser radar |
CN112258618A (en) * | 2020-11-04 | 2021-01-22 | 中国科学院空天信息创新研究院 | Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map |
US10962630B1 (en) * | 2019-10-18 | 2021-03-30 | Toyota Research Institute, Inc. | System and method for calibrating sensors of a sensor system |
CN112785702A (en) * | 2020-12-31 | 2021-05-11 | 华南理工大学 | SLAM method based on tight coupling of 2D laser radar and binocular camera |
CN112985416A (en) * | 2021-04-19 | 2021-06-18 | 湖南大学 | Robust positioning and mapping method and system based on laser and visual information fusion |
Non-Patent Citations (6)
Title |
---|
LIU Zhongze: "Simultaneous Localization and Mapping for Unmanned Platforms in Off-Road Environments", Acta Armamentarii
LIU Pengxin; WANG Yang: "Data Alignment Method for a Multi-Sensor Integrated Measurement System", Journal of South China University of Technology (Natural Science Edition), no. 09
ZHANG Weiwei: "Localization and Mapping Method Fusing Laser and Visual Point Cloud Information", Computer Applications and Software, pages 1-6
LI Min: "Improvement of a Graph-SLAM Back-End Optimization Algorithm Based on the Depth-First-Search Branch-and-Bound Method", Techniques of Automation and Applications, pages 1-5
LI Xin: "Fast ICP-SLAM Method Based on Multi-Resolution Search and Multi-Point-Cloud Density Matching", Robot, pages 1-12
CAI Xianjie: "Fine Point Cloud Registration Algorithm Based on Normal Projection in a Local Coordinate System", Modern Computer (Professional Edition), no. 26
Also Published As
Publication number | Publication date |
---|---|
CN113587933B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113587933B (en) | Indoor mobile robot positioning method based on branch-and-bound algorithm | |
CN108444482B (en) | Unmanned aerial vehicle autonomous path finding and obstacle avoiding method and system | |
CN110930495A (en) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium | |
CN109186606B (en) | Robot composition and navigation method based on SLAM and image information | |
CN110501017A (en) | Map generation method for mobile robot navigation based on ORB_SLAM2 | |
CN113168717A (en) | Point cloud matching method and device, navigation method and equipment, positioning method and laser radar | |
CN113781582A (en) | Simultaneous localization and mapping method based on combined calibration of laser radar and inertial navigation | |
CN113432600A (en) | Robot simultaneous localization and mapping method and system based on multiple information sources | |
CN114004869A (en) | Positioning method based on 3D point cloud registration | |
CN113985429B (en) | Unmanned aerial vehicle environment scanning and reconstructing method based on three-dimensional laser radar | |
CN113345008B (en) | Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation | |
CN113409410A (en) | Multi-feature fusion IGV positioning and mapping method based on 3D laser radar | |
CN110986956B (en) | Autonomous learning global positioning method based on improved Monte Carlo algorithm | |
CN114119920A (en) | Three-dimensional point cloud map construction method and system | |
Liu et al. | Real-time 6D LiDAR SLAM in large-scale natural terrains for UGV | |
CN113514843A (en) | Multi-subgraph laser radar positioning method and system and terminal | |
CN111273312A (en) | Intelligent vehicle positioning and loop-back detection method | |
WO2024120269A1 (en) | Position recognition method for fusing point cloud map, motion model and local feature | |
CN114596360A (en) | Two-stage active simultaneous localization and mapping algorithm based on graph topology | |
CN115131514A (en) | Method, device and system for simultaneously positioning and establishing image and storage medium | |
CN111812978A (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles | |
CN113034504B (en) | Plane feature fusion method in SLAM mapping process | |
CN113960614A (en) | Elevation map construction method based on frame-map matching | |
CN113503876A (en) | Multi-sensor fusion laser radar positioning method, system and terminal | |
CN116679307A (en) | Urban rail transit inspection robot positioning method based on three-dimensional laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||