CN115855062A - Autonomous mapping and path planning method for indoor mobile robot - Google Patents

Autonomous mapping and path planning method for indoor mobile robot

Info

Publication number
CN115855062A
CN115855062A (application CN202211563757.2A)
Authority
CN
China
Prior art keywords
point cloud
mobile robot
laser radar
grid
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211563757.2A
Other languages
Chinese (zh)
Inventor
欧阳奇
张兴兰
范月欣
杨伟静
林燕花
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology filed Critical Chongqing University of Technology
Priority to CN202211563757.2A priority Critical patent/CN115855062A/en
Publication of CN115855062A publication Critical patent/CN115855062A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides an autonomous mapping and path planning method for an indoor mobile robot. Indoor environment information is collected by an explosion-proof laser radar and a strapdown inertial navigation unit mounted on the mobile robot, and the two sensors are jointly calibrated with an external-parameter calibration tool. The mobile robot is then remotely controlled to travel a closed-loop route at uniform speed through the indoor environment along a preset path. High-frequency data from the strapdown inertial navigation unit are used to correct motion distortion in the laser radar point clouds, which are then processed by combined filtering; pose estimation fuses the multi-sensor information with an unscented Kalman filter; frame-to-frame point cloud matching uses the NDT_OMP registration algorithm; a plane detection algorithm fits plane coefficients to add a ground detection constraint to mapping; and the A* algorithm autonomously plans paths on the global map. The method achieves accurate indoor navigation, improves the working efficiency of the indoor mobile robot and reduces labor costs.

Description

Autonomous mapping and path planning method for indoor mobile robot
Technical Field
The invention relates to the technical field of mobile robots in indoor environments, and in particular to an autonomous mapping and path planning method for an indoor mobile robot.
Background
The mobile robot is widely used in the service industry, the medical field, the military field and elsewhere, and has broad application prospects. However, the inventors found through research that many indoor mapping algorithms are computationally heavy and produce ghosting when mapping narrow spaces or complex environments, which hinders path planning for the mobile robot. A method that can autonomously map an indoor environment and plan the robot's path, and that completes multi-sensor fusion quickly, accurately and stably, is therefore of great significance.
Disclosure of Invention
The invention provides an autonomous mapping and path planning method for an indoor mobile robot that maps the indoor environment accurately and completes multi-sensor-fusion path planning for the mobile robot quickly, accurately and stably.
In order to solve the technical problems, the invention adopts the following technical scheme:
An autonomous mapping and path planning method for an indoor mobile robot uses a plurality of sensors, namely an explosion-proof laser radar and a strapdown inertial navigation unit, both mounted on a hydraulically driven quadruped mobile robot and electrically connected to it; the initial pose of the mobile robot in the indoor environment is set. The method comprises the following steps (a high-level processing sketch follows the step list):
S1, initialization: remotely move the robot to the indoor starting point and establish communication between the mobile robot and a computer;
S2, with the mobile robot stationary at the starting position, jointly calibrate the explosion-proof laser radar and the strapdown inertial navigation unit; specifically, calibrate the transform between the two sensor coordinate systems with the lidar_align external-parameter calibration tool;
S3, remotely drive the mobile robot along a preset route through the indoor environment, from the starting point back to the starting point, in closed-loop uniform motion;
S4, correct the motion distortion of the point cloud data collected by the explosion-proof laser radar using the high-frequency data collected by the strapdown inertial navigation unit, to improve the quality of the indoor laser radar point cloud, and apply combined filtering (voxel filtering, radius filtering and conditional filtering) to the raw laser radar data to remove outliers and obtain down-sampled point cloud information;
S5, estimate the pose by fusing the multi-sensor information with an unscented Kalman filter to obtain a pose matrix; the strapdown inertial navigation data serve as the prediction and the explosion-proof laser radar data serve as the observation;
S6, match consecutive point cloud frames of the distortion-corrected data from step S4 with the NDT_OMP point cloud registration algorithm, in preparation for mapping;
S7, fit plane coefficients to the filtered point cloud data from step S4 with a plane detection algorithm and add a ground detection constraint to mapping, improving mapping accuracy;
S8, receive the target point information sent by the host computer and autonomously plan a path on the global map with the A* algorithm;
S9, the remotely controlled mobile robot finishes the navigation and waits for the next instruction.
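To make the data flow between steps S1-S9 easier to follow, the sketch below arranges them as a minimal processing skeleton in Python. All function names and signatures are illustrative placeholders, not interfaces defined by the invention.

```python
# Skeleton of steps S2-S8; every name here is a hypothetical placeholder.

def calibrate_extrinsics(lidar_frames, imu_samples):
    """S2: joint lidar/IMU calibration (e.g. with lidar_align); returns a 4x4 extrinsic transform."""
    raise NotImplementedError

def correct_and_filter(scan, imu_rates):
    """S4: undistort one scan with the IMU angular rates, then apply conditional,
    voxel and radius filtering."""
    raise NotImplementedError

def ukf_fuse(imu_prediction, lidar_observation):
    """S5: fuse the IMU prediction and the lidar observation with an unscented Kalman filter."""
    raise NotImplementedError

def ndt_omp_match(reference_cloud, target_cloud, initial_guess):
    """S6: frame-to-frame registration with the NDT_OMP algorithm."""
    raise NotImplementedError

def add_ground_constraint(cloud, map_state):
    """S7: fit the ground plane and add it as a mapping constraint."""
    raise NotImplementedError

def plan_path(occupancy_grid, start, goal):
    """S8: A* search over the occupancy grid to the target point."""
    raise NotImplementedError
```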
Further, the pose matrix obtained in step S5 by fusing the multi-sensor information, like the transform between the explosion-proof laser radar coordinate system and the strapdown inertial navigation coordinate system, consists mainly of a rotation matrix and a translation matrix; the two sensor coordinate systems are calibrated with the lidar_align external-parameter calibration tool.
Further, the point cloud distortion correction and combined filtering of step S4 proceed as follows: the laser radar point cloud data are distortion-corrected using the angular velocity measured by the strapdown inertial navigation unit, then conditional filtering removes outliers from the corrected data, followed by voxel filtering and finally radius filtering; the specific steps are:
S41, first acquire the strapdown inertial navigation data; the angular velocity is
IMU_Agu(ang_x, ang_y, ang_z)
where ang_x, ang_y and ang_z are the components of the angular velocity along the x, y and z axes. Each laser point is corrected by rotating it with the rotation accumulated over its time offset within the scan, i.e. the corrected point cloud coordinates are
p_i' = q(ω·Δt) ⊗ p_i ⊗ q(ω·Δt)^{-1}
where q(·) denotes the conversion of the rotation angle into a quaternion, ω is the measured angular velocity, Δt = scan_period·i/n (i = 1, 2, ..., n), scan_period is the scan period of one laser line, and p(x_i, y_i, z_i) is the coordinate of each point in the current-frame laser radar coordinate system;
S42, down-sample the laser point cloud. Conditional filtering removes points that are too close to or too far from the mobile robot. Voxel filtering computes the smallest cube that just contains all points of the current frame, divides it into small cubes at a preset resolution, and approximates the points inside each small cube by the coordinates of their centroid:
x̄ = (1/a) Σ_{i=1}^{a} x_i,  ȳ = (1/a) Σ_{i=1}^{a} y_i,  z̄ = (1/a) Σ_{i=1}^{a} z_i
where a is the number of points in each small cube;
Radius filtering presets a filtering radius and a point count threshold. All points are traversed and, for each point, the Euclidean distances to its neighbours within the given radius are computed; if the number of neighbours closer than the filtering radius is below the threshold, the point is removed as an outlier, otherwise it is kept for the subsequent mapping and localization modules. A combined-filter sketch is given below.
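A minimal numpy sketch of the combined filter described in steps S41-S42 follows. The radii, voxel size and neighbour threshold are illustrative assumptions, not values taken from the patent, and the brute-force neighbour search stands in for the k-d tree an actual implementation would use.

```python
import numpy as np

def conditional_filter(points, r_min=0.5, r_max=30.0):
    """Drop points that are too close to or too far from the robot (conditional filter)."""
    d = np.linalg.norm(points, axis=1)
    return points[(d > r_min) & (d < r_max)]

def voxel_filter(points, voxel=0.1):
    """Replace all points falling in one voxel by their centroid (voxel down-sampling)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    counts = np.zeros(inv.max() + 1)
    np.add.at(sums, inv, points)
    np.add.at(counts, inv, 1.0)
    return sums / counts[:, None]

def radius_filter(points, radius=0.5, min_neighbors=5):
    """Remove points with fewer than min_neighbors other points within `radius` (outlier removal)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = (d < radius).sum(axis=1) - 1   # exclude the point itself
    return points[neighbors >= min_neighbors]

cloud = np.random.rand(1000, 3) * 20.0          # stand-in for one lidar frame
cloud = radius_filter(voxel_filter(conditional_filter(cloud)))
```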
Further, in step S5 the mobile robot fuses the multi-sensor information with an unscented Kalman filter to obtain the pose matrix, through the following steps:
S51, the state equation and observation equation of the robot system are
X_{k+1} = f(X_k, W_k)
Z_k = h(X_k, V_k)
where X_{k+1} relates the point cloud of frame k to that of frame k+1, f is the state equation function of the nonlinear system, Z_k is the observation of the system, h is the observation equation function of the nonlinear system, and W_k and V_k are the current noise of the state equation and of the observation equation respectively;
S52, construct 2n+1 Sigma points and their weights:
χ^(0) = X̄
χ^(i) = X̄ + (√((n+λ)C))_i,  i = 1, ..., n
χ^(i) = X̄ - (√((n+λ)C))_{i-n},  i = n+1, ..., 2n
where X̄ is the mean of the sample points, C is the covariance matrix of the current state, λ is a scaling parameter used to reduce the total prediction error, and n is the number of parameters to be estimated;
S53, propagate the 2n+1 Sigma points through the state equation to obtain the predictions for step k+1:
χ^(i)_{k+1|k} = f(χ^(i)_k),  i = 0, 1, ..., 2n
S54, from the 2n+1 predictions and the weight of each prediction:
W_m^(0) = W_c^(0) = λ/(n+λ),  W_m^(i) = W_c^(i) = 1/(2(n+λ)),  i = 1, ..., 2n
where subscript m denotes weights for the mean of the step-k+1 prediction and subscript c denotes weights for its covariance; substituting the weights gives the predicted mean and covariance matrix of the system state at step k+1:
X̂_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) χ^(i)_{k+1|k}
P_{k+1|k} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(χ^(i)_{k+1|k} - X̂_{k+1|k})^T + Q
where Q is the prediction noise matrix;
S55, from the predicted mean X̂_{k+1|k} and covariance matrix P_{k+1|k}, apply the unscented transform again to generate a new set of 2n+1 Sigma points χ^(i)_{k+1|k}, i = 0, ..., 2n;
S56, substitute the new Sigma points into the observation equation to obtain the predicted observations for step k+1:
Z^(i)_{k+1|k} = h(χ^(i)_{k+1|k}),  i = 0, 1, ..., 2n
S57, from these predicted observations compute their mean and covariance matrices:
Ẑ_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) Z^(i)_{k+1|k}
P_{zz} = Σ_{i=0}^{2n} W_c^(i) (Z^(i)_{k+1|k} - Ẑ_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T + R
P_{xz} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T
where R is the observation noise matrix;
S58, compute the Kalman gain matrix of step k+1, used to adjust the proportional relation between the predicted and observed values:
K_{k+1} = P_{xz} P_{zz}^{-1}
S59, update the state X̂_{k+1|k+1} and covariance C_{k+1|k+1} of the robot system:
X̂_{k+1|k+1} = X̂_{k+1|k} + K_{k+1}(Z_{k+1} - Ẑ_{k+1|k})
C_{k+1|k+1} = P_{k+1|k} - K_{k+1} P_{zz} K_{k+1}^T
where Z_{k+1} is the observation at step k+1. A compact numerical sketch follows.
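The following numpy sketch implements one predict/update cycle of the unscented Kalman filter described in steps S51-S59, using the simple weights W^(0) = λ/(n+λ) and W^(i) = 1/(2(n+λ)). The toy state and observation models at the end are illustrative assumptions; the patent's own state vector (the fused lidar/IMU pose) is not reproduced here.

```python
import numpy as np

def sigma_points(x, C, lam):
    """Build the 2n+1 sigma points and their weights from mean x and covariance C."""
    n = x.size
    S = np.linalg.cholesky((n + lam) * C)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)
    return np.array(pts), w

def ukf_step(x, C, z, f, h, Q, R, lam=1.0):
    """One predict/update cycle of the unscented Kalman filter (steps S52-S59)."""
    X, w = sigma_points(x, C, lam)
    Xp = np.array([f(p) for p in X])                       # propagate through the state equation
    x_pred = w @ Xp
    P = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - x_pred))
    X2, w = sigma_points(x_pred, P, lam)                   # re-draw sigma points (S55)
    Zp = np.array([h(p) for p in X2])                      # predicted observations (S56)
    z_pred = w @ Zp
    Pzz = R + sum(wi * np.outer(d, d) for wi, d in zip(w, Zp - z_pred))
    Pxz = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w, X2 - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)                           # Kalman gain (S58)
    return x_pred + K @ (z - z_pred), P - K @ Pzz @ K.T    # state and covariance update (S59)

# toy usage: constant-velocity state [position, velocity], position observation
f = lambda s: np.array([s[0] + 0.1 * s[1], s[1]])
h = lambda s: s[:1]
x, C = np.zeros(2), np.eye(2)
x, C = ukf_step(x, C, np.array([0.3]), f, h, Q=0.01 * np.eye(2), R=0.1 * np.eye(1))
```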
Further, step S6 matches consecutive point cloud frames with the NDT_OMP point cloud registration algorithm through the following steps:
S61, fit the given reference point cloud with normal distributions by computing the mean and covariance of the points falling in each grid cell of the reference cloud:
q_j = (1/m) Σ_{k=1}^{m} x_k
Σ_j = (1/m) Σ_{k=1}^{m} (x_k - q_j)(x_k - q_j)^T
where x_k are the m reference points in cell j;
S62, assume the current pose transform is T_l^w; compute the coordinates of the target point cloud in the coordinate system of the previous reference frame, determine which normal distribution of the reference cloud each target point falls into, and evaluate the probability that the point satisfies that normal distribution. The target point transformed into the reference frame is written X_i' = T(X_i, P), where X_i is the point coordinate and P collects the transform parameters (translation and rotation) from the target cloud to the reference cloud; the objective function is
score(P) = Σ_i exp( -(X_i' - q_i)^T Σ_i^{-1} (X_i' - q_i) / 2 )
The optimization is usually posed as a minimization; Newton's method iteratively finds the optimal parameters by solving H ΔP = -g, where g is the transposed gradient of score(P) and H its Hessian. To avoid sign confusion the per-point term is written
s = -exp( -q^T Σ^{-1} q / 2 ),  where q = X_i' - q_i.
A sketch of the cell statistics and scoring is given below.
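The sketch below illustrates steps S61-S62 only: it builds the per-cell normal distributions of a reference cloud and evaluates score(P) for a candidate transform. The cell size, minimum point count per cell and covariance regularization term are illustrative assumptions, and the gradient/Hessian computation and OpenMP parallelization of the actual NDT_OMP implementation are not shown.

```python
import numpy as np

def ndt_cells(reference, cell=1.0):
    """Mean and covariance of the reference cloud in each grid cell (step S61)."""
    keys = np.floor(reference / cell).astype(np.int64)
    cells = {}
    for k, p in zip(map(tuple, keys), reference):
        cells.setdefault(k, []).append(p)
    return {k: (np.mean(v, axis=0), np.cov(np.array(v).T) + 1e-6 * np.eye(3))
            for k, v in cells.items() if len(v) >= 5}

def ndt_score(cells, target, R, t, cell=1.0):
    """score(P): sum of Gaussian likelihoods of the transformed target points (step S62)."""
    score = 0.0
    for p in target:
        q = R @ p + t                                  # transform into the reference frame
        key = tuple(np.floor(q / cell).astype(np.int64))
        if key in cells:
            mu, cov = cells[key]
            d = q - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score

ref = np.random.rand(500, 3) * 5.0
tgt = ref + 0.05                                        # slightly shifted copy of the reference
print(ndt_score(ndt_cells(ref), tgt, np.eye(3), np.zeros(3)))
```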
Further, in step S7 plane coefficients are fitted with a plane detection algorithm and a ground detection constraint is added to mapping, specifically:
Assume that in the world coordinate system X_w Y_w Z_w the global plane π1 has the general equation Ax + By + Cz + D = 0, so a plane is described by four parameters; the laser radar coordinate system is X_l Y_l Z_l, and the transform between the laser radar and the world coordinate system consists of a rotation matrix R and a translation matrix T. From the parametric equation of the global plane and the pose of the laser radar, the parametric equation of the global plane in the laser radar coordinate system can be derived. Let a(x_a, y_a, z_a) be a point on the global plane and n = (x_n, y_n, z_n) its normal vector; the parametric equation of the plane is
x_n x + y_n y + z_n z - (x_n x_a + y_n y_a + z_n z_a) = 0
The plane normal n expressed in the laser radar coordinate system is n' = (x'_n, y'_n, z'_n) = R·n, and the point a(x_a, y_a, z_a) expressed in the laser radar coordinate system is
a'(x'_a, y'_a, z'_a) = R·a(x_a, y_a, z_a) + T
so the expression of the global plane in the laser radar coordinate system is
x'_n x + y'_n y + z'_n z - (x'_n x'_a + y'_n y'_a + z'_n z'_a) = 0
The plane detected by the RANSAC fitting function is
x_d x + y_d y + z_d z + D_d = 0
Two plane equations in the laser radar coordinate system are thus obtained, but because of measurement errors their coefficients differ, so an error equation is needed to measure how far they differ. In the laser radar coordinate system there are two fitted planes: one is the globally consistent ground transformed into the radar coordinate system, the other is the plane fitted in the laser radar coordinate system, each with its own normal vector. To express the error quantitatively, a rotation matrix R_x is constructed that rotates the normal of the globally consistent ground, expressed in the laser radar coordinate system, onto the x-axis [rotation-matrix formula given as an image in the source]; its rotation angle is α. The same rotation is applied to the plane equation fitted by the RANSAC algorithm [formula given as an image in the source]; its rotation angle is β, and the error is expressed by the included angle of the two transformed normal vectors, with α = arctan2(y'_d, x'_d) and β given by the corresponding expression [image in the source]. A RANSAC plane-fitting sketch is given below.
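A small numpy sketch of RANSAC plane fitting, as used by the plane detection step, is shown below. The iteration count, inlier threshold and the synthetic test data are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    """Fit a plane a*x + b*y + c*z + d = 0 by RANSAC; returns (normal, d)."""
    best_inliers, best = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                    # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), (n, d)
    return best

# ground plane z = 0 plus noise and a few outliers
pts = np.c_[np.random.rand(300, 2) * 5.0, 0.02 * np.random.randn(300)]
pts = np.vstack([pts, np.random.rand(20, 3) * 5.0])
normal, d = ransac_plane(pts)
angle = np.arccos(abs(normal @ np.array([0.0, 0.0, 1.0])))   # error angle vs. the true normal
```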
Further, in step S8 the mobile robot autonomously plans a path on the global map with the A* algorithm through the following steps (a runnable sketch follows the list):
S81, check whether the initial point and the target point are on the map, whether they are the same point, and whether either of them is an obstacle;
S82, add the initial grid S to the open list, which stores the grids still to be examined;
S83, add the grids around grid S to the open list;
S84, delete grid S from the open list and put it into the closed list, which stores grids that need not be examined again;
S85, compute the total cost value F of each surrounding grid; the preset cost of moving one grid horizontally or vertically is 10 and of moving one grid diagonally is 14; the total cost F of a grid is the sum of the cost G from the initial point to that grid and the cost H from that grid to the target point, where H is computed with the Manhattan distance, ignoring obstacles and counting only horizontal and vertical moves;
S86, select the grid Q with the lowest total cost F from the open list, delete it from the open list and add it to the closed list;
S87, examine the grids around grid Q, skipping obstacle grids and grids already in the closed list; if a neighbouring grid is not in the open list, add it, compute its total cost F and set its parent node to Q; if it is already in the open list, recompute the cost G of reaching it through Q, decide whether G must be updated, and if so update the grid's parent node information and total cost F;
S88, repeat steps S86 and S87;
S89, when the target point appears in the open list the path has been found; when the open list contains no more grid nodes, no path exists between the initial grid and the target grid.
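The following sketch implements the grid A* search of steps S81-S89 with the stated move costs (10 straight, 14 diagonal) and the Manhattan heuristic. The grid encoding (0 = free, 1 = obstacle) and the toy map are illustrative assumptions.

```python
import heapq

def a_star(grid, start, goal):
    """A* on an 8-connected occupancy grid: 10 per straight move, 14 per diagonal,
    Manhattan-distance heuristic, as described in steps S81-S89."""
    moves = [(1, 0, 10), (-1, 0, 10), (0, 1, 10), (0, -1, 10),
             (1, 1, 14), (1, -1, 14), (-1, 1, 14), (-1, -1, 14)]
    h = lambda p: 10 * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    open_list = [(h(start), 0, start)]
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_list:
        _, g, cur = heapq.heappop(open_list)
        if cur == goal:                                   # target reached: rebuild the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        for dx, dy, c in moves:
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1 or nxt in closed:  # obstacle or already expanded
                continue
            ng = g + c
            if ng < g_cost.get(nxt, float("inf")):          # update G and parent if cheaper
                g_cost[nxt], parent[nxt] = ng, cur
                heapq.heappush(open_list, (ng + h(nxt), ng, nxt))
    return None                                             # open list empty: no path exists

grid = [[0] * 6 for _ in range(6)]
grid[2][1] = grid[2][2] = grid[2][3] = 1                     # a wall of obstacle cells
print(a_star(grid, (0, 0), (5, 5)))
```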
Compared with the prior art, in the autonomous mapping and path planning method for an indoor mobile robot provided by the invention, the mobile robot collects indoor environment information with the sensors it carries, the explosion-proof laser radar and the strapdown inertial navigation unit are jointly calibrated, and the robot is then remotely driven along a closed-loop route indoors, keeping the speed as constant as possible. The indoor point cloud data collected along the way are distortion-corrected and filtered with the fused filters to remove outlier and noise points, the improved NDT_OMP point cloud registration algorithm builds the indoor map, loop-closure detection removes the accumulated error of local mapping, the unscented Kalman filter estimates the robot pose, and the A* algorithm autonomously plans the motion path. The method speeds up the computation, improves the efficiency of mapping and path planning of the mobile robot, and reduces labor costs.
Drawings
Fig. 1 is a flowchart of an autonomous mapping and path planning method for an indoor mobile robot according to the present invention.
FIG. 2 is a schematic diagram of the filtering of a point cloud provided by the present invention.
Fig. 3 is a schematic illustration of NDT provided by the present invention.
Fig. 4 is a schematic diagram of the conversion from the world coordinate system to the lidar coordinate system provided by the present invention.
Fig. 5 is a schematic diagram of two fitting planes in the lidar coordinate system provided by the present invention.
Detailed Description
To make the technical means, features, objectives and effects of the invention easy to understand, the invention is further explained below with reference to the drawings.
Referring to fig. 1, the present invention provides an autonomous mapping and path planning method for an indoor mobile robot. The method uses a plurality of sensors, namely an explosion-proof laser radar and a strapdown inertial navigation unit, both mounted on a hydraulically driven quadruped mobile robot and electrically connected to it, and the initial pose of the mobile robot in the indoor environment is set. The explosion-proof laser radar may be an existing lidar of model LR-16FIS-C1 with a horizontal angular resolution of 0.09 degrees, and the explosion-proof strapdown inertial navigation unit may be an existing unit of model LPMS-IG1-485 with an angular resolution of 0.01 degrees. The method comprises the following steps:
S1, initialization: remotely move the robot to the indoor starting point and establish communication between the mobile robot and a computer;
S2, with the mobile robot stationary at the starting position, jointly calibrate the explosion-proof laser radar and the strapdown inertial navigation unit; specifically, calibrate the transform between the two sensor coordinate systems with the lidar_align external-parameter calibration tool;
S3, remotely drive the mobile robot along a preset route through the indoor environment, from the starting point back to the starting point, in closed-loop uniform motion;
S4, correct the motion distortion of the point cloud data collected by the explosion-proof laser radar using the high-frequency data collected by the strapdown inertial navigation unit, to improve the quality of the indoor laser radar point cloud, and apply combined filtering (voxel filtering, radius filtering and conditional filtering) to the raw laser radar data to remove outliers and obtain down-sampled point cloud information;
S5, estimate the pose by fusing the multi-sensor information with an unscented Kalman filter to obtain a pose matrix; the strapdown inertial navigation data serve as the prediction and the explosion-proof laser radar data serve as the observation;
S6, match consecutive point cloud frames of the distortion-corrected data from step S4 with the NDT_OMP point cloud registration algorithm, in preparation for mapping;
S7, fit plane coefficients to the filtered point cloud data from step S4 with a plane detection algorithm and add a ground detection constraint to mapping, improving mapping accuracy;
S8, receive the target point information sent by the host computer and autonomously plan a path on the global map with the A* algorithm;
S9, the remotely controlled mobile robot finishes the navigation and waits for the next instruction.
As a specific embodiment, the pose matrix obtained in step S5 by fusing the multi-sensor information, like the transform between the explosion-proof laser radar coordinate system and the strapdown inertial navigation coordinate system, consists mainly of a rotation matrix and a translation matrix; the two sensor coordinate systems are calibrated with the lidar_align external-parameter calibration tool.
As a specific embodiment, the point cloud distortion correction and combined filtering of step S4 proceed as follows: the laser radar point cloud data are distortion-corrected using the angular velocity measured by the strapdown inertial navigation unit, then conditional filtering removes outliers from the corrected data, followed by voxel filtering and finally radius filtering; the specific steps are:
S41, first acquire the strapdown inertial navigation data; the angular velocity is
IMU_Agu(ang_x, ang_y, ang_z)
where ang_x, ang_y and ang_z are the components of the angular velocity along the x, y and z axes. Each laser point is corrected by rotating it with the rotation accumulated over its time offset within the scan, i.e. the corrected point cloud coordinates are
p_i' = q(ω·Δt) ⊗ p_i ⊗ q(ω·Δt)^{-1}
where q(·) denotes the conversion of the rotation angle into a quaternion, ω is the measured angular velocity, Δt = scan_period·i/n (i = 1, 2, ..., n), scan_period is the scan period of one laser line, and p(x_i, y_i, z_i) is the coordinate of each point in the current-frame laser radar coordinate system; a sketch of this correction is given after step S42;
S42, down-sample the laser point cloud. Conditional filtering removes points that are too close to or too far from the mobile robot; as shown in FIG. 2, all magenta points in the figure are removed in the conditional filtering stage. Voxel filtering computes the smallest cube that just contains all points of the current frame, divides it into small cubes at a preset resolution, and approximates the points inside each small cube by the coordinates of their centroid:
x̄ = (1/a) Σ_{i=1}^{a} x_i,  ȳ = (1/a) Σ_{i=1}^{a} y_i,  z̄ = (1/a) Σ_{i=1}^{a} z_i
where a is the number of points in each small cube;
Radius filtering presets a filtering radius and a point count threshold. All points are traversed and, for each point, the Euclidean distances to its neighbours within the given radius are computed; if the number of neighbours closer than the filtering radius is below the threshold, the point is removed as an outlier (such as the blue points in FIG. 2), otherwise it is kept (such as the red points in FIG. 2) for the subsequent mapping and localization modules.
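A minimal sketch of the per-point distortion correction of step S41 follows. It assumes a constant angular rate over one scan and uses the axis-angle (Rodrigues) form of the rotation, which is equivalent to the quaternion rotation in the text; the scan period value is illustrative.

```python
import numpy as np

def undistort_scan(points, omega, scan_period=0.1):
    """Rotate each point of one scan by the rotation accumulated since the scan start,
    using the IMU angular rate omega = (ang_x, ang_y, ang_z) in rad/s."""
    n = len(points)
    out = np.empty_like(points)
    for i, p in enumerate(points):
        dt = scan_period * i / n                       # time offset of the i-th point
        angle = np.linalg.norm(omega) * dt
        if angle < 1e-12:
            out[i] = p
            continue
        k = omega / np.linalg.norm(omega)              # rotation axis
        # Rodrigues' rotation formula (equivalent to the quaternion rotation in the text)
        out[i] = (p * np.cos(angle)
                  + np.cross(k, p) * np.sin(angle)
                  + k * (k @ p) * (1 - np.cos(angle)))
    return out

scan = np.random.rand(100, 3)
corrected = undistort_scan(scan, omega=np.array([0.0, 0.0, 0.3]))
```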
As a specific embodiment, in step S5 the mobile robot fuses the multi-sensor information with an unscented Kalman filter to obtain the pose matrix, through the following steps:
S51, the state equation and observation equation of the robot system are
X_{k+1} = f(X_k, W_k)
Z_k = h(X_k, V_k)
where X_{k+1} relates the point cloud of frame k to that of frame k+1, f is the state equation function of the nonlinear system, Z_k is the observation of the system, h is the observation equation function of the nonlinear system, and W_k and V_k are the current noise of the state equation and of the observation equation respectively;
S52, construct 2n+1 Sigma points and their weights:
χ^(0) = X̄
χ^(i) = X̄ + (√((n+λ)C))_i,  i = 1, ..., n
χ^(i) = X̄ - (√((n+λ)C))_{i-n},  i = n+1, ..., 2n
where X̄ is the mean of the sample points, C is the covariance matrix of the current state, λ is a scaling parameter used to reduce the total prediction error, and n is the number of parameters to be estimated;
S53, propagate the 2n+1 Sigma points through the state equation to obtain the predictions for step k+1:
χ^(i)_{k+1|k} = f(χ^(i)_k),  i = 0, 1, ..., 2n
S54, from the 2n+1 predictions and the weight of each prediction:
W_m^(0) = W_c^(0) = λ/(n+λ),  W_m^(i) = W_c^(i) = 1/(2(n+λ)),  i = 1, ..., 2n
where subscript m denotes weights for the mean of the step-k+1 prediction and subscript c denotes weights for its covariance; substituting the weights gives the predicted mean and covariance matrix of the system state at step k+1:
X̂_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) χ^(i)_{k+1|k}
P_{k+1|k} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(χ^(i)_{k+1|k} - X̂_{k+1|k})^T + Q
where Q is the prediction noise matrix;
S55, from the predicted mean X̂_{k+1|k} and covariance matrix P_{k+1|k}, apply the unscented transform again to generate a new set of 2n+1 Sigma points χ^(i)_{k+1|k}, i = 0, ..., 2n;
S56, substitute the new Sigma points into the observation equation to obtain the predicted observations for step k+1:
Z^(i)_{k+1|k} = h(χ^(i)_{k+1|k}),  i = 0, 1, ..., 2n
S57, from these predicted observations compute their mean and covariance matrices:
Ẑ_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) Z^(i)_{k+1|k}
P_{zz} = Σ_{i=0}^{2n} W_c^(i) (Z^(i)_{k+1|k} - Ẑ_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T + R
P_{xz} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T
where R is the observation noise matrix;
S58, compute the Kalman gain matrix of step k+1, used to adjust the proportional relation between the predicted and observed values:
K_{k+1} = P_{xz} P_{zz}^{-1}
S59, finally update the state X̂_{k+1|k+1} and covariance C_{k+1|k+1} of the robot system:
X̂_{k+1|k+1} = X̂_{k+1|k} + K_{k+1}(Z_{k+1} - Ẑ_{k+1|k})
C_{k+1|k+1} = P_{k+1|k} - K_{k+1} P_{zz} K_{k+1}^T
where Z_{k+1} is the observation at step k+1.
As a specific embodiment, the NDT_OMP point cloud registration algorithm adopted in step S6 is a multi-thread optimized version of the NDT algorithm. As shown in fig. 3, the left side of the figure shows the given reference point cloud (the laser scan of the previous time) after it has been fitted with normal distributions. The matching between consecutive point cloud frames in step S6 specifically comprises the steps of:
S61, fit the given reference point cloud with normal distributions by computing the mean and covariance of the points falling in each grid cell of the reference cloud:
q_j = (1/m) Σ_{k=1}^{m} x_k
Σ_j = (1/m) Σ_{k=1}^{m} (x_k - q_j)(x_k - q_j)^T
where x_k are the m reference points in cell j; the blue ellipses in fig. 3 are the normal distribution ellipses of the corresponding Gaussian distributions, and the right side of the figure shows the target point cloud scanned at the current time;
S62, assume the current pose transform is T_l^w; compute the coordinates of the target point cloud in the coordinate system of the previous reference frame, determine which normal distribution of the reference cloud each target point falls into, and evaluate the probability that the point satisfies that normal distribution. The target point transformed into the reference frame is written X_i' = T(X_i, P), where X_i is the point coordinate and P collects the transform parameters (translation and rotation) from the target cloud to the reference cloud; the objective function is
score(P) = Σ_i exp( -(X_i' - q_i)^T Σ_i^{-1} (X_i' - q_i) / 2 )
The optimization is usually posed as a minimization; Newton's method iteratively finds the optimal parameters by solving H ΔP = -g, where g is the transposed gradient of score(P) and H its Hessian. To avoid sign confusion the per-point term is written
s = -exp( -q^T Σ^{-1} q / 2 ),  where q = X_i' - q_i.
A sketch of the Newton update is given below.
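The Newton update used by the NDT optimization, H ΔP = -g, reduces to one linear solve per iteration. The sketch below shows that single step; the numerical values of H and g are illustrative placeholders, since in practice they are accumulated analytically over all points.

```python
import numpy as np

# One Newton update for the NDT objective: solve H * dP = -g for the pose increment dP.
# H and g stand in for the Hessian and gradient of score(P); the values are illustrative only.
H = np.array([[4.0, 0.5, 0.1],
              [0.5, 3.0, 0.2],
              [0.1, 0.2, 2.0]])          # Hessian of score(P) w.r.t. a small pose parameter vector
g = np.array([0.4, -0.2, 0.05])          # transposed gradient of score(P)
dP = np.linalg.solve(H, -g)              # pose increment; added to P and iterated to convergence
```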
As a specific embodiment, in a flat indoor environment the motion of the mobile robot has no large jitter, and the elevation error can be reduced by adding a plane detection constraint. Because the number of points is large, not all of them need to take part in the plane detection algorithm, and to save computing resources the ground is detected every 10 s. In step S7 the plane coefficients are fitted with a plane detection algorithm and a ground detection constraint is added to mapping, specifically:
Assume that in the world coordinate system X_w Y_w Z_w the global plane π1 has the general equation Ax + By + Cz + D = 0, so a plane is described by four parameters; the laser radar coordinate system is X_l Y_l Z_l, and the transform between the laser radar and the world coordinate system consists of a rotation matrix R and a translation matrix T. As shown in fig. 4, from the parametric equation of the global plane and the pose of the laser radar, the parametric equation of the global plane in the laser radar coordinate system can be derived. Let a(x_a, y_a, z_a) be a point on the global plane and n = (x_n, y_n, z_n) its normal vector; the parametric equation of the plane is
x_n x + y_n y + z_n z - (x_n x_a + y_n y_a + z_n z_a) = 0
The plane normal n expressed in the laser radar coordinate system is n' = (x'_n, y'_n, z'_n) = R·n, and the point a(x_a, y_a, z_a) expressed in the laser radar coordinate system is
a'(x'_a, y'_a, z'_a) = R·a(x_a, y_a, z_a) + T
so the expression of the global plane in the laser radar coordinate system is
x'_n x + y'_n y + z'_n z - (x'_n x'_a + y'_n y'_a + z'_n z'_a) = 0
The plane detected by the RANSAC fitting function is
x_d x + y_d y + z_d z + D_d = 0
Two plane equations in the laser radar coordinate system are thus obtained, but because of measurement errors their coefficients differ, so an error equation is needed to measure how far they differ. As shown in fig. 5, there are two fitted planes in the laser radar coordinate system: the blue plane is the globally consistent ground transformed into the radar coordinate system, the red plane is the plane fitted in the laser radar coordinate system, and the normal vectors are drawn in the matching colors. To express the error quantitatively, a rotation matrix R_x is constructed that rotates the normal of the globally consistent ground, expressed in the laser radar coordinate system, onto the x-axis [rotation-matrix formula given as an image in the source]; its rotation angle is α. The same rotation is applied to the plane equation fitted by the RANSAC algorithm [formula given as an image in the source]; its rotation angle is β, and the error is expressed by the included angle of the two transformed normal vectors, with α = arctan2(y'_d, x'_d) and β given by the corresponding expression [image in the source]. A sketch of a normal-angle error check is given below.
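As a quick check of the same idea, the error between the globally consistent ground and the RANSAC-fitted plane can be summarized by the single included angle between their normals in the lidar frame. This collapses the two rotation angles α and β of the text into one number and is therefore a simplification, not the patent's exact error definition; the sample normals are illustrative.

```python
import numpy as np

def normal_angle_error(n_global, n_fitted):
    """Angle (rad) between the globally consistent ground normal and the RANSAC-fitted
    normal, both expressed in the lidar coordinate system."""
    a = n_global / np.linalg.norm(n_global)
    b = n_fitted / np.linalg.norm(n_fitted)
    return np.arccos(np.clip(abs(a @ b), -1.0, 1.0))   # abs(): plane normals have no sign

err = normal_angle_error(np.array([0.02, -0.01, 1.0]), np.array([0.0, 0.0, 1.0]))
```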
As a specific embodiment, in step S8 the mobile robot autonomously plans a path on the global map with the A* algorithm through the following steps:
S81, check whether the initial point and the target point are on the map, whether they are the same point, and whether either of them is an obstacle;
S82, add the initial grid S to the open list, which stores the grids still to be examined;
S83, add the grids around grid S to the open list;
S84, delete grid S from the open list and put it into the closed list, which stores grids that need not be examined again;
S85, compute the total cost value F of each surrounding grid; the preset cost of moving one grid horizontally or vertically is 10 and of moving one grid diagonally is 14; the total cost F of a grid is the sum of the cost G from the initial point to that grid and the cost H from that grid to the target point, where H is computed with the Manhattan distance, ignoring obstacles and counting only horizontal and vertical moves;
S86, select the grid Q with the lowest total cost F from the open list, delete it from the open list and add it to the closed list;
S87, examine the grids around grid Q, skipping obstacle grids and grids already in the closed list; if a neighbouring grid is not in the open list, add it, compute its total cost F and set its parent node to Q; if it is already in the open list, recompute the cost G of reaching it through Q, decide whether G must be updated, and if so update the grid's parent node information and total cost F;
S88, repeat steps S86 and S87;
S89, when the target point appears in the open list the path has been found; when the open list contains no more grid nodes, no path exists between the initial grid and the target grid.
The working principle of the invention is as follows:
(1) multi-sensor fusion of mobile robot in indoor environment
While the remotely controlled mobile robot walks, the explosion-proof laser radar and the strapdown inertial navigation unit are switched on and collect information about the indoor environment. The information from the two sensors is fused with an unscented Kalman filter: the strapdown inertial navigation data serve as the prediction of the unscented Kalman algorithm and the laser radar data serve as its observation, so the measured laser radar values correct the updated prediction and yield a consistent description of the measured target.
(2) Real-time mapping and positioning of mobile robot in indoor environment
The remotely controlled mobile robot walks one loop through the indoor environment so that its travelled path forms a closed loop. During the walk, the on-board explosion-proof laser radar and strapdown inertial navigation sensors collect real-time data of the indoor environment; the laser point cloud preprocessing method for indoor environments improves the point cloud quality and thus the mapping and localization accuracy, and the processed result is fed to the improved NDT_OMP algorithm and the unscented Kalman filtering module to perform mapping and path planning of the mobile robot in the indoor environment.
(3) Route planning of a mobile robot in an indoor environment
The target point position is sent to the mobile robot by the host computer; the mobile robot starts its path planning module, plans a path from the initial point to the target point, and sends motion commands to the robot to move to the target point.
Compared with the prior art, in the autonomous mapping and path planning method for an indoor mobile robot provided by the invention, the mobile robot collects indoor environment information with the sensors it carries, the explosion-proof laser radar and the strapdown inertial navigation unit are jointly calibrated, and the robot is then remotely driven along a closed-loop route indoors, keeping the speed as constant as possible. The collected indoor point cloud data are distortion-corrected and filtered with the fused filters to remove outlier and noise points, the improved NDT_OMP point cloud registration algorithm builds the indoor map, loop-closure detection removes the accumulated error of local mapping, the unscented Kalman filter estimates the robot pose, and the A* algorithm autonomously plans the motion path. The method speeds up the computation, improves the efficiency of mapping and path planning of the mobile robot, and reduces labor costs.
Finally, the above embodiments only illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the technical solution of the invention without departing from its spirit and scope, and all such changes are covered by the claims of the invention.

Claims (7)

1. An autonomous mapping and path planning method for an indoor mobile robot, characterized in that the method uses a plurality of sensors, namely an explosion-proof laser radar and a strapdown inertial navigation unit, both mounted on a hydraulically driven quadruped mobile robot and connected to it, and the initial pose of the mobile robot in the indoor environment is set; the method comprises the following steps:
S1, initialization: remotely move the robot to the indoor starting point and establish communication between the mobile robot and a computer;
S2, with the mobile robot stationary at the starting position, jointly calibrate the explosion-proof laser radar and the strapdown inertial navigation unit; specifically, calibrate the transform between the two sensor coordinate systems with the lidar_align external-parameter calibration tool;
S3, remotely drive the mobile robot along a preset route through the indoor environment, from the starting point back to the starting point, in closed-loop uniform motion;
S4, correct the motion distortion of the point cloud data of the explosion-proof laser radar using the high-frequency data collected by the strapdown inertial navigation unit, to improve the quality of the indoor laser radar point cloud, and apply combined filtering (voxel filtering, radius filtering and conditional filtering) to the raw laser radar data to remove outliers and obtain down-sampled point cloud information;
S5, estimate the pose by fusing the multi-sensor information with an unscented Kalman filter to obtain a pose matrix; the strapdown inertial navigation data serve as the prediction and the explosion-proof laser radar data serve as the observation;
S6, match consecutive point cloud frames of the distortion-corrected data from step S4 with the NDT_OMP point cloud registration algorithm, in preparation for mapping;
S7, fit plane coefficients to the filtered point cloud data from step S4 with a plane detection algorithm and add a ground detection constraint to mapping, improving mapping accuracy;
S8, receive the target point information sent by the host computer and autonomously plan a path on the global map with the A* algorithm;
S9, the remotely controlled mobile robot finishes the navigation and waits for the next instruction.
2. The autonomous mapping and path planning method for an indoor mobile robot according to claim 1, characterized in that the pose matrix obtained in step S5 by fusing the multi-sensor information, like the transform between the explosion-proof laser radar coordinate system and the strapdown inertial navigation coordinate system, consists mainly of a rotation matrix and a translation matrix, and the two sensor coordinate systems are calibrated with the lidar_align external-parameter calibration tool.
3. The autonomous mapping and path planning method for an indoor mobile robot according to claim 1, characterized in that the point cloud distortion correction and combined filtering of step S4 proceed as follows: the laser radar point cloud data are distortion-corrected using the angular velocity measured by the strapdown inertial navigation unit, then conditional filtering removes outliers from the corrected data, followed by voxel filtering and finally radius filtering; the specific steps are:
S41, first acquire the strapdown inertial navigation data; the angular velocity is
IMU_Agu(ang_x, ang_y, ang_z)
where ang_x, ang_y and ang_z are the components of the angular velocity along the x, y and z axes; the corrected point cloud coordinates are obtained by rotating each point with the quaternion of the accumulated rotation,
p_i' = q(ω·Δt) ⊗ p_i ⊗ q(ω·Δt)^{-1}
where q(·) denotes the conversion of the rotation angle into a quaternion, ω is the measured angular velocity, Δt = scan_period·i/n (i = 1, 2, ..., n), scan_period denotes the scan period of one laser line, and p(x_i, y_i, z_i) is the coordinate of each point in the current-frame laser radar coordinate system;
S42, down-sample the laser point cloud: conditional filtering removes points that are too close to or too far from the mobile robot; voxel filtering computes the smallest cube that just contains all points of the current frame, divides it into small cubes at a preset resolution, and approximates the points inside each small cube by the coordinates of their centroid:
x̄ = (1/a) Σ_{i=1}^{a} x_i,  ȳ = (1/a) Σ_{i=1}^{a} y_i,  z̄ = (1/a) Σ_{i=1}^{a} z_i
where a is the number of points in each small cube;
radius filtering presets a filtering radius and a point count threshold; all points are traversed and, for each point, the Euclidean distances to its neighbours within the given radius are computed; if the number of neighbours closer than the filtering radius is below the threshold, the point is removed as an outlier, otherwise it is kept for the subsequent mapping and localization modules.
4. The autonomous mapping and path planning method for an indoor mobile robot according to claim 1, characterized in that in step S5 the mobile robot fuses the multi-sensor information with an unscented Kalman filter to obtain the pose matrix, through the following steps:
S51, the state equation and observation equation of the robot system are
X_{k+1} = f(X_k, W_k)
Z_k = h(X_k, V_k)
where X_{k+1} relates the point cloud of frame k to that of frame k+1, f is the state equation function of the nonlinear system, Z_k is the observation of the system, h is the observation equation function of the nonlinear system, and W_k and V_k are the current noise of the state equation and of the observation equation respectively;
S52, construct 2n+1 Sigma points and their weights:
χ^(0) = X̄
χ^(i) = X̄ + (√((n+λ)C))_i,  i = 1, ..., n
χ^(i) = X̄ - (√((n+λ)C))_{i-n},  i = n+1, ..., 2n
where X̄ is the mean of the sample points, C is the covariance matrix of the current state, λ is a scaling parameter used to reduce the total prediction error, and n is the number of parameters to be estimated;
S53, propagate the 2n+1 Sigma points through the state equation to obtain the predictions for step k+1:
χ^(i)_{k+1|k} = f(χ^(i)_k),  i = 0, 1, ..., 2n
S54, from the 2n+1 predictions and the weight of each prediction:
W_m^(0) = W_c^(0) = λ/(n+λ),  W_m^(i) = W_c^(i) = 1/(2(n+λ)),  i = 1, ..., 2n
where subscript m denotes weights for the mean of the step-k+1 prediction and subscript c denotes weights for its covariance; substituting the weights gives the predicted mean and covariance matrix of the system state at step k+1:
X̂_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) χ^(i)_{k+1|k}
P_{k+1|k} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(χ^(i)_{k+1|k} - X̂_{k+1|k})^T + Q
where Q is the prediction noise matrix;
S55, from the predicted mean X̂_{k+1|k} and covariance matrix P_{k+1|k}, apply the unscented transform again to generate a new set of 2n+1 Sigma points χ^(i)_{k+1|k}, i = 0, ..., 2n;
S56, substitute the new Sigma points into the observation equation to obtain the predicted observations for step k+1:
Z^(i)_{k+1|k} = h(χ^(i)_{k+1|k}),  i = 0, 1, ..., 2n
S57, from these predicted observations compute their mean and covariance matrices:
Ẑ_{k+1|k} = Σ_{i=0}^{2n} W_m^(i) Z^(i)_{k+1|k}
P_{zz} = Σ_{i=0}^{2n} W_c^(i) (Z^(i)_{k+1|k} - Ẑ_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T + R
P_{xz} = Σ_{i=0}^{2n} W_c^(i) (χ^(i)_{k+1|k} - X̂_{k+1|k})(Z^(i)_{k+1|k} - Ẑ_{k+1|k})^T
where R is the observation noise matrix;
S58, compute the Kalman gain matrix of step k+1, used to adjust the proportional relation between the predicted and observed values:
K_{k+1} = P_{xz} P_{zz}^{-1}
S59, update the state X̂_{k+1|k+1} and covariance C_{k+1|k+1} of the robot system:
X̂_{k+1|k+1} = X̂_{k+1|k} + K_{k+1}(Z_{k+1} - Ẑ_{k+1|k})
C_{k+1|k+1} = P_{k+1|k} - K_{k+1} P_{zz} K_{k+1}^T
where Z_{k+1} is the observation at step k+1.
5. The autonomous mapping and path planning method for an indoor mobile robot according to claim 1, characterized in that step S6 matches consecutive point cloud frames with the NDT_OMP point cloud registration algorithm through the following steps:
S61, fit the given reference point cloud with normal distributions by computing the mean and covariance of the points falling in each grid cell of the reference cloud:
q_j = (1/m) Σ_{k=1}^{m} x_k
Σ_j = (1/m) Σ_{k=1}^{m} (x_k - q_j)(x_k - q_j)^T
where x_k are the m reference points in cell j;
S62, assume the current pose transform is T_l^w; compute the coordinates of the target point cloud in the coordinate system of the previous reference frame, determine which normal distribution of the reference cloud each target point falls into, and evaluate the probability that the point satisfies that normal distribution; the target point transformed into the reference frame is written X_i' = T(X_i, P), where X_i is the point coordinate and P collects the transform parameters (translation and rotation) from the target cloud to the reference cloud; the objective function is
score(P) = Σ_i exp( -(X_i' - q_i)^T Σ_i^{-1} (X_i' - q_i) / 2 )
the optimization is usually posed as a minimization; Newton's method iteratively finds the optimal parameters by solving H ΔP = -g, where g is the transposed gradient of score(P) and H its Hessian; to avoid sign confusion the per-point term is written
s = -exp( -q^T Σ^{-1} q / 2 ),  where q = X_i' - q_i.
6. The indoor mobile robot autonomous mapping and path planning method according to claim 1, wherein in step S7, a plane coefficient is fitted by a plane detection algorithm, and a ground detection constraint is added for mapping, specifically including:
assumed to be in the world coordinate system X w Y w Z w In the general equation of the global plane pi 1 is Ax + By + Cz + D =0, one plane can be described By four parameters, and the laser radar coordinate is X l Y l Z l Transformation of lidar into world coordinate system
Figure FDA0003985520000000064
R represents a rotation matrix, T represents a translation matrix, and a parameter equation of the global plane in a laser radar coordinate system can be solved according to the parameter equation of the global plane and the pose of the laser radar; suppose a (x) a ,y a ,z a ) Is a three-dimensional representation of a point on the global plane whose normal vector is ≥>
Figure FDA0003985520000000065
The parametric equation for the plane can be derived as:
x n x+y n y+z n z-(x n x a +y n y a +z n z a )=0
normal vector of plane
Figure FDA0003985520000000066
In the lidar coordinate system may be expressed as:
Figure FDA0003985520000000071
point a (x) a ,y a ,z a ) Expressed in the lidar coordinate system as:
a'(x' a ,y' a ,z' a )=R a (x a ,y a ,z a )+T
the expression of this global plane in the lidar coordinate system is then:
x' n x+y' n y+z' n z-(x' n x' a +y' n y' a +z' n z' a )=0
the plane detection equation fitted by the RANSAC function is:
x d x+y d x+z d x+D d =0
Two plane equations in the laser radar coordinate system are therefore obtained, but because of measurement error their coefficients differ, so an error equation must be defined to measure the degree of difference. Two fitted planes exist in the laser radar coordinate system: one represents the globally consistent ground converted into the radar coordinate system, and the other represents the plane fitted from the laser radar point cloud, each with its own normal vector. To represent the error quantitatively, a rotation matrix R_x is constructed that rotates the normal vector of the globally consistent ground, expressed in the laser radar coordinate system, onto the x-axis, with rotation angle α; the same rotation matrix is then applied to the plane equation fitted by the RANSAC algorithm, with rotation angle β, and the error is represented by the included angle between the two transformed normal vectors; α is expressed as α = arctan2(y'_d, x'_d), and β is obtained correspondingly from the coefficients of the rotated RANSAC-fitted plane.
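The exact α/β construction above depends on a rotation convention that is not fully reproduced in this text; as a rough, hedged illustration only, the angular disagreement between the two ground planes can also be summarized by the single included angle between their normals, as sketched below (function and variable names are assumptions, not the claim's construction):

```python
import numpy as np

def ground_constraint_angle(n_global_lidar, n_ransac):
    """Included angle between the globally consistent ground normal
    (already expressed in the lidar frame) and the RANSAC-fitted normal.
    A small angle means the fitted ground agrees with the global ground."""
    a = np.asarray(n_global_lidar, float)
    b = np.asarray(n_ransac, float)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    # sign-invariant: n and -n describe the same plane
    cosang = np.clip(abs(float(a @ b)), 0.0, 1.0)
    return np.arccos(cosang)
```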
7. The indoor mobile robot autonomous mapping and path planning method according to claim 1, wherein the specific steps of the mobile robot autonomously planning a path on the global map through the A* algorithm in step S8 include:
S81, judging whether the initial point and the target point are on the map, whether they are the same point, and whether either of them is an obstacle;
S82, adding the initial grid S to an open list, the open list storing the grids still to be checked;
S83, adding the grids around the grid S to the open list;
S84, deleting the grid S from the open list and putting it into a closed list, the closed list storing the grids that no longer need to be checked;
S85, calculating a total cost value F for each surrounding grid, it being preset that moving one grid transversely or longitudinally costs 10 and moving one grid diagonally costs 14; the total cost value F of each grid equals the cost G from the initial point to the grid plus the cost H from the grid to the target point, H being calculated with the Manhattan distance, ignoring obstacle information and counting only transverse and longitudinal moves;
S86, selecting the grid Q with the lowest total cost value F from the open list, deleting the grid Q from the open list, and adding the grid Q to the closed list;
S87, examining the grids around the grid Q, ignoring obstacle grids and grids already in the closed list; if a surrounding grid is not in the open list, adding it to the open list, calculating its total cost value F and setting its parent node to Q; if a surrounding grid is already in the open list, recalculating the cost value G from the initial point to that grid via the grid Q, judging whether the G value needs to be updated, and if so, updating the parent node information and the total cost value F of that grid;
S88, repeating step S86 and step S87 in a loop;
and S89, when the target point enters the open list, the path has been found; when the open list contains no more grid nodes, no path exists between the initial grid and the target grid.
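A compact sketch of the grid search in steps S81 to S89 is given below, assuming an occupancy grid where 1 marks an obstacle, straight moves cost 10, diagonal moves cost 14, and H is the Manhattan distance that ignores obstacles; the data structures and names are illustrative rather than prescriptive:

```python
import heapq

def a_star(grid, start, goal):
    """Grid A*: straight moves cost 10, diagonal moves cost 14,
    heuristic H = Manhattan distance (obstacles ignored), F = G + H.
    grid[r][c] == 1 marks an obstacle; returns the path or None."""
    rows, cols = len(grid), len(grid[0])
    # S81: reject invalid start/goal combinations
    if grid[start[0]][start[1]] or grid[goal[0]][goal[1]] or start == goal:
        return None
    moves = [(-1, 0, 10), (1, 0, 10), (0, -1, 10), (0, 1, 10),
             (-1, -1, 14), (-1, 1, 14), (1, -1, 14), (1, 1, 14)]
    h = lambda p: 10 * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    open_heap = [(h(start), 0, start)]              # open list entries: (F, G, cell)
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)        # S86: lowest F in the open list
        if cur in closed:
            continue
        if cur == goal:                             # S89: target reached, rebuild path
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        closed.add(cur)                             # S84: move cur to the closed list
        for dr, dc, step in moves:                  # S87: examine neighbours
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] or nxt in closed:
                continue
            ng = g + step
            if ng < g_cost.get(nxt, float('inf')):  # cheaper route via cur: update G and parent
                g_cost[nxt], parent[nxt] = ng, cur
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                                     # S89: open list exhausted, no path
```

Here a binary heap replaces an explicitly sorted open list; popping the entry with the smallest F corresponds to step S86, and stale duplicate entries are skipped via the closed set.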
CN202211563757.2A 2022-12-07 2022-12-07 Autonomous mapping and path planning method for indoor mobile robot Pending CN115855062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211563757.2A CN115855062A (en) 2022-12-07 2022-12-07 Autonomous mapping and path planning method for indoor mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211563757.2A CN115855062A (en) 2022-12-07 2022-12-07 Autonomous mapping and path planning method for indoor mobile robot

Publications (1)

Publication Number Publication Date
CN115855062A true CN115855062A (en) 2023-03-28

Family

ID=85670733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211563757.2A Pending CN115855062A (en) 2022-12-07 2022-12-07 Autonomous mapping and path planning method for indoor mobile robot

Country Status (1)

Country Link
CN (1) CN115855062A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116449391A (en) * 2023-04-17 2023-07-18 深圳直角设计工程有限公司 Indoor panoramic imaging method and system based on 3D point cloud
CN116448118A (en) * 2023-04-17 2023-07-18 深圳市华辰信科电子有限公司 Working path optimization method and device of sweeping robot
CN116448118B (en) * 2023-04-17 2023-10-31 深圳市华辰信科电子有限公司 Working path optimization method and device of sweeping robot
CN116449391B (en) * 2023-04-17 2024-05-17 深圳直角设计工程有限公司 Indoor panoramic imaging method and system based on 3D point cloud
CN116500638A (en) * 2023-06-25 2023-07-28 江苏大学 Automatic navigation method and system for harvester tillage channel based on SLAM technology
CN116500638B (en) * 2023-06-25 2023-10-10 江苏大学 Automatic navigation method and system for harvesting machine tillage based on SLAM technology

Similar Documents

Publication Publication Date Title
Papachristos et al. Autonomous exploration and inspection path planning for aerial robots using the robot operating system
Indelman et al. Factor graph based incremental smoothing in inertial navigation systems
CN115855062A (en) Autonomous mapping and path planning method for indoor mobile robot
CN109459039B (en) Laser positioning navigation system and method of medicine carrying robot
JP6855524B2 (en) Unsupervised learning of metric representations from slow features
CN104964683B (en) A kind of closed-loop corrected method of indoor environment map building
TWI820395B (en) Method for generating three-dimensional(3d) point cloud of object, system for 3d point set generation and registration, and related machine-readable medium
CN111338383B (en) GAAS-based autonomous flight method and system, and storage medium
CA2988326C (en) Automated mobile geotechnical mapping
Ramezani et al. Wildcat: Online continuous-time 3d lidar-inertial slam
JP2016045330A (en) Method and device for aligning three-dimensional point group data and mobile body system thereof
CN114964212B (en) Multi-machine collaborative fusion positioning and mapping method oriented to unknown space exploration
Giubilato et al. An experimental comparison of ros-compatible stereo visual slam methods for planetary rovers
CN114111818A (en) Universal visual SLAM method
CN110598370B (en) Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion
Li et al. Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system
Karam et al. Integrating a low-cost mems imu into a laser-based slam for indoor mobile mapping
He et al. Camera-odometer calibration and fusion using graph based optimization
Hussnain et al. An automatic procedure for mobile laser scanning platform 6dof trajectory adjustment
CN114047766B (en) Mobile robot data acquisition system and method for long-term application of indoor and outdoor scenes
Iser et al. AntSLAM: global map optimization using swarm intelligence
Di Castro et al. An incremental slam algorithm for indoor autonomous navigation
CN114543794A (en) Absolute positioning method for fusion of visual inertial odometer and discontinuous RTK
Rana et al. A pose estimation algorithm for agricultural mobile robots using an rgb-d camera
Liu et al. Error modelling and optimal estimation of laser scanning aided inertial navigation system in GNSS-denied environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination