CN111161412A - Three-dimensional laser mapping method and system - Google Patents


Info

Publication number
CN111161412A
Authority
CN
China
Prior art keywords
robot
map
pose
moment
dimensional laser
Prior art date
Legal status
Granted
Application number
CN201911242680.7A
Other languages
Chinese (zh)
Other versions
CN111161412B (en)
Inventor
刘胜明
姜志英
芮青青
司秀芬
Current Assignee
Suzhou Agv Robot Co ltd
Original Assignee
Suzhou Agv Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Agv Robot Co ltd filed Critical Suzhou Agv Robot Co ltd
Priority to CN201911242680.7A (granted as CN111161412B)
Priority to PCT/CN2019/123931 (published as WO2021109167A1)
Publication of CN111161412A
Application granted
Publication of CN111161412B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Abstract

The invention relates to a three-dimensional laser mapping method and system, comprising the following steps: acquiring data with a sensor mounted on the robot and filtering out unqualified data; calculating the pose of the robot with a rough-to-fine matching algorithm; and constructing a new frame according to the pose of the robot, then modifying the map according to the loop-closure detection result. The method can generate a scene map with high consistency, and the algorithm is simple and highly accurate.

Description

Three-dimensional laser mapping method and system
Technical Field
The invention relates to the technical field of computer image processing, in particular to a three-dimensional laser mapping method.
Background
In large-scale outdoor environments, creating an environment map with laser ranging sensors is more complex than indoors. Outdoor environments generally cover a large area and offer few features, so the information the sensor can perceive is limited, which poses a new challenge to autonomous robot mapping. At present, three-dimensional laser SLAM is generally adopted, with a point cloud map as the storage format; however, as the scene grows, a point cloud map inevitably causes memory usage to surge, so a more suitable data structure is needed. In addition, when the robot returns along a long path to a place it has already passed, the unavoidable accumulated error of data association produces an inconsistent map. As the scene range increases and the data volume grows rapidly, closed-loop detection and pose optimization face new challenges.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the low positioning accuracy and the complexity of the methods of the prior art, by providing a three-dimensional laser mapping method with high positioning accuracy and a simple procedure.
In order to solve the technical problem, the three-dimensional laser mapping method provided by the invention comprises the following steps: acquiring data with a sensor mounted on the robot and filtering out unqualified data; calculating the pose of the robot with a rough-to-fine matching algorithm, wherein the rough-to-fine matching algorithm comprises: describing Q_k with a K-D tree; for each laser point in the current frame P^W_{k+1}, finding the nearest point in Q_k and establishing a pairing; establishing a nonlinear equation whose objective is to minimize the distances of all pairings; solving this nonlinear optimization problem with the L-M method to obtain T_opt(R, t); the pose of the robot at moment k+1 is then X^W_{k+1} = X^W_k ⊕ ΔX_{k+1}, where Q_k denotes the point cloud map built at moment k, P^W_{k+1} denotes the current frame transformed into the point cloud map in the world coordinate system at moment k+1, X^W_k denotes the pose of the robot at moment k in the world coordinate system, and ΔX_{k+1} denotes the pose increment of the robot at moment k+1, with t the translation and R the rotation angle; and constructing a new frame according to the pose of the robot, then modifying the map according to the loop-closure detection result.
In an embodiment of the present invention, the method for modifying the map according to the loop-closure detection result comprises: adding a new frame and judging whether the loop is closed; if the loop is closed, the map is optimized; otherwise, the map is updated.
In an embodiment of the present invention, after the map is optimized, the map is added to the global map until the global map is completely built.
In an embodiment of the present invention, the method for optimizing the map comprises: first, discretizing according to the length of the robot's travel path; initializing the robot's initial data, and calculating and outputting the robot's pose with the rough-to-fine matching algorithm; judging, respectively, the robot pose, the difference between key-frame sequence numbers, the difference between the robot poses corresponding to the two frames, and the matching degree of the map environment; if the conditions are met, a correct closed loop is considered detected, all point cloud maps between the two frames are adjusted, and the maps are added to the global map; and if the conditions are not met, the map is updated.
In an embodiment of the present invention, the method for judging the robot pose, the difference between the sequence numbers of the current key frame and the retrieved key frame, the difference between the robot poses corresponding to the two frames, and the matching degree of the map environment comprises: calculating the current distance; if it is greater than a first threshold, continuing to the next step, otherwise continuing detection at the next moment; judging whether the difference between the sequence numbers of the current key frame and the retrieved key frame meets a preset second threshold; if so, continuing to the next step, otherwise updating the map; judging whether the difference between the robot's poses on its two passes through the same point meets a preset third threshold; if so, proceeding to the next step, otherwise updating the map; and matching the observed map environment with the detected map environment; if the matching score is met, a closed loop is considered detected, otherwise detection continues at the next moment.
In an embodiment of the invention, when a new frame is constructed according to the pose of the robot, an incremental optimization method is adopted: the new pose and map of the robot are calculated from the robot's pose at the previous moment together with the control quantity and the observation at the current moment.
In one embodiment of the present invention, the new pose and map of the robot are calculated as:

x_t* = argmax_{x_t} p(z_t | x_t, m) · p(x_t | x_{t−1}, u_t)

m* = argmax_m p(m | x_{1:t}, z_{1:t})

where the robot pose at time t is x_t, the sensor observation is z_t, the control quantity is u_t, x_{1:t} = {x_1, …, x_t}, and m is the environment map.
In an embodiment of the present invention, when the new pose of the robot is calculated, if the noise of the kinematic model and of the observation model satisfies a Gaussian distribution, both models can be converted into a constraint of the form

e_{ij} = T_{ij} − (x_j ⊖ x_i), with e_{ij} ~ N(0, Σ_{ij})

where the operation ⊕ (with inverse ⊖) represents a coordinate transformation, and T_{ij} and Σ_{ij} are respectively the relative relation and the covariance between x_i and x_j, both obtainable from the matching algorithm above. In the two-dimensional case x = (x, y, θ)^T, T can be represented as (x_T, y_T, θ_T)^T, and the operation then takes the specific form:

x ⊕ T = (x + x_T·cos θ − y_T·sin θ, y + x_T·sin θ + y_T·cos θ, θ + θ_T)^T

The calculation formula of the new pose of the robot is:

x* = argmin_x Σ_{ij} ‖f_j(x_i) − T_{ij}‖²_{Σ_{ij}}

Let f_j(x_i) = x_j ⊖ x_i; it is a nonlinear function, i.e. linearizing about a working point x_i⁰:

f_j(x_i) ≈ f_j(x_i⁰) + J_{ij}·(x_i − x_i⁰)

where J_{ij} is the Jacobian matrix of f_j(x_i).
In one embodiment of the invention, to calculate the formula for x*, let Δx_i = x_i − x_i⁰; it is then simplified as:

min_{Δx} Σ_{ij} ‖J_{ij}·Δx_i − r_{ij}‖²_{Σ_{ij}}

where r_{ij} = T_{ij} − f_j(x_i⁰) denotes the residual; stacking all weighted Jacobian blocks into a matrix A and the residuals into a vector b, it is written in the general form:

min_{Δx} ‖A·Δx − b‖²

where Q·(R 0)^T is the QR decomposition of A.
The invention also provides a three-dimensional laser mapping system, comprising: an acquisition and filtering module for acquiring data with a sensor mounted on the robot and filtering out unqualified data; a calculation module for calculating the pose of the robot with a rough-to-fine matching algorithm, wherein the rough-to-fine matching algorithm comprises: describing Q_k with a K-D tree; for each laser point in the current frame P^W_{k+1}, finding the nearest point in Q_k and establishing a pairing; establishing a nonlinear equation whose objective is to minimize the distances of all pairings; solving this nonlinear optimization problem with the L-M method to obtain T_opt(R, t); the pose of the robot at moment k+1 is then X^W_{k+1} = X^W_k ⊕ ΔX_{k+1}, where Q_k denotes the point cloud map built at moment k, P^W_{k+1} denotes the current frame transformed into the point cloud map in the world coordinate system at moment k+1, X^W_k denotes the pose of the robot at moment k in the world coordinate system, and ΔX_{k+1} denotes the pose increment of the robot at moment k+1, with t the translation and R the rotation angle; and a mapping module for constructing a new frame according to the pose of the robot and then modifying the map according to the loop-closure detection result.
Compared with the prior art, the technical scheme of the invention has the following advantages:
The three-dimensional laser mapping method and system adopt a rough-to-fine matching algorithm, which improves the robustness and accuracy of inter-frame matching and thus produces a scene map with high consistency; a new frame is constructed according to the pose of the robot, and the map is then modified according to the loop-closure detection result, so positioning accuracy is high, the algorithm is simple, and the method can be applied to building scene maps in large-scale outdoor environments.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the embodiments of the present disclosure taken in conjunction with the accompanying drawings, in which
FIG. 1 is a flow chart of a three-dimensional laser mapping method of the present invention;
FIG. 2 is a detailed flow chart of the three-dimensional laser mapping method of the present invention;
FIG. 3 is a flow chart of a method for optimizing a map according to the present invention.
Detailed Description
Example one
As shown in fig. 1 and fig. 2, the present embodiment provides a three-dimensional laser mapping method comprising the following steps. Step S1: acquire data with a sensor mounted on the robot and filter out unqualified data. Step S2: calculate the pose of the robot with a rough-to-fine matching algorithm, which comprises: describing Q_k with a K-D tree; for each laser point in the current frame P^W_{k+1}, finding the nearest point in Q_k and establishing a pairing; establishing a nonlinear equation whose objective is to minimize the distances of all pairings; solving this nonlinear optimization problem with the L-M method to obtain T_opt(R, t); the pose of the robot at moment k+1 is then X^W_{k+1} = X^W_k ⊕ ΔX_{k+1}, where Q_k denotes the point cloud map built at moment k, P^W_{k+1} denotes the current frame transformed into the point cloud map in the world coordinate system at moment k+1, X^W_k denotes the pose of the robot at moment k in the world coordinate system, and ΔX_{k+1} denotes the pose increment of the robot at moment k+1, with t the translation and R the rotation angle. Step S3: construct a new frame according to the pose of the robot, then modify the map according to the loop-closure detection result.
In the three-dimensional laser mapping method of this embodiment, step S1 acquires data with a sensor mounted on the robot and filters out unqualified data, so that valid data are obtained for analysis. Step S2 calculates the pose of the robot with the rough-to-fine matching algorithm: Q_k is described with a K-D tree; for each laser point in the current frame P^W_{k+1}, the nearest point in Q_k is found and a pairing established; a nonlinear equation is established whose objective is to minimize the distances of all pairings; the nonlinear optimization problem is solved with the L-M method to obtain T_opt(R, t); the pose of the robot at moment k+1 is then X^W_{k+1} = X^W_k ⊕ ΔX_{k+1}, with the symbols as defined above. This matching algorithm improves the robustness and accuracy of inter-frame matching, thereby generating a scene map with high consistency. In step S3, a new frame is constructed according to the pose of the robot, and the map is then modified according to the loop-closure detection result; the method has high positioning accuracy and a simple algorithm, and can be applied to building scene maps in large-scale outdoor environments.
The L-M method, i.e. the Levenberg-Marquardt method, is a least-squares estimation method for regression parameters in nonlinear regression that combines the steepest-descent method with the linearization (Taylor-series) method. The steepest-descent method is suitable in the early iterations, when the parameter estimate is far from the optimum, while the linearization method, i.e. the Gauss-Newton method, is suitable in the later iterations, when the estimate is close to the optimum; combining the two therefore finds the optimum quickly. The L-M algorithm is the most widely used nonlinear least-squares algorithm: it uses the gradient to seek the maximum (or minimum) value, i.e. the optimal solution, belongs to the family of hill-climbing methods, and combines the advantages of the gradient method and Newton's method.
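As an illustration only (the patent gives no code), the L-M step that solves for T_opt(R, t) can be sketched as a small damped Gauss-Newton loop over a 2-D rigid transform (tx, ty, θ). The residual function, the numerical Jacobian, the damping schedule, and the sample point pairs below are all assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

def transform(params, src):
    """Apply a 2-D rigid transform (tx, ty, theta) to an (N, 2) point array."""
    tx, ty, theta = params
    c, s = np.cos(theta), np.sin(theta)
    return src @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])

def lm_fit(src, dst, x0=(0.0, 0.0, 0.0), lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton on the stacked
    point-to-point residuals between the paired clouds src and dst."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = (transform(x, src) - dst).ravel()
        # Forward-difference Jacobian of the residual w.r.t. (tx, ty, theta).
        J = np.empty((r.size, 3))
        eps = 1e-7
        for i in range(3):
            xp = x.copy()
            xp[i] += eps
            J[:, i] = ((transform(xp, src) - dst).ravel() - r) / eps
        # Damped normal equations: large lam ~ gradient descent, small ~ Gauss-Newton.
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        r_new = (transform(x + step, src) - dst).ravel()
        if r_new @ r_new < r @ r:
            x, lam = x + step, lam * 0.5   # accept the step, trust the model more
        else:
            lam *= 10.0                    # reject the step, damp harder
        if np.linalg.norm(step) < 1e-12:
            break
    return x

# Hypothetical paired laser points: dst is src rotated by 0.3 rad and shifted (1, -2).
rng = np.random.default_rng(0)
src = rng.uniform(-5.0, 5.0, size=(30, 2))
dst = transform((1.0, -2.0, 0.3), src)
t_opt = lm_fit(src, dst)
```

On clean correspondences such as these the loop behaves like Gauss-Newton and converges in a handful of iterations; the damping only matters when the initial guess is poor.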
When a K-D tree is used for map building, the point clouds between frames need to be compared and matched to find common points hit by the laser at two different moments. A plain linear search is obviously inefficient, so to improve search efficiency a special data structure is needed to store all previous point cloud data; the K-D tree is such a data structure and greatly improves search efficiency.
Essentially, a K-D tree is a partition of three-dimensional space: constructing the tree amounts to splitting the space with planes perpendicular to the coordinate axes, producing a series of cuboids, with each node of the tree corresponding to one cuboid. The K-D tree is at heart a binary tree: each time, the median of one dimension is taken as the split point and made a node of the tree, and the data set is divided into a left part and a right part that are split recursively, with the split dimension changing from one level to the next (generally the next dimension is simply taken in turn). To find, in the point cloud map stored in the K-D tree, the point closest to each laser point of the current frame, one descends the tree, turning left if the query value in the current dimension is smaller than the node's value and right if it is larger, moving on to the next dimension at each level; the leaf node finally reached is taken as the current closest point.
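The construction and search just described can be sketched as follows. This is an illustrative minimal K-D tree, not the patent's code: the dict-based node layout is an assumption, and the standard backtracking check is added so the returned neighbour is the exact nearest point rather than only the leaf reached by descent.

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Median split on one dimension per level, cycling dimensions, as the
    text describes; each node stores the split point and its two subtrees."""
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Descend left/right by comparing the current dimension, then backtrack
    into the far subtree only if the splitting plane is closer than the best
    distance found so far."""
    if node is None:
        return best
    d = float(np.linalg.norm(node["point"] - target))
    if best is None or d < best[1]:
        best = (node["point"], d)
    diff = target[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    if abs(diff) < best[1]:
        best = nearest(far, target, best)
    return best

# Pair every laser point of a hypothetical current frame with its nearest
# point in a hypothetical point cloud map Q_k.
rng = np.random.default_rng(1)
q_k = rng.uniform(0.0, 10.0, size=(500, 3))
frame = rng.uniform(0.0, 10.0, size=(20, 3))
tree = build_kdtree(q_k)
pairs = [(p, nearest(tree, p)[0]) for p in frame]
```

Sorting the full slice at every level is O(n log² n) construction; it keeps the sketch short, while each query then costs roughly O(log n) instead of a linear scan.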
The method for modifying the map according to the loop-closure detection result comprises: adding a new frame and judging whether the loop is closed; if the loop is closed, the map is optimized, otherwise the map is updated. This helps reduce the complexity of the algorithm and the time spent on calculation and retrieval. After the map is optimized, it is added to the global map until the global map is complete, which appropriately reduces the complexity of the algorithm and realizes an online SLAM process.
As shown in fig. 2, in the concrete map-building process, the data acquired by the sensor are passed through a filter to screen out the data meeting the requirements; the currently generated map is matched against the global map with the rough-to-fine matching algorithm to produce a more accurate robot pose, which is added to the global map; whether the loop is closed is then judged, and if so the map is optimized and then rebuilt, otherwise the map is updated.
As shown in fig. 3, the method for optimizing the map comprises: first, discretizing according to the length of the robot's travel path; initializing the robot's initial data, and calculating and outputting the robot's pose with the rough-to-fine matching algorithm; judging, respectively, the robot pose, the difference between key-frame sequence numbers, the difference between the robot poses corresponding to the two frames, and the matching degree of the map environment; if the conditions are met, a correct closed loop is considered detected, all point cloud maps between the two frames are adjusted, and the maps are added to the global map; and if the conditions are not met, the map is updated.
To reduce algorithm complexity and realize an online SLAM process, the invention does not adopt a global closed-loop detection algorithm (i.e., matching the current frame against all historical frames) but performs regional closed-loop detection, matching the current frame only against neighbouring key frames within a certain region. The specific algorithm steps are as follows:
First, discretization is performed according to the length of the robot's travel path, i.e. τ ∈ {τ_i}, where each τ_i satisfies a condition on the accumulated travel distance of the robot. Here x_j denotes the pose of the robot at moment j, t the current moment, d the distance the robot has walked, x a pose, l a global map, z a sensor observation, and k_t the sequence number of the key frame at the current moment, with x_t the pose and z_t the sensor observation at the current moment. L_t = {k_t, x_t, z_t} is the set of current key-frame sequence number, pose, and sensor observation, and {k_τ, x_τ, z_τ} is the set of retrieved key-frame sequence number, pose, and sensor observation.
Initialize the robot's initial data: let d = 0, t = 0, k = 1, x_0 = 0, l_0 = 0, and L_τ = {0, x_0, z_0}; by default observation and map are taken as consistent, l_t = z_t, and t = t + 1.
Match z_t against l_t and output the robot pose x_t.
Calculate the current distance d = d + ‖x_t − x_{t−1}‖; if the current distance reaches the first threshold, i.e. d ≥ d_thre, continue to the next step, otherwise continue detection at the next moment.
For {k_τ, x_τ, z_τ} ∈ L_τ, judge whether the difference between the sequence numbers of the current key frame and the retrieved key frame meets the preset second threshold, i.e. k − k_τ > k_thre; if so, continue to the next step, otherwise update the map.
For {k_τ, x_τ, z_τ} ∈ L_τ, judge whether the difference between the robot's poses on its two passes through the same point meets the preset third threshold, i.e. ‖x_t − x_τ‖ < x_thre; if so, continue to the next step, otherwise update the map.
Match the observed map environment against the detected map environment: specifically, perform ICP matching between z_t and z_τ; if the match meets the score, i.e. S_fitness < S_thre, the closed loop is considered detected, otherwise continue detection at the next moment.
Reconstruct the map: let L_τ = L_τ ∪ {k_t, x_t, z_t}, d = 0, and k = k + 1, and return to the initial step. The specific method for reconstructing the map is to adjust all point cloud maps between the two frames and add them to the global map, thereby forming a new map.
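The threshold chain above can be sketched as a single gating function. The threshold values, the function name, and the string return labels are illustrative assumptions; the ICP fitness score is taken as given, with lower meaning a better match, per the condition S_fitness < S_thre.

```python
import numpy as np

def loop_closure_gate(d, k, k_tau, x_t, x_tau, s_fitness,
                      d_thre=10.0, k_thre=50, x_thre=2.0, s_thre=0.3):
    """Apply the four checks in order: accumulated travel distance, key-frame
    sequence-number gap, pose difference at the revisited point, ICP score.
    Returns 'closed_loop', 'update_map', or 'wait' (detect at the next moment)."""
    if d < d_thre:                                         # requires d >= d_thre
        return "wait"
    if k - k_tau <= k_thre:                                # requires k - k_tau > k_thre
        return "update_map"
    if np.linalg.norm(np.subtract(x_t, x_tau)) >= x_thre:  # requires ||x_t - x_tau|| < x_thre
        return "update_map"
    if s_fitness < s_thre:                                 # ICP match good enough
        return "closed_loop"
    return "wait"

# Hypothetical revisit: far enough travelled, old key frame, nearby pose, good ICP fit.
result = loop_closure_gate(d=15.0, k=120, k_tau=20,
                           x_t=(4.0, 3.0, 0.1), x_tau=(4.5, 3.2, 0.0),
                           s_fitness=0.1)
```

Ordering the cheap geometric gates before the ICP match is what keeps the regional detection inexpensive: the expensive scan comparison only runs on candidates that already look like a revisit.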
To guarantee operation accuracy, when a new frame is constructed according to the pose of the robot, an incremental optimization method is adopted: the new pose and map of the robot are calculated from the robot's pose at the previous moment together with the control quantity and the observation at the current moment.
The new pose and map of the robot are calculated as:

x_t* = argmax_{x_t} p(z_t | x_t, m) · p(x_t | x_{t−1}, u_t)

m* = argmax_m p(m | x_{1:t}, z_{1:t})

where the robot pose at time t is x_t, the sensor observation is z_t, the control quantity is u_t, x_{1:t} = {x_1, …, x_t}, and m is the environment map; the map m depends on the motion trajectory of the robot and the corresponding sensor observations.
When the new pose of the robot is calculated, if the noise of the kinematic model and of the observation model satisfies a Gaussian distribution, both models can be converted into a constraint of the form

e_{ij} = T_{ij} − (x_j ⊖ x_i), with e_{ij} ~ N(0, Σ_{ij})

where the operation ⊕ (with inverse ⊖) represents a coordinate transformation, and T_{ij} and Σ_{ij} are respectively the relative relation and the covariance between x_i and x_j, both obtainable from the matching algorithm above. In the two-dimensional case x = (x, y, θ)^T, where x, y, θ denote the abscissa, ordinate, and angle respectively, T can be represented as (x_T, y_T, θ_T)^T, and the operation then takes the specific form:

x ⊕ T = (x + x_T·cos θ − y_T·sin θ, y + x_T·sin θ + y_T·cos θ, θ + θ_T)^T

The calculation formula of the new pose of the robot can be simplified to:

x* = argmin_x Σ_{ij} ‖f_j(x_i) − T_{ij}‖²_{Σ_{ij}}

Let f_j(x_i) = x_j ⊖ x_i; it is a nonlinear function, i.e. linearizing about a working point x_i⁰:

f_j(x_i) ≈ f_j(x_i⁰) + J_{ij}·(x_i − x_i⁰)

where J_{ij} is the Jacobian matrix of f_j(x_i).
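As a worked illustration of the two-dimensional ⊕ operation and its inverse (used to form the relative constraints), assuming the (x, y, θ) parameterization given above; the function names are of course an assumption of this sketch.

```python
import numpy as np

def oplus(x, t):
    """2-D pose composition x ⊕ T: apply the relative transform
    T = (xT, yT, thetaT) in the frame of the pose x = (x, y, theta)."""
    xi, yi, th = x
    xt, yt, tht = t
    return np.array([
        xi + xt * np.cos(th) - yt * np.sin(th),
        yi + xt * np.sin(th) + yt * np.cos(th),
        th + tht,
    ])

def ominus(x_j, x_i):
    """Inverse composition: the relative transform T with x_i ⊕ T = x_j,
    i.e. x_j expressed in the frame of x_i."""
    dx, dy = x_j[0] - x_i[0], x_j[1] - x_i[1]
    th = x_i[2]
    return np.array([
        dx * np.cos(th) + dy * np.sin(th),
        -dx * np.sin(th) + dy * np.cos(th),
        x_j[2] - x_i[2],
    ])

x_i = np.array([1.0, 2.0, np.pi / 2])   # pose of node i
t_ij = np.array([1.0, 0.0, 0.1])        # relative transform from matching
x_j = oplus(x_i, t_ij)                   # predicted pose of node j
```

With these two helpers, the residual of a constraint is simply ominus(x_j, x_i) compared against the measured T_{ij}.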
To calculate the formula for x*, let Δx_i = x_i − x_i⁰; it is then simplified as:

min_{Δx} Σ_{ij} ‖J_{ij}·Δx_i − r_{ij}‖²_{Σ_{ij}}

where r_{ij} = T_{ij} − f_j(x_i⁰) denotes the residual; stacking all weighted Jacobian blocks into a matrix A and the residuals into a vector b, it is written in the general form:

min_{Δx} ‖A·Δx − b‖²

and Q·(R 0)^T is the QR decomposition of A.
The QR decomposition above is carried out incrementally: since A changes little between iterations, the Givens rotation matrix of the QR decomposition at the previous moment can be used as the initial value of the iteration. Performed incrementally in this way, the QR decomposition and the inversion of the triangular matrix have low complexity, so the method can be applied in real time.
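For illustration, the batch version of this least-squares step might look like the following (the patent's version instead updates the factorization incrementally with Givens rotations); the matrix sizes and values are hypothetical.

```python
import numpy as np

# Hypothetical stacked system from the linearized pose graph: A @ dx ~ b.
rng = np.random.default_rng(2)
A = rng.normal(size=(12, 3))          # stacked weighted Jacobian blocks J_ij
dx_true = np.array([0.1, -0.2, 0.05])
b = A @ dx_true                       # stacked residuals, consistent by construction

# Reduced QR decomposition A = Q R; the minimiser of ||A dx - b||^2 is
# recovered from the triangular system R dx = Q^T b by back-substitution.
Q, R = np.linalg.qr(A)                # Q is 12x3, R is upper-triangular 3x3
dx = np.linalg.solve(R, Q.T @ b)
```

Solving through QR avoids forming the normal equations A^T A, which squares the condition number; that is one reason incremental smoothing methods factor A directly.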
Example two
Based on the same inventive concept, this embodiment provides a three-dimensional laser mapping system whose problem-solving principle is similar to that of the three-dimensional laser mapping method; repeated parts are not described again.
The three-dimensional laser mapping system described in this embodiment includes:
the acquisition and filtering module is used for acquiring data by using a sensor arranged on the robot and filtering unqualified data;
a calculation module, configured to calculate a pose of the robot by using a rough-to-fine matching algorithm, where the rough-to-fine matching algorithm includes: to QkUsing K-D tree description, pair
Figure BDA0002306696730000101
Each laser point in (1) is sought at QkEstablishing pairing at the nearest point in the sequence; establishing a nonlinear equation, and aiming to minimize the distance of all pairs; solving the nonlinear optimization problem by using an L-M method to solve Topt(R, t); the pose of the robot at the moment k +1 is
Figure BDA0002306696730000102
Wherein QkRepresenting the point cloud map established at the moment k,
Figure BDA0002306696730000103
representing the conversion of the current frame to the point cloud map under the world coordinate system at the moment of k +1,
Figure BDA0002306696730000104
showing the pose of the robot at the moment k in a world coordinate system,
Figure BDA0002306696730000105
representing the pose increment of the robot at the moment k +1, wherein t is a translation position, and R is a rotation angle;
and a mapping module for constructing a new frame according to the pose of the robot and then modifying the map according to the loop-closure detection result.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. And obvious variations or modifications therefrom are within the scope of the invention.

Claims (10)

1. A three-dimensional laser mapping method is characterized by comprising the following steps:
step S1: acquiring data by using a sensor arranged on the robot, and filtering unqualified data;
step S2: calculating the pose of the robot with a rough-to-fine matching algorithm, wherein the rough-to-fine matching algorithm comprises: describing Q_k with a K-D tree; for each laser point in the current frame P^W_{k+1}, finding the nearest point in Q_k and establishing a pairing; establishing a nonlinear equation whose objective is to minimize the distances of all pairings; solving this nonlinear optimization problem with the L-M method to obtain T_opt(R, t); the pose of the robot at moment k+1 being X^W_{k+1} = X^W_k ⊕ ΔX_{k+1}, where Q_k denotes the point cloud map built at moment k, P^W_{k+1} denotes the current frame transformed into the point cloud map in the world coordinate system at moment k+1, X^W_k denotes the pose of the robot at moment k in the world coordinate system, and ΔX_{k+1} denotes the pose increment of the robot at moment k+1, with t the translation and R the rotation angle;
step S3: constructing a new frame according to the pose of the robot, and then modifying the map according to the loop-closure detection result.
2. The three-dimensional laser mapping method of claim 1, wherein: the method for modifying the map according to the loop-closure detection result comprises: adding a new frame and judging whether the loop is closed; if the loop is closed, optimizing the map; otherwise, updating the map.
3. The three-dimensional laser mapping method of claim 2, wherein, after the map is optimized, it is added into the global map until the global map is fully built.
4. The three-dimensional laser mapping method of claim 3, wherein the method for optimizing the map comprises: firstly, discretizing according to the length of the path travelled by the robot; initializing the initial data of the robot, and calculating and outputting the pose of the robot with the coarse-to-fine matching algorithm; judging the pose of the robot, the difference between key-frame sequence numbers, the difference between the robot poses corresponding to the two frames, and the matching degree of the map environment change; if these conditions are met, a correct closed loop is deemed detected, all point cloud maps between the two frames are adjusted, and the maps are added into the global map; if the conditions are not met, the map is updated.
5. The three-dimensional laser mapping method of claim 4, wherein judging the pose of the robot, the difference between key-frame sequence numbers, the difference between the robot poses corresponding to the two frames, and the matching degree of the map environment change comprises:
calculating the current distance; if it is larger than a first threshold, continuing to the next step, otherwise continuing the detection at the next moment;
judging whether the difference between the sequence numbers of the current key frame and the retrieved key frame satisfies a preset second threshold; if so, continuing to the next step, and if not, updating the map;
judging whether the difference between the robot poses when the robot passes the same point twice satisfies a preset third threshold; if so, proceeding to the next step, and if not, updating the map;
and matching the observed map environment against the detected map environment; if the matching score is satisfied, a closed loop is deemed detected, and if not, the detection continues at the next moment.
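The four-stage test of claim 5 is a simple threshold cascade. The sketch below is a hedged illustration: the threshold values, the return labels, and the scalar match score are assumptions, since the claim only fixes the order of the checks.

```python
def check_loop_closure(path_length, frame_id, candidate_id, pose_diff, match_score,
                       d_min=5.0, id_gap_min=50, pose_diff_max=1.0, score_min=0.8):
    """Return the action implied by the four sequential checks of claim 5."""
    if path_length <= d_min:                  # first threshold: distance travelled
        return "detect_next_moment"
    if frame_id - candidate_id < id_gap_min:  # second threshold: key-frame index gap
        return "update_map"
    if pose_diff > pose_diff_max:             # third threshold: pose difference at revisit
        return "update_map"
    if match_score < score_min:               # environment match score not reached
        return "detect_next_moment"
    return "loop_closed"                      # all four conditions satisfied
```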
6. The three-dimensional laser mapping method of claim 1, wherein, when a new graph is constructed according to the pose of the robot, an incremental optimization method is adopted to calculate the new pose and the map of the robot from the pose of the robot at the previous moment together with the control quantity and the observation at the current moment.
7. The three-dimensional laser mapping method of claim 6, wherein the new pose and the map of the robot are computed as:

(x_{1:t}^*, m^*) = \arg\max_{x_{1:t}, m} p(x_{1:t}, m \mid z_{1:t}, u_{1:t})

p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \propto p(z_t \mid x_t, m)\, p(x_t \mid x_{t-1}, u_t)\, p(x_{1:t-1}, m \mid z_{1:t-1}, u_{1:t-1})

wherein the pose of the robot at moment t is x_t, the sensor observation is z_t, the control quantity is u_t, x_{1:t} = {x_1, ..., x_t}, and m is the environment map.
8. The three-dimensional laser mapping method of claim 7, wherein, when the new pose of the robot is computed, if the noise of the kinematic model and of the observation model satisfies a Gaussian distribution, the models can be converted into the constraint form:

(x_j \ominus x_i) \sim N(T_{ij}, \Sigma_{ij})

wherein the operation \ominus represents a coordinate transformation, and T_{ij} and \Sigma_{ij} are respectively the relative relationship and the covariance between x_i and x_j, both obtainable from the above matching algorithm; in the two-dimensional case, x = (x, y, \theta)^T and T can be represented as (x_T, y_T, \theta_T)^T, and the specific form of the operation is:

x_j \ominus x_i = \begin{pmatrix} \cos\theta_i\,(x_j - x_i) + \sin\theta_i\,(y_j - y_i) \\ -\sin\theta_i\,(x_j - x_i) + \cos\theta_i\,(y_j - y_i) \\ \theta_j - \theta_i \end{pmatrix}

The new pose of the robot is then computed from:

x^* = \arg\min_x \sum_{\langle i,j\rangle} \| (x_j \ominus x_i) - T_{ij} \|_{\Sigma_{ij}}^2

Let f_j(x_i) = x_j \ominus x_i, which is a nonlinear function; linearizing it about the current estimate \hat{x} gives:

f_j(x_i) \approx f_j(\hat{x}_i) + J_{ij}\, \delta x_i

wherein J_{ij} is the Jacobian matrix of f_j(x_i).
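The two-dimensional coordinate-transformation operation of claim 8 rotates the translation difference into the frame of the reference pose and subtracts the headings. A minimal sketch, with poses as (x, y, theta) arrays; the function name is an assumption:

```python
import numpy as np

def ominus(xj, xi):
    """Relative pose of xj expressed in the frame of xi; poses are (x, y, theta)."""
    c, s = np.cos(xi[2]), np.sin(xi[2])
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    return np.array([ c * dx + s * dy,    # translation rotated into frame i
                     -s * dx + c * dy,
                      xj[2] - xi[2]])     # heading difference
```

Comparing this relative pose against a measured relative relationship T_ij yields the residual that the pose graph minimizes.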
9. The three-dimensional laser mapping method of claim 8, wherein, when computing

x^* = \arg\min_x \sum_{\langle i,j\rangle} \| f_j(x_i) - T_{ij} \|_{\Sigma_{ij}}^2 ,

let b_{ij} = T_{ij} - f_j(\hat{x}_i); the problem is simplified into:

\delta x^* = \arg\min_{\delta x} \sum_{\langle i,j\rangle} \| J_{ij}\, \delta x_i - b_{ij} \|_{\Sigma_{ij}}^2

wherein, letting X = (\delta x_1^T, \delta x_2^T, \ldots, \delta x_t^T)^T, this is written in the general form:

X^* = \arg\min_X \| A X - b \|^2

wherein A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} is the QR decomposition of A, with Q orthogonal and R upper triangular.
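The general form at the end of claim 9 is an ordinary linear least-squares problem, and the QR factorization of A reduces it to a triangular back-substitution, as incremental smoothers such as iSAM do. A small numpy sketch with toy sizes, not the patented solver:

```python
import numpy as np

def solve_via_qr(A, b):
    """Least-squares X minimizing ||A X - b||^2 via a thin QR of A."""
    Q, R = np.linalg.qr(A)        # A = Q R with R upper triangular
    d = Q.T @ b                   # project b onto the column space of A
    return np.linalg.solve(R, d)  # back-substitution of R X = d
```

The result equals the normal-equations solution of (A^T A) X = A^T b but avoids squaring the condition number of A.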
10. A three-dimensional laser mapping system, characterized by comprising:
an acquisition and filtering module, configured to acquire data with a sensor mounted on the robot and to filter out unqualified data;
a calculation module, configured to calculate the pose of the robot with a coarse-to-fine matching algorithm, the coarse-to-fine matching algorithm comprising: describing Q_k with a K-D tree; for each laser point in \tilde{P}^W_{k+1}, finding the nearest point in Q_k to establish a pairing; constructing a nonlinear equation whose objective is to minimize the distances of all pairings; solving this nonlinear optimization problem with the L-M (Levenberg-Marquardt) method to obtain T_opt(R, t); the pose of the robot at moment k+1 is then X^W_{k+1} = T_opt X^W_k, wherein Q_k denotes the point cloud map built at moment k, \tilde{P}^W_{k+1} denotes the current frame at moment k+1 transformed into the point cloud map in the world coordinate system, X^W_k denotes the pose of the robot at moment k in the world coordinate system, T_opt denotes the pose increment of the robot at moment k+1, t is the translation, and R is the rotation angle;
and a mapping module, configured to construct a new graph according to the pose of the robot and then modify the map according to the loop-closure detection result.
CN201911242680.7A 2019-12-06 2019-12-06 Three-dimensional laser mapping method and system Active CN111161412B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911242680.7A CN111161412B (en) 2019-12-06 2019-12-06 Three-dimensional laser mapping method and system
PCT/CN2019/123931 WO2021109167A1 (en) 2019-12-06 2019-12-09 Three-dimensional laser mapping method and system

Publications (2)

Publication Number Publication Date
CN111161412A true CN111161412A (en) 2020-05-15
CN111161412B CN111161412B (en) 2024-02-09

Family

ID=70556540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911242680.7A Active CN111161412B (en) 2019-12-06 2019-12-06 Three-dimensional laser mapping method and system

Country Status (2)

Country Link
CN (1) CN111161412B (en)
WO (1) WO2021109167A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862162A (en) * 2020-07-31 2020-10-30 湖北亿咖通科技有限公司 Loop detection method and system, readable storage medium and electronic device
CN112684430A (en) * 2020-12-23 2021-04-20 哈尔滨工业大学(威海) Indoor old person walking health detection method and system, storage medium and terminal
CN112762923A (en) * 2020-12-31 2021-05-07 合肥科大智能机器人技术有限公司 3D point cloud map updating method and system
CN114216451A (en) * 2021-12-02 2022-03-22 北京云迹科技股份有限公司 Robot map updating method and device
CN114549605A (en) * 2021-12-31 2022-05-27 广州景骐科技有限公司 Image optimization method, device and equipment based on point cloud matching and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115406434B (en) * 2022-08-17 2024-05-07 南京领鹊科技有限公司 Autonomous update method, device and storage medium for navigation map of indoor robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106541945A (en) * 2016-11-15 2017-03-29 广州大学 A kind of unmanned vehicle automatic parking method based on ICP algorithm
CN108303710A (en) * 2018-06-12 2018-07-20 江苏中科院智能科学技术应用研究院 Drawing method is built in the more scene positioning of unmanned plane based on three-dimensional laser radar
US20190114777A1 (en) * 2017-10-18 2019-04-18 Tata Consultancy Services Limited Systems and methods for edge points based monocular visual slam

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105843223B (en) * 2016-03-23 2018-11-20 东南大学 A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN108090958B (en) * 2017-12-06 2021-08-27 上海阅面网络科技有限公司 Robot synchronous positioning and map building method and system
CN108180917B (en) * 2017-12-31 2021-05-14 芜湖哈特机器人产业技术研究院有限公司 Top map construction method based on pose graph optimization
CN109887028B (en) * 2019-01-09 2023-02-03 天津大学 Unmanned vehicle auxiliary positioning method based on point cloud data registration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, Kai: "Research on a portable real-time 3D reconstruction system for the digital factory", China Master's Theses Full-text Database, Information Science and Technology, no. 06, 15 June 2019 (2019-06-15), pages 3-6 *
WENG, Xiaowen; LI, Di; LIU, Juncheng: "Research on 2D laser SLAM based on graph optimization", Automation & Instrumentation, vol. 34, no. 4, pages 1-3 *


Also Published As

Publication number Publication date
CN111161412B (en) 2024-02-09
WO2021109167A1 (en) 2021-06-10

Similar Documents

Publication Publication Date Title
CN111161412B (en) Three-dimensional laser mapping method and system
CN107160395B (en) Map construction method and robot control system
Eade et al. Monocular graph SLAM with complexity reduction
WO2019057179A1 (en) Visual slam method and apparatus based on point and line characteristic
CN109163722B (en) Humanoid robot path planning method and device
CN113674416B (en) Three-dimensional map construction method and device, electronic equipment and storage medium
CN110298914B (en) Method for establishing fruit tree canopy feature map in orchard
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN114001733B (en) Map-based consistent efficient visual inertial positioning algorithm
CN113110455B (en) Multi-robot collaborative exploration method, device and system for unknown initial state
Ceriani et al. Pose interpolation slam for large maps using moving 3d sensors
Magistri et al. Contrastive 3D shape completion and reconstruction for agricultural robots using RGB-D frames
CN114924287A (en) Map construction method, apparatus and medium
CN113709662A (en) Ultra-wideband-based autonomous three-dimensional inversion positioning method
CN112307917A (en) Indoor positioning method integrating visual odometer and IMU
CN117053779A (en) Tightly coupled laser SLAM method and device based on redundant key frame removal
CN111815684A (en) Space multivariate feature registration optimization method and device based on unified residual error model
CN114742967B (en) Visual positioning method and device based on building digital twin semantic graph
CN115962773A (en) Method, device and equipment for synchronous positioning and map construction of mobile robot
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
Zhang et al. A Robust Lidar SLAM System Based on Multi-Sensor Fusion
Holmes et al. A relative frame representation for fixed-time bundle adjustment in sfm
Klippenstein et al. Feature initialization for bearing-only visual slam using triangulation and the unscented transform
Yang et al. Localization algorithm design and evaluation for an autonomous pollination robot
CN114047766A (en) Mobile robot data acquisition system and method for long-term application in indoor and outdoor scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant