CN113379915B - Driving scene construction method based on point cloud fusion - Google Patents
- Publication number: CN113379915B (application CN202110755584.3A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- map
- data
- scan context
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4007—Interpolation-based scaling, e.g. bilinear interpolation
Abstract
The invention discloses a driving scene construction method based on point cloud fusion, which mainly adopts a map construction scheme built on simultaneous localization and mapping technology. First, distortion compensation and sensor time alignment are performed on the point cloud data, the data are then published, and front-end mileage estimation is carried out. After receiving the point cloud data, the front-end mileage estimation node performs point cloud matching and generates a preliminary laser odometry estimate. Closed-loop detection is then performed on the estimate; if a closed loop is detected, the information is passed to the back end for back-end algorithm optimization. The back-end optimization yields the pose information of the estimate nodes and the vertex set of a graph formed with the landmarks, and the resulting optimized laser odometry data are published and displayed. The method greatly improves mapping accuracy while preserving real-time mapping performance, quickly achieves in-scene localization and high-precision map construction, and overcomes the large error and poor real-time performance of traditional pure laser odometry mapping.
Description
Technical Field
The invention relates to the field of real-time map construction, and in particular to a driving scene construction method based on point cloud fusion.
Background
In recent years, as the number of automobiles keeps growing, the road carrying capacity of many cities has reached full load. Problems of traffic safety, travel efficiency, and energy conservation and emission reduction are increasingly prominent, and promoting vehicle intelligence and networking is an important way to solve them. In the development of vehicle intelligence and networking, three-dimensional map reconstruction of the driving scene is one of the key technical problems.
As technology has progressed, the sensors used by simultaneous localization and mapping have been iterated continuously, and methods fusing sensors such as lidar and an inertial measurement unit have become popular. Simultaneous localization and mapping has also shifted from methods based on Kalman or Bayesian filters to methods based on graph optimization. Compared with GPS positioning, such methods achieve higher accuracy, solve the difficulty of precisely localizing details such as specific lane lines, and also support localization and mapping in special scenes such as tunnels and caves.
Some nonlinear optimization methods in the prior art are used for map construction and reduce the accumulated error of the composition process through nonlinear least squares, but they ignore the sparsity of the system, and the offline mapping approach they use cannot meet the real-time requirement of driving scene construction.
Other prior art introduces the Gauss-Newton iteration method in the data-association step of front-end estimation and completes inter-frame point cloud matching by directly matching the lidar scan data against the map; however, this approach lacks back-end optimization and closed-loop detection and is sensitive to the initial registration value, resulting in low overall accuracy.
Disclosure of Invention
The invention aims to provide a driving scene construction method based on point cloud fusion, to solve the problems of insufficient computing power on a single device and repeated construction work in traditional map construction methods.
In order to realize the task, the invention adopts the following technical scheme:
a driving scene construction method based on point cloud fusion comprises the following steps:
Step 1, performing distortion compensation on the point cloud data scanned and collected by the vehicle-mounted lidar while the automobile is moving;
Step 2, performing sensor time alignment on the distortion-compensated point cloud data: while the point cloud data is distortion-compensated, linear interpolation is simultaneously performed on the odometry data, aligning the odometry data to the point cloud timestamps; the resulting sensor-aligned robot pose serves as the initial predicted pose for front-end mileage estimation;
Step 3, creating a task queue for storing point cloud data and generating a local map; whenever a frame of point cloud data is inserted into the task queue, the initial predicted pose of the robot corresponding to that frame is input and point cloud matching with the local map is performed: a normal distributions transform is applied to the point cloud data to obtain all unit voxels of the target point cloud, the unit voxels are substituted into a Gaussian probability model, and the point on the local map most similar to the initial predicted pose is found, yielding the pose estimate of the front-end mileage estimation;
Step 4, adopting a point-cloud-based scene recognition scheme: the point cloud data in the task queue created by the front-end mileage estimation is described with a Scan Context feature map; a distance formula between Scan Context feature maps is defined; a descriptor ring is extracted from the Scan Context feature map and represented as a multi-dimensional vector to build a KD tree, the regions most similar to the ring of the current scene are obtained, and the Scan Contexts corresponding to the historical rings most similar to the current ring are selected as the Scan Contexts of the candidate scenes; the distance between the Scan Context of the point cloud data in the task queue and the Scan Context of each candidate scene is calculated, a candidate scene satisfying the threshold is regarded as a closed loop, and back-end optimization is then performed on the pose estimate;
Step 5, abstracting the pose estimates obtained by front-end mileage estimation at different moments as nodes, abstracting the constraints generated by the robot's observations at different positions as edges between the nodes, forming the vertex set of a graph from the node pose information and the landmarks, and representing the constraints of the observation data by the edges;
defining the error function as the difference between the mean of the virtual measurements between two nodes and the desired virtual measurement; iterating the error function by the Gauss-Newton method to find the optimal solution, superposing the obtained optimal solution on the initial predicted pose to obtain a new pose estimate, taking the new pose estimate as the initial predicted pose of the next iteration, and continuing until the pose estimation error is minimal;
Step 6, performing steps 1 to 5 on the edge device: after the local map is built, it is uploaded to the cloud server for global map stitching, and the stitched map is finally transmitted back to the edge device for vehicle localization.
Further, performing distortion compensation on the point cloud data comprises:
calculating the included angle of the point cloud frame from the head and tail data of the frame as read, to obtain the actual sampling time of the frame;
obtaining the angular velocity and linear velocity of the vehicle from the readings of the inertial measurement unit; multiplying the angular velocity by time to obtain the vehicle rotation angle and the linear velocity by time to obtain the vehicle offset displacement, and generating a rotation-offset matrix from them;
multiplying the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data.
Further, the pose of the robot after sensor alignment is calculated as:

X(t_n) = X(t_{o1}) + \frac{t_n - t_{o1}}{t_{o2} - t_{o1}} \left( X(t_{o2}) - X(t_{o1}) \right)   (1)

where X(t_n) is the robot pose at time t_n, and X(t_{o1}), X(t_{o2}) are the odometry data at times t_{o1} and t_{o2} respectively.
Further, step 3 ends when the distance between the pose estimate computed after inserting a frame of point cloud data and the pose estimate of the previous frame does not exceed a preset threshold.
Furthermore, any area of the point cloud data can be uniquely indexed by a ring and a sector, so representing the point cloud data in two-dimensional form yields a Scan Context feature map;

denoting the Scan Context feature maps of two point clouds as I^q and I^c, the distance between Scan Context feature maps is defined as:

d(I^q, I^c) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^q \cdot c_j^c}{\| c_j^q \| \, \| c_j^c \|} \right)   (2)

where N_s is the number of sectors, and c_j^q, c_j^c are the j-th column (sector) vectors of I^q and I^c respectively;

for two Scan Context feature maps of the same location visited at different times, the distance (2) is evaluated for every possible column translation of the sector columns and the minimum is taken as the final distance; the best column-translation match n^* is defined as:

n^* = \underset{n \in [N_s]}{\operatorname{argmin}}\; d(I^q, C^n I^c)   (3)

and the minimum distance is defined as:

D(I^q, I^c) = \min_{n \in [N_s]} d(I^q, C^n I^c)   (4)

where C^n shifts the columns of I^c by n sectors, N_s represents the number of sectors in the point cloud, and n represents one such sector shift.
Further, the distance between the Scan Context of the point cloud data in the task queue and the Scan Context of each candidate scene is calculated with formula (2), and a candidate scene satisfying the threshold is regarded as a closed loop:

c^* = \underset{c_k \in \mathcal{C}}{\operatorname{argmin}}\; D(I^{q0}, I^{c_k}), \quad \text{s.t. } D < \tau   (5)

where I^{c_k} is the Scan Context feature map of a candidate scene, I^{q0} is the Scan Context feature map of the point cloud data in the task queue, D is the minimum distance between I^{q0} and I^{c_k} calculated with formula (4), \tau is the preset distance threshold for judging whether two scenes are similar, and c^* is the finally detected historical scene consistent with the current scene.
Further, the error function e_{ij}(x_i, x_j) is defined as:

e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j)   (6)

where x_i is the pose of node i, x_j is the pose of node j, z_{ij} is the mean of the virtual measurements between node i and node j, and \hat{z}_{ij}(x_i, x_j) is the desired virtual measurement; the error e_{ij}(x_i, x_j) depends on the difference between the desired measurement and the actual measurement;

the error function is collected into F(x):

F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^T \Omega_{ij} e_{ij}   (7)

x^* = \operatorname{argmin}_x F(x)   (8)

where x^* is the special solution corresponding to the error function of formula (6), i.e., the value minimizing the pose estimation error; e_{ij}^T is the transpose of the error e_{ij}; C = \{\langle e_{ij}, \Omega_{ij} \rangle\} is the constraint set; and \Omega_{ij} is the information matrix of the virtual measurement between node i and node j.
Further, iterating the error function by the Gauss-Newton method to find the optimal solution comprises:

letting the initial point be \breve{x} = (\breve{x}_i, \breve{x}_j) and giving it an increment (\Delta x_i, \Delta x_j), a first-order Taylor expansion of the error function is performed:

e_{ij}(\breve{x}_i + \Delta x_i, \breve{x}_j + \Delta x_j) \approx e_{ij}(\breve{x}_i, \breve{x}_j) + J_{ij} \Delta x   (9)

where \Delta x is the increment of the function e_{ij} about the initial predicted pose \breve{x}, and J_{ij} is the Jacobian matrix of the error function with respect to \Delta x; for the objective function term of the edge between node i and node j:

F_{ij}(\breve{x} + \Delta x) \approx c_{ij} + 2 b_{ij}^T \Delta x + \Delta x^T H_{ij} \Delta x   (10)

with c_{ij} = e_{ij}^T \Omega_{ij} e_{ij}, b_{ij} = J_{ij}^T \Omega_{ij} e_{ij}, and H_{ij} = J_{ij}^T \Omega_{ij} J_{ij}; the objective function terms of the edges between all nodes sum to:

F(\breve{x} + \Delta x) \approx c + 2 b^T \Delta x + \Delta x^T H \Delta x   (11)

where c = \sum c_{ij}, b = \sum b_{ij}, H = \sum H_{ij}; \Delta x^T, b^T are the transposes of \Delta x and b; and \sum_{\langle i,j \rangle \in C} denotes summation over all edges \langle i,j \rangle in the constraint set C;

and the optimal increment \Delta x is obtained by solving.
A terminal device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the driving scene construction method based on point cloud fusion are implemented.
A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the driving scene construction method based on point cloud fusion.
Compared with the prior art, the invention has the following technical characteristics:
1. The method creatively proposes and applies a point-cloud-fusion SLAM map construction method. By fusing inertial measurement unit (IMU) and GPS sensor data and combining optimization means such as point cloud distortion compensation, sensor time alignment, graph optimization, and closed-loop detection, the method greatly improves mapping accuracy while preserving real-time performance, quickly achieves in-scene localization and high-precision map construction, and overcomes the large mapping error and poor real-time performance of traditional pure laser odometry.
2. The method designs a map construction and localization scheme based on edge collaboration: tasks are issued by the cloud to the edge devices, and sub-maps are built by the edge devices and then uploaded to the cloud for stitching and merging. This makes full use of edge computing power and improves resource utilization, while reducing the repeated map-building work of individual vehicles, cutting work redundancy, lightening the load on edge devices, and solving the long runtimes and low accuracy caused by insufficient computing power in traditional map construction methods.
Drawings
FIG. 1 is a flow diagram of a method of the present invention;
FIG. 2 is a schematic flow chart of distortion compensation for point cloud data;
FIG. 3 is a flow chart of the front-end mileage estimation implementation;
FIG. 4 is a schematic diagram of a closed loop detection process;
FIG. 5 is a schematic diagram of sensor time alignment.
Detailed Description
The invention provides a driving scene construction method based on point cloud fusion, which mainly adopts a map construction scheme built on simultaneous localization and mapping technology. First, distortion compensation and sensor time alignment are performed on the point cloud data, the data are then published, and front-end mileage estimation is carried out. After receiving the point cloud data, the front-end mileage estimation node performs point cloud matching and generates a preliminary laser odometry estimate. Closed-loop detection is then performed on the estimate; if a closed loop is detected, the information is passed to the back end for back-end algorithm optimization. The back-end optimization yields the pose information of the estimate nodes and the vertex set of a graph formed with the landmarks, and the resulting optimized laser odometry data are published and displayed.
Referring to the attached drawings, the method for constructing the driving scene based on point cloud fusion comprises the following specific implementation steps:
Step 1, distortion compensation

The vehicle-mounted lidar scans and collects point cloud data while the automobile is moving, but because the coordinate system of the laser points changes constantly, the collected point cloud data is distorted and must be distortion-compensated.
a. Calculate the included angle of the point cloud frame from the head and tail data of the frame as read, to obtain the actual sampling time of the frame;
b. Obtain the angular velocity and linear velocity of the vehicle from the readings of the inertial measurement unit. Multiplying the angular velocity by time gives the vehicle rotation angle; multiplying the linear velocity by time gives the vehicle offset displacement; a rotation-offset matrix is generated from them;
c. Multiply the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data.
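Steps a-c can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a planar motion model with constant yaw rate `omega_z` and velocity `v_xy` over the frame, and recovers per-point timing from each point's azimuth; the function and parameter names are hypothetical.

```python
import numpy as np

def deskew_point_cloud(points, frame_time, omega_z, v_xy):
    """De-skew one lidar frame (steps a-c): undo, for each point, the
    motion the vehicle made between the frame start and that point.
    points: (N, 3) array in the lidar frame; frame_time: sweep duration (s);
    omega_z: yaw rate from the IMU (rad/s); v_xy: (vx, vy) velocity (m/s)."""
    # a. fraction of the sweep elapsed at each point, from its azimuth
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    span = azimuth[-1] - azimuth[0]
    span = span if abs(span) > 1e-9 else 1e-9
    t = frame_time * (azimuth - azimuth[0]) / span
    # b. rotation angle and offset displacement accumulated per point
    theta = omega_z * t                    # yaw = angular velocity * time
    dx, dy = v_xy[0] * t, v_xy[1] * t      # offset = linear velocity * time
    # c. apply the inverse rotation-offset to every point
    c, s = np.cos(-theta), np.sin(-theta)
    out = points.copy().astype(float)
    px, py = points[:, 0] - dx, points[:, 1] - dy
    out[:, 0] = c * px - s * py
    out[:, 1] = s * px + c * py
    return out
```

A real implementation would use per-point hardware timestamps where available and a full SE(3) motion model; the azimuth-based timing above is only an approximation for a single spinning beam.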
Step 2, aligning the sensor time
Sensor time alignment is performed on the distortion-compensated point cloud data. The method eliminates the influence of the sampling-rate difference by linearly interpolating the odometry data, so that the robot pose is sensor-aligned at the moment the lidar scans and acquires the point cloud data. The specific calculation formula is:

X(t_n) = X(t_{o1}) + \frac{t_n - t_{o1}}{t_{o2} - t_{o1}} \left( X(t_{o2}) - X(t_{o1}) \right)   (1)

where X(t_n) is the robot pose at time t_n, and X(t_{o1}), X(t_{o2}) are the odometry data at times t_{o1} and t_{o2} respectively. The pose calculated in this step serves as the initial predicted pose for the front-end mileage estimation of step 3.
While the point cloud data undergoes distortion compensation, linear interpolation is simultaneously performed on the odometry data collected by the sensor, aligning the odometry data to the point cloud timestamps; this completes the sensor time alignment, yields the pose X(t_n), and eliminates the influence of the sampling-rate difference.
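Formula (1) is plain linear interpolation between the two odometry samples that bracket the lidar timestamp; a minimal sketch (the function name and the pose-as-list representation are illustrative assumptions):

```python
def interpolate_pose(t_n, t_o1, x_o1, t_o2, x_o2):
    """Formula (1): interpolate the odometry pose at lidar timestamp t_n
    from the odometry samples x_o1 at t_o1 and x_o2 at t_o2."""
    alpha = (t_n - t_o1) / (t_o2 - t_o1)
    # component-wise: X(t_n) = X(t_o1) + alpha * (X(t_o2) - X(t_o1))
    return [a + alpha * (b - a) for a, b in zip(x_o1, x_o2)]
```

Note that interpolating orientation components this way ignores angle wrap-around; a production system would interpolate rotations with, e.g., quaternion slerp.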
Step 3, front-end mileage estimation
Create a task queue for storing the point cloud data processed in steps 1 and 2 and for generating a local map. Whenever a frame of point cloud data is inserted into the queue, the initial predicted pose of the robot corresponding to that frame is input and point cloud matching with the local map is performed. Point cloud matching applies a normal distributions transform to the point cloud data to obtain all unit voxels of the target point cloud, substitutes the unit voxels into a Gaussian probability model, and finds the point on the local map most similar to the initial predicted pose, yielding the pose estimate of the front-end mileage estimation; this pose estimate is used to update the local map once. The step ends when the distance between the pose estimate computed after inserting a frame of point cloud data and the pose estimate of the previous frame does not exceed the preset threshold.
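The voxel-Gaussian idea behind the matching step can be sketched as follows: the local map is divided into unit voxels, each voxel gets a Gaussian (mean and covariance), and a candidate scan is scored by the Gaussian response of its points. This is a bare-bones illustration of NDT scoring only, not the patent's matcher (there is no pose search here), and all names are hypothetical.

```python
import numpy as np
from collections import defaultdict

def build_ndt_voxels(map_cloud, voxel=1.0):
    """Divide the local map into unit voxels and fit a Gaussian
    (mean, covariance) to the points falling in each voxel."""
    cells = defaultdict(list)
    for p in map_cloud:
        cells[tuple((p // voxel).astype(int))].append(p)
    voxels = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= 5:  # need enough points for a stable covariance
            voxels[key] = (pts.mean(axis=0),
                           np.cov(pts.T) + 1e-6 * np.eye(3))
    return voxels

def ndt_score(scan, voxels, voxel=1.0):
    """Score a (transformed) scan against the voxel Gaussians: sum of
    Gaussian responses; a better-aligned scan scores higher."""
    score = 0.0
    for p in scan:
        key = tuple((p // voxel).astype(int))
        if key in voxels:
            mu, cov = voxels[key]
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return score
```

A full NDT matcher would maximize this score over candidate poses, starting from the initial predicted pose of step 2.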
Step 4, closed loop detection
The pose estimate obtained above is passed through closed-loop detection to determine whether the robot has returned to a previous position, avoiding the accumulated error that would arise if an erroneous pose estimate from step 3 were propagated frame by frame.
Step 4.1, using a point-cloud-based scene recognition scheme, describe the point cloud data in the task queue created by the front-end mileage estimation with a Scan Context feature map (a non-histogram-based global descriptor).
Because any area of the point cloud can be uniquely indexed by a ring and a sector, representing the point cloud data in two-dimensional form yields a Scan Context feature map, i.e., a two-dimensional feature map expressing this index relationship.
Denote the Scan Context feature maps of two point clouds as I^q and I^c; the distance between Scan Context feature maps is defined as:

d(I^q, I^c) = \frac{1}{N_s} \sum_{j=1}^{N_s} \left( 1 - \frac{c_j^q \cdot c_j^c}{\| c_j^q \| \, \| c_j^c \|} \right)   (2)

where N_s is the number of sectors, and c_j^q, c_j^c are the j-th column (sector) vectors of I^q and I^c respectively.

For two Scan Context feature maps of the same location visited at different times, the method evaluates the distance for every possible column translation of the sector columns and takes the minimum as the final distance; the best column-translation match n^* is defined as:

n^* = \underset{n \in [N_s]}{\operatorname{argmin}}\; d(I^q, C^n I^c)   (3)

and the minimum distance is defined as:

D(I^q, I^c) = \min_{n \in [N_s]} d(I^q, C^n I^c)   (4)

where C^n shifts the columns of I^c by n sectors; in this scheme, N_s represents the number of sectors in the point cloud, n represents one such sector shift, and N_r represents the number of rings in the point cloud data.
The point cloud data can be divided into several sectors, and the distance result differs depending on which sector alignment is taken; the purpose of the formula is to compute the minimum distance between two Scan Contexts, i.e., the distance between two point clouds is represented by the distance under the closest-matching sector alignment.
Step 4.2, extract from the Scan Context feature map a descriptor ring that is insensitive to rotation, represent it as an N_r-dimensional vector k, and use these vectors to build a KD tree (k-dimensional tree); obtain the regions most similar to the ring of the current scene, and select the Scan Contexts corresponding to the several historical rings most similar to the current ring as the Scan Contexts of the candidate scenes.
Each scan context represents a frame of point cloud data, each frame of point cloud data is obtained by scanning an actual scene through laser, each scan context corresponds to one actual scene, and if the distance between the two scenes is small enough, the two scenes can be considered to be basically consistent. The candidate scene is the scene corresponding to the candidate scan context.
Since the point cloud data is input frame by frame, the scan context corresponding to the point cloud data input before the current frame is defined as a historical scene. The closed loop detection is to judge whether the robot returns to the previous position by comparing the similarity between the current scene and the historical scene, i.e. whether the map forms a closed loop.
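The ring descriptor of step 4.2 can be illustrated as below: each Scan Context is reduced to an N_r-dimensional ring key (one mean per row), which is rotation-invariant because shifting the sector columns does not change row means; candidate scenes are then retrieved by nearest ring key. A KD tree would replace the linear scan at scale; the names here are illustrative, not from the patent.

```python
import numpy as np

def ring_key(sc):
    """Rotation-invariant descriptor of step 4.2: the mean of each row
    (ring) of the Scan Context map, an N_r-dimensional vector."""
    return sc.mean(axis=1)

def find_candidate_scenes(history, query_sc, k=3):
    """Return indices of the k historical scans whose ring keys are
    nearest (Euclidean) to the query's; a KD tree built over the same
    keys would answer this query in logarithmic time."""
    keys = np.stack([ring_key(sc) for sc in history])
    dist = np.linalg.norm(keys - ring_key(query_sc), axis=1)
    return np.argsort(dist)[:k].tolist()
```

The retrieved candidates are then disambiguated with the full shift-searched distance of formula (4), since different scenes can share similar ring keys.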
Step 4.3, calculate with formula (2) the distance between the Scan Context of the task-queue point cloud data of step 4.1 and the Scan Context of each candidate scene obtained in step 4.2; a candidate scene satisfying the threshold is regarded as a closed loop:

c^* = \underset{c_k \in \mathcal{C}}{\operatorname{argmin}}\; D(I^{q0}, I^{c_k}), \quad \text{s.t. } D < \tau   (5)

where I^{c_k} is the Scan Context feature map of a candidate scene, I^{q0} is the Scan Context feature map of the point cloud data in the task queue, D is the minimum distance between I^{q0} and I^{c_k} calculated with formula (4), \tau is the preset distance threshold for judging whether two scenes are similar, and c^* is the finally detected historical scene consistent with the current scene.
The judgment is made through formula (5): if the similarity is high enough, i.e., the distance is below the threshold, a closed loop is considered formed, meaning the pose estimate obtained in step 3 carries no error, and the information is passed to the back end for optimization; otherwise, return to step 3.
Step 5, optimizing a back-end algorithm
The method abstracts the pose estimates obtained by front-end mileage estimation at different moments as nodes, abstracts the constraints generated by the robot's observations at different positions as edges between the nodes, and forms the vertex set of the graph from the node pose information and the landmarks; in graph optimization, the constraints of the observation data are represented by edges, and an edge can also be regarded as a virtual measurement.
The error function e_{ij}(x_i, x_j) is defined as:

e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j)   (6)

where x_i is the pose of node i, x_j is the pose of node j, z_{ij} is the mean of the virtual measurements between node i and node j, and \hat{z}_{ij}(x_i, x_j) is the desired virtual measurement; the error e_{ij}(x_i, x_j) depends on the difference between the desired measurement and the actual measurement.

The error function is collected into F(x):

F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^T \Omega_{ij} e_{ij}   (7)

x^* = \operatorname{argmin}_x F(x)   (8)

where x^* is the special solution corresponding to the error function of formula (6), i.e., the value minimizing the pose estimation error; e_{ij}^T is the transpose of the error e_{ij}; C = \{\langle e_{ij}, \Omega_{ij} \rangle\} is the constraint set; and \Omega_{ij} is the information matrix of the virtual measurement between node i and node j.
Iteratively solving the optimal solution by a Gauss Newton method or a Levenberg-Marquardt algorithm:
let the initial point beTo it is given an (Δ x) i ,Δx j ) The first order Taylor expansion is performed on the error function:
where Δ x represents a differential, representative functionAt an initial predicted position attitudeThe amount of change in (approaching infinity) from; j. the design is a square ij As a pair of error functionsCalculated Jacobian matrix, target function for edge between node i and node jThe number of items is:
the objective function terms of the edges between all nodes are:
wherein c = ∑ c ij ,b=∑b ij ,H=∑H ij ,Δx T 、b T Are the transpositions of deltax, b respectively,for initial prediction of position attitude, ∑ <i,j>∈C Representing the function to be followedUnder the constraint condition<i,j>Summing all values under the condition of the element C,the value of i and j in (1) is in a constraint range C, and C is constraint.
The optimal increment \Delta x is obtained by solving and is superposed on the initial predicted pose \breve{x} to obtain a new pose estimate; the new pose estimate is taken as the initial predicted pose of the next iteration, formula (5) is revisited, and the iteration continues until x^* is reached, i.e., the pose estimation error is minimal.
As can be seen from step 3, in the front-end mileage estimation a pose estimate is produced every time a frame of point cloud data is inserted into the task queue; each pose estimate corresponds to a frame of point cloud data, and each frame corresponds to a Scan Context feature map, denoted I^{q0} in formula (5).
x^* minimizes the pose estimation error; the goal of back-end optimization is to minimize the error between the pose estimate and the actual measurement. By computing x^*, the pose estimate is continually refined, the map is continually updated, and the map error at x^* is minimized.
The local map obtained by the front-end mileage estimation of step 3 is checked by the closed-loop detection of step 4 for errors along the mapped route. In step 5, the error-minimizing pose estimate obtained by graph optimization is used to update the local map, minimizing the map error and yielding the final local map.
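The back-end loop of formulas (6)-(11) can be illustrated on a toy problem: scalar poses along a line, two odometry edges, and one loop-closure edge. Per edge, b_ij and H_ij are accumulated as in formulas (10)-(11), H Δx = -b is solved, and the increment is superposed on the estimate. This is a deliberately simplified sketch (scalar poses instead of full SE(3) poses, and a hypothetical anchoring term on the first node to fix the gauge freedom):

```python
import numpy as np

def optimize_pose_graph(x0, edges, iters=10):
    """Gauss-Newton on a pose graph with scalar poses.
    edges: tuples (i, j, z_ij, omega_ij) with error e_ij = z_ij - (x_j - x_i),
    mirroring formula (6) with expected measurement x_j - x_i."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(iters):
        H, b = np.zeros((n, n)), np.zeros(n)
        for i, j, z, omega in edges:
            e = z - (x[j] - x[i])                     # formula (6)
            J = np.zeros(n); J[i], J[j] = 1.0, -1.0   # Jacobian of e w.r.t. x
            H += omega * np.outer(J, J)               # H_ij = J^T Omega J
            b += omega * J * e                        # b_ij = J^T Omega e
        H[0, 0] += 1.0            # anchor the first pose (gauge fix)
        dx = np.linalg.solve(H, -b)
        x += dx                   # superpose the increment on the estimate
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```

With mutually consistent measurements the linear problem converges in one iteration; with an inconsistent loop closure, the solver spreads the residual over all edges according to their information weights instead of letting it accumulate at the end of the chain.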
Step 6, edge collaboration
The construction scheme of steps 1-5 runs on the edge device; after the local map is built, it is uploaded to the cloud server for global map stitching, and the stitched map is finally transmitted back to the edge device for vehicle localization.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (9)
1. A driving scene construction method based on point cloud fusion is characterized by comprising the following steps:
step 1, carrying out distortion compensation on point cloud data scanned and collected by a vehicle-mounted laser radar in the automobile movement process;
step 2, performing sensor time alignment on the point cloud data after distortion compensation, performing linear interpolation calculation on odometer data at the same time while performing distortion compensation on the point cloud data, and aligning the odometer data to the point cloud data for time to obtain a position posture of the robot realizing sensor alignment as an initial predicted position posture of front-end mileage estimation;
step 3, establishing a task queue for storing point cloud data and generating a local map; when each frame of point cloud data is inserted into the task queue, inputting the initial predicted position posture of the robot corresponding to the point cloud data, performing point cloud matching with a local map, performing normal distribution transformation on the point cloud data to obtain all unit voxels of a target point cloud, substituting the unit voxels into a Gaussian probability model, finding the point on the local map which is most similar to the initial predicted position posture, and obtaining a pose pre-estimated value of front-end mileage estimation;
step 4, adopting a scene identification mode based on point cloud, and describing point cloud data in a task queue established by front-end mileage estimation by using a scan Context characteristic diagram; defining a distance calculation formula between scan Context feature graphs; extracting a descriptor ring from the scan Context feature graph, representing the descriptor ring as a multi-dimensional vector to construct a KD tree, obtaining the most similar region of the ring of the current scene, and selecting the scan Context corresponding to the history ring most similar to the ring of the current scene as the scan Context of the candidate scene; calculating the distance between the scan Context of the point cloud data in the task queue and the scan Context of the candidate scene, and performing back-end optimization on the pose estimated value if the candidate scene meeting the threshold is considered as a closed loop;
step 5, abstractly defining pose prediction values obtained by estimating the front-end mileage at different moments of the robot as nodes, abstracting constraints generated by the observation of the robot at different positions as edges between the nodes, forming a vertex set of a graph by the pose information of the nodes and the road signs, and representing the constraints of the observation data by the edges;
defining an error function as the difference between the mean of the virtual measurements between the two nodes and the desired virtual measurement; iteration is carried out on the error function through a Gauss-Newton method to obtain an optimal solution, the obtained optimal solution is superposed on the initial predicted position posture to obtain a new pose estimated value, the new pose estimated value is used as the initial predicted position posture of the next iteration, and the iteration is continued until the pose estimated value error is minimum;
performing steps 1 to 5 on the edge device, building a local map, uploading the local map to a cloud server for global map splicing, and finally transmitting the spliced map back to the edge device for vehicle positioning.
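A minimal sketch of the global map splicing on the cloud server, assuming each edge device uploads its local point cloud together with a global pose. The planar pose parameterization and the voxel-downsampling step are illustrative choices, not taken from the claims.

```python
import numpy as np

def pose_matrix(x, y, yaw):
    """4x4 homogeneous transform for a planar pose (hypothetical global pose of a local map)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0, x],
                     [s,  c, 0, y],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def stitch(local_maps, poses, voxel=0.5):
    """Transform each local map into the global frame, concatenate, and voxel-downsample."""
    clouds = []
    for pts, T in zip(local_maps, poses):
        hom = np.c_[pts, np.ones(len(pts))]       # homogeneous coordinates
        clouds.append((hom @ T.T)[:, :3])         # apply the global pose
    merged = np.vstack(clouds)
    # keep one point per voxel to bound the size of the global map
    _, idx = np.unique(np.floor(merged / voxel).astype(int), axis=0, return_index=True)
    return merged[idx]
```

The spliced map returned here would then be sent back to the edge devices for positioning.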
2. The driving scene construction method based on point cloud fusion of claim 1, wherein performing distortion compensation on the point cloud data comprises:
calculating the included angle of the point cloud frame from the head and tail data of the read frame to obtain the actual sampling time of the point cloud frame;
obtaining the angular velocity and linear velocity of the vehicle from the read inertial measurement unit data; multiplying the angular velocity by the time gives the rotation angle of the vehicle, multiplying the linear velocity by the time gives the offset displacement of the vehicle, and a rotation-offset matrix is generated from them;
and multiplying the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data.
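A toy version of the compensation in claim 2, assuming a planar (yaw-only) motion model and per-point timestamps; real implementations handle full 3-D rotation. Function and parameter names are hypothetical.

```python
import numpy as np

def compensate_distortion(points, timestamps, omega_z, v):
    """De-skew a rotating-lidar frame: move each point by the vehicle motion
    accumulated between the frame start and that point's sample time.
    omega_z: yaw rate (rad/s); v: (vx, vy) linear velocity (m/s), both from the IMU."""
    out = np.empty_like(points)
    t0 = timestamps[0]
    for i, (p, t) in enumerate(zip(points, timestamps)):
        dt = t - t0
        ang = omega_z * dt                     # rotation angle = angular velocity * time
        dx, dy = v[0] * dt, v[1] * dt          # offset displacement = linear velocity * time
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation-offset matrix (yaw only)
        out[i] = R @ p + np.array([dx, dy, 0.0])
    return out
```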
3. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein the robot pose used to realize sensor time alignment is calculated as:

X(t_n) = X(t_o1) + ((t_n − t_o1) / (t_o2 − t_o1)) · (X(t_o2) − X(t_o1))   (1)

wherein X(t_n) represents the pose of the robot at time t_n, and X(t_o1), X(t_o2) are the odometry data at times t_o1 and t_o2 respectively.
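The alignment formula image did not survive extraction; under the usual linear-interpolation reading (pose at t_n interpolated between the two odometry samples bracketing it), it can be sketched as:

```python
import numpy as np

def interpolate_pose(t_n, t_o1, x_o1, t_o2, x_o2):
    """Pose at t_n by linear interpolation between odometry samples at t_o1 <= t_n <= t_o2
    (assumed form of the alignment formula, not confirmed by the claim text)."""
    x_o1, x_o2 = np.asarray(x_o1, float), np.asarray(x_o2, float)
    alpha = (t_n - t_o1) / (t_o2 - t_o1)   # fraction of the interval elapsed at t_n
    return x_o1 + alpha * (x_o2 - x_o1)
```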
4. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein any region of the point cloud data can be uniquely indexed by a ring and a sector, and the point cloud data is represented in two-dimensional form to obtain a scan context feature map;
denoting the scan context feature maps of two point clouds as I_q and I_c, the distance between scan context feature maps is defined as:

d(I_q, I_c) = (1/N_s) · Σ_{j=1..N_s} (1 − (c_j^q · c_j^c) / (‖c_j^q‖ ‖c_j^c‖))   (2)

wherein N_s represents the number of sectors in the point cloud, and c_j^q, c_j^c are the j-th column vectors of I_q and I_c respectively;
for two scan context feature maps taken at the same place but visited at different times, a column shift may occur; the distance is therefore computed over all possible column shifts of I_c and the minimum is taken as the final distance. The best column-shift match n* is defined as:

n* = argmin_{n ∈ [N_s]} d(I_q, I_c^n)   (3)

and the minimum distance is defined as:

D(I_q, I_c) = min_{n ∈ [N_s]} d(I_q, I_c^n)   (4)

wherein N_s represents the number of sectors in the point cloud, and n represents one of the column shifts, I_c^n being I_c shifted by n columns.
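A sketch of formulas (2)-(4), assuming each scan context column is a sector vector and using np.roll for the column shift; the function names are illustrative.

```python
import numpy as np

def sc_distance(sc_q, sc_c):
    """Formula (2): mean cosine distance between corresponding columns (sectors)."""
    num = (sc_q * sc_c).sum(axis=0)
    den = np.linalg.norm(sc_q, axis=0) * np.linalg.norm(sc_c, axis=0)
    valid = den > 0                               # skip empty sectors
    return np.mean(1.0 - num[valid] / den[valid]) if valid.any() else 1.0

def min_distance(sc_q, sc_c):
    """Formulas (3)-(4): try every column shift of sc_c (viewpoint rotation),
    return the minimum distance D and the best shift n*."""
    n_sectors = sc_c.shape[1]
    dists = [sc_distance(sc_q, np.roll(sc_c, n, axis=1)) for n in range(n_sectors)]
    n_star = int(np.argmin(dists))
    return dists[n_star], n_star
```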
5. The method for constructing driving scenes based on point cloud fusion as claimed in claim 4, wherein the distance between the scan context of the point cloud data in the task queue and the scan context of each candidate scene is calculated with formula (4), and a candidate scene satisfying the threshold is considered a closed loop:

c* = argmin_{c_k ∈ C} D(I_q, I_{c_k}), subject to D < τ   (5)

wherein I_{c_k} represents the scan context feature map of a candidate scene, I_q represents the scan context feature map of the point cloud data in the task queue, D is the minimum distance between I_q and I_{c_k} calculated with formula (4), τ is the set distance threshold for judging whether two scenes are similar, and c* is the historical scene finally detected as matching the current scene.
6. The method for constructing driving scene based on point cloud fusion as claimed in claim 1, wherein the error function e_ij(x_i, x_j) is defined as:

e_ij(x_i, x_j) = z_ij − ẑ_ij(x_i, x_j)   (6)

wherein x_i represents the pose of node i, x_j represents the pose of node j, z_ij represents the mean of the virtual measurement between node i and node j, and ẑ_ij(x_i, x_j) represents the expected virtual measurement; the error e_ij(x_i, x_j) depends on the difference between the expected measurement and the actual measurement;
the error function is aggregated into F(x):

F(x) = Σ_{<i,j> ∈ C} e_ij^T Ω_ij e_ij   (7)

x* = argmin F(x)   (8)

wherein x* is the solution minimizing formula (7), i.e. the value with the minimum pose estimate error; e_ij is shorthand for e_ij(x_i, x_j); <i,j> ∈ C is the constraint set, C = {<e_ij, Ω_ij>}; Ω_ij represents the information matrix of the virtual measurement between node i and node j.
7. The method for constructing a driving scene based on point cloud fusion as claimed in claim 6, wherein the iterative solution of the error function by the Gauss-Newton method comprises:
letting the initial point be x̆ = (x̆_i, x̆_j) and giving it an increment (Δx_i, Δx_j), a first-order Taylor expansion is performed on the error function:

e_ij(x̆_i + Δx_i, x̆_j + Δx_j) ≈ e_ij(x̆_i, x̆_j) + J_ij Δx   (9)

wherein Δx represents the increment, i.e. the change of the function e_ij(x_i, x_j) about the initial predicted pose x̆; J_ij is the Jacobian matrix of the error function e_ij evaluated at x̆; the objective function term of the edge between node i and node j then becomes:

F_ij(x̆ + Δx) ≈ c_ij + 2 b_ij Δx + Δx^T H_ij Δx   (10)

with c_ij = e_ij^T Ω_ij e_ij, b_ij = e_ij^T Ω_ij J_ij, and H_ij = J_ij^T Ω_ij J_ij; the objective function terms of the edges between all nodes sum to:

F(x̆ + Δx) ≈ c + 2 b^T Δx + Δx^T H Δx   (11)

wherein c = Σ c_ij, b = Σ b_ij, H = Σ H_ij; Δx^T and b^T are the transposes of Δx and b respectively; Σ_{<i,j> ∈ C} denotes summing the function terms over all <i,j> within the constraint set C;
and setting the derivative of formula (11) with respect to Δx to zero and solving the linear system H Δx = −b yields the optimal solution Δx.
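The assembly of b and H and the solution of the increment can be illustrated on a toy one-dimensional pose graph: three nodes, two odometry edges, and one loop-closure edge, with unit information matrices and the first node anchored by a soft prior. This is a didactic sketch, not the patented implementation; because the toy problem is linear, Gauss-Newton converges in a single step.

```python
import numpy as np

# (i, j, z_ij): virtual measurement z_ij of the relative pose x_j - x_i,
# including a loop-closure edge 0 -> 2 that contradicts pure odometry slightly.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.1)]
x = np.array([0.0, 0.8, 1.7])                 # initial predicted poses

for _ in range(5):                            # Gauss-Newton iterations
    H = np.zeros((3, 3))
    b = np.zeros(3)
    for i, j, z in edges:
        e = (x[j] - x[i]) - z                 # residual (sign opposite to eq. (6); harmless for minimization)
        J = np.zeros(3); J[i] = -1.0; J[j] = 1.0   # Jacobian of the residual w.r.t. (x_i, x_j)
        H += np.outer(J, J)                   # H_ij = J^T * Omega * J, with Omega = 1
        b += J * e                            # b_ij = J^T * Omega * e
    H[0, 0] += 1e6                            # soft prior anchoring node 0 (removes gauge freedom)
    dx = np.linalg.solve(H, -b)               # solve H * dx = -b
    x = x + dx                                # superpose the increment on the current estimate
```

The loop-closure residual (0.1) is spread evenly over the two odometry edges, which is exactly the behavior the back-end optimization in step 5 relies on.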
8. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the driving scene construction method based on point cloud fusion according to any one of claims 1 to 7.
9. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the steps of the method for constructing a driving scene based on point cloud fusion according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110755584.3A CN113379915B (en) | 2021-07-05 | 2021-07-05 | Driving scene construction method based on point cloud fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379915A CN113379915A (en) | 2021-09-10 |
CN113379915B true CN113379915B (en) | 2022-12-23 |
Family
ID=77580986
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110755584.3A Active CN113379915B (en) | 2021-07-05 | 2021-07-05 | Driving scene construction method based on point cloud fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113379915B (en) |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3078935A1 (en) * | 2015-04-10 | 2016-10-12 | The European Atomic Energy Community (EURATOM), represented by the European Commission | Method and device for real-time mapping and localization |
CA3026914C (en) * | 2018-07-02 | 2021-11-30 | Beijing Didi Infinity Science And Development Co., Ltd. | Vehicle navigation system using pose estimation based on point cloud |
KR102379295B1 (en) * | 2019-01-30 | 2022-03-25 | 바이두닷컴 타임즈 테크놀로지(베이징) 컴퍼니 리미티드 | RGB point cloud-based map generation system for autonomous vehicles |
CN109974707B (en) * | 2019-03-19 | 2022-09-23 | 重庆邮电大学 | Indoor mobile robot visual navigation method based on improved point cloud matching algorithm |
CN110689622B (en) * | 2019-07-05 | 2021-08-27 | 电子科技大学 | Synchronous positioning and composition method based on point cloud segmentation matching closed-loop correction |
CN110648389A (en) * | 2019-08-22 | 2020-01-03 | 广东工业大学 | 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle |
CN111882977B (en) * | 2020-05-06 | 2022-04-29 | 北京嘀嘀无限科技发展有限公司 | High-precision map construction method and system |
CN111337947B (en) * | 2020-05-18 | 2020-09-22 | 深圳市智绘科技有限公司 | Instant mapping and positioning method, device, system and storage medium |
CN111929699B (en) * | 2020-07-21 | 2023-05-09 | 北京建筑大学 | Laser radar inertial navigation odometer considering dynamic obstacle and map building method and system |
CN112966542A (en) * | 2020-12-10 | 2021-06-15 | 武汉工程大学 | SLAM system and method based on laser radar |
CN112734852B (en) * | 2021-03-31 | 2021-06-29 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
CN113066105B (en) * | 2021-04-02 | 2022-10-21 | 北京理工大学 | Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit |
2021-07-05: CN application CN202110755584.3A granted as patent CN113379915B (status: Active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108759833B (en) | Intelligent vehicle positioning method based on prior map | |
CN109709801B (en) | Indoor unmanned aerial vehicle positioning system and method based on laser radar | |
Lenac et al. | Fast planar surface 3D SLAM using LIDAR | |
CN110930495A (en) | Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium | |
CN113781582B (en) | Synchronous positioning and map creation method based on laser radar and inertial navigation combined calibration | |
CN114526745B (en) | Drawing construction method and system for tightly coupled laser radar and inertial odometer | |
Huang | Review on LiDAR-based SLAM techniques | |
CN112634451A (en) | Outdoor large-scene three-dimensional mapping method integrating multiple sensors | |
Li et al. | Road DNA based localization for autonomous vehicles | |
CN112444246B (en) | Laser fusion positioning method in high-precision digital twin scene | |
CN112965063B (en) | Robot mapping and positioning method | |
CN110487286B (en) | Robot pose judgment method based on point feature projection and laser point cloud fusion | |
WO2021021862A1 (en) | Mapping and localization system for autonomous vehicles | |
CN114323033B (en) | Positioning method and equipment based on lane lines and feature points and automatic driving vehicle | |
CN113920198B (en) | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment | |
CN113763549A (en) | Method, device and storage medium for simultaneous positioning and mapping by fusing laser radar and IMU | |
CN116878501A (en) | High-precision positioning and mapping system and method based on multi-sensor fusion | |
CN113379915B (en) | Driving scene construction method based on point cloud fusion | |
CN115239899B (en) | Pose map generation method, high-precision map generation method and device | |
CN116202487A (en) | Real-time target attitude measurement method based on three-dimensional modeling | |
CN114088103B (en) | Method and device for determining vehicle positioning information | |
CN115371662A (en) | Static map construction method for removing dynamic objects based on probability grid | |
CN114777775A (en) | Multi-sensor fusion positioning method and system | |
Li et al. | An SLAM Algorithm Based on Laser Radar and Vision Fusion with Loop Detection Optimization | |
CN113554705A (en) | Robust positioning method for laser radar in changing scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||