CN113379915A - Driving scene construction method based on point cloud fusion - Google Patents

Driving scene construction method based on point cloud fusion

Info

Publication number
CN113379915A
Authority
CN
China
Prior art keywords
point cloud
cloud data
data
map
scene
Legal status
Granted
Application number
CN202110755584.3A
Other languages
Chinese (zh)
Other versions
CN113379915B (en)
Inventor
李柔仪
黄梓欣
全芷莹
蔡健苹
李贺
余荣
谭北海
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202110755584.3A
Publication of CN113379915A
Application granted
Publication of CN113379915B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007: Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a driving scene construction method based on point cloud fusion, which mainly adopts a map construction scheme based on simultaneous localization and mapping (SLAM) technology. First, distortion compensation and sensor time alignment are performed on the point cloud data, which is then published for front-end mileage estimation. After receiving the point cloud data, the front-end mileage estimation node performs point cloud matching and generates a preliminary laser odometry estimate. Closed-loop detection is then carried out on the estimate; if a closed loop is detected, the information is passed to the back end for back-end optimization. Back-end optimization yields the pose information of the estimate nodes and the landmark vertex set of a graph, and the resulting optimized laser odometry data are published and displayed. The method greatly improves mapping precision while maintaining real-time mapping performance, can quickly achieve in-scene positioning and the construction of a high-precision map, and overcomes the large error and poor real-time performance of traditional pure laser odometry mapping.

Description

Driving scene construction method based on point cloud fusion
Technical Field
The invention relates to the field of real-time map construction, and in particular to a driving scene construction method based on point cloud fusion.
Background
In recent years, with the ever-increasing number of automobiles, the road carrying capacity of many cities has reached full load. Problems of traffic safety, travel efficiency, and energy conservation and emission reduction have become increasingly prominent, and promoting the intelligence and networking of vehicles is an important way to solve these traffic problems. In the development of vehicle intelligence and networking, three-dimensional map reconstruction of the driving scene is one of the key technical problems.
With the progress of the times, the sensors used by simultaneous localization and mapping (SLAM) technology have iterated continuously, and methods that fuse sensors such as laser radar and inertial measurement units have become more popular. SLAM has also shifted from methods based on Kalman or Bayesian filters to methods based on graph optimization. Compared with GPS positioning, it offers higher precision, solves the difficulty of accurately positioning details such as specific lane lines, and supports localization and map construction in special scenes such as tunnels and caves.
Some nonlinear optimization methods in the prior art are used for map construction: they reduce the accumulated error of the composition process through nonlinear least squares, but ignore the sparsity of the system, and the offline mapping approach they use cannot meet the real-time requirement of driving scene construction.
In other prior art, a Gauss-Newton iteration is introduced in the data association step of front-end estimation, and matching between point cloud frames is completed by directly matching the laser radar scan data with the map; however, this approach lacks back-end optimization and closed-loop detection and is sensitive to the initial registration value, resulting in low overall precision.
Disclosure of Invention
The invention aims to provide a driving scene construction method based on point cloud fusion, to solve the problems of insufficient computing power of a single device and repeated construction in traditional map construction methods.
To achieve this, the invention adopts the following technical scheme:
a driving scene construction method based on point cloud fusion comprises the following steps:
step 1, performing distortion compensation on the point cloud data scanned and collected by the vehicle-mounted laser radar while the automobile is moving;
step 2, performing sensor time alignment on the distortion-compensated point cloud data: while the point cloud data is distortion-compensated, linear interpolation is performed on the odometry data collected at the same time, aligning the odometry data to the point cloud data in time; the resulting sensor-aligned robot pose serves as the initial predicted pose of the front-end mileage estimation;
step 3, creating a task queue for storing point cloud data and generating a local map; when each frame of point cloud data is inserted into the task queue, the initial predicted pose of the robot corresponding to that point cloud data is input and matched against the local map: a normal distributions transform is applied to the point cloud data to obtain all unit voxels of the target point cloud, the unit voxels are substituted into a Gaussian probability model, and the point on the local map most similar to the initial predicted pose is found, giving the pose estimate of the front-end mileage estimation;
step 4, using a point-cloud-based scene recognition approach, describing the point cloud data in the task queue created by the front-end mileage estimation with Scan Context feature maps; defining a distance calculation formula between Scan Context feature maps; extracting the descriptor ring from the Scan Context feature map and representing it as a multi-dimensional vector to construct a KD tree, obtaining the regions most similar to the ring of the current scene, and selecting the Scan Contexts corresponding to the historical rings most similar to the ring of the current scene as the Scan Contexts of the candidate scenes; calculating the distance between the Scan Context of the point cloud data in the task queue and the Scan Contexts of the candidate scenes, and, if a candidate scene satisfying the threshold is regarded as a closed loop, performing back-end optimization on the pose estimate;
step 5, abstracting the pose estimates obtained by the front-end mileage estimation at different times as nodes and the constraints generated by the robot's observations at different positions as edges between the nodes, forming the vertex set of a graph from the pose information of the nodes and the landmarks, and representing the constraints of the observation data by the edges;
defining the error function as the difference between the mean of the virtual measurements between two nodes and the expected virtual measurement; iterating on the error function by the Gauss-Newton method to obtain the optimal solution, superposing the obtained optimal solution on the initial predicted pose to obtain a new pose estimate, which serves as the initial predicted pose of the next iteration, and continuing the iteration until the pose estimate error is minimal;
performing steps 1 to 5 in the edge devices, uploading the built local maps to a cloud server for global map stitching, and finally transmitting the stitched map back to the edge devices for the positioning operation of the vehicle.
Further, performing distortion compensation on the point cloud data comprises:
calculating the point cloud frame included angle from the head and tail data of the read point cloud frame to obtain the actual sampling time of the point cloud frame;
obtaining the angular velocity and linear velocity of the vehicle from the read inertial measurement unit information; multiplying the angular velocity by time to obtain the vehicle rotation angle, multiplying the linear velocity by time to obtain the vehicle offset displacement, and generating a rotation-offset matrix;
and multiplying the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data.
Further, the pose of the robot at which sensor alignment is achieved is calculated by linear interpolation:

$$X(t_n) = X(t_{o1}) + \frac{t_n - t_{o1}}{t_{o2} - t_{o1}}\left(X(t_{o2}) - X(t_{o1})\right) \quad (1)$$

where X(t_n) represents the pose of the robot at time t_n, and X(t_{o1}), X(t_{o2}) are the odometry data at times t_{o1} and t_{o2}, respectively.
Further, step 3 ends when the distance between the pose estimate calculated after inserting a frame of point cloud data and the pose estimate of the previous frame does not exceed a preset threshold.
Furthermore, any area of the point cloud data can be uniquely indexed by a ring and a sector, so representing the point cloud data in two-dimensional form yields a Scan Context feature map;

denoting the Scan Context feature maps of two point clouds as I^q and I^c, the distance between Scan Context feature maps is defined as:

$$d(I^q, I^c) = \frac{1}{N_s}\sum_{j=1}^{N_s}\left(1 - \frac{c_j^q \cdot c_j^c}{\lVert c_j^q \rVert \, \lVert c_j^c \rVert}\right) \quad (2)$$

where N_s is the number of sectors, and c_j^q, c_j^c are the j-th column (sector) vectors of I^q and I^c, respectively;

for the case of two Scan Context feature maps describing the same place visited at different times, the distance is calculated over all possible column shifts of one feature map and the minimum is taken as the final distance; the best column-shift match n* is defined as:

$$n^* = \underset{n \in [N_s]}{\arg\min}\; d\left(I^q, I^c_n\right) \quad (3)$$

and the minimum distance is defined as:

$$D(I^q, I^c) = \min_{n \in [N_s]} d\left(I^q, I^c_n\right) \quad (4)$$

where N_s represents the number of sectors in the point cloud, n one of the column shifts, I^c_n the feature map I^c with its columns shifted by n, and N_r represents the number of rings in the point cloud data.
Further, the distance between the Scan Context of the point cloud data in the task queue and the Scan Context of each candidate scene is calculated using formula (2), and a candidate scene satisfying the threshold is regarded as a closed loop:

$$c^* = \underset{c_k \in \mathcal{C}}{\arg\min}\; D\left(I^q, I^{c_k}\right), \quad \text{s.t. } D\left(I^q, I^{c^*}\right) < \tau \quad (5)$$

where I^{c_k} represents the Scan Context of a candidate scene, I^q represents the Scan Context of the point cloud data in the task queue, D is the minimum distance between the two scenes calculated using formula (4), τ is the set distance threshold for judging whether two scenes are similar, and c^* is the finally detected historical scene consistent with the current scene.
Further, the error function e_ij(x_i, x_j) is defined as:

$$e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j) \quad (6)$$

where x_i represents the pose of node i, z_ij represents the mean of the virtual measurements between node i and node j, and ẑ_ij(x_i, x_j) represents the expected virtual measurement; the error e_ij(x_i, x_j) depends on the difference between the expected measurement and the actual measurement;

the overall error function is summarized as F(x):

$$F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^{T}\,\Omega_{ij}\,e_{ij} \quad (7)$$

$$x^* = \underset{x}{\arg\min}\; F(x) \quad (8)$$

where x^* is the particular solution corresponding to formula (6) defined by the error function, i.e., the value at which the pose estimate error is minimal; e_{ij}^T is the transpose of the error e_ij; C = {<e_ij, Ω_ij>} is the constraint set, and Ω_ij represents the information matrix of the virtual measurement between node i and node j.
Further, the iterative solution of the error function by the Gauss-Newton method comprises:

letting the initial point be x̆ = (x̆_i, x̆_j) and giving it an increment (Δx_i, Δx_j), a first-order Taylor expansion of the error function gives:

$$e_{ij}(\breve{x}_i + \Delta x_i,\, \breve{x}_j + \Delta x_j) \approx e_{ij}(\breve{x}) + J_{ij}\,\Delta x \quad (9)$$

where Δx represents the increment of the error function e_ij about the initial predicted pose x̆, and J_ij is the Jacobian matrix of the error function evaluated at x̆; the objective function term of the edge between node i and node j is then:

$$F_{ij}(\breve{x} + \Delta x) = e_{ij}(\breve{x} + \Delta x)^T\,\Omega_{ij}\,e_{ij}(\breve{x} + \Delta x) \approx c_{ij} + 2\,b_{ij}^T \Delta x + \Delta x^T H_{ij}\,\Delta x$$

with c_{ij} = e_{ij}^T Ω_{ij} e_{ij}, b_{ij} = J_{ij}^T Ω_{ij} e_{ij} and H_{ij} = J_{ij}^T Ω_{ij} J_{ij}; the objective function terms for the edges between all nodes sum to:

$$F(\breve{x} + \Delta x) = \sum_{\langle i,j \rangle \in C} F_{ij}(\breve{x} + \Delta x) \approx c + 2\,b^T \Delta x + \Delta x^T H\,\Delta x$$

where c = Σ c_{ij}, b = Σ b_{ij}, H = Σ H_{ij}; Δx^T and b^T are the transposes of Δx and b, respectively; Σ_{<i,j>∈C} denotes summation over all edges <i,j> in the constraint set C;

the goal is to find the Δx that minimizes F(x̆ + Δx); setting the derivative with respect to Δx to zero gives:

$$H\,\Delta x^* = -b \quad (10)$$

and solving yields the optimal solution Δx*.
A terminal device comprises a memory, a processor and a computer program stored in the memory and executable on the processor; the processor implements the steps of the driving scene construction method based on point cloud fusion when executing the computer program.
A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the driving scene construction method based on point cloud fusion.
Compared with the prior art, the invention has the following technical characteristics:
1. The method creatively proposes and applies a point-cloud-fusion SLAM map construction method. By fusing inertial measurement unit (IMU)/GPS sensor data and combining optimization means such as point cloud distortion compensation, sensor time alignment, graph optimization and closed-loop detection, it greatly improves mapping precision while preserving real-time performance, can quickly achieve in-scene positioning and high-precision map construction, and overcomes the large mapping error and poor real-time performance of traditional pure laser odometry.
2. The method designs a map construction and positioning method based on edge collaboration: the cloud issues tasks to the edge devices, which construct sub-maps and upload them to the cloud for stitching and combination. This makes full use of edge computing power and improves resource utilization, while reducing the repeated map construction work of individual vehicles, lowering work redundancy and the load on edge devices, and solving the long time consumption and low precision caused by insufficient computing power in traditional map construction methods.
Drawings
FIG. 1 is a flow diagram of the process of the present invention;
FIG. 2 is a schematic flow chart of distortion compensation for point cloud data;
FIG. 3 is a flow chart of the front-end mileage estimation implementation;
FIG. 4 is a schematic diagram of a closed loop detection process;
FIG. 5 is a schematic diagram of sensor time alignment.
Detailed Description
The invention provides a driving scene construction method based on point cloud fusion, which mainly adopts a map construction scheme based on simultaneous localization and mapping (SLAM) technology. First, distortion compensation and sensor time alignment are performed on the point cloud data, which is then published for front-end mileage estimation; after receiving the point cloud data, the front-end mileage estimation node performs point cloud matching and generates a preliminary laser odometry estimate; closed-loop detection is then carried out on the estimate, and if a closed loop is detected, the information is passed to the back end for back-end optimization; back-end optimization yields the pose information of the estimate nodes and the landmark vertex set of a graph, and the resulting optimized laser odometry data are published and displayed.
Referring to the attached drawings, the method for constructing the driving scene based on point cloud fusion comprises the following specific implementation steps:
step 1, point cloud data distortion compensation
The vehicle-mounted laser radar scans and collects point cloud data while the automobile is moving; however, because the coordinate system of the laser points changes continuously, the acquired point cloud data is distorted and distortion compensation is required.
a. calculating the point cloud frame included angle from the head and tail data of the read point cloud frame to obtain the actual sampling time of the point cloud frame;
b. obtaining the angular velocity and linear velocity of the vehicle from the read inertial measurement unit information; multiplying the angular velocity by time to obtain the vehicle rotation angle, multiplying the linear velocity by time to obtain the vehicle offset displacement, and generating a rotation-offset matrix;
c. multiplying the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data. A sketch of steps a to c is given below.
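As an illustrative sketch only (not part of the original disclosure), steps a to c can be expressed as follows in Python; the constant angular and linear velocity over one frame, the planar motion model, and all function and parameter names are assumptions made here for clarity:

```python
import numpy as np

def deskew_frame(points, yaw_rate, linear_vel, scan_period):
    """Motion-distortion compensation for one lidar frame (steps a-c).

    points: (N, 3) array ordered by scan angle within the frame.
    yaw_rate: angular velocity (rad/s) read from the IMU.
    linear_vel: (2,) planar linear velocity (m/s) read from the IMU.
    scan_period: actual sampling time of the frame, derived in step a
        from the angle between the first and last return.
    """
    n = len(points)
    deskewed = np.empty_like(points)
    for i, p in enumerate(points):
        dt = scan_period * i / max(n - 1, 1)          # elapsed time of this return
        theta = yaw_rate * dt                         # rotation angle (step b)
        offset = np.array([linear_vel[0] * dt, linear_vel[1] * dt, 0.0])
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        deskewed[i] = R @ p + offset                  # rotation-offset matrix (step c)
    return deskewed
```

Applying the per-return rotation-offset matrix, rather than a single frame-wide transform, is what removes the smearing caused by vehicle motion during the sweep.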
Step 2, aligning the sensor time
Sensor time alignment is performed on the distortion-compensated point cloud data. The method eliminates the influence of the difference in sampling rates by linear interpolation of the odometry data, so that the robot pose is sensor-aligned at the moment the laser radar scans and collects the point cloud data. The specific calculation formula is:

$$X(t_n) = X(t_{o1}) + \frac{t_n - t_{o1}}{t_{o2} - t_{o1}}\left(X(t_{o2}) - X(t_{o1})\right) \quad (1)$$

where X(t_n) represents the pose of the robot at time t_n, and X(t_{o1}), X(t_{o2}) are the odometry data at times t_{o1} and t_{o2}, respectively. The pose calculated in this step is used as the initial predicted pose of the front-end mileage estimation in step 3.

While the point cloud data is distortion-compensated, linear interpolation is performed on the odometry data collected by the sensor at the same time, completing the time alignment of the odometry data to the point cloud data; the resulting pose X(t_n) is free of the influence of the sampling-rate difference. A sketch of formula (1) is given below.
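A minimal sketch of formula (1), assuming planar (x, y, yaw) odometry samples bracketing the lidar timestamp; the names are illustrative:

```python
import numpy as np

def interpolate_pose(t_n, t_o1, pose_o1, t_o2, pose_o2):
    """Linear interpolation of odometry to the lidar timestamp t_n (formula (1)).

    pose_o1, pose_o2: odometry poses (x, y, yaw) at times t_o1 <= t_n <= t_o2.
    """
    alpha = (t_n - t_o1) / (t_o2 - t_o1)
    pose_o1, pose_o2 = np.asarray(pose_o1), np.asarray(pose_o2)
    # Linearly interpolating yaw assumes the heading does not wrap between samples.
    return pose_o1 + alpha * (pose_o2 - pose_o1)
```

The interpolated pose X(t_n) is what step 3 consumes as the initial predicted pose.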
Step 3, front-end mileage estimation
A task queue is created to store the point cloud data processed in steps 1 and 2, and a local map is generated. When each frame of point cloud data is inserted into the queue, the initial predicted pose of the robot corresponding to that point cloud data is input and matched against the local map. Point cloud matching applies a normal distributions transform to the point cloud data to obtain all unit voxels of the target point cloud, substitutes the unit voxels into a Gaussian probability model, and finds the point on the local map most similar to the initial predicted pose, giving the pose estimate of the front-end mileage estimation; this pose estimate is used to update the local map once. The step ends when the distance between the pose estimate calculated after inserting a frame of point cloud data and the pose estimate of the previous frame does not exceed a preset threshold. A sketch of this matching is given below.
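The following is a deliberately simplified, pure-NumPy sketch of the normal-distributions-transform idea described above: the local map is voxelized, each occupied voxel is modeled as a Gaussian, and a candidate pose is scored against that model. A production front end would optimize the pose against this score (e.g. with Newton iterations) rather than merely evaluate it; the voxel size, minimum point count, and regularization term are assumptions:

```python
import numpy as np

def build_ndt_grid(map_points, voxel_size=1.0):
    """Fit one Gaussian (mean, inverse covariance) per occupied voxel of the local map."""
    grid = {}
    keys = np.floor(map_points / voxel_size).astype(int)
    for key in np.unique(keys, axis=0):
        pts = map_points[(keys == key).all(axis=1)]
        if len(pts) >= 5:                                # enough points for a stable covariance
            cov = np.cov(pts.T) + 1e-3 * np.eye(3)       # regularized covariance
            grid[tuple(key)] = (pts.mean(axis=0), np.linalg.inv(cov))
    return grid

def ndt_score(scan_points, pose, grid, voxel_size=1.0):
    """Score of the scan transformed by pose = (x, y, yaw) under the Gaussian voxel model."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    score = 0.0
    for p in scan_points @ R.T + np.array([x, y, 0.0]):
        cell = grid.get(tuple(np.floor(p / voxel_size).astype(int)))
        if cell is not None:
            mu, cov_inv = cell
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)      # Gaussian probability term
    return score                                         # higher = better alignment
```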
Step 4, closed loop detection
The pose estimate obtained above is checked by closed-loop detection to determine whether the robot has returned to a previous position, avoiding the accumulated error that would result from the frame-by-frame propagation of an erroneous pose estimate from step 3.
Step 4.1, using a point-cloud-based scene recognition approach, the point cloud data in the task queue created by the front-end mileage estimation is described with a Scan Context feature map (a non-histogram-based global descriptor).
Because any area of the point cloud can be uniquely indexed by a ring and a sector, representing the point cloud data in two-dimensional form yields the Scan Context feature map, i.e., a two-dimensional feature map that expresses this index relationship. A sketch of the encoding follows.
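The encoding can be sketched as follows, following the published Scan Context descriptor (maximum point height per ring/sector cell); the bin counts and maximum range are illustrative defaults, not values fixed by the patent:

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Encode a point cloud as an N_r x N_s Scan Context matrix.

    Cell (i, j) stores the maximum point height inside ring i / sector j,
    i.e. the ring-and-sector index relationship described above.
    """
    sc = np.zeros((num_rings, num_sectors))
    r = np.linalg.norm(points[:, :2], axis=1)
    theta = np.arctan2(points[:, 1], points[:, 0]) + np.pi   # angle in [0, 2*pi]
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    valid = r < max_range
    for i, j, z in zip(ring[valid], sector[valid], points[valid, 2]):
        sc[i, j] = max(sc[i, j], z)
    return sc
```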
Denote the Scan Context feature maps of two point clouds by I^q and I^c. The distance between Scan Context feature maps is defined as:

$$d(I^q, I^c) = \frac{1}{N_s}\sum_{j=1}^{N_s}\left(1 - \frac{c_j^q \cdot c_j^c}{\lVert c_j^q \rVert \, \lVert c_j^c \rVert}\right) \quad (2)$$

where N_s is the number of sectors, and c_j^q, c_j^c are the j-th column (sector) vectors of I^q and I^c, respectively.
For the case where two Scan Context feature maps describe the same place visited at different times, the method calculates the distance over all possible column shifts of one feature map and takes the minimum as the final distance; the best column-shift match n* is defined as:

$$n^* = \underset{n \in [N_s]}{\arg\min}\; d\left(I^q, I^c_n\right) \quad (3)$$

The minimum distance is defined as:

$$D(I^q, I^c) = \min_{n \in [N_s]} d\left(I^q, I^c_n\right) \quad (4)$$

In this scheme, N_s represents the number of sectors in the point cloud, n one of the column shifts, I^c_n the feature map I^c with its columns shifted by n, and N_r represents the number of rings in the point cloud data.
The point cloud data can be divided into a number of sectors, and the distance obtained differs depending on which sector alignment is taken; the purpose of the formula is to compute the minimum distance between two Scan Contexts, i.e., the distance between two point clouds can be represented by the distance between their two closest corresponding sector alignments. A sketch of formulas (2) to (4) follows.
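Formulas (2) to (4) operate directly on such matrices; in this sketch, sectors that are empty in either map are skipped, which is an implementation assumption:

```python
import numpy as np

def sc_distance(sc_q, sc_c):
    """Column-wise cosine distance between two Scan Contexts (formula (2))."""
    num = (sc_q * sc_c).sum(axis=0)
    den = np.linalg.norm(sc_q, axis=0) * np.linalg.norm(sc_c, axis=0)
    cols = den > 0                    # skip sectors that are empty in either map
    return np.mean(1.0 - num[cols] / den[cols])

def min_sc_distance(sc_q, sc_c):
    """Minimum distance over all column shifts (formulas (3) and (4))."""
    dists = [sc_distance(sc_q, np.roll(sc_c, n, axis=1))
             for n in range(sc_c.shape[1])]
    return min(dists), int(np.argmin(dists))   # D(I^q, I^c) and n*
```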
Step 4.2, a rotation-insensitive descriptor, the ring key, is extracted from the Scan Context feature map and represented as an N_r-dimensional vector k, which is used to construct a KD tree (k-dimensional tree). The regions whose rings are most similar to the ring of the current scene are retrieved, and the Scan Contexts corresponding to the several historical rings most similar to the ring of the current scene are selected as the Scan Contexts of the candidate scenes.
Each Scan Context represents one frame of point cloud data, and each frame of point cloud data is obtained by laser-scanning an actual scene, so each Scan Context corresponds to an actual scene; if the distance between two scenes is small enough, the two scenes can be considered essentially identical. A candidate scene is the scene corresponding to a candidate Scan Context.
Since point cloud data is input frame by frame, the Scan Contexts corresponding to point cloud data input before the current frame are defined as historical scenes. Closed-loop detection judges whether the robot has returned to a previous position, i.e., whether the map forms a closed loop, by comparing the similarity between the current scene and the historical scenes. A sketch of this retrieval is given below.
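A sketch of the retrieval in step 4.2, assuming the ring key is the per-ring occupancy ratio (one common choice for this descriptor); scipy's cKDTree stands in for the KD tree named above:

```python
import numpy as np
from scipy.spatial import cKDTree

def ring_key(sc):
    """Rotation-invariant ring descriptor: occupancy ratio of each ring."""
    return (sc > 0).mean(axis=1)              # N_r-dimensional vector k

def find_candidates(history_keys, query_key, num_candidates=10):
    """Query a KD tree of historical ring keys for the closest scenes."""
    tree = cKDTree(np.vstack(history_keys))   # one row per historical Scan Context
    k = min(num_candidates, len(history_keys))
    _, idx = tree.query(query_key, k=k)
    return np.atleast_1d(idx)                 # indices of candidate historical scenes
```

The candidates returned here are then re-ranked with the full distance of formulas (2) to (4) in step 4.3.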
Step 4.3, the distance between the Scan Context of the point cloud data of the task queue in step 4.1 and the Scan Context of each candidate scene obtained in step 4.2 is calculated using formula (2); a candidate scene satisfying the threshold is regarded as a closed loop:

$$c^* = \underset{c_k \in \mathcal{C}}{\arg\min}\; D\left(I^q, I^{c_k}\right), \quad \text{s.t. } D\left(I^q, I^{c^*}\right) < \tau \quad (5)$$

where I^{c_k} represents the Scan Context of a candidate scene, I^q represents the Scan Context of the point cloud data in the task queue, D is the minimum distance between the two scenes calculated using formula (4), τ is the set distance threshold for judging whether two scenes are similar, and c^* is the finally detected historical scene consistent with the current scene.
Judgment is made through formula (5): if the similarity is high enough, i.e., the minimum distance is smaller than the distance threshold, a closed loop is considered to be formed, meaning the pose estimate obtained in step 3 contains no error, and the information is passed to the back end for optimization; otherwise, the process returns to step 3.
Step 5, optimizing a back-end algorithm
The method abstracts the pose estimates obtained by the front-end mileage estimation at different times as nodes, abstracts the constraints generated by the robot's observations at different positions as edges between the nodes, and forms the vertex set of the graph from the pose information of the nodes and the landmarks. In graph optimization, edges represent the constraints of these observations and can also be regarded as a kind of "virtual measurement".
The error function e_ij(x_i, x_j) is defined as:

$$e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j) \quad (6)$$

where x_i represents the pose of node i, z_ij represents the mean of the virtual measurements between node i and node j, and ẑ_ij(x_i, x_j) represents the expected virtual measurement; the error e_ij(x_i, x_j) depends on the difference between the expected measurement and the actual measurement.

The overall error function is summarized as F(x):

$$F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^{T}\,\Omega_{ij}\,e_{ij} \quad (7)$$

$$x^* = \underset{x}{\arg\min}\; F(x) \quad (8)$$

where x^* is the particular solution corresponding to formula (6) defined by the error function, i.e., the value at which the pose estimate error is minimal; e_{ij}^T is the transpose of the error e_ij; C = {<e_ij, Ω_ij>} is the constraint set, and Ω_ij represents the information matrix of the virtual measurement between node i and node j. A per-edge sketch is given below.
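For concreteness, the following sketches the error of a single edge for planar poses (x, y, yaw); the planar parameterization and the angle normalization are assumptions, since the patent does not fix a parameterization:

```python
import numpy as np

def edge_error(x_i, x_j, z_ij):
    """Edge error e_ij of formula (6) for planar poses.

    z_ij: measured relative pose of node j in the frame of node i;
    the expected virtual measurement is the relative pose implied by
    the current estimates x_i and x_j.
    """
    c, s = np.cos(x_i[2]), np.sin(x_i[2])
    R_i_T = np.array([[c, s], [-s, c]])            # world -> frame of node i
    expected_t = R_i_T @ (x_j[:2] - x_i[:2])       # expected translation
    expected_yaw = x_j[2] - x_i[2]                 # expected rotation
    e = np.array([*(z_ij[:2] - expected_t), z_ij[2] - expected_yaw])
    e[2] = (e[2] + np.pi) % (2 * np.pi) - np.pi    # normalize the angle term
    return e

def edge_cost(e, omega):
    """Contribution e^T * Omega * e of one edge to F(x) (formula (7))."""
    return e @ omega @ e
```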
The optimal solution is solved iteratively by the Gauss-Newton method or the Levenberg-Marquardt algorithm:

Let the initial point be x̆ = (x̆_i, x̆_j) and give it an increment (Δx_i, Δx_j); a first-order Taylor expansion of the error function gives:

$$e_{ij}(\breve{x}_i + \Delta x_i,\, \breve{x}_j + \Delta x_j) \approx e_{ij}(\breve{x}) + J_{ij}\,\Delta x \quad (9)$$

where Δx represents the increment of the error function e_ij about the initial predicted pose x̆, and J_ij is the Jacobian matrix of the error function evaluated at x̆. The objective function term of the edge between node i and node j is then:

$$F_{ij}(\breve{x} + \Delta x) = e_{ij}(\breve{x} + \Delta x)^T\,\Omega_{ij}\,e_{ij}(\breve{x} + \Delta x) \approx c_{ij} + 2\,b_{ij}^T \Delta x + \Delta x^T H_{ij}\,\Delta x$$

with c_{ij} = e_{ij}^T Ω_{ij} e_{ij}, b_{ij} = J_{ij}^T Ω_{ij} e_{ij} and H_{ij} = J_{ij}^T Ω_{ij} J_{ij}. The objective function terms for the edges between all nodes sum to:

$$F(\breve{x} + \Delta x) = \sum_{\langle i,j \rangle \in C} F_{ij}(\breve{x} + \Delta x) \approx c + 2\,b^T \Delta x + \Delta x^T H\,\Delta x$$

where c = Σ c_{ij}, b = Σ b_{ij}, H = Σ H_{ij}; Δx^T and b^T are the transposes of Δx and b, respectively; x̆ is the initial predicted pose; Σ_{<i,j>∈C} denotes summation over all edges <i,j> in the constraint set C.

The goal of the method is to find the Δx that minimizes F(x̆ + Δx); setting the derivative with respect to Δx to zero gives:

$$H\,\Delta x^* = -b \quad (10)$$

The optimal solution Δx* obtained by solving is superposed on the initial predicted pose x̆ to obtain a new pose estimate, which serves as the initial predicted pose of the next iteration; the process returns to formula (5) and the iteration continues until x* is reached, i.e., the pose estimate error is minimal. A sketch of one such iteration is given below.
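A compact sketch of the whole back-end iteration under the same planar-pose assumption: each pass linearizes every edge, accumulates H and b block by block, solves formula (10), and superposes the increment on the pose estimates. The gauge anchor on the first pose and the fixed iteration count are implementation choices, not part of the patent:

```python
import numpy as np

def optimize_pose_graph(poses, edges, error_fn, jacobian_fn, num_iters=10):
    """Gauss-Newton pose-graph optimization sketch.

    poses: (M, 3) planar poses; edges: list of (i, j, z_ij, omega) tuples;
    error_fn(x_i, x_j, z_ij) and jacobian_fn(x_i, x_j) -> (J_i, J_j)
    supply e_ij and its 3x3 Jacobian blocks.
    """
    for _ in range(num_iters):
        dim = poses.size
        H = np.zeros((dim, dim))
        b = np.zeros(dim)
        for i, j, z, omega in edges:
            e = error_fn(poses[i], poses[j], z)
            J_i, J_j = jacobian_fn(poses[i], poses[j])
            si, sj = 3 * i, 3 * j
            H[si:si+3, si:si+3] += J_i.T @ omega @ J_i     # H_ij blocks
            H[si:si+3, sj:sj+3] += J_i.T @ omega @ J_j
            H[sj:sj+3, si:si+3] += J_j.T @ omega @ J_i
            H[sj:sj+3, sj:sj+3] += J_j.T @ omega @ J_j
            b[si:si+3] += J_i.T @ omega @ e                # b_ij blocks
            b[sj:sj+3] += J_j.T @ omega @ e
        H[:3, :3] += np.eye(3)            # anchor the first pose (gauge freedom)
        dx = np.linalg.solve(H, -b)       # formula (10): H * dx* = -b
        poses = poses + dx.reshape(poses.shape)  # superpose increment on the poses
    return poses
```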
As can be seen from step 3, in the front-end mileage estimation a pose estimate is produced each time a frame of point cloud data is inserted into the task queue; each pose estimate corresponds to one frame of point cloud data, and each frame of point cloud data corresponds to one Scan Context feature map, denoted I^q in formula (5).
x* is the value at which the pose estimate error is minimal; the aim of back-end optimization is to minimize the error between the pose estimates and the actual measurements. As x* is computed, the pose estimates are continuously updated and the map is continuously updated with them, so that the map error at x* is minimized.
Closed-loop detection in step 4 checks whether the map route obtained by the front-end mileage estimation of step 3 contains errors; in step 5, the error-minimized pose estimate obtained by graph optimization is used to update the local map, minimizing the map error and yielding the final local map.
Step 6, edge collaboration
The construction scheme of steps 1 to 5 is carried out in the edge devices; after a local map is built, it is uploaded to the cloud server for global map stitching, and the stitched map is finally transmitted back to the edge devices for the positioning operation of the vehicle. A cloud-side stitching sketch is given below.
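A cloud-side sketch of the stitching step, assuming each uploaded submap carries a pose locating it in the global frame; how that pose is obtained (e.g. GPS/RTK or inter-map registration) is not specified here:

```python
import numpy as np

def stitch_submaps(submaps):
    """Merge edge-device submaps into one global point cloud.

    submaps: list of (points, pose) pairs, where points is an (N, 3)
    local point cloud and pose = (x, y, yaw) places the submap in the
    global frame.
    """
    merged = []
    for points, (x, y, yaw) in submaps:
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        merged.append(points @ R.T + np.array([x, y, 0.0]))
    return np.vstack(merged)
```

In practice the cloud would also deduplicate overlapping regions (e.g. by voxel down-sampling) before sending the global map back to the edge devices.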
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A driving scene construction method based on point cloud fusion, characterized by comprising the following steps:
step 1, performing distortion compensation on the point cloud data scanned and collected by the vehicle-mounted laser radar while the automobile is moving;
step 2, performing sensor time alignment on the distortion-compensated point cloud data: while the point cloud data is distortion-compensated, linear interpolation is performed on the odometry data collected at the same time, aligning the odometry data to the point cloud data in time; the resulting sensor-aligned robot pose serves as the initial predicted pose of the front-end mileage estimation;
step 3, creating a task queue for storing point cloud data and generating a local map; when each frame of point cloud data is inserted into the task queue, inputting the initial predicted pose of the robot corresponding to that point cloud data and matching it against the local map: applying a normal distributions transform to the point cloud data to obtain all unit voxels of the target point cloud, substituting the unit voxels into a Gaussian probability model, and finding the point on the local map most similar to the initial predicted pose, giving the pose estimate of the front-end mileage estimation;
step 4, using a point-cloud-based scene recognition approach, describing the point cloud data in the task queue created by the front-end mileage estimation with Scan Context feature maps; defining a distance calculation formula between Scan Context feature maps; extracting the descriptor ring from the Scan Context feature map and representing it as a multi-dimensional vector to construct a KD tree, obtaining the regions most similar to the ring of the current scene, and selecting the Scan Contexts corresponding to the historical rings most similar to the ring of the current scene as the Scan Contexts of the candidate scenes; calculating the distance between the Scan Context of the point cloud data in the task queue and the Scan Contexts of the candidate scenes, and, if a candidate scene satisfying the threshold is regarded as a closed loop, performing back-end optimization on the pose estimate;
step 5, abstracting the pose estimates obtained by the front-end mileage estimation at different times as nodes and the constraints generated by the robot's observations at different positions as edges between the nodes, forming the vertex set of a graph from the pose information of the nodes and the landmarks, and representing the constraints of the observation data by the edges;
defining the error function as the difference between the mean of the virtual measurements between two nodes and the expected virtual measurement; iterating on the error function by the Gauss-Newton method to obtain the optimal solution, superposing the obtained optimal solution on the initial predicted pose to obtain a new pose estimate, which serves as the initial predicted pose of the next iteration, and continuing the iteration until the pose estimate error is minimal;
performing steps 1 to 5 in the edge devices, uploading the built local maps to a cloud server for global map stitching, and finally transmitting the stitched map back to the edge devices for the positioning operation of the vehicle.
2. The driving scene construction method based on point cloud fusion of claim 1, wherein performing distortion compensation on the point cloud data comprises:
calculating the point cloud frame included angle from the head and tail data of the read point cloud frame to obtain the actual sampling time of the point cloud frame;
obtaining the angular velocity and linear velocity of the vehicle from the read inertial measurement unit information; multiplying the angular velocity by time to obtain the vehicle rotation angle, multiplying the linear velocity by time to obtain the vehicle offset displacement, and generating a rotation-offset matrix;
and multiplying the original point cloud data matrix by the rotation-offset matrix to obtain the distortion-compensated point cloud data.
3. The driving scene construction method based on point cloud fusion of claim 1, wherein the pose of the robot at which sensor alignment is achieved is calculated as:

$$X(t_n) = X(t_{o1}) + \frac{t_n - t_{o1}}{t_{o2} - t_{o1}}\left(X(t_{o2}) - X(t_{o1})\right) \quad (1)$$

where X(t_n) represents the pose of the robot at time t_n, and X(t_{o1}), X(t_{o2}) are the odometry data at times t_{o1} and t_{o2}, respectively.
4. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein step 3 ends when the distance between the pose estimate calculated after inserting one frame of point cloud data and the pose estimate of the previous frame does not exceed a preset threshold.
5. The driving scene construction method based on point cloud fusion of claim 1, wherein any area of the point cloud data can be uniquely indexed by a ring and a sector, and representing the point cloud data in two-dimensional form yields a Scan Context feature map;

denoting the Scan Context feature maps of two point clouds as I^q and I^c, the distance between Scan Context feature maps is defined as:

$$d(I^q, I^c) = \frac{1}{N_s}\sum_{j=1}^{N_s}\left(1 - \frac{c_j^q \cdot c_j^c}{\lVert c_j^q \rVert \, \lVert c_j^c \rVert}\right) \quad (2)$$

where N_s is the number of sectors, and c_j^q, c_j^c are the j-th column (sector) vectors of I^q and I^c, respectively;

for the case of two Scan Context feature maps describing the same place visited at different times, the distance is calculated over all possible column shifts of one feature map and the minimum is taken as the final distance; the best column-shift match n* is defined as:

$$n^* = \underset{n \in [N_s]}{\arg\min}\; d\left(I^q, I^c_n\right) \quad (3)$$

and the minimum distance is defined as:

$$D(I^q, I^c) = \min_{n \in [N_s]} d\left(I^q, I^c_n\right) \quad (4)$$

where N_s represents the number of sectors in the point cloud, n one of the column shifts, I^c_n the feature map I^c with its columns shifted by n, and N_r represents the number of rings in the point cloud data.
6. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein the distance between the Scan Context of the point cloud data in the task queue and the Scan Context of each candidate scene is calculated by formula (2), and a candidate scene satisfying the threshold is regarded as a closed loop:

$$c^* = \underset{c_k \in \mathcal{C}}{\arg\min}\; D\left(I^q, I^{c_k}\right), \quad \text{s.t. } D\left(I^q, I^{c^*}\right) < \tau \quad (5)$$

where I^{c_k} represents the Scan Context of a candidate scene, I^q represents the Scan Context of the point cloud data in the task queue, D is the minimum distance between the two scenes calculated using formula (4), τ is the set distance threshold for judging whether two scenes are similar, and c^* is the finally detected historical scene consistent with the current scene.
7. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein the error function e_ij(x_i, x_j) is defined as:

$$e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j) \quad (6)$$

where x_i represents the pose of node i, z_ij represents the mean of the virtual measurements between node i and node j, and ẑ_ij(x_i, x_j) represents the expected virtual measurement; the error e_ij(x_i, x_j) depends on the difference between the expected measurement and the actual measurement;

the overall error function is summarized as F(x):

$$F(x) = \sum_{\langle i,j \rangle \in C} e_{ij}^{T}\,\Omega_{ij}\,e_{ij} \quad (7)$$

$$x^* = \underset{x}{\arg\min}\; F(x) \quad (8)$$

where x^* is the particular solution corresponding to formula (6) defined by the error function, i.e., the value at which the pose estimate error is minimal; e_{ij}^T is the transpose of the error e_ij; C = {<e_ij, Ω_ij>} is the constraint set, and Ω_ij represents the information matrix of the virtual measurement between node i and node j.
8. The driving scene construction method based on point cloud fusion as claimed in claim 1, wherein the iterative solution of the error function by the Gauss-Newton method comprises:

letting the initial point be x̆ = (x̆_i, x̆_j) and giving it an increment (Δx_i, Δx_j), a first-order Taylor expansion of the error function gives:

$$e_{ij}(\breve{x}_i + \Delta x_i,\, \breve{x}_j + \Delta x_j) \approx e_{ij}(\breve{x}) + J_{ij}\,\Delta x \quad (9)$$

where Δx represents the increment of the error function e_ij about the initial predicted pose x̆, and J_ij is the Jacobian matrix of the error function evaluated at x̆; the objective function term of the edge between node i and node j is then:

$$F_{ij}(\breve{x} + \Delta x) = e_{ij}(\breve{x} + \Delta x)^T\,\Omega_{ij}\,e_{ij}(\breve{x} + \Delta x) \approx c_{ij} + 2\,b_{ij}^T \Delta x + \Delta x^T H_{ij}\,\Delta x$$

with c_{ij} = e_{ij}^T Ω_{ij} e_{ij}, b_{ij} = J_{ij}^T Ω_{ij} e_{ij} and H_{ij} = J_{ij}^T Ω_{ij} J_{ij}; the objective function terms for the edges between all nodes sum to:

$$F(\breve{x} + \Delta x) = \sum_{\langle i,j \rangle \in C} F_{ij}(\breve{x} + \Delta x) \approx c + 2\,b^T \Delta x + \Delta x^T H\,\Delta x$$

where c = Σ c_{ij}, b = Σ b_{ij}, H = Σ H_{ij}; Δx^T and b^T are the transposes of Δx and b, respectively; Σ_{<i,j>∈C} denotes summation over all edges <i,j> in the constraint set C;

the goal is to find the Δx that minimizes F(x̆ + Δx); setting the derivative with respect to Δx to zero gives:

$$H\,\Delta x^* = -b \quad (10)$$

and solving yields the optimal solution Δx*.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the driving scene construction method based on point cloud fusion according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium in which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the driving scene construction method based on point cloud fusion according to any one of claims 1 to 8.
CN202110755584.3A 2021-07-05 2021-07-05 Driving scene construction method based on point cloud fusion Active CN113379915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110755584.3A CN113379915B (en) 2021-07-05 2021-07-05 Driving scene construction method based on point cloud fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110755584.3A CN113379915B (en) 2021-07-05 2021-07-05 Driving scene construction method based on point cloud fusion

Publications (2)

Publication Number Publication Date
CN113379915A true CN113379915A (en) 2021-09-10
CN113379915B CN113379915B (en) 2022-12-23

Family

ID=77580986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110755584.3A Active CN113379915B (en) 2021-07-05 2021-07-05 Driving scene construction method based on point cloud fusion

Country Status (1)

Country Link
CN (1) CN113379915B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
AU2018278849A1 (en) * 2018-07-02 2020-01-16 Beijing Didi Infinity Technology And Development Co., Ltd. Vehicle navigation system using pose estimation based on point cloud
WO2020154966A1 (en) * 2019-01-30 2020-08-06 Baidu.Com Times Technology (Beijing) Co., Ltd. A rgb point clouds based map generation system for autonomous vehicles
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm
CN110689622A (en) * 2019-07-05 2020-01-14 电子科技大学 Synchronous positioning and composition algorithm based on point cloud segmentation matching closed-loop correction
CN110648389A (en) * 2019-08-22 2020-01-03 广东工业大学 3D reconstruction method and system for city street view based on cooperation of unmanned aerial vehicle and edge vehicle
CN111882977A (en) * 2020-05-06 2020-11-03 北京嘀嘀无限科技发展有限公司 High-precision map construction method and system
CN111337947A (en) * 2020-05-18 2020-06-26 深圳市智绘科技有限公司 Instant mapping and positioning method, device, system and storage medium
CN111929699A (en) * 2020-07-21 2020-11-13 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system
CN112966542A (en) * 2020-12-10 2021-06-15 武汉工程大学 SLAM system and method based on laser radar
CN112734852A (en) * 2021-03-31 2021-04-30 浙江欣奕华智能科技有限公司 Robot mapping method and device and computing equipment
CN113066105A (en) * 2021-04-02 2021-07-02 北京理工大学 Positioning and mapping method and system based on fusion of laser radar and inertial measurement unit

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAI Y ET AL.: "Artificial intelligence empowered edge computing and caching for internet of vehicles", 《IEEE WIRELESS COMMUNICATIONS》 *
SHIN K ET AL.: "RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement", 《IEEE INTELLIGENT VEHICLES SYMPOSIUM》 *
李丰哲: "Research on Localization Technology for Power Inspection Robots Based on Multi-Sensor Fusion", 《全国优秀硕士学位论文全文数据库》 (China Excellent Master's Theses Full-Text Database) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116229119A (en) * 2022-08-30 2023-06-06 智瞰深鉴(北京)科技有限公司 Substation inspection robot and repositioning method thereof

Also Published As

Publication number Publication date
CN113379915B (en) 2022-12-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant