CN115760636A - Distortion compensation method, device and equipment for laser radar point cloud and storage medium - Google Patents


Info

Publication number
CN115760636A
Authority
CN
China
Prior art keywords: point cloud, target, data, laser radar, laser
Prior art date
Legal status
Pending
Application number
CN202211511205.7A
Other languages
Chinese (zh)
Inventor
黄宏 (Huang Hong)
邓皓匀 (Deng Haoyun)
任凡 (Ren Fan)
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202211511205.7A
Publication of CN115760636A

Abstract

The application relates to a distortion compensation method, device, equipment, and storage medium for laser radar point clouds. The method comprises: acquiring, based on the laser radar data and the measurement data of an inertial measurement unit (IMU), a point cloud set of the target point cloud and a point cloud set of the original laser point cloud with the target point cloud removed; based on the point cloud sets, unifying the point clouds of the laser radar to a target time through coordinate conversion, and computing the time relation and velocity relation with the target point cloud point by point to obtain point cloud data after static-scene distortion compensation; computing the relative velocity relation and relative time relation between the laser radar moving-target point cloud and the target point cloud to obtain point cloud data after moving-target distortion compensation; and splicing the two types of point cloud data to obtain the compensated moving target point cloud. In this way, the actual surrounding environment of the autonomous vehicle can be determined, the position information of the target can be accurately sensed, the collision point of the target can be reflected, and the safety and reliability of the vehicle are improved.

Description

Distortion compensation method, device and equipment for laser radar point cloud and storage medium
Technical Field
The present disclosure relates to laser radar technologies, and in particular, to a method, an apparatus, a device, and a storage medium for compensating for distortion of a laser radar point cloud.
Background
In autonomous driving technology, the perception module plays a key role as a front-end module: the accuracy of the perception input determines the performance of the downstream autonomous driving functions. Among perception sensors, the laser radar (lidar) holds a leading position because it measures distance accurately and scans a three-dimensional model of the environment. However, when a lidar is mounted on an autonomous vehicle, a TOF (Time of Flight) lidar acquires each frame over a comparatively long exposure; since both the vehicle and surrounding targets are in motion during that time, the three-dimensional environment model built from the scan is distorted. The model cannot truly reflect the surroundings of the autonomous vehicle at a given moment, the measured target positions contain errors, and the specific position of a target cannot be accurately determined, so the prediction of a collision point will be in error.
At present, the related art may acquire the raw data of a frame of laser point cloud and take the acquisition time of a starting laser point selected from that frame as the target time; the coordinate transformations corresponding to the selected starting and ending laser points are then interpolated to obtain the transformations for the other laser points, whose coordinates are converted to the target time. In addition, the related art may sort the lidar three-dimensional point cloud data and the IMU (Inertial Measurement Unit) data by timestamp, divide each frame of lidar point cloud data into blocks according to the time sequence of the IMU output, and apply three-axis rotation compensation to the point cloud data; finally, the inter-frame motion is estimated from the rotation-compensated point cloud frames and translation compensation is applied to the point cloud data.
However, the related art only compensates the static part of the three-dimensional point cloud scene and cannot accurately reflect the motion state and actual position information of a moving target at a given moment. Moreover, a TOF lidar cannot measure the velocity of a moving target, so the related art has difficulty effectively solving the point cloud distortion problem of TOF lidar.
Disclosure of Invention
The application provides a distortion compensation method, device, equipment, and storage medium for laser radar point clouds, aiming to solve the problems in the related art that the motion state and actual position information of a moving target at a given moment cannot be truly reflected and that a TOF laser radar cannot measure the velocity information of a moving target.
The embodiment of the first aspect of the application provides a distortion compensation method for laser radar point cloud, which comprises the following steps: the method comprises the steps of obtaining laser radar data of a laser radar, and obtaining measurement data of an inertial measurement unit IMU; acquiring a point cloud set of a target point cloud and a point cloud set of an original laser point cloud from which the target point cloud is removed according to the laser radar data and the measurement data; based on the point cloud set, selecting a timestamp of a starting laser point of the current frame laser radar as a target time, unifying moving target point clouds of the laser radar to the target time through coordinate conversion, and calculating a time relation and a speed relation with the target point clouds point by point to obtain a transformation relation so as to obtain first point cloud data after static scene distortion compensation; based on the point cloud set, selecting a timestamp of a starting laser point of the current frame laser radar as the target time, unifying the laser radar moving target point cloud to the target time through coordinate conversion, and calculating the relative speed relation and the relative time relation between the laser radar moving target point cloud and the target point cloud to obtain the coordinate conversion relation of the laser radar moving target point cloud so as to obtain second point cloud data after the moving target distortion compensation; and splicing the first point cloud data and the second point cloud data to obtain a compensated moving target point cloud and determine the actual surrounding environment of the automatic driving vehicle.
According to the technical means, the laser radar data and the IMU data are acquired through time-space synchronization of the laser radar and the IMU, data preprocessing is carried out, distortion compensation is carried out on a static scene and a moving target, point cloud data after compensation is spliced, and moving target point cloud is output, so that the actual surrounding environment of an automatic driving vehicle can be determined, the position information of the target and the collision point of the target can be accurately perceived, and the safety and the reliability of the vehicle are improved.
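The overall flow described above (separate static and moving points, compensate each set to the frame start time, then splice) can be sketched as follows. This is an illustrative simplification, not the patent's actual implementation: the function name, the single constant ego velocity `v_ego`, the single target velocity `v_target`, and the use of the relative velocity `v_ego - v_target` for moving points are all assumptions.

```python
import numpy as np

def compensate_frame(points, times, is_target, v_ego, v_target, t0):
    """Distortion-compensate one lidar frame to the frame start time t0.

    points:    (N, 3) xyz coordinates of the raw laser points
    times:     (N,)   per-point timestamps
    is_target: (N,)   bool mask marking moving-target points
    v_ego:     (3,)   ego linear velocity (assumed constant over the frame)
    v_target:  (3,)   moving-target velocity (assumed constant over the frame)
    """
    dt = (times - t0)[:, None]                            # per-point time offset
    # Static scene: undo only the ego motion accumulated since t0.
    static = points[~is_target] - v_ego * dt[~is_target]
    # Moving target: undo the relative motion between ego and target.
    moving = points[is_target] - (v_ego - v_target) * dt[is_target]
    # Splice both compensated sets into one output cloud.
    return np.vstack([static, moving])
```

With an ego velocity of 5 m/s and a target velocity of 2 m/s along x, a target point sampled 0.1 s after the frame start is pulled back by the relative displacement 0.3 m.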
Optionally, in an embodiment of the application, before obtaining the second point cloud data after the distortion compensation of the moving object, the method further includes: and performing association matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud, and estimating state information of a target so as to perform distortion compensation on the point cloud set of the target point cloud.
According to this technical means, the detection results of consecutive frames of laser point cloud are associated and matched, and the target velocity information is acquired to perform distortion compensation on the point cloud set of the target point cloud. Both the motion distortion of the ego vehicle and the distortion caused by target motion are thereby taken into account, so the motion state, position information, and collision point of the target are accurately reflected, the reliability of target detection is effectively improved, and the vehicle is made more intelligent.
Optionally, in an embodiment of the present application, before acquiring the lidar data and the measurement data, the method further includes: time synchronizing the lidar and the IMU.
According to the technical means, the time synchronization is carried out on the laser radar and the IMU, and the reliability of data is effectively guaranteed.
Optionally, in an embodiment of the present application, the acquiring a point cloud set of a target point cloud and a point cloud set of the original laser point cloud with the target point cloud removed according to the lidar data and the measurement data includes: acquiring, based on the lidar data and the measurement data, the point cloud information, bounding box, and heading information of the target through a preset clustering algorithm, and separating the point cloud set of the target point cloud from the point cloud set of the original laser point cloud with the target point cloud removed based on the clustering result.
According to the technical means, the laser radar data and the measurement data are preprocessed to separate the point cloud set of the target and the point cloud set of the original laser point cloud with the target point cloud removed, so that the data quality is further improved, and the performance of subsequent distortion compensation is guaranteed.
Optionally, in an embodiment of the present application, the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
According to the technical means, reliable data support is provided for distortion compensation of subsequent laser radar point clouds by collecting data information of the laser radar point clouds such as three-axis coordinates and the like and data information of inertial navigation IMU such as three-axis angular velocity and the like.
The embodiment of the second aspect of the application provides a distortion compensation device for laser radar point cloud, which comprises: the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring laser radar data of a laser radar and acquiring measurement data of an Inertial Measurement Unit (IMU); the second acquisition module is used for acquiring a point cloud set of a target point cloud and a point cloud set of an original laser point cloud from which the target point cloud is removed according to the laser radar data and the measurement data; the third acquisition module is used for selecting a timestamp of a starting laser point of the current frame laser radar as a target time based on the point cloud set, unifying moving target point clouds of the laser radar to the target time through coordinate conversion, and calculating a time relation and a speed relation with the target point clouds point by point to obtain a conversion relation so as to obtain first point cloud data after static scene distortion compensation; a fourth obtaining module, configured to select, based on the point cloud set, a timestamp of a starting laser point of the current frame lidar as the target time, unify the lidar moving target point cloud to the target time through coordinate transformation, and calculate a relative velocity relationship and a relative time relationship between the lidar moving target point cloud and the target point cloud to obtain a coordinate transformation relationship between the lidar moving target point cloud and obtain second point cloud data after the lidar moving target distortion compensation; and the splicing module is used for splicing the first point cloud data and the second point cloud data to obtain a compensated moving target point cloud and determine the actual surrounding environment of the automatic driving vehicle.
Optionally, in an embodiment of the present application, the method further includes: and the matching module is used for performing association matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud before the second point cloud data after the distortion compensation of the moving target is obtained, and estimating the state information of the target so as to perform distortion compensation on the point cloud set of the target point cloud.
Optionally, in an embodiment of the present application, the method further includes: a synchronization module to time synchronize the lidar and the IMU prior to acquiring the lidar data and the measurement data.
Optionally, in an embodiment of the present application, the second obtaining module includes: a separation unit configured to acquire the point cloud information, bounding box, and heading information of the target through a preset clustering algorithm based on the lidar data and the measurement data, so as to separate the point cloud set of the target point cloud from the point cloud set of the original laser point cloud with the target point cloud removed based on the clustering result.
Optionally, in an embodiment of the present application, the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the distortion compensation method of lidar point cloud as described in the above embodiments.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the distortion compensation method for a lidar point cloud as above.
Thus, the embodiment of the application has the following beneficial effects:
(1) According to the embodiment of the application, the laser radar data and the IMU data are acquired through the time-space synchronization of the laser radar and the IMU, the data are preprocessed, then the distortion compensation is carried out on a static scene and a moving target, the point cloud data after the compensation is spliced, the moving target point cloud is output, the actual surrounding environment of an automatic driving vehicle can be determined, the position information of the target and the collision point of the target can be accurately perceived, and the safety and the reliability of the vehicle are improved.
(2) The embodiment of the application performs association matching on the detection results of consecutive frames of laser point cloud and acquires target velocity information to perform distortion compensation on the point cloud set of the target point cloud, thereby accounting for both the motion distortion of the ego vehicle and the distortion caused by target motion, accurately reflecting the motion state, position information, and collision point of the target, effectively improving the reliability of target detection, and making the vehicle more intelligent.
(3) According to the embodiment of the application, the time synchronization is carried out on the laser radar and the IMU, so that the reliability of data is effectively guaranteed.
(4) According to the embodiment of the application, the laser radar data and the measurement data are preprocessed to separate the point cloud set of the target and the point cloud set of the original laser point cloud with the target point cloud removed, so that the quality of the data is further improved, and the performance of subsequent distortion compensation is guaranteed.
(5) According to the method and the device, reliable data support is provided for distortion compensation of follow-up laser radar point cloud through collection of point cloud data information such as three-axis coordinates of the laser radar and data information such as three-axis angular velocity of the inertial navigation IMU.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a distortion compensation method for a laser radar point cloud according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of TOF lidar static scene point cloud compensation execution logic provided in accordance with an embodiment of the present application;
FIG. 3 is a flow chart for obtaining velocity information of a moving object according to an embodiment of the present application;
FIG. 4 is a schematic diagram of TOF lidar moving target point cloud compensation execution logic provided in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating TOF lidar moving target point cloud pre-compensation imaging according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating TOF lidar moving target point cloud compensated imaging according to an embodiment of the present application;
FIG. 7 is a logic diagram illustrating an implementation of a method for distortion compensation of a lidar point cloud according to an embodiment of the present disclosure;
FIG. 8 is an exemplary diagram of an apparatus for distortion compensation of a lidar point cloud in accordance with an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 10: laser radar point cloud distortion compensation device; 100: first acquisition module; 200: second acquisition module; 300: third acquisition module; 400: fourth acquisition module; 500: splicing module; 901: memory; 902: processor; 903: communication interface.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
A method, an apparatus, a device, and a storage medium for distortion compensation of a laser radar point cloud according to an embodiment of the present application are described below with reference to the accompanying drawings. In order to solve the problems mentioned in the background technology, the application provides a distortion compensation method for laser radar point clouds, wherein a point cloud set of a target point cloud and a point cloud set of an original laser point cloud with the target point cloud removed are obtained based on laser radar data and measurement data of an inertial measurement unit IMU; based on the point cloud set, unifying the moving target point cloud of the laser radar to the target time through coordinate conversion, and calculating the time relation and the speed relation with the target point cloud point by point to obtain point cloud data after static scene distortion compensation; meanwhile, the relative speed relationship and the relative time relationship between the laser radar moving target point cloud and the target point cloud are calculated to obtain point cloud data after the distortion compensation of the moving target; and the two types of point cloud data are spliced to obtain the compensated moving target point cloud, so that the actual surrounding environment of the automatic driving vehicle can be determined, the position information of the target can be accurately sensed, the collision point of the target can be reflected, and the safety and the reliability of the vehicle can be improved. Therefore, the problems that the motion state and the actual position information of the moving target at a certain moment cannot be truly reflected, the TOF laser radar cannot measure the speed information of the moving target and the like in the related technology are solved.
Specifically, fig. 1 is a schematic flow chart of a laser radar point cloud distortion compensation method according to an embodiment of the present disclosure.
As shown in fig. 1, the distortion compensation method for the laser radar point cloud includes the following steps:
in step S101, measurement data of the inertial measurement unit IMU is acquired while laser radar data of the laser radar is acquired.
In the embodiment of the application, the lidar mounted at the front of the autonomous vehicle completes a scan of its horizontal and vertical fields of view in one period to obtain one frame of laser point cloud; that frame contains all point cloud data imaged during the period, so a three-dimensional environment model can be built by scanning. In addition, the embodiment of the application also acquires the measurement data of the inertial measurement unit IMU.
Therefore, the embodiment of the application combines the TOF lidar with the inertial navigation IMU to provide the velocity information of the ego vehicle, which strongly supports the subsequent motion compensation of the static point cloud.
Optionally, in an embodiment of the present application, before acquiring the lidar data and the measurement data, the method further includes: and carrying out time synchronization on the laser radar and the IMU.
It should be noted that, before the lidar data and the measurement data are acquired, the embodiment of the present application may use the Precision Time Protocol (PTP) to synchronize the clocks of the lidar and the IMU sensor through a host, so that both sensors acquire timestamps from the same clock. In this way the lidar and the IMU are time-synchronized, and the reliability of the data is effectively guaranteed.
Optionally, in an embodiment of the present application, the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
After the time synchronization is performed on the laser radar and the IMU, the embodiment of the application can acquire data of the laser radar and the IMU. The data of the laser radar comprises three-axis coordinate information, intensity information and timestamp information of each point cloud, and the IMU data comprises three-axis angular velocity, three-axis linear velocity and timestamp information.
Therefore, reliable data support is provided for distortion compensation of subsequent laser radar point clouds by acquiring data information of the laser radar point clouds such as three-axis coordinates and the like and data information of inertial navigation IMU (inertial measurement unit) such as three-axis angular velocity and the like.
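As a concrete illustration of the data fields listed above, the following minimal sketch models one lidar point and one IMU sample and picks the IMU sample nearest in time to a given point. The class and field names are assumptions for illustration, not the sensors' actual message formats:

```python
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float; y: float; z: float   # three-axis coordinates (m)
    intensity: float               # return intensity
    t: float                       # per-point timestamp (s)

@dataclass
class ImuSample:
    gyro: tuple                    # three-axis angular velocity (rad/s)
    vel: tuple                     # three-axis linear velocity (m/s)
    t: float                       # timestamp (s)

def nearest_imu(samples, t):
    """Pick the IMU sample whose timestamp is closest to a lidar point's t.
    Assumes both sensors are time-synchronized to the same clock."""
    return min(samples, key=lambda s: abs(s.t - t))
```

Because both devices share one clock after PTP synchronization, a plain timestamp-distance lookup like this is sufficient to pair each laser point with the ego-motion data used later for compensation.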
In step S102, a point cloud set of the target point cloud and a point cloud set of the original laser point cloud from which the target point cloud is removed are obtained according to the laser radar data and the measurement data.
After the laser radar point cloud data information and the inertial navigation IMU data information are obtained, the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed can be further obtained, and therefore reliable technical support is provided for obtaining point cloud data after a static scene and a moving target are subjected to distortion compensation.
Optionally, in an embodiment of the present application, acquiring the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed according to the lidar data and the measurement data includes: acquiring, based on the lidar data and the measurement data, the point cloud information, bounding box, and heading information of the target through a preset clustering algorithm, and separating the point cloud set of the target point cloud from the point cloud set of the original laser point cloud with the target point cloud removed.
Before performing distortion compensation on the static scene and the moving target, the embodiment of the application can preprocess the acquired point cloud data: the point cloud information, bounding box, and heading information of each detected target are obtained through a clustering algorithm, and, according to the clustering result, the point cloud set of the target is separated from the point cloud set of the original laser point cloud with the target point cloud removed.
Therefore, the laser radar data and the measurement data are preprocessed to separate the point cloud set of the target and the point cloud set of the original laser point cloud with the target point cloud removed, so that the data quality is further improved, and the performance of subsequent distortion compensation is guaranteed.
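The patent does not specify which clustering algorithm is used, so the following sketch stands in with a simple single-linkage Euclidean clustering (BFS over points within a radius): clusters with at least `min_pts` points are treated as detected targets, and everything else as background. The radius, the size threshold, and the two-set output format are all assumptions:

```python
import numpy as np

def split_targets(points, radius=1.0, min_pts=3):
    """Cluster points by Euclidean proximity and return
    (target_points, background_points)."""
    n = len(points)
    unvisited = set(range(n))
    target_idx = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], [seed]
        while queue:                    # BFS: grow the cluster outward
            i = queue.pop()
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                queue.append(j)
        if len(cluster) >= min_pts:     # dense cluster -> candidate target
            target_idx.extend(cluster)
    mask = np.zeros(n, dtype=bool)
    mask[target_idx] = True
    return points[mask], points[~mask]
```

The O(n²) neighbor search is for clarity only; a production pipeline would use a k-d tree or voxel grid for the range queries.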
In step S103, based on the point cloud set, a timestamp of a starting laser point of the current frame of laser radar is selected as a target time, the moving target point clouds of the laser radar are unified to the target time through coordinate conversion, and a time relationship and a speed relationship between the moving target point clouds and the target point clouds are calculated point by point to obtain a transformation relationship, so as to obtain first point cloud data after static scene distortion compensation.
After the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed are obtained, the embodiment of the application can perform distortion compensation on the point cloud set of the original laser point cloud with the target point cloud removed, namely, the static scene is compensated.
Specifically, the embodiment of the application can obtain target information through a clustering algorithm according to current frame laser point cloud original data, then separate out a point cloud set of the original laser point cloud with the target point cloud removed, and finally convert the original laser point cloud set with the target removed to the starting laser point time of current frame data.
It should be noted that the velocity of a static scene point is zero, so in this scene only the distortion caused by the motion of the ego vehicle is considered. The conversion process for a static laser point is as follows:
1. Calculate the translation amounts:
Δx = -v_x0 · (t_k - t_0)
Δy = -v_y0 · (t_k - t_0)
Δz = -v_z0 · (t_k - t_0)
2. Apply the translation transformation:
x'_k = x_k + Δx
y'_k = y_k + Δy
z'_k = z_k + Δz
3. Traverse the point cloud set with the target point cloud removed to obtain the compensation result of the laser point cloud with the moving target removed.
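The three steps above can be sketched directly in code. Here `v0` is the ego velocity at the frame start time `t_0` and the traversal of step 3 is done by vectorized broadcasting over the whole point set; the function name and the assumption of a constant `v0` over the frame are illustrative:

```python
import numpy as np

def compensate_static(points, times, v0, t0):
    """Steps 1-2: per-point translation Δ = -v0 * (t_k - t_0), applied as
    p'_k = p_k + Δ. Step 3: traverse all points at once via broadcasting.

    points: (N, 3) static-scene points; times: (N,) per-point timestamps;
    v0: (3,) ego velocity at t0; t0: frame start time.
    """
    dt = (times - t0)[:, None]   # (N, 1) time offsets t_k - t_0
    return points - v0 * dt      # adds (Δx, Δy, Δz) to each point
```

For example, a point measured 0.05 s into the frame while the ego vehicle drives at 20 m/s along x is shifted back by 1 m, landing where it would have appeared at the frame start time.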
The process of distortion compensation of the original laser point cloud of the static scene is further described below by one embodiment.
FIG. 2 is a schematic diagram of the logic for performing TOF lidar static scene point cloud compensation. As shown in fig. 2, a specific process of performing distortion compensation on an original laser point cloud of a static scene according to an embodiment of the present application is as follows:
S21: obtain the raw laser point cloud data of the current frame;
S22: acquire target information through a clustering algorithm;
S23: obtain the original laser point cloud set with the target removed;
S24: perform point cloud translation compensation, shifting the coordinate system to the starting laser point of the frame;
S25: output the compensated point cloud.
Therefore, distortion compensation is performed on the original laser point cloud point by point, so that the compensated result is more accurate, the three-dimensional scene information can be truly reconstructed, and the distortion compensation performance for the lidar point cloud is effectively guaranteed.
In step S104, based on the point cloud set, a timestamp of a starting laser point of the current frame of laser radar is selected as a target time, the moving target point clouds of the laser radar are unified to the target time through coordinate conversion, and a relative speed relationship and a relative time relationship between the moving target point clouds of the laser radar and the target point clouds are calculated to obtain a coordinate conversion relationship of the moving target point clouds of the laser radar, so as to obtain second point cloud data after distortion compensation of the moving target.
After distortion compensation is performed on the original laser point cloud of the static scene, further, distortion compensation can be performed on the moving target in the embodiment of the application, that is, motion compensation is performed on the original laser point of the target point cloud set.
It should be noted that, since the target includes motion data information, the embodiment of the present application needs to consider not only the distortion caused by the motion of the vehicle, but also the distortion caused by the motion of the target, so as to implement distortion compensation of the moving target, and accurately reflect the motion state and the position information of the target.
Optionally, in an embodiment of the present application, before obtaining the second point cloud data after the distortion compensation of the moving object, the method further includes: and performing association matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud, and estimating state information of the target so as to perform distortion compensation on the point cloud set of the target point cloud.
It should be noted that, before performing distortion compensation on a moving target, the embodiment of the present application needs to acquire the speed information of the moving target. As shown in fig. 3, in the embodiment of the present application, the current target detection result may be obtained through clustering, and the parameter information of the target may be obtained through tracking gating; then the detection result of the previous frame of laser point cloud is used as an observation value, association matching is performed on the detection result of the current frame, and the state information of the target is estimated. The state information includes the three-axis linear velocities of the target, that is, the velocities of the target laser point cloud, the three axis velocities being v_x, v_y and v_z, respectively.
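The association step can be sketched as a nearest-centroid match inside a distance gate, from which the three-axis velocity is estimated as displacement over elapsed time. The gate value, function name, and centroid representation below are assumptions for illustration, not the patent's exact tracker.

```python
import numpy as np

def estimate_target_velocities(prev_targets, curr_targets, dt, gate=2.0):
    """Associate current-frame cluster centroids with previous-frame
    centroids by nearest neighbour inside a distance gate, then
    estimate each matched target's three-axis linear velocity.

    prev_targets / curr_targets: (N, 3) arrays of cluster centroids.
    dt: time elapsed between the two frames in seconds.
    Returns a dict mapping current-target index -> (vx, vy, vz).
    """
    velocities = {}
    for i, c in enumerate(np.asarray(curr_targets)):
        d = np.linalg.norm(np.asarray(prev_targets) - c, axis=1)
        j = int(np.argmin(d))
        if d[j] <= gate:  # gating: reject implausible associations
            velocities[i] = (c - prev_targets[j]) / dt
    return velocities
```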
Then, the embodiment of the present application can transform the moving target point cloud to the timestamp of the initial laser point of the current frame data through a translation transformation of the target point cloud. The transformation process of the laser point cloud is as follows:
1. Obtain the translation amount:

Δx = (v_x − v_x0) · (t_k − t_0)

Δy = (v_y − v_y0) · (t_k − t_0)

Δz = (v_z − v_z0) · (t_k − t_0)

where:
Δx: displacement of the k-th laser point relative to the frame's initial laser point along the x-axis;
v_x: velocity of the k-th laser point along the x-axis;
v_x0: velocity of the ego vehicle along the x-axis;
t_k: timestamp of the k-th laser point;
t_0: timestamp of the frame's initial laser point;
Δy: displacement of the k-th laser point relative to the frame's initial laser point along the y-axis;
v_y: velocity of the k-th laser point along the y-axis;
v_y0: velocity of the ego vehicle along the y-axis;
Δz: displacement of the k-th laser point relative to the frame's initial laser point along the z-axis;
v_z: velocity of the k-th laser point along the z-axis;
v_z0: velocity of the ego vehicle along the z-axis.
2. Translation transformation:

x'_k = x_k − Δx

y'_k = y_k − Δy

z'_k = z_k − Δz

where:
x'_k: x-axis value of the k-th laser point after translation to the frame's initial laser point;
x_k: x-axis value of the k-th laser point;
y'_k: y-axis value of the k-th laser point after translation to the frame's initial laser point;
y_k: y-axis value of the k-th laser point;
z'_k: z-axis value of the k-th laser point after translation to the frame's initial laser point;
z_k: z-axis value of the k-th laser point.
3. Traverse all laser points of the target to obtain the compensated moving-target laser point cloud. Apply the same operation to each target in the frame data to obtain the compensated moving-target laser point clouds for the whole frame of laser point cloud.
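The per-point translation described in steps 1–3 can be sketched as follows. The function name is illustrative, and the sign convention — removing the relative displacement Δ accrued between t_0 and t_k — is an assumption about the patent's translation transformation, whose exact form is given only in its figures.

```python
import numpy as np

def compensate_moving_target(points, timestamps, v_target, v_ego, t0):
    """Translate every laser point of one moving target to the
    timestamp t0 of the frame's initial laser point.

    points: (N, 3) xyz coordinates of the target's laser points.
    timestamps: (N,) per-point timestamps t_k.
    v_target: (3,) estimated target velocity (v_x, v_y, v_z).
    v_ego: (3,) ego-vehicle velocity (v_x0, v_y0, v_z0).
    """
    v_rel = np.asarray(v_target) - np.asarray(v_ego)
    # Delta = (v - v0) * (t_k - t0), computed per point and per axis;
    # subtracting it undoes the displacement accrued since t0.
    delta = v_rel[None, :] * (np.asarray(timestamps) - t0)[:, None]
    return np.asarray(points) - delta
```

Looping this function over every clustered target in the frame mirrors step 3 above.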
The process of performing distortion compensation on a moving target is further described below through an embodiment.
FIG. 4 is a schematic diagram of TOF lidar moving target point cloud compensation execution logic. As shown in fig. 4, a specific process of performing distortion compensation on a moving target point cloud according to an embodiment of the present application is as follows:
S41: acquire the raw laser point cloud data of the current frame;
S42: acquire target information through a clustering algorithm;
S43: separate the target's original laser point cloud set;
S44: obtain target speed information through association matching of the targets in the previous and current frames;
S45: perform point cloud translation compensation on the moving target, transforming the coordinate system to the frame's initial laser point;
S46: output the compensated moving target point cloud.
It should be noted that fig. 5 is a schematic diagram of the imaging of a moving target before distortion compensation. As shown in fig. 5, the target is stretched along the scanning direction of the lidar, so its motion state and position information cannot be accurately reflected. Fig. 6 is a schematic diagram of the imaging of the moving target after distortion compensation; it can be seen that this problem is effectively resolved once the distortion of the moving target is compensated.
It can be understood that the embodiment of the present application performs association matching on the detection results of the previous and current frames of laser point cloud and obtains the target speed information to perform distortion compensation on the point cloud set of the target point cloud. In this way, both the motion distortion of the ego vehicle and the distortion caused by the target's motion are taken into account, so that the motion state, position information, and collision point of the target can be accurately reflected, which effectively improves the reliability of target detection and makes the vehicle more intelligent.
In step S105, the first point cloud data and the second point cloud data are merged to obtain a compensated moving target point cloud, and an actual surrounding environment of the autonomous vehicle is determined.
After the point cloud data from the static scene and moving target distortion compensation are obtained, the embodiment of the present application can stitch the original laser point cloud after static scene distortion compensation with the original laser point cloud after moving target distortion compensation to obtain a complete frame of distortion-compensated original laser point cloud data. This realizes the compensation of the laser point cloud for the TOF lidar and determines the actual surrounding environment of the autonomous vehicle, effectively solving the laser point cloud distortion problem of the TOF lidar, so that the compensated laser point cloud can reflect the actual environment information as well as the motion state and actual position information of the moving target at a given moment, greatly improving the safety performance of the vehicle.
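The stitching step itself amounts to concatenating the two compensated point sets into one frame. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def stitch_compensated_clouds(static_cloud, target_cloud):
    """Concatenate the static-scene compensated points (first point
    cloud data) with the moving-target compensated points (second
    point cloud data) into one complete compensated frame."""
    return np.vstack([np.asarray(static_cloud), np.asarray(target_cloud)])
```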
The overall process of performing distortion compensation on the lidar point cloud is further described below through a specific embodiment.
Fig. 7 is a logic diagram illustrating an implementation of distortion compensation for lidar point cloud according to an embodiment of the application. As shown in fig. 7, the specific process of performing distortion compensation on the laser radar point cloud in the embodiment of the present application is as follows:
S71: synchronize the lidar and the IMU in time and space;
S72: acquire the lidar data and the IMU data;
S73: preprocess the data;
S74: perform static scene distortion compensation and moving target distortion compensation;
S75: stitch the point clouds;
S76: output the compensated point cloud.
According to the distortion compensation method for the laser radar point cloud, the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed are obtained through measurement data based on laser radar data and an Inertial Measurement Unit (IMU); based on the point cloud set, unifying the moving target point cloud of the laser radar to the target time through coordinate conversion, and calculating the time relation and the speed relation with the target point cloud point by point to obtain point cloud data after static scene distortion compensation; meanwhile, the relative speed relationship and the relative time relationship between the laser radar moving target point cloud and the target point cloud are calculated to obtain point cloud data after the distortion compensation of the moving target; and the two types of point cloud data are spliced to obtain the compensated moving target point cloud, so that the actual surrounding environment of the automatic driving vehicle can be determined, the position information of the target can be accurately sensed, the collision point of the target can be reflected, and the safety and the reliability of the vehicle can be improved.
Next, a distortion compensation apparatus for a lidar point cloud according to an embodiment of the present application will be described with reference to the drawings.
Fig. 8 is a block diagram schematically illustrating an apparatus for compensating for distortion of a lidar point cloud according to an embodiment of the present disclosure.
As shown in fig. 8, the distortion compensation apparatus 10 for lidar point cloud includes: a first acquisition module 100, a second acquisition module 200, a third acquisition module 300, a fourth acquisition module 400, and a stitching module 500.
The first obtaining module 100 is configured to obtain laser radar data of a laser radar and obtain measurement data of an inertial measurement unit IMU.
And a second obtaining module 200, configured to obtain a point cloud set of the target point cloud and a point cloud set of the original laser point cloud from which the target point cloud is removed according to the laser radar data and the measurement data.
The third obtaining module 300 is configured to select a timestamp of a starting laser point of the current frame laser radar as a target time based on the point cloud set, unify moving target point clouds of the laser radar to the target time through coordinate conversion, and calculate a time relationship and a speed relationship with the target point clouds point by point to obtain a transformation relationship, so as to obtain first point cloud data after static scene distortion compensation.
And the fourth obtaining module 400 is configured to select a timestamp of a starting laser point of the current frame laser radar as a target time based on the point cloud set, unify the laser radar moving target point cloud to the target time through coordinate conversion, and calculate a relative speed relationship and a relative time relationship between the laser radar moving target point cloud and the target point cloud to obtain a coordinate conversion relationship of the laser radar moving target point cloud so as to obtain second point cloud data after distortion compensation of a moving target.
And the splicing module 500 is configured to splice the first point cloud data and the second point cloud data to obtain a compensated moving target point cloud, and determine an actual surrounding environment of the autonomous vehicle.
Optionally, in an embodiment of the present application, the distortion compensation apparatus 10 for a lidar point cloud of the embodiment of the present application further includes: and the matching module is used for performing correlation matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud before the second point cloud data after the distortion compensation of the moving target is obtained, and estimating the state information of the target so as to perform distortion compensation on the point cloud set of the target point cloud.
Optionally, in an embodiment of the present application, the distortion compensation apparatus 10 for a lidar point cloud of the embodiment of the present application further includes: and the synchronization module is used for carrying out time synchronization on the laser radar and the IMU before the laser radar data and the measurement data are acquired.
Optionally, in an embodiment of the present application, the second obtaining module 200 includes: a separation unit, configured to acquire the point cloud information, bounding box, and heading information of the target through a preset clustering algorithm based on the lidar data and the measurement data, so as to separate the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed based on the clustering result.
Optionally, in an embodiment of the present application, the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
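The data fields enumerated above can be represented, for illustration only, as simple containers; the class and field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LidarPoint:
    x: float          # three-axis coordinate information
    y: float
    z: float
    intensity: float  # return intensity
    timestamp: float  # per-point timestamp (s)

@dataclass
class ImuSample:
    angular_velocity: np.ndarray  # (3,) three-axis angular velocity, rad/s
    linear_velocity: np.ndarray   # (3,) three-axis linear velocity, m/s
    timestamp: float              # sample timestamp (s)
```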
It should be noted that the explanation of the embodiment of the distortion compensation method for laser radar point cloud is also applicable to the distortion compensation apparatus for laser radar point cloud of this embodiment, and details are not repeated here.
According to the distortion compensation device for the laser radar point cloud, the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed are obtained through measurement data based on laser radar data and an Inertial Measurement Unit (IMU); based on the point cloud set, unifying the moving target point cloud of the laser radar to the target time through coordinate conversion, and calculating the time relation and the speed relation with the target point cloud point by point to obtain point cloud data after static scene distortion compensation; meanwhile, the relative speed relationship and the relative time relationship between the laser radar moving target point cloud and the target point cloud are calculated to obtain point cloud data after the distortion compensation of the moving target; and the two types of point cloud data are spliced to obtain the compensated moving target point cloud, so that the actual surrounding environment of the automatic driving vehicle can be determined, the position information of the target can be accurately sensed, the collision point of the target can be reflected, and the safety and the reliability of the vehicle can be improved.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
memory 901, processor 902, and computer programs stored on memory 901 and operable on processor 902.
The processor 902, when executing the program, implements the distortion compensation method for the lidar point cloud provided in the above embodiments.
Further, the electronic device further includes:
a communication interface 903 for communication between the memory 901 and the processor 902.
A memory 901 for storing computer programs executable on the processor 902.
The memory 901 may comprise a high-speed RAM memory, and may also include a non-volatile memory, such as at least one disk memory.
If the memory 901, the processor 902, and the communication interface 903 are implemented independently, the communication interface 903, the memory 901, and the processor 902 may be connected to each other through a bus and perform communication with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Alternatively, in practical implementation, if the memory 901, the processor 902 and the communication interface 903 are integrated on one chip, the memory 901, the processor 902 and the communication interface 903 may complete mutual communication through an internal interface.
The processor 902 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits configured to implement embodiments of the present Application.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the distortion compensation method of a lidar point cloud as above.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Moreover, various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without being mutually inconsistent.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or N executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. A distortion compensation method for laser radar point cloud is characterized by comprising the following steps:
the method comprises the steps of obtaining laser radar data of a laser radar, and obtaining measurement data of an inertial measurement unit IMU;
acquiring a point cloud set of a target point cloud and a point cloud set of an original laser point cloud with the target point cloud removed according to the laser radar data and the measurement data;
based on the point cloud set, selecting a timestamp of a starting laser point of the current frame laser radar as a target time, unifying moving target point clouds of the laser radar to the target time through coordinate conversion, and calculating a time relation and a speed relation with the target point clouds point by point to obtain a transformation relation so as to obtain first point cloud data after static scene distortion compensation;
based on the point cloud set, selecting a timestamp of a starting laser point of the current frame laser radar as the target time, unifying the laser radar moving target point cloud to the target time through coordinate conversion, and calculating the relative speed relation and the relative time relation between the laser radar moving target point cloud and the target point cloud to obtain the coordinate conversion relation of the laser radar moving target point cloud so as to obtain second point cloud data after the moving target distortion compensation; and
and splicing the first point cloud data and the second point cloud data to obtain a compensated moving target point cloud, and determining the actual surrounding environment of the automatic driving vehicle.
2. The method of claim 1, further comprising, prior to obtaining the motion object distortion compensated second point cloud data:
and performing association matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud, and estimating state information of a target so as to perform distortion compensation on the point cloud set of the target point cloud.
3. The method of claim 1, further comprising, prior to acquiring the lidar data and the measurement data:
time synchronizing the lidar and the IMU.
4. The method of claim 1, wherein the obtaining, according to the lidar data and the measurement data, a point cloud set of a target point cloud and a point cloud set of an original laser point cloud with the target point cloud removed comprises:
acquiring point cloud information, bounding box, and heading information of the target through a preset clustering algorithm based on the laser radar data and the measurement data, and separating the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed based on a clustering result.
5. The method of any of claims 1-4, wherein the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and wherein the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
6. A distortion compensation device for laser radar point cloud is characterized by comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring laser radar data of a laser radar and acquiring measurement data of an Inertial Measurement Unit (IMU);
the second acquisition module is used for acquiring a point cloud set of a target point cloud and a point cloud set of an original laser point cloud from which the target point cloud is removed according to the laser radar data and the measurement data;
the third acquisition module is used for selecting a timestamp of a starting laser point of the current frame laser radar as a target time based on the point cloud set, unifying moving target point clouds of the laser radar to the target time through coordinate conversion, and calculating a time relation and a speed relation with the target point clouds point by point to obtain a conversion relation so as to obtain first point cloud data after static scene distortion compensation;
a fourth obtaining module, configured to select, based on the point cloud set, a timestamp of a starting laser point of the current frame laser radar as the target time, unify the laser radar moving target point clouds to the target time through coordinate conversion, and calculate a relative velocity relationship and a relative time relationship between the laser radar moving target point cloud and the target point cloud to obtain a coordinate conversion relationship of the laser radar moving target point cloud, so as to obtain second point cloud data after distortion compensation of a moving target; and
and the splicing module is used for splicing the first point cloud data and the second point cloud data to obtain a compensated moving target point cloud and determine the actual surrounding environment of the automatic driving vehicle.
7. The apparatus of claim 6, further comprising:
and the matching module is used for performing association matching on the detection result of the current frame of laser point cloud based on the detection result of the previous frame of laser point cloud before the second point cloud data after the distortion compensation of the moving target is obtained, and estimating the state information of the target so as to perform distortion compensation on the point cloud set of the target point cloud.
8. The apparatus of claim 6, further comprising:
a synchronization module to time synchronize the lidar and the IMU prior to acquiring the lidar data and the measurement data.
9. The apparatus of claim 6, wherein the second obtaining module comprises:
and the separation unit is used for acquiring point cloud information, bounding box, and heading information of the target through a preset clustering algorithm based on the laser radar data and the measurement data, so as to separate the point cloud set of the target point cloud and the point cloud set of the original laser point cloud with the target point cloud removed based on a clustering result.
10. The apparatus of any of claims 6-9, wherein the lidar data includes three-axis coordinate information, intensity information, and timestamp information for each point cloud, and wherein the measurement data includes three-axis angular velocity, three-axis linear velocity, and timestamp information.
11. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor executing the program to implement the distortion compensation method of lidar point cloud of any of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored, characterized in that the program is executed by a processor for implementing the distortion compensation method of a lidar point cloud according to any of claims 1 to 5.
CN202211511205.7A 2022-11-29 2022-11-29 Distortion compensation method, device and equipment for laser radar point cloud and storage medium Pending CN115760636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211511205.7A CN115760636A (en) 2022-11-29 2022-11-29 Distortion compensation method, device and equipment for laser radar point cloud and storage medium


Publications (1)

Publication Number Publication Date
CN115760636A true CN115760636A (en) 2023-03-07

Family

ID=85340187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211511205.7A Pending CN115760636A (en) 2022-11-29 2022-11-29 Distortion compensation method, device and equipment for laser radar point cloud and storage medium

Country Status (1)

Country Link
CN (1) CN115760636A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116359938A (en) * 2023-05-31 2023-06-30 未来机器人(深圳)有限公司 Object detection method, device and carrying device
CN116359938B (en) * 2023-05-31 2023-08-25 未来机器人(深圳)有限公司 Object detection method, device and carrying device

JPS63132107A (en) Light cutting line extracting circuit
JPH1096607A (en) Object detector and plane estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination