CN109919893B - Point cloud correction method and device and readable storage medium - Google Patents


Info

Publication number
CN109919893B
Authority
CN
China
Prior art keywords
point cloud
cloud data
data
correction result
external parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910210622.XA
Other languages
Chinese (zh)
Other versions
CN109919893A (en)
Inventor
余杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd filed Critical Hubei Ecarx Technology Co Ltd
Priority to CN201910210622.XA priority Critical patent/CN109919893B/en
Publication of CN109919893A publication Critical patent/CN109919893A/en
Application granted granted Critical
Publication of CN109919893B publication Critical patent/CN109919893B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a point cloud correction method, a point cloud correction device and a readable storage medium, and relates to the technical field of unmanned driving.

Description

Point cloud correction method and device and readable storage medium
Technical Field
The invention relates to the technical field of unmanned driving, in particular to a point cloud correction method and device and a readable storage medium.
Background
At present, in order to obtain a high-precision, robust and stable point cloud, the conventional scheme is to fuse the point cloud generated through the interaction between a laser radar and the external environment with the point cloud generated by a physical model algorithm after a visual sensor senses the external environment, thereby producing a high-precision point cloud. In this process, because the two point cloud generation schemes have different characteristics, the fused point cloud generally suffers from delay, misalignment, disorder and the like, and its subsequent use effect in the field of unmanned driving is poor.
Disclosure of Invention
In view of this, the present invention aims to provide a point cloud correction method, a point cloud correction device and a readable storage medium, so as to improve the fusion accuracy of the laser point cloud and the visual point cloud, thereby improving the overall accuracy of the output fused point cloud and providing a stable and reliable data source for subsequent unmanned driving technical schemes.
In a first aspect, an embodiment of the present invention provides a point cloud correction method, including:
acquiring first point cloud data and second point cloud data at the same time;
mapping the first point cloud data and the second point cloud data to the same coordinate system, wherein the overlapping area of the first point cloud data and the second point cloud data is third point cloud data;
constructing a multi-stage error equation according to the third point cloud data, and calculating a correction result;
and obtaining the corrected first point cloud data and/or the corrected second point cloud data according to the correction result.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the correction result includes an external parameter data correction result of the second point cloud data, or an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data, and an external parameter data correction result of the second point cloud data.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the mapping the first point cloud data and the second point cloud data to a same coordinate system, and an overlapping area of the first point cloud data and the second point cloud data is third point cloud data, includes:
obtaining a point cloud conversion relation between the first point cloud data and the second point cloud data according to external reference data in the second point cloud data;
and obtaining an overlapping area of the first point cloud data and the second point cloud data according to the internal reference data in the first point cloud data and the point cloud transformation relation, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the constructing a multistage error equation according to the third point cloud data and calculating a correction result include:
performing up-sampling operation on the third point cloud data to obtain a point cloud weighted average value;
and constructing an error equation through the point cloud weighted average, calculating to obtain a multi-level error, and calculating to obtain an external parameter data correction result of the second point cloud data according to the multi-level error, or obtaining an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the performing an upsampling operation on the third point cloud data to obtain a point cloud weighted average includes:
and performing up-sampling operation on the third point cloud data according to the sequence of the sampling resolution from large to small step by step to obtain a point cloud weighted average value.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the internal reference data includes a field angle, a distortion factor, a focal length, and an aperture center, and obtaining an overlapping area of the first point cloud data and the second point cloud data according to the internal reference data in the first point cloud data and the point cloud transformation relationship, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data includes:
obtaining a length-width range limit value of the second point cloud data in a field angle of a visual system according to the field angle in the first point cloud data and the point cloud transformation relation;
and obtaining third point cloud data which accord with the field angle length and width range according to the length and width range limit value.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the obtaining corrected first point cloud data and/or corrected second point cloud data according to the correction result includes:
adjusting the external parameter data of the second point cloud data according to the external parameter data correction result of the second point cloud data to obtain corrected first point cloud data and/or second point cloud data;
or, alternatively,
and adjusting the external parameter data of the second point cloud data, the internal parameter data of the first point cloud data and the external parameter data of the first point cloud data according to the internal parameter data correction result of the first point cloud data, the external parameter data correction result of the first point cloud data and the external parameter data correction result of the second point cloud data to obtain the corrected first point cloud data and/or second point cloud data.
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the method further includes:
respectively acquiring the corrected first point cloud data and the corrected second point cloud data from the adjusted vision system and the adjusted sensing system;
obtaining corrected third point cloud data according to the corrected first point cloud data and the corrected second point cloud data;
acquiring a projection error of the corrected third point cloud data, and comparing the projection error with a threshold value;
and when the projection error is larger than the threshold value, acquiring a new external parameter data correction result, and adjusting the external parameter data of the corrected second point cloud data again until the projection error is smaller than the threshold value.
In a second aspect, an embodiment of the present invention further provides a point cloud correction apparatus, including:
the acquisition module is used for acquiring first point cloud data and second point cloud data at the same time;
the transformation module is used for mapping the first point cloud data and the second point cloud data to the same coordinate system, and the overlapping area of the first point cloud data and the second point cloud data is third point cloud data;
the calculation module is used for constructing a multistage error equation according to the third point cloud data and calculating a correction result;
and the correction module is used for obtaining the corrected first point cloud data and/or the corrected second point cloud data according to the correction result.
In a third aspect, an embodiment of the present invention further provides a readable storage medium, where a computer program is stored, and when the computer program is executed, the method for correcting a point cloud as described above is implemented.
The embodiments of the invention provide a point cloud correction method and device and a readable storage medium. Third point cloud data covering the overlapping area in the same coordinate system are obtained from the point cloud data collected by multiple sensors, the internal reference data of the camera and the laser radar, and the corresponding external reference data of the camera and the laser radar; a multi-level error equation is constructed from the third point cloud data to obtain a correction result; and the external reference data of the radar point cloud and the internal and external reference data of the camera point cloud are adjusted online according to the correction result to obtain adjusted and corrected first point cloud data and second point cloud data. This improves the fusion precision of the visual point cloud (first point cloud data) and the laser point cloud (second point cloud data), thereby improving the overall precision of the output point cloud and providing a stable and reliable data source for subsequent unmanned driving technical schemes.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flow chart of a point cloud correction method according to an embodiment of the present invention;
fig. 2 is a flowchart of an external parameter adjustment method in the point cloud correction method according to the embodiment of the present invention;
fig. 3 is a schematic block diagram of an electronic device for implementing the point cloud correction method according to an embodiment of the present invention.
Icon: 100-an electronic device; 110-a storage medium; 120-a processor; 200-point cloud correction means; 210-an obtaining module; 220-a transformation module; 230-a calculation module; 240-correction module.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Perception, high-precision positioning, decision making and control together form the core modules in the field of unmanned driving, and perception and high-precision positioning depend heavily on point clouds. Point clouds are usually generated in two ways: first, through the interaction of a sensor with the external environment, for example by means of a laser radar or a millimeter-wave radar; second, by sensing the external environment with a visual sensor and then generating the point cloud through a physical model and an algorithm. Each scheme has its own advantages and disadvantages. The first is stable and reliable in performance and can be used in complex environments, but it is expensive and the sensor is susceptible to electromagnetic interference during use; the second is cheap and can provide rich environment information, but it is strongly affected by the environment, limited by the algorithm, and unsuitable for complex weather conditions.
Therefore, in order to obtain a high-precision, robust and stable point cloud, the conventional scheme is to fuse the two point clouds to generate a high-precision point cloud. In this process, delay, misalignment, confusion and the like usually arise because the two schemes have different characteristics.
In view of this, the point cloud correction method and device and the readable storage medium provided by the embodiments of the present invention improve the fusion precision of the laser point cloud and the visual point cloud, thereby improving the overall precision of the output fused point cloud and providing a stable and reliable data source for subsequent unmanned driving technical schemes.
For the convenience of understanding the embodiment, a detailed description will be given to a point cloud correction method disclosed in the embodiment of the present invention.
Fig. 1 is a flowchart of a point cloud correction method according to an embodiment of the present invention.
The point cloud correction method provided by the embodiment of the present invention is applied to multi-sensor point cloud fusion scenarios that include a visual sensor such as a camera and a sensing sensor such as a laser radar, and corrects the point clouds. Referring to fig. 1, the point cloud correction method under multi-sensor fusion includes the following steps:
step S110, obtaining first point cloud data and second point cloud data at the same time, where the first point cloud data may be a camera point cloud and the second point cloud data may be a lidar point cloud; the camera point cloud includes the collected original image data and internal and external reference data, and the radar point cloud includes the collected original data, physical characteristics, and internal and external reference data.
It should be noted that the camera point cloud may include monocular, binocular or multi-view stereo matching point clouds, collected by one or more monocular, binocular or multi-view cameras respectively; in this case, the external reference data of the camera point cloud may include the relative position, direction and/or attitude between the cameras. The external parameters in the radar point cloud include the point cloud conversion relation between the radar and the camera and the relative position, direction and/or attitude between the radar and the camera.
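By way of a non-limiting illustration (this sketch is not part of the patent text), the two point cloud records described above might be modeled as follows; the field names are hypothetical and chosen only to mirror the description.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPointCloud:      # "first point cloud data" (CPC)
    points: np.ndarray       # N x 3 stereo-matched point positions
    intrinsics: dict         # field angle (FOV), distortion factor, focal length, aperture center
    extrinsics: np.ndarray   # e.g. 4 x 4 relative pose(s) between the cameras
    timestamp: float

@dataclass
class LidarPointCloud:       # "second point cloud data" (LPC)
    points: np.ndarray       # M x 3 raw scan points
    physical: dict           # physical characteristics (model, number, ...)
    extrinsics: np.ndarray   # e.g. 4 x 4 radar-camera conversion relation plus relative pose
    timestamp: float
```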
Step S120, mapping the first point cloud data and the second point cloud data to the same coordinate system, where the overlapping area of the first point cloud data and the second point cloud data is the third point cloud data. Because the camera point cloud range of the first point cloud is fan-shaped while the radar point cloud of the second point cloud is circular, a partially overlapping area arises between the first point cloud and the second point cloud, and the third point cloud data comprises the first point cloud data and the second point cloud data of the overlapping area;
step S130, a multilevel error equation is constructed according to the third point cloud data, and a correction result is obtained through calculation;
step S140, obtaining the corrected first point cloud data and second point cloud data according to the correction result.
In a preferred embodiment for practical application, owing to calibration errors, the second point cloud data set acquired by the laser radar and the first point cloud data set acquired by the camera often overlap and cross and cannot be completely aligned, and such errors have serious consequences for subsequent perception and high-precision positioning. In the embodiment of the present invention, third point cloud data covering the overlapping area in the same coordinate system are obtained from the point cloud data acquired by the multiple sensors, the internal reference data of the camera and the laser radar, and the corresponding external reference data; a multi-level error equation is constructed from the third point cloud data to obtain a correction result; and the external reference data of the radar point cloud and the internal and external reference data of the camera point cloud are adjusted online according to the correction result to obtain adjusted and corrected first point cloud data and second point cloud data. This improves the fusion precision of the visual point cloud (first point cloud data) and the laser point cloud (second point cloud data), thereby improving the overall precision of the output point cloud and providing a stable and reliable data source for subsequent unmanned driving technical schemes.
In order to further improve the accuracy of the point cloud data correction, step S110 in the above embodiment includes: obtaining the first point cloud data from a vision system and the second point cloud data from a sensing system, where the first point cloud data and the second point cloud data are captured at the same time.
External reference data (the external reference calibration result) between the vision system and the sensing system and the information (internal reference data) of the sensing system are obtained through calibration. The vision system comprises hardware, such as a visual sensor like a camera, that can stably output a point cloud data source under a given algorithm and conditions, and it outputs the point cloud data corresponding to the environment by computing on the input environment image data. The sensing system comprises sensor hardware capable of generating point cloud data, such as a laser radar, to collect environment point cloud data. Meanwhile, during vehicle operation, in order to obtain a consistent amount of data, the camera and the radar need to be set to collect at the same time with a consistent collection frequency. That is, so that the point cloud data output by the vision system and the sensing system at the same moment correspond to the same scene, the point cloud data obtained through the two systems must be strictly time-synchronized with a stable output frequency, i.e., the point cloud output frequencies of the vision system and the sensing system are consistent at the same time.
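As a minimal sketch of the strict time pairing described above (assuming both systems stamp frames with a shared clock; the tolerance value and the frame objects are illustrative, not from the patent):

```python
def pair_frames(camera_frames, lidar_frames, tol=1e-3):
    """Pair camera and lidar point clouds captured at (nearly) the same time.

    Both inputs are lists sorted by .timestamp; frames whose stamps differ by
    more than `tol` seconds are dropped, enforcing the strict synchronization
    and matched output frequency described above.
    """
    pairs, i, j = [], 0, 0
    while i < len(camera_frames) and j < len(lidar_frames):
        dt = camera_frames[i].timestamp - lidar_frames[j].timestamp
        if abs(dt) <= tol:
            pairs.append((camera_frames[i], lidar_frames[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1      # camera frame too early, advance it
        else:
            j += 1      # lidar frame too early, advance it
    return pairs
```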
As an alternative embodiment, step S120 may be implemented through the following steps:
step S210, obtaining a point cloud conversion relation between the first point cloud data and the second point cloud data according to the external reference data in the second point cloud data;
as a preferred embodiment, since the second sensor is a laser radar and the point cloud data acquired by a laser radar are relatively accurate, the first point cloud data set is converted into the coordinate system of the second point cloud data set through the point cloud conversion relation and compared with the second point cloud data set to obtain the error and the external parameter data correction result; adjusting the first sensor (the camera) through this external parameter data correction result then gives higher accuracy.
Step S220, obtaining an overlapping area of the first point cloud data and the second point cloud data according to the internal reference data (mainly applying the angle of view in the internal reference data) in the first point cloud data and the point cloud transformation relationship, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data.
Here, the characteristics of the sensing system include its model and number; the internal reference data include the field angle, the distortion factor, the focal length and the aperture center, and step S220 further includes:
step S310, obtaining a length-width range limit value of the second point cloud data in the field angle of the vision system according to the field angle in the first point cloud data and the point cloud conversion relation;
and step S320, obtaining third point cloud data which accord with the field angle length and width range according to the length and width range limit value.
The length-width range limits of the sensing-system laser radar within the camera FOV can be obtained through the field angle (FOV) of the vision-system camera and the point cloud conversion relation given by the external reference data of the sensing system, recorded as the maximum length U_max, the minimum length U_min, the maximum width V_max and the minimum width V_min. The third point cloud data, selected from the second point cloud data LPC (the lidar point cloud) as the points conforming to the FOV range of the vision-system camera, are denoted LPCC, where LPCC ⊆ LPC and the camera projection of each LPCC point satisfies U_min < U_LPCC < U_max and V_min < V_LPCC < V_max.
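Purely as an illustrative sketch (not from the patent), the FOV selection above might look as follows, assuming a simple pinhole camera model with intrinsic matrix K and a lidar-to-camera transform derived from the external reference data; all names are hypothetical.

```python
import numpy as np

def select_lpcc(lidar_pts, T_lidar_to_cam, K, u_min, u_max, v_min, v_max):
    """Pick the lidar points (LPCC) whose camera projections fall inside the
    length/width limits [u_min, u_max] x [v_min, v_max] derived from the
    camera field angle, mirroring the selection described above."""
    homo = np.hstack([lidar_pts, np.ones((len(lidar_pts), 1))])
    cam = (T_lidar_to_cam @ homo.T).T[:, :3]    # lidar points in the camera frame
    front = cam[:, 2] > 1e-6                    # only points ahead of the camera project
    uv = (K @ cam[front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division -> pixel coords
    keep = ((uv[:, 0] > u_min) & (uv[:, 0] < u_max) &
            (uv[:, 1] > v_min) & (uv[:, 1] < v_max))
    return lidar_pts[front][keep]
```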
Here, the identifiers may be used to associate respective corresponding point clouds of the first point cloud data and the second point cloud data in the third point cloud data, so as to distinguish the first point cloud data from the second point cloud data.
It should be noted that not every point cloud in the third point cloud data can be associated with a point cloud in the first point cloud data or the second point cloud data; only the point clouds of the first or second point cloud data obtained through the point cloud conversion relation are associated with an identifier. If a point cloud is obtained by conversion from the first point cloud data, it is associated with the corresponding point cloud in the first point cloud data; the second point cloud case is analogous and is not repeated here.
In some alternative embodiments, because the field of view of a front or rear camera is limited, the camera internal reference data, such as the field angle (FOV), are calibrated, and the external reference data of the camera and the laser radar are acquired in a certain manner (such as manual calibration) using a specific calibration board. However, owing to sensor errors, calibration methods, data delay and the like, the error of the camera internal reference calibration result is usually about 0.1-1 pixel and that of the external reference calibration about 0.5-2 pixels. Since the calibration result error is large, point cloud correction is performed on the basis of this rough calibration result to ensure the accuracy of the fused point cloud data.
In practical application, the third point cloud data and their error relation are calculated through the following steps:
step S410, obtaining a pose relation by calibrating the first point cloud data and the second point cloud data;
step S420, calculating, according to the pose relation and the first point cloud data, the pose data of each point cloud corresponding to the third point cloud data. Given the pose relations among the first point cloud data, the second point cloud data and the third point cloud data, the relation between the point clouds in the error-free case is as shown in formula (1):
P_LPCC = T_LC · P_CPC    (1)
where CPC denotes the camera point cloud, P_LPCC is the point cloud pose data of the LPCC, P_CPC is the corresponding CPC point cloud pose data, and T_LC is the external parameter conversion relation (pose relation) between the camera and the radar, with the laser radar coordinate system as the reference origin; through it, the corresponding data relation (association relation) between the two point clouds can be determined. However, because errors exist, a result fully satisfying formula (1) cannot be obtained, from which the error relations that need to be corrected can be deduced, as shown in formulas (2), (3) and (4):
[Formulas (2)-(4), shown as images in the original, give the error terms of the corresponding coordinate axes.]
the point cloud pose data are divided into x, y and z axis components, and then errors of corresponding coordinate axes are obtained. When the camera system is a multi-view stereo camera system, S is a prior value which does not need to be corrected and can be obtained in the calibration process; when the camera system is a monocular system, the generated point cloud is a point cloud value updated immediately by up-to-scale, and needs to be changed in real time according to the scale with the real world, and the scale S at the moment is an uncertain value.
In order to obtain the optimal external parameter value, step S130 in the above embodiment includes:
step S510, performing upsampling operation on the third point cloud data to obtain a point cloud weighted average.
Step S520 further includes: performing the up-sampling operation on each third point cloud datum level by level, in order of sampling resolution from large to small, to obtain the point cloud weighted average. Sampling, i.e. up-sampling, the third point cloud data starting from the coarsest sampling resolution level (namely, the maximum sampling resolution) reduces the amount of calculation and prevents the error-equation correction result from falling into a local optimum. It can be understood that, by means of up-sampling, the result of the sampling operation at the current level can be applied iteratively to the sampling operation at the next level, thereby reducing the amount of calculation.
Because the LPCC (third point cloud data) and the CPC contain a large number of points, and the generated CPC carries an internal reference error, the LPCC and the CPC need to be up-sampled several times to obtain a mean point cloud (the point cloud weighted average), which reduces the amount of calculation and improves the calculation accuracy. Let the up-sampling level be n and the original sampling resolution be N: n = 1 indicates that the up-sampling result is a single point cloud over the visible range, and n = N indicates the original resolution. The specific up-sampling formula is shown in formula (5):
[Formula (5), shown as an image in the original, defines the point cloud weighted average at each up-sampling level.]
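Since formula (5) is an image in the original, the following Python sketch only assumes it to be a per-cell weighted averaging at a given level; the grid construction and names are hypothetical.

```python
import numpy as np

def level_weighted_mean(points, weights, n_cells):
    """Weighted mean point cloud at one sampling level: bucket the points
    into an n_cells^3 grid over their bounding box and return the weighted
    mean of each occupied cell (n_cells = 1 yields a single mean point for
    the whole visible range, as in the n = 1 case above)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - lo) / (hi - lo + 1e-9) * n_cells).astype(int)
    idx = np.clip(idx, 0, n_cells - 1)
    key = idx[:, 0] * n_cells**2 + idx[:, 1] * n_cells + idx[:, 2]
    means = []
    for k in np.unique(key):
        m = key == k
        w = weights[m] / weights[m].sum()          # normalize the cell weights
        means.append((points[m] * w[:, None]).sum(axis=0))
    return np.asarray(means)
```

Evaluating the levels from coarse to fine and seeding each level with the previous result would then mirror the level-by-level up-sampling order described above.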
step S530, constructing an error equation through a point cloud weighted average, calculating to obtain a multi-level error, and calculating to obtain an external parameter data correction result of the second point cloud data according to the multi-level error, or obtaining an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data;
In order to obtain the external parameter data correction result of the second point cloud data, the error equation can be further expressed, using the weighted averages obtained above, as shown in formula (6):
[Formula (6), shown as an image in the original, expresses the error equation over the point cloud weighted averages.]
An optimized objective function is constructed from the error equation, and the external parameter data correction result T_LCn is obtained by calculation; see formula (7):
[Formula (7), shown as an image in the original, gives the optimization objective whose solution is T_LCn.]
Obtaining the external parameter correction result T_LCn of the laser radar through up-sampling allows the external parameter T of the laser radar to be optimized step by step, thereby obtaining the optimal external parameter value.
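A hedged sketch of the optimization in formulas (6)-(7) follows (the formula images are not reproduced, so a point-to-point least-squares alignment between corresponding mean points is assumed here; `refine_extrinsics` and its correspondence assumption are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(lpcc_means, cpc_means, T_init):
    """Refine T_LC by minimizing the residual between the lidar mean points
    and the transformed camera mean points, assuming lpcc_means[i] and
    cpc_means[i] correspond (both K x 3 arrays)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        # formula (1) ideal relation: P_LPCC = T_LC . P_CPC
        return ((cpc_means @ R.T + t) - lpcc_means).ravel()

    r0 = Rotation.from_matrix(T_init[:3, :3]).as_rotvec()
    x0 = np.concatenate([r0, T_init[:3, 3]])
    sol = least_squares(residuals, x0)      # nonlinear least-squares solve
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(sol.x[:3]).as_matrix()
    T[:3, 3] = sol.x[3:]
    return T
```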
In order to obtain the external parameter data correction result of the first point cloud data and that of the second point cloud data, on the basis of the external parameter data correction result of the second point cloud data, the first point cloud itself needs an optimization equation; see formula (8):
[Formula (8), shown as an image in the original, gives the optimization equation of the first point cloud over R and t.]
Here, the external parameters of the first point cloud data can be represented by a rotation R and a displacement t respectively, with R ∈ SO(3) and t ∈ R^3, where SO(3) is the three-dimensional special orthogonal group and R^3 is an array of three degrees of freedom; x_i is a pixel point value on the image, including the image coordinates, and X_i is the point cloud position information value mapped into space, including the spatial coordinates; X_i embodies the internal reference data of the first point cloud data. f is a mapping function, specifically expressed as formula (9), where c_x, c_y and c_z represent the shift of the optical center and b represents the baseline;
[Formula (9), shown as an image in the original, specifies the mapping function f.]
Here, the pre-calibrated X_i is substituted into formula (8), and the initial values of R and t corresponding to the external parameter data of the first point cloud data are calculated for the minimum error between the image pixel points and the point cloud mapped into space. The minimum error is then adjusted to lie within the error threshold range, which yields the external parameter data correction result corresponding to R and t of the first point cloud data and the internal reference data correction result of the corrected first point cloud data, the internal reference data being represented by the point cloud position information value X_i mapped into space.
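Likewise, as a sketch under assumptions (formula (9)'s mapping f is an image in the original, so a standard pinhole projection with optical-center shift is substituted; all names are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_camera_pose(X, x_px, fx, fy, cx, cy):
    """Solve for R, t minimizing the pixel reprojection error of the
    pre-calibrated space points X (N x 3) against the observed image
    pixels x_px (N x 2), in the spirit of formula (8)."""
    def residuals(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        cam = X @ R.T + p[3:]                  # space points in the camera frame
        u = fx * cam[:, 0] / cam[:, 2] + cx    # assumed pinhole mapping f
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.concatenate([u - x_px[:, 0], v - x_px[:, 1]])

    sol = least_squares(residuals, np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```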
Further, step S140 includes: adjusting the external parameter data of the second point cloud data (the laser radar point cloud) according to the external parameter data correction result to obtain the corrected first point cloud data and/or second point cloud data;
or, alternatively,
and adjusting the external parameter data of the second point cloud data, the internal parameter data of the first point cloud data and the external parameter data of the first point cloud data according to the internal parameter data correction result of the first point cloud data, the external parameter data correction result of the first point cloud data and the external parameter data correction result of the second point cloud data to obtain the corrected first point cloud data and/or second point cloud data.
Here, the embodiment of the present invention may, according to the correction result, adjust the corresponding first point cloud internal parameter data, first point cloud external parameter data and second point cloud external parameter data, or adjust only the second point cloud external parameter data, to obtain the corrected first point cloud data, the corrected second point cloud data, or both; which of these three situations applies is determined by the point cloud required in the practical application.
Further, the point cloud correction method provided by the embodiment of the present invention further includes:
step S610, acquiring the corrected first point cloud data and the corrected second point cloud data from the adjusted vision system and the adjusted sensing system respectively, and then obtaining the corrected third point cloud data of the overlapping area of the corrected first point cloud data and the corrected second point cloud data;
step S620, acquiring a projection error of the corrected third point cloud data, and comparing the projection error with a threshold value;
step S630, projecting the first point cloud data acquired from the adjusted vision system and the second point cloud data acquired from the sensing system in the same coordinate system;
here, as described in the above embodiments, the coordinate system of the vision system or the sensing system can be selected as the reference coordinate system, and the two point cloud data can be projected in the reference coordinate system.
And step S640, when the projection error is larger than the threshold value, acquiring a new external parameter data correction result, and adjusting the external parameter data of the corrected second point cloud data, or the internal parameter data and the external parameter data of the first point cloud data and the external parameter data of the second point cloud data again until the projection error is smaller than the threshold value.
Further, as an optional implementation manner, as shown in fig. 2, after step S140, the point cloud correction method provided in the embodiment of the present invention further includes:
step S710, projecting the first point cloud data acquired from the adjusted visual system and the second point cloud data acquired from the sensing system in the same coordinate system;
step S720, obtaining the error of the projection result, and comparing the error with a threshold value;
it should be noted that, following the above method for calculating the external parameter data correction result of the second point cloud, a correction result is calculated at each level of sampling resolution and the external parameter data of the laser radar are corrected accordingly; the corrected first point cloud data and the second point cloud data obtained from the sensing system are projected in the same coordinate system, and this projection is compared with the projection using the original external parameters to obtain the error of the projection result.
Step S730, judging whether the error is larger than a threshold value;
step S740, when the error is larger than the threshold value, acquiring a new external parameter data correction result, and readjusting the external parameter data of the sensing system until the error is smaller than the threshold value;
and step S750, finishing the optimization and correction process when the error is smaller than the threshold value.
Here, because the calculation proceeds in order of sampling resolution from large to small, if the projection error obtained with the external parameter data correction result at the current sampling resolution is larger than the threshold, the correction result is recalculated at the next, smaller sampling resolution and the resulting error is compared with the threshold, until the error is smaller than the threshold. If the projection error obtained with the correction result at the current sampling resolution is smaller than the threshold, the external parameter data correction result calculated at the current level of sampling resolution is optimal and the calculation at the next level is unnecessary.
Because a prior external parameter calibration result exists, the LPCC and the CPC can be projected together into the same coordinate system. After the external parameter data of the laser radar are corrected according to the external parameter data correction result, the projection error of the LPCC and the CPC is obtained, an error threshold η is set, and whether the projection error is smaller than the error threshold is judged. If so, the optimization is complete; if not, sampling is performed again, the external parameter data correction result is recalculated, and the external parameter data are optimized again according to it until the projection error is smaller than the error threshold, after which the adjusted original point cloud data are fused and constructed.
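The verification loop just described might be organized as follows (a control-flow sketch only; `solve_extrinsic` and `project_error` are hypothetical stand-ins for the level-wise correction and the LPCC/CPC projection-error computation above):

```python
def optimize_until_converged(levels, eta, solve_extrinsic, project_error, T):
    """Walk the sampling levels from coarsest to finest, re-solving the lidar
    external parameter until the projection error drops below eta."""
    for n in levels:                    # levels ordered coarse -> fine
        T = solve_extrinsic(n, T)       # external parameter correction at this level
        if project_error(T) < eta:      # compare projection error to the threshold
            break                       # current level's result is already optimal
    return T
```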
After the point cloud correction method provided by the embodiment of the present invention has been completed once, the LPC and CPC of each subsequent frame continue to use the optimized external parameter T; only when the sensor configuration changes, and at each system self-start verification, do the external parameter data need to be adjusted and corrected again, which requires only a small amount of point cloud data.
Meanwhile, the embodiment of the present invention is suitable for point cloud correction under multi-sensor fusion and for radar and visual point cloud generation schemes of different models, giving it wide adaptability. It can significantly reduce the difficulty of multi-sensor joint calibration, is suitable for the joint calibration of various radar and visual sensor schemes, reduces the calibration cost, and improves the calibration accuracy online; moreover, the online calibration result can be reused by subsequent processes, reducing the calculation time of secondary calibration and the overall operating overhead.
Further, fig. 3 shows a schematic block diagram of the electronic device 100 for implementing the point cloud correction method according to the embodiment of the present invention. In this embodiment, the electronic device 100 may be, but is not limited to, a personal computer (PC), a notebook computer, a monitoring device, a server, or another computer device with point cloud analysis and processing capabilities.
The electronic device 100 further includes a point cloud correction apparatus 200, a storage medium 110, and a processor 120. In a preferred embodiment of the present invention, the point cloud correction apparatus 200 includes at least one software functional module, which can be stored in the storage medium 110 in the form of software or firmware or solidified in the operating system (OS) of the electronic device 100. The processor 120 is configured to execute the executable software modules stored in the storage medium 110, such as the software functional modules and computer programs included in the point cloud correction apparatus 200. In this embodiment, the point cloud correction apparatus 200 may also be integrated into the operating system as a part of it. Specifically, the point cloud correction apparatus 200 includes:
an obtaining module 210, configured to obtain first point cloud data and second point cloud data at the same time;
a transformation module 220, configured to map the first point cloud data and the second point cloud data to a same coordinate system, where an overlapping area of the first point cloud data and the second point cloud data is third point cloud data;
the calculating module 230 is used for constructing a multi-stage error equation according to the third point cloud data and calculating to obtain a correction result;
and the correcting module 240 is configured to obtain the corrected first point cloud data and/or the corrected second point cloud data according to the correction result.
The point cloud correction device provided by the embodiment of the invention has the same technical characteristics as the point cloud correction method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The computer program product of the point cloud correction method and apparatus provided in the embodiments of the present invention includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, and details are not described here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; a mechanical or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The embodiment of the invention also provides an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can be run on the processor, wherein the processor realizes the steps of the point cloud correction method under the multi-sensor fusion provided by the embodiment when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the point cloud correction method under multi-sensor fusion of the embodiment are executed.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein.

Claims (6)

1. A point cloud correction method is characterized by comprising the following steps:
acquiring first point cloud data and second point cloud data at the same time;
mapping the first point cloud data and the second point cloud data to the same coordinate system, wherein the overlapping area of the first point cloud data and the second point cloud data is third point cloud data;
constructing a multi-stage error equation according to the third point cloud data, and calculating a correction result;
obtaining corrected first point cloud data and/or corrected second point cloud data according to the correction result;
the correction result comprises an external parameter data correction result of the second point cloud data, or an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data;
the mapping the first point cloud data and the second point cloud data to the same coordinate system, wherein an overlapping area of the first point cloud data and the second point cloud data is third point cloud data, and the mapping comprises:
obtaining a point cloud conversion relation between the first point cloud data and the second point cloud data according to external reference data in the second point cloud data;
obtaining an overlapping area of the first point cloud data and the second point cloud data according to the point cloud transformation relation and internal reference data in the first point cloud data, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data;
the constructing a multi-stage error equation according to the third point cloud data, and calculating a correction result includes:
performing up-sampling operation on the third point cloud data to obtain a point cloud weighted average value;
constructing an error equation through the point cloud weighted average, calculating to obtain a multi-level error, and calculating to obtain an external parameter data correction result of the second point cloud data according to the multi-level error, or obtaining an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data;
and performing up-sampling operation on the third point cloud data according to the sequence of sampling resolution from large to small step by step to obtain a point cloud weighted average value.
2. The point cloud correction method according to claim 1, wherein the internal reference data includes a field angle, a distortion factor, a focal length, and an aperture center, and the obtaining an overlapping area of the first point cloud data and the second point cloud data according to the internal reference data in the first point cloud data and the point cloud transformation relationship, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data includes:
obtaining a length-width range limit value of the second point cloud data in a field angle of a visual system according to the field angle in the first point cloud data and the point cloud transformation relation;
and obtaining third point cloud data which accord with the field angle length and width range according to the length and width range limit value.
3. The point cloud correction method according to claim 1, wherein the obtaining of the corrected first point cloud data and/or the corrected second point cloud data according to the correction result includes:
adjusting the external parameter data of the second point cloud data according to the external parameter data correction result of the second point cloud data to obtain corrected first point cloud data and/or second point cloud data;
or, alternatively,
and adjusting the external parameter data of the second point cloud data, the internal parameter data of the first point cloud data and the external parameter data of the first point cloud data according to the internal parameter data correction result of the first point cloud data, the external parameter data correction result of the first point cloud data and the external parameter data correction result of the second point cloud data to obtain the corrected first point cloud data and/or second point cloud data.
4. The point cloud correction method of claim 3, further comprising:
respectively acquiring the corrected first point cloud data and the corrected second point cloud data from the adjusted vision system and the adjusted sensing system;
obtaining corrected third point cloud data according to the corrected first point cloud data and the corrected second point cloud data;
acquiring a projection error of the corrected third point cloud data, and comparing the projection error with a threshold value;
and when the projection error is larger than the threshold value, acquiring a new external parameter data correction result, and adjusting the external parameter data of the corrected second point cloud data again until the projection error is smaller than the threshold value.
5. A point cloud correction apparatus, comprising:
the acquisition module is used for acquiring first point cloud data and second point cloud data at the same time;
the transformation module is used for mapping the first point cloud data and the second point cloud data to the same coordinate system, and the overlapping area of the first point cloud data and the second point cloud data is third point cloud data;
the calculation module is used for constructing a multistage error equation according to the third point cloud data and calculating a correction result;
the transformation module, configured to map the first point cloud data and the second point cloud data to the same coordinate system with the overlapping area of the first point cloud data and the second point cloud data being the third point cloud data, is further configured to:
obtaining a point cloud conversion relation between the first point cloud data and the second point cloud data according to external reference data in the second point cloud data;
obtaining an overlapping area of the first point cloud data and the second point cloud data according to the point cloud transformation relation and internal reference data in the first point cloud data, and obtaining third point cloud data according to the overlapping area of the first point cloud data and the second point cloud data;
the calculation module is used for calculating a correction result according to the following steps:
performing up-sampling operation on the third point cloud data to obtain a point cloud weighted average value;
constructing an error equation through the point cloud weighted average, calculating to obtain a multi-level error, and calculating to obtain an external parameter data correction result of the second point cloud data according to the multi-level error, or obtaining an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data; carrying out up-sampling operation on the third point cloud data according to the sequence of the sampling resolution from large to small step by step to obtain a point cloud weighted average value;
the correction result comprises an external parameter data correction result of the second point cloud data, or an internal parameter data correction result of the first point cloud data, an external parameter data correction result of the first point cloud data and an external parameter data correction result of the second point cloud data;
and the correction module is used for obtaining the corrected first point cloud data and/or the corrected second point cloud data according to the correction result.
6. A readable storage medium, characterized in that a computer program is stored therein, which when executed implements the point cloud correction method of any one of claims 1-4.
CN201910210622.XA 2019-03-20 2019-03-20 Point cloud correction method and device and readable storage medium Active CN109919893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910210622.XA CN109919893B (en) 2019-03-20 2019-03-20 Point cloud correction method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910210622.XA CN109919893B (en) 2019-03-20 2019-03-20 Point cloud correction method and device and readable storage medium

Publications (2)

Publication Number Publication Date
CN109919893A CN109919893A (en) 2019-06-21
CN109919893B true CN109919893B (en) 2021-04-23

Family

ID=66965701

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910210622.XA Active CN109919893B (en) 2019-03-20 2019-03-20 Point cloud correction method and device and readable storage medium

Country Status (1)

Country Link
CN (1) CN109919893B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298891A * 2019-06-25 2019-10-01 北京智行者科技有限公司 Method and device for automatically assessing camera extrinsic parameter accuracy
CN110322519B (en) * 2019-07-18 2023-03-31 天津大学 Calibration device and calibration method for combined calibration of laser radar and camera
CN111427028B (en) * 2020-03-20 2022-03-25 新石器慧通(北京)科技有限公司 Parameter monitoring method, device, equipment and storage medium
CN112578356B (en) * 2020-12-25 2024-05-17 上海商汤临港智能科技有限公司 External parameter calibration method and device, computer equipment and storage medium
CN113486795A (en) * 2021-07-06 2021-10-08 广州小鹏自动驾驶科技有限公司 Visual identification performance test method, device, system and equipment
CN114419075B (en) * 2022-03-28 2022-06-24 天津云圣智能科技有限责任公司 Point cloud cutting method and device and terminal equipment
CN114549608B (en) * 2022-04-22 2022-10-18 季华实验室 Point cloud fusion method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107796397A * 2017-09-14 2018-03-13 杭州迦智科技有限公司 Robot binocular vision localization method, device and storage medium
CN107886477A * 2017-09-20 2018-04-06 武汉环宇智行科技有限公司 Fusion correction method for stereo vision and low-beam laser radar in unmanned driving
CN109270534A * 2018-05-07 2019-01-25 西安交通大学 Online calibration method for an intelligent vehicle laser sensor and camera

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9305364B2 (en) * 2013-02-19 2016-04-05 Caterpillar Inc. Motion estimation systems and methods
US10866101B2 (en) * 2017-06-13 2020-12-15 Tusimple, Inc. Sensor calibration and time system for ground truth static scene sparse flow generation
CN107703499B (en) * 2017-08-22 2020-11-24 北京航空航天大学 Point cloud error correction method based on self-made foundation laser radar alignment error

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107796397A * 2017-09-14 2018-03-13 杭州迦智科技有限公司 Robot binocular vision localization method, device and storage medium
CN107886477A * 2017-09-20 2018-04-06 武汉环宇智行科技有限公司 Fusion correction method for stereo vision and low-beam laser radar in unmanned driving
CN109270534A * 2018-05-07 2019-01-25 西安交通大学 Online calibration method for an intelligent vehicle laser sensor and camera

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
35.1: Distinguished Paper: Auto-Calibration for Screen Correction and Point Cloud Generation; Deglint J, Cameron A, Scharfenberger C; SID Symposium Digest of Technical Papers; 2015-12-15; pp. 1107-1115 *
Motion detection and misalignment correction under laser-vision fusion; Zhang Qiang, Zhao Jianghai, Yuan Yawei; Opto-Electronic Engineering (光电工程); 2017-12-15; pp. 507-510 *

Also Published As

Publication number Publication date
CN109919893A (en) 2019-06-21

Similar Documents

Publication Publication Date Title
CN109919893B (en) Point cloud correction method and device and readable storage medium
CN106780590B (en) Method and system for acquiring depth map
WO2021098448A1 (en) Sensor calibration method and device, storage medium, calibration system, and program product
CN113409391B (en) Visual positioning method and related device, equipment and storage medium
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
CN112985360B (en) Lane line-based binocular ranging correction method, device, equipment and storage medium
CN113327296B (en) Laser radar and camera online combined calibration method based on depth weighting
CN111932637B (en) Vehicle body camera external parameter self-adaptive calibration method and device
WO2022183685A1 (en) Target detection method, electronic medium and computer storage medium
CN110940312A (en) Monocular camera ranging method and system combined with laser equipment
CN113240734B (en) Vehicle cross-position judging method, device, equipment and medium based on aerial view
CN114078093A (en) Image correction method, intelligent terminal and storage medium
CN111340737A (en) Image rectification method, device and electronic system
CN112946609A (en) Calibration method, device and equipment for laser radar and camera and readable storage medium
CN115272452A (en) Target detection positioning method and device, unmanned aerial vehicle and storage medium
CN113015884B (en) Image processing apparatus and image processing method
CN118230231A (en) Pose construction method and device of unmanned vehicle, electronic equipment and storage medium
CN113807182B (en) Method, device, medium and electronic equipment for processing point cloud
CN112529011B (en) Target detection method and related device
CN115937325B (en) Vehicle-end camera calibration method combined with millimeter wave radar information
CN114091562A (en) Multi-sensing data fusion method, device, system, equipment and storage medium
CN112116644A (en) Vision-based obstacle detection method and device and obstacle distance calculation method and device
CN117537815A (en) Aircraft positioning method based on three-dimensional terrain matching-inertial navigation-speed measurement combination
CN115908551A (en) Vehicle distance measuring method and device, electronic equipment and storage medium
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220325

Address after: 430090 No. b1336, chuanggu startup area, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: 430000 no.c101, chuanggu start up area, taizihu cultural Digital Industrial Park, No.18 Shenlong Avenue, Wuhan Economic and Technological Development Zone, Hubei Province

Patentee before: HUBEI ECARX TECHNOLOGY Co.,Ltd.