CN111640158A - End-to-end camera based on corresponding mask and laser radar external reference calibration method - Google Patents

End-to-end camera based on corresponding mask and laser radar external reference calibration method Download PDF

Info

Publication number
CN111640158A
CN111640158A CN202010532027.0A CN202010532027A CN111640158A CN 111640158 A CN111640158 A CN 111640158A CN 202010532027 A CN202010532027 A CN 202010532027A CN 111640158 A CN111640158 A CN 111640158A
Authority
CN
China
Prior art keywords
point cloud
checkerboard
point
points
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010532027.0A
Other languages
Chinese (zh)
Other versions
CN111640158B (en
Inventor
尹露
罗斌
王伟
赵青
王晨捷
李成源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Binguo Technology Co ltd
Original Assignee
Wuhan Binguo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Binguo Technology Co ltd filed Critical Wuhan Binguo Technology Co ltd
Priority to CN202010532027.0A priority Critical patent/CN111640158B/en
Publication of CN111640158A publication Critical patent/CN111640158A/en
Application granted granted Critical
Publication of CN111640158B publication Critical patent/CN111640158B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • G06T3/604Rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses an end-to-end camera and laser radar external reference calibration method based on a corresponding mask, which comprises the following steps: acquiring point cloud and image sequence data; extracting angular points of the checkerboards in the image, and estimating a checkerboard mask by combining the inherent parameters of the checkerboards, thereby generating Euclidean distance transformation of the checkerboard mask; removing ground points and background points from the point cloud data, performing point cloud segmentation, and extracting checkerboard point clouds; constructing an energy function of backward projection on a distance transformation field for the distance transformation of the checkerboard 2D mask and the checkerboard point cloud; and obtaining an external reference initial value which meets the requirement of projecting the 3D point cloud to an image plane area through a genetic algorithm, and further optimizing the external reference initial value through an LM method. The invention provides a corresponding relation between a mask in an image and a point cluster in laser point cloud, which is clear as a whole and has the characteristic of fuzzy details, thereby greatly improving the noise resistance and robustness, simplifying the external reference calibration process and achieving high precision.

Description

End-to-end camera based on corresponding mask and laser radar external reference calibration method
Technical Field
The invention relates to a method for calibrating external parameters of a camera and a laser radar, and belongs to the field of multi-sensor fusion sensing. In particular to an end-to-end camera and laser radar external reference calibration method based on a corresponding mask.
Background
The camera sensor has the advantages of low cost, rich texture semantic information, high frame rate and the like, but is easily limited by illumination conditions and is difficult to recover accurate geometric information. The lidar sensor has objective geometric measurement accuracy and is basically not limited by illumination conditions, but the lidar sensor has low spatial resolution, difficult extraction of semantic information and low frame rate. The sensor with complementary characteristics is combined for application, and a great deal of research attention is attracted and good results are obtained in the fields of current 3D reconstruction, augmented reality and virtual reality, unmanned 3D target detection, multi-level high-precision map construction and positioning and the like. With the continuous development and landing of the unmanned technology, low-cost laser radars are continuously researched and developed, and the possibility is provided for solving the problem that mass-production vehicles use laser and vision fusion. On the other hand, with the development of deep learning, research issues such as monocular depth perception and the like have been achieved, and this task depends on a large amount of image samples with depth information labels, and the labels of these samples need to be marked by laser radar. Obtaining accurate external reference relation between the camera and the laser radar is a precondition and a key for realizing fusion application of vision and laser point cloud, so that the realization of accurate and robust calibration of the camera and the laser radar is of great importance.
The precise external parameters cannot be obtained only by measuring the installation position of the sensor, and the image and laser point cloud data which are obtained by the sensor and have the accuracy meeting the application requirements are obtained. The process of external reference calibration by means of images and corresponding laser point clouds is actually a process of finding a transformation matrix for mutual transformation between two coordinate systems. The laser point cloud and the image data have large difference in semantic, resolution and geometric relations, and the process of solving the transformation relation by using the multimode data is substantially a process of finding a measure to make up the gaps. One of the ideas is to find a set of 3D-3D or 3D-2D feature pairs, which may be points, line segments or planar objects, and this approach often requires the use of special scenes and markers. Another method is to find the correlation information between the image and the laser point cloud, for example, using the image luminosity and the reflectivity of the laser radar, the image luminosity discontinuity and the depth discontinuity of the laser radar, etc., and the method generally does not depend on a calibration object but only needs a target in a natural scene, so that the online calibration is often possible. The method using the correlation information has uncertainty, calibration accuracy depends on scenes seriously, and initial parameters in a relatively large range cannot be converged to correct values, so that the method cannot be applied to scenes such as data acquisition vehicles, mass production vehicles of vehicles and factories which need robustness, high accuracy and automation at present. The method studied by the invention is based on accurate robust calibration of a calibration object.
The method based on calibration objects usually needs to design various calibration plates, calibration balls, calibration holes or other marks and other experimental devices, most of extracted 3D-3D corresponding points, 3D-3D corresponding normal or 3D-2D corresponding points with deterministic relationships need to be obtained, the method often needs to limit experimental settings or manually intervene because of the corresponding relationships, the optimization precision depends on the accurate extraction of the 3D-3D or 3D-2D corresponding points and lines, the method is extremely easy to be interfered by noise, and robustness can be lost along with the deterioration of experimental conditions. Based on the consideration, the invention seeks a corresponding relation between the mask in the image and the point cluster in the laser point cloud, the corresponding relation is clear on the whole, but the details are fuzzy, the characteristic greatly improves the noise resistance and robustness of the system, simplifies the experimental setting and steps and achieves high precision.
The method is closer to the method of the point pair of the 3D-2D based on the checkerboard calibration plate, but the method utilizes the checkerboard to generate the deterministic corresponding relation, and when the environment is complex or the noise becomes large, the correct corresponding point relation or even the checkerboard point cloud can not be extracted. The method close to the method of the invention also comprises the external reference calibration of the natural scene based on the contour registration, but the method depends on the natural scene with great uncertainty, and the extraction precision of the natural scene image and the laser point cloud contour seriously influences the result of the final external reference calibration, which brings uncertainty to the calibration. The extrinsic parameter calibration method provided by the invention uses the object-level 3D-2D corresponding relation between the checkerboard point cloud cluster and the checkerboard mask area, and designs an effective method for extracting the checkerboard point cloud, so that the accuracy, stability and noise resistance of the algorithm can be ensured, and the dependence on the environment is reduced.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an end-to-end camera and laser radar external reference calibration method based on a corresponding mask so as to solve the problems in the technical background.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the end-to-end camera and laser radar external reference calibration method based on the corresponding mask comprises the following steps:
acquiring point cloud data obtained through a laser radar and image sequence data obtained through a camera;
step two, estimating a checkerboard mask by extracting angular points of the checkerboard in the image obtained by the camera in the step one and combining inherent parameters of the checkerboard, and then further generating Euclidean distance transformation of the checkerboard mask according to the checkerboard mask to obtain 2D (two-dimensional) mask distance transformation of the checkerboard;
thirdly, ground points of the target point cloud and the background point cloud obtained by the laser radar in the first step are removed, and then the background points are removed in a point cloud difference mode; performing KNN nearest neighbor search matching on the point cloud pair after the ground points are removed based on a KD tree, thereby removing background point clouds;
step four, removing the background point cloud and the rest point cloud of the ground point cloud in the step three, carrying out point cloud segmentation based on the depth continuity measure of the distance map, and segmenting the point cloud cluster with spatial correlation; then, respectively carrying out plane extraction based on RANSAC on each divided point cloud cluster, giving out corresponding confidence score containing plane significance, and taking the point cloud cluster with the highest confidence score as a checkerboard candidate point cloud cluster; secondly, generating a distance map of the point cloud cluster according to the extracted plane equation for the candidate point cloud cluster of the checkerboard, and binarizing the distance map and deleting discrete small connected areas to eliminate external points which are located on a plane and do not belong to the checkerboard to obtain the checkerboard point cloud, namely the 3D laser radar point cloud of the checkerboard;
step five, constructing a loss function of an energy function of back projection on a distance conversion field for the checkerboard 2D mask distance conversion obtained in the step two and the checkerboard point cloud obtained in the step four, and establishing a corresponding relation between a 3D point set and a 2D area;
and step six, obtaining an external parameter initial value which meets the requirement of projecting the 3D point cloud to the image plane area based on the corresponding relation between the 3D point set and the 2D area established in the step five and through a genetic algorithm, and then further optimizing the external parameter initial value through an LM method to obtain accurate external parameters.
In the above technical solution, in the second step, the specific estimation method of the checkerboard mask is as follows: after obtaining the angular points of the checkerboard in the image obtained by the camera, calculating the image plane coordinates of the vertexes of the peripheral frames of the checkerboard according to the image plane (u, v) coordinate values of the angular points and the inherent parameters of the checkerboard, and obtaining a binary mask image of the checkerboard;
the UV coordinate of the top point of the peripheral frame of the checkerboard is represented by the following formula:
Figure BDA0002535661150000031
when the checkerboard has no outer frame,
Figure BDA0002535661150000032
the uv coordinates of (a) can be simplified to the following formula:
Figure BDA0002535661150000033
in the formulas (1) and (2),
Figure BDA0002535661150000034
are respectively points
Figure BDA0002535661150000035
P1、P1+NAnd P2(ii) image plane UV coordinates; p1、P1+N、P2、P2+NRespectively are the checkerboard angular points automatically extracted near the top left corner vertex, wherein N is the number of angular points in the direction Y of the checkerboard coordinate system, Delta s is the side length of the checkerboard square,
Figure BDA0002535661150000036
is the top point of the outer frame at the upper left corner of the checkerboard to be solved,
Figure BDA0002535661150000037
is the vertex
Figure BDA0002535661150000038
Offset in the X-axis direction from the top left corner vertex of the checkerboard pattern,
Figure BDA0002535661150000039
is the vertex
Figure BDA00025356611500000310
Offset in the Y-axis direction from the top left corner vertex of the checkerboard pattern.
In the above calculation scheme, in the third step, the specific method for ground point elimination is as follows: ordering the disordered laser point clouds to generate a distance map, and judging ground points on each row of the distance map;
the angle between the horizontal plane and the vector formed by two points in adjacent rows in the same column is used to determine whether the point is a ground point, and the determination is as follows:
Figure BDA00025356611500000311
set in formula (3)
Figure BDA00025356611500000312
Is a set of ground points, ΘKIs PKAnd PK+1Angle between the generated vector and the horizontal plane, ΘTThe threshold value for ground point judgment is generally 2 degrees in an experiment; pK-1、PK、PK+1Is a laser point cloud on the same column.
In the above technical solution, in the third step, the interference of the background point is removed by means of point cloud difference, and the specific method includes: constructing a KD tree of background point clouds for the point cloud pairs after the ground points are removed, searching a point pair which is closest to the original point cloud in the target point cloud by using a KNN (K nearest neighbor) method based on the KD tree, and filtering out the point cloud which is considered as the newly added point cloud and has the nearest point pair distance larger than a threshold value, namely removing the background points;
the background points are removed in a point cloud difference mode by adopting the following formula:
Figure BDA0002535661150000041
in the formula (4)
Figure BDA0002535661150000042
For a new set of points, P, against the backgroundTIs a point in the target point cloud and,
Figure BDA0002535661150000043
for the searched and PTBackground of nearest distanceThe points in the point cloud are selected from a group of points,
Figure BDA0002535661150000044
the point set of ground points is removed for the target frame,
Figure BDA0002535661150000045
the point set of ground points is removed for the background point cloud,
Figure BDA0002535661150000046
representing the Euclidean distance difference of all matched point pairs, DTThe standard deviation of Euclidean distance is 3 times of the standard deviation of all the matched points.
In the technical scheme, in the fourth step, the confidence score of each divided point cloud cluster is judged based on a plane, and the point cloud cluster with the highest confidence score is determined as a checkerboard candidate point cloud cluster; specifically, the confidence score determination is performed through the following formula to filter out the point clusters that do not contain the checkerboard point cloud:
Figure BDA0002535661150000047
calculating a confidence score for each candidate point cloud cluster by equation (5)clusterWherein N isinliersNumber of in-plane points of current point cloud cluster, NouterliersNumber of outer points, NclusterThe number of all the point clouds in the current cluster,
Figure BDA0002535661150000048
the number of all point clouds in the ith point cloud cluster,
Figure BDA0002535661150000049
for the total number of points of all point cloud clusters, α and β are the corresponding coefficients, α is set to 0.3 and β is set to 0.7, respectively.
In the technical scheme, in the fifth step, based on the distance transformation of the checkered 2D mask obtained in the second step and the 3D laser radar point cloud of the checkered obtained in the fourth step, the laser point cloud is projected into a distance transformation field through backward propagation, a loss function based on an energy function of the distance transformation field is constructed, and the accurate external parameter transformation of the laser radar relative to the camera is obtained by minimizing the energy function;
the specific process of the laser radar relative to the external parameter transformation of the camera is as follows:
(6-1) setting the current checkerboard laser point cloud cluster as Q, and aiming at each laser point QiThen, then
Qi=[XiYiZi]T,Qi∈Q; (6)
Each laser point QiProjected onto the image plane to obtain a point PiThen, then
Figure BDA0002535661150000051
Pi∈P; (7)
Where pi () represents the transformation of the projection of the image space coordinates into the image plane coordinates, i.e. for the image space coordinate point Pc=[XcYcZc]TThere are the following transformations:
Figure BDA0002535661150000052
Figure BDA0002535661150000053
where g () is the camera distortion transformation, fx、fy、Cx、CyRespectively, the focal length in the x-axis direction, the focal length in the y-direction, the principal point in the x-direction and the principal point in the y-direction of the camera lens, k1、k2、k3Respectively, the radial distortion parameter, p, of the lens1、p2Respectively, are the parameters of the tangential distortion,
Figure BDA0002535661150000054
Pcpoints in the image space coordinate system, Xc、Yc、ZcIs PcCoordinate value of (2), XcgIs an X component, Y, in a homogeneous coordinate system in image spacecgIs the Y component in the homogeneous coordinate system of the image space;
(6-2)
Figure BDA0002535661150000055
representing the change of the lidar coordinate system to the camera coordinate system, which is an exponential mapping from SE (3) lie algebraic space to SE (3) lie group space, where
Figure BDA0002535661150000056
Se (3) lie algebra, which can be expressed as a rotation vector and a translation vector;
Figure BDA0002535661150000057
Figure BDA0002535661150000058
wherein R is a rotation matrix from a laser radar coordinate system to a camera coordinate system, and t is a translation matrix from the laser radar coordinate system to the camera coordinate system;
Figure BDA0002535661150000059
the rotation amount of se (3) space, rho is the translation amount of se (3) space;
(6-3) based on the formulas (6) to (11) and the definition of the energy loss function of the distance transformation field, the external parameter of the laser radar relative to the camera is obtained, and the external parameter is shown as follows:
Figure BDA00025356611500000510
formula (III) ξ*For the outer reference in optimized se (3) space, M is the number of observation samples, NjThe number of checkerboard points for the jth pair of observations,
Figure BDA0002535661150000061
checkerboard mask distance transform distribution for jth pair observation, WijWeight, Q, for the ith checkerboard point of the jth pair observationiIs the ith point in the checkerboard point cloud for the jth observation.
Compared with the prior art, the invention has the beneficial effects that:
1) in order to extract the checkerboard point cloud from a complex scene, complex measures such as planeness of the checkerboard point cloud, point cloud length and width, point cloud density and other indexes are often required to be set to filter out interference point clusters in the background in most calibration methods based on calibration objects such as checkerboards in the prior art.
2) In the prior art, most of the external parameter estimation methods based on 3D-2D points, lines and surfaces or 3D-3D points, lines and surfaces and other characteristics need to be accurately estimated to obtain 3D geometric objects such as vertex coordinates, line equations or surface equations in a laser point cloud space, the estimation of the parameters is obviously interfered by laser point cloud noise, and the laser radar with low cost is likely to fail; meanwhile, the 3D-based method generally needs to obtain 3D geometric information of a calibration object in a camera coordinate system space, the information depends on accurate camera internal parameters seriously, and when the internal parameters have certain distortion, the calibration precision is greatly limited. According to the method directly based on the corresponding mask, the geometric elements of a 3D space do not need to be extracted, end-to-end accurate external parameter estimation can be realized according to optimization of distance transformation, the tolerance of the accuracy of the external parameter estimation to noise is greatly increased, and the method is suitable for degradation of data quality of a low-cost sensor.
3) Compared with the prior method which needs high design requirements on the calibration plate and has a plurality of requirements on the attitude, distance, quantity and the like of the calibration data acquisition, the method can finish calibration only by a simple checkerboard or a plane plate and does not need to provide new requirements on the pose of the calibration plate.
4) The invention provides a new method for refining extraction of checkerboard point clouds, which can effectively separate targets which are in close contact with a checkerboard, such as an operator or a support rod, and the like, so that the calibration data acquisition is basically not required, the data acquisition difficulty is reduced, and the environmental adaptability is improved.
Drawings
FIG. 1 is a flow chart of end-to-end camera and lidar external reference calibration based on a corresponding mask in the present invention;
FIG. 2 is a schematic diagram of estimating coordinates of top left corner image planes of the outline of the checkerboard from coordinates of near top left corner image planes of the checkerboard;
FIG. 3a is a 7x7 binary mask image block with the area with a value of 1 being the mask area;
FIG. 3b is a diagram illustrating a corresponding pixel of a mask region closest to a current pixel in the image block of FIG. 3a in Euclidean distance; wherein the numerical value in each gray scale region represents a position index of a pixel in the corresponding mask region;
FIG. 3c is a distance transform map of a binary mask map; wherein the numerical table in each gray scale region is a corresponding distance value;
FIG. 3d is a grayscale visualization of the distance transformed field;
fig. 4 is a schematic diagram of ground point filtering.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
Referring to fig. 1, the invention provides an end-to-end camera and laser radar external reference calibration method based on a corresponding mask, which can be used for accurately calibrating multiple sensor external references when a data acquisition vehicle and an intelligent networked automobile mass production vehicle leave a factory, and comprises the following steps:
acquiring point cloud data obtained through a laser radar and image sequence data obtained through a camera;
step two, estimating a checkerboard mask by extracting angular points of the checkerboard in the image obtained by the camera in the step one and combining inherent parameters of the checkerboard, and then further generating Euclidean distance transformation of the checkerboard mask according to the checkerboard mask to obtain 2D (two-dimensional) mask distance transformation of the checkerboard;
the invention uses the existing method to extract the checkerboard angular points in the image. After obtaining the angular points of the checkerboard in the image obtained by the camera, calculating the image plane coordinates of the vertexes of the peripheral frames of the checkerboard according to the image plane (u, v) coordinate values of the angular points and the inherent parameters of the checkerboard, and obtaining a binary mask image of the checkerboard;
as shown in FIG. 2, a schematic diagram of the estimation of the coordinates of the top left corner image plane of the out-of-checkerboard border from the coordinates of the image plane of the corner points near the top left corner of the checkerboard is shown. In FIG. 2, P1、P1+N、P2、P2+NRespectively are the checkerboard angular points automatically extracted near the top left corner vertex, wherein N is the number of angular points in the direction Y of the checkerboard coordinate system, Delta s is the side length of the checkerboard square,
Figure BDA0002535661150000071
is the top point of the outer frame at the upper left corner of the checkerboard to be solved,
Figure BDA0002535661150000072
is the vertex
Figure BDA0002535661150000073
Offset in the X-axis direction from the top left corner vertex of the checkerboard pattern,
Figure BDA0002535661150000074
is the vertex
Figure BDA0002535661150000075
The offset from the top left corner of the checkerboard pattern in the Y-axis direction; the UV coordinates of the vertices of the checkerboard peripheral border, without distortion considered, are given by:
Figure BDA0002535661150000076
when the checkerboard has no outer frame,
Figure BDA0002535661150000081
the uv coordinates of (a) can be simplified to the following formula:
Figure BDA0002535661150000082
in the formulas (1) and (2),
Figure BDA0002535661150000083
are respectively points
Figure BDA0002535661150000084
P1、P1+NAnd P2(ii) image plane UV coordinates;
the establishment of the above relations (1) and (2) requires the assumption that the extraction of corner points of the squares satisfies the square constraint and does not consider the distortion of the checkerboard image. Experiments prove that the checkerboard image is not obviously distorted because the checkerboard cannot be positioned in a larger field of view in an observation sample. In addition, the extraction precision of the checkerboard corner points is in sub-pixel precision, and the square constraint can be basically met. Linear extrapolation based on the above equation is therefore feasible, and it turns out that the checkerboard mask extraction errors introduced by neglecting distortion and corner extraction noise are essentially negligible.
After the binary mask image of the checkerboard is obtained, Euclidean distance conversion is carried out on the binary mask image of the checkerboard, namely the distance of the pixel coordinate position in the checkerboard mask is 0, and the farther away from the checkerboard mask, the larger the distance value is.
As shown in fig. 3a to 3d, schematic diagrams of generating a distance transform for a binary mask map, wherein fig. 3a is a 7 × 7 binary mask image block, wherein the area with a value of 1 is a mask area; traversing each pixel in the image block of fig. 3a, finding out a corresponding pixel of a mask region closest to the current pixel in euclidean distance, as shown in fig. 3b, different gray scale regions all represent that the current region is closest to a pixel point in the same mask region, wherein a numerical value in each gray scale region represents a position index of the pixel in the corresponding mask region; calculating the distance value between the current pixel and the nearest mask area pixel to obtain the distance transformation of the binary mask image, as shown in fig. 3c, the numerical value table in each gray area is the corresponding distance value; fig. 3d is a grayscale visualization of the distance transformed field, the darker the grayscale, indicating the farther the current pixel is from the mask area.
The Euclidean distance mask distance field can measure the region pair very skillfully and has good local convexity. When only one checkerboard mask is present on each image, it is guaranteed that all observations have good convexity in the imaged area, which is important for fast convergence of the iteration.
Thirdly, ground points of the target point cloud and the background point cloud obtained by the laser radar in the first step are removed, and then the background points are removed in a point cloud difference mode; performing KNN nearest neighbor search matching on the point cloud pair after the ground points are removed based on a KD tree, thereby removing background point clouds;
in the third step, the ground points in the target point cloud and the background point cloud obtained by the laser radar are filtered to eliminate interference, and then the target point cloud and the background point cloud are differentiated to remove most of complex backgrounds. In the invention, most background points are removed by differentiating the target point cloud and the background point cloud. Due to the factors such as vibration of the vehicle platform, vibration generated by rotation of the laser radar, noise of the laser point cloud and the like, the background of the target point cloud cannot be completely overlapped with the background point cloud even if the vehicle platform is static. In order to filter out the background point cloud as much as possible, the point cloud recall threshold is relaxed, and the influence caused by the decision can ensure that the checkerboard points close to the ground are considered to be too close to the ground points and filtered out, so that the integrity of the checkerboard point cloud is damaged, and whether the point cloud extraction is complete or not has a heavy influence on the final external parameter estimation; therefore, the ground points need to be filtered first to eliminate the interference of the ground points.
The specific method for removing the ground points comprises the following steps: ordering the disordered laser point clouds obtained by the laser radar to generate a distance map, and judging ground points on each row of the distance map; FIG. 4 is a schematic diagram of ground point filtering, where PK-1、PK、PK+1The laser point clouds in the same row are equal, and the included angle between the vector formed by two points in adjacent rows in the same row and the horizontal plane is used as an important basis for judging whether the point is a ground point, and the judgment basis is shown as follows:
Figure BDA0002535661150000091
set in formula (3)
Figure BDA0002535661150000092
Is a set of ground points, ΘKIs PKAnd PK+1Angle between the generated vector and the horizontal plane, ΘTThe threshold value for ground point judgment is generally 2 degrees in an experiment;
in the third step, in order to prevent the complex background from interfering with the extraction of the checkerboard point cloud, the interference of the background points is removed in a point cloud difference mode. The specific method comprises the following steps: by constructing a KD tree of background point cloud, searching a point pair which is closest to the original point cloud in the target point cloud by using a KNN method, wherein the point pair with the distance of the nearest point pair being greater than a threshold value is considered as newly added point cloud and filtered out, namely the point cloud with the background points removed;
the background points are removed in a point cloud difference mode by adopting the following formula:
Figure BDA0002535661150000093
in the formula (4)
Figure BDA0002535661150000094
For a new set of points, P, against the backgroundTIs a point in the target point cloud and,
Figure BDA0002535661150000095
for the searched and PTThe closest point in the background point cloud,
Figure BDA0002535661150000096
the point set of ground points is removed for the target frame,
Figure BDA0002535661150000097
the point set of ground points is removed for the background point cloud,
Figure BDA0002535661150000098
representing the Euclidean distance difference of all matched point pairs, DTThe standard deviation of Euclidean distance is 3 times of the standard deviation of all the matched points.
There are a lot of outliers in the newly added point clouds against the background, and we must further cluster the point clouds with spatial correlation, so as to filter out the point clusters that do not contain the checkerboard point clouds.
Step four, carrying out point cloud segmentation based on depth continuity measure of a distance map on the residual point clouds, namely the serialized point clouds, from which the background point clouds and the ground point clouds are removed in the step three, and segmenting the point cloud clusters with spatial correlation; then, respectively carrying out plane extraction based on an RANSAC algorithm on each divided point cloud cluster, giving out corresponding confidence score containing plane significance, and taking the point cloud cluster with the highest confidence score as a checkerboard candidate point cloud cluster; secondly, generating a distance map of the point cloud cluster according to the extracted plane equation for the candidate point cloud cluster of the checkerboard, and carrying out binarization on the distance map and deleting discrete small connected areas to eliminate external points which are located on a plane and do not belong to the checkerboard, so as to obtain the checkerboard point cloud;
specifically, the confidence score determination is performed through the following formula to filter out the point clusters that do not contain the checkerboard point cloud:
Figure BDA0002535661150000101
calculating a confidence score for each candidate point cloud cluster by equation (5)clusterWherein N isinliersNumber of in-plane points of current point cloud cluster, NouterliersNumber of outer points, NclusterThe number of all the point clouds in the current cluster,
Figure BDA0002535661150000102
the number of all point clouds in the ith point cloud cluster,
Figure BDA0002535661150000103
α and β are corresponding coefficients for the total number of points in all the point cloud clusters, respectively, and in the present invention, α is set to 0.3 and β is set to 0.7.
According to the method, a distance map is generated according to the most significant plane equation of the candidate point cloud cluster of the checkerboard, binarization is performed on the distance map, external points are screened out according to connectivity, and finally accurate point clouds belonging to the checkerboard are obtained. The method for extracting the checkerboard point cloud has strong environmental adaptability and noise immunity, and can be conveniently expanded to a scene with a plurality of checkerboards in an observation scene.
When the chessboard is used, the head and part of the body of the operator are positioned on the same plane with the chessboard. In fact, the point cloud that the operator, supporting the bar equal to the checkerboard contact, will always have some outliers that are in the checkerboard plane but not in the checkerboard area. Considering the noise of the real point cloud, in order to extract the complete point cloud of the checkerboard, the flatness of the checkerboard plane is often relaxed, which further increases the possibility of misjudging the above-mentioned external points as the point cloud of the checkerboard, and these external points must be filtered out.
The method comprises the steps of projecting candidate checkerboard point cloud clusters to the most significant plane equation of the point cloud clusters, calculating the distance value of each point relative to a plane, ordering the distance values to generate a distance map, carrying out binarization on the distance map, carrying out operation on the binary map to eliminate discrete points and viscosity, further extracting the largest communicated area in the binary map, and finally screening the plane points in the communicated area into the checkerboard point cloud.
Fifthly, constructing an energy function loss function of the backward projection on the distance transformation field for the 2D mask distance transformation of the checkerboard obtained in the second step and the checkerboard point cloud obtained in the fourth step, and establishing a corresponding relation between a 3D point set and the 2D area;
specifically, the distance transformation of the checkerboard mask on the 2D image plane obtained in the step two and the 3D laser radar point cloud of the checkerboard obtained in the step four are projected into a distance transformation field through backward propagation, an energy function based on the distance transformation field is constructed, and the accurate external parameter transformation of the laser radar relative to the camera is obtained through minimizing the energy function;
the specific process of the laser radar relative to the external parameter transformation of the camera is as follows:
(6-1) setting the current checkerboard laser point cloud cluster as Q, and aiming at each laser point QiThen, then
Qi=[XiYiZi]T,Qi∈Q; (6)
Each laser point QiProjected onto the image plane to obtain a point PiThen, then
Figure BDA0002535661150000111
Pi∈P; (7)
Where pi () represents the transformation of the projection of the image space coordinates into the image plane coordinates, i.e. for the image space coordinate point Pc=[XcYcZc]TThere are the following transformations:
Figure BDA0002535661150000112
Figure BDA0002535661150000113
where g () is the camera distortion transformation, fx、fy、Cx、CyRespectively, the focal length in the x-axis direction, the focal length in the y-direction, the principal point in the x-direction and the principal point in the y-direction of the camera lens, k1、k2、k3、p1、p2Respectively the radial distortion and the tangential distortion of the lens,
Figure BDA0002535661150000114
Pcas points in the image space coordinate system, Xc、Yc、ZcIs PcCoordinate value of (2), XcgIs an X component, Y, in a homogeneous coordinate system in image spacecgIs the Y component in the homogeneous coordinate system of the image space;
(6-2)
Figure BDA0002535661150000115
representing the change of the lidar coordinate system to the camera coordinate system, which is an exponential mapping from SE (3) lie algebraic space to SE (3) lie group space, where
Figure BDA0002535661150000116
Se (3) lie algebra, which can be expressed as a rotation vector and a translation vector;
Figure BDA0002535661150000117
Figure BDA0002535661150000118
wherein R is a rotation matrix from a laser radar coordinate system to a camera coordinate system, and t is a translation matrix from the laser radar coordinate system to the camera coordinate system;
(6-3) based on the formulas (6) to (11) and the definition of the energy loss function of the distance transformation field, the external parameter of the laser radar relative to the camera is obtained, and the external parameter is shown as follows:
Figure BDA0002535661150000119
formula (III) ξ*For the outer reference in optimized se (3) space, M is the number of observation samples, NjThe number of checkerboard points for the jth pair of observations,
Figure BDA0002535661150000121
checkerboard mask distance transform distribution for jth pair observation, WijWeight, Q, for the ith checkerboard point of the jth pair observationiIs the ith point in the checkerboard point cloud for the jth observation.
Because the energy function has good convexity and an analytic Jacobian matrix is obtained, the LM method can be used for fast convergence and iteration to a true value.
And step six, obtaining an external parameter initial value which meets the requirement of projecting the 3D point cloud to the image plane area based on the corresponding relation between the 3D point set and the 2D area established in the step five and through a genetic algorithm, and then further optimizing the external parameter initial value through an LM method to obtain accurate external parameters.
The energy function in the imaging region is convex, the optimal value can be obtained through a gradient descent method, but a uniform large penalty is applied outside the imaging region, so that the problem of gradient disappearance can occur, and in order to enable the external reference calibration method provided by the invention to be optimized from any initial value and not depend on any a priori set information, a genetic algorithm is firstly used to obtain an approximately correct initial value before optimization. A small amount of non-checkerboard point noise existing in the checkerboard 3D point cloud in the observation sample has no great influence on the overall optimization precision due to the sample averaging effect, but in order to further improve the precision, the weight of the checkerboard points is dynamically adjusted in the optimization process, and the weight of the outliers is continuously reduced so as to eliminate the interference of the outliers on the precision.
When a data acquisition vehicle and an intelligent networked automobile mass production vehicle leave a factory, accurate calibration among multiple sensor external parameters is required, and the process always needs to be capable of robustly calibrating accurate external parameters and does not depend on specific environments. With the use of a large number of low-cost sensors, especially laser radars, the precision, robustness and usability of calibration between a camera and the laser radars will be greatly affected by the noise of the point cloud of the laser radars and the human interference in the data acquisition process, and in order to make up for the adaptability to a scene, manual intervention is often required or an experimental scene is limited in the existing large number of calibration methods, so that the difficulty and complexity of application are greatly increased. The method for calibrating the external parameters of the end-to-end camera and the laser radar based on the corresponding mask solves the external parameters, and can realize full-automatic accurate external parameter estimation; the corresponding relation between the mask in the image and the point cluster in the laser point cloud is found, the corresponding relation is clear on the whole, but the details are fuzzy, the characteristic greatly improves the noise resistance and robustness of the system, simplifies the experimental setting and steps and achieves high precision.
The above-mentioned embodiments only express the specific embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention.

Claims (6)

1. The end-to-end camera and laser radar external reference calibration method based on the corresponding mask is characterized by comprising the following steps of:
acquiring point cloud data obtained through a laser radar and image sequence data obtained through a camera;
step two, estimating a checkerboard mask by extracting angular points of the checkerboard in the image obtained by the camera in the step one and combining inherent parameters of the checkerboard, and then further generating Euclidean distance transformation of the checkerboard mask according to the checkerboard mask to obtain 2D (two-dimensional) mask distance transformation of the checkerboard;
thirdly, ground points of the target point cloud and the background point cloud obtained by the laser radar in the first step are removed, and then the background points are removed in a point cloud difference mode;
step four, removing the background point cloud and the rest point cloud of the ground point cloud in the step three, carrying out point cloud segmentation based on the depth continuity measure of the distance map, and segmenting the point cloud cluster with spatial correlation; then, respectively carrying out plane extraction based on RANSAC on each divided point cloud cluster, giving out corresponding confidence score containing plane significance, and taking the point cloud cluster with the highest confidence score as a checkerboard candidate point cloud cluster; secondly, generating a distance map of the point cloud cluster according to the extracted plane equation for the candidate point cloud cluster of the checkerboard, and carrying out binarization on the distance map and deleting discrete small connected areas to eliminate external points which are located on a plane and do not belong to the checkerboard, so as to obtain the checkerboard point cloud;
fifthly, constructing an energy loss function of the back projection on a distance transformation field for the 2D mask distance transformation of the checkerboard obtained in the second step and the checkerboard point cloud obtained in the fourth step, and establishing a corresponding relation between a 3D point set and a 2D area;
and step six, obtaining an external parameter initial value which meets the requirement of projecting the 3D point cloud to the image plane area based on the corresponding relation between the 3D point set and the 2D area established in the step five and through a genetic algorithm, and then further optimizing the external parameter initial value through an LM method to obtain accurate external parameters.
2. The end-to-end camera and lidar external reference calibration method based on the corresponding mask as claimed in claim 1, wherein in step two, the specific estimation method of the checkerboard mask is: after obtaining the angular points of the checkerboard in the image obtained by the camera, calculating the image plane coordinates of the vertexes of the peripheral frames of the checkerboard according to the image plane (u, v) coordinate values of the angular points and the inherent parameters of the checkerboard, and obtaining a binary mask image of the checkerboard;
the UV coordinate of the top point of the peripheral frame of the checkerboard is represented by the following formula:
Figure FDA0002535661140000011
when the checkerboard has no outer frame,
Figure FDA0002535661140000012
the uv coordinates of (a) can be simplified to the following formula:
Figure FDA0002535661140000013
in the formulas (1) and (2),
Figure FDA0002535661140000014
are respectively points
Figure FDA0002535661140000015
P1、P1+NAnd P2(ii) image plane UV coordinates; p1、P1+N、P2、P2+NRespectively are the checkerboard angular points automatically extracted near the top left corner vertex, wherein N is the number of angular points in the direction Y of the checkerboard coordinate system, Delta s is the side length of the checkerboard square,
Figure FDA0002535661140000021
is the top point of the outer frame at the upper left corner of the checkerboard to be solved,
Figure FDA0002535661140000022
is the vertex
Figure FDA0002535661140000023
Offset in the X-axis direction from the top left corner vertex of the checkerboard pattern,
Figure FDA0002535661140000024
is the vertex
Figure FDA0002535661140000025
Offset in the Y-axis direction from the top left corner vertex of the checkerboard pattern.
3. The end-to-end camera and lidar external reference calibration method based on the corresponding mask as claimed in claim 2, wherein in step three, ground point elimination is performed, and the specific method comprises: ordering the disordered laser point clouds to generate a distance map, and judging ground points on each row of the distance map;
the angle between the horizontal plane and the vector formed by two points in adjacent rows in the same column is used to determine whether the point is a ground point, and the determination is as follows:
Figure FDA0002535661140000026
set in formula (3)
Figure FDA0002535661140000027
Is a set of ground points, ΘKIs PKAnd PK+1Angle between the generated vector and the horizontal plane, ΘTThe threshold value for ground point judgment is generally 2 degrees in an experiment; pK-1、PK、PK+1Is a laser point cloud on the same column.
4. The end-to-end camera and lidar external reference calibration method based on the corresponding mask as claimed in claim 3, wherein in step three, background points are removed by means of point cloud differentiation, and the specific method is as follows: constructing a KD tree of background point clouds for the point cloud pairs after the ground points are removed, searching a point pair which is closest to the original point cloud in the target point cloud by using a KNN (K nearest neighbor) method based on the KD tree, and filtering out the point cloud which is considered as the newly added point cloud and has the nearest point pair distance larger than a threshold value, namely removing the background points;
the background points are removed in a point cloud difference mode by adopting the following formula:
Figure FDA0002535661140000028
in the formula (4)
Figure FDA0002535661140000029
For a new set of points, P, against the backgroundTIs a point in the target point cloud and,
Figure FDA00025356611400000210
for the searched and PTThe closest point in the background point cloud,
Figure FDA00025356611400000211
the point set of ground points is removed for the target frame,
Figure FDA00025356611400000212
the point set of ground points is removed for the background point cloud,
Figure FDA00025356611400000213
representing the Euclidean distance difference of all matched point pairs, DTThe standard deviation of the Euclidean distance is 3 times of that of all the matched points.
5. The end-to-end camera and lidar external reference calibration method based on the corresponding mask of claim 4, wherein in step four, a plane-based confidence score determination is performed on each of the segmented point cloud clusters, and the point cloud cluster with the highest confidence score is determined as a checkerboard candidate point cloud cluster; specifically, the confidence score determination is performed through the following formula to filter out the point clusters that do not contain the checkerboard point cloud:
Figure FDA0002535661140000031
calculating a confidence score for each candidate point cloud cluster by equation (5)clusterWherein N isinliersNumber of in-plane points of current point cloud cluster, NouterliersNumber of outer points, NclusterThe number of all the point clouds in the current cluster,
Figure FDA0002535661140000032
the number of all point clouds in the ith point cloud cluster,
Figure FDA0002535661140000033
for the total number of points of all point cloud clusters, α and β are the corresponding coefficients, α is set to 0.3 and β is set to 0.7, respectively.
6. The end-to-end camera and lidar extrinsic parameter calibration method based on corresponding masks according to claim 5, characterized in that in step five, a distance transformation of a checkered 2D mask and a checkered point cloud obtained in step four are obtained based on step two, the laser point cloud is projected into a distance transformation field through backward propagation, an energy loss function based on the distance transformation field is constructed, and an accurate extrinsic parameter transformation of the lidar relative to the camera is obtained by minimizing the energy function;
the specific process of the laser radar relative to the external parameter transformation of the camera is as follows:
(6-1) setting the current checkerboard laser point cloud cluster as Q, and aiming at each laser point QiThen, then
Qi=[XiYiZi]T,Qi∈Q; (6)
Each laser point QiProjected onto the image plane to obtain a point PiThen, then
Figure FDA0002535661140000034
Where pi () represents the transformation of the projection of the image space coordinates into the image plane coordinates, i.e. for the image space coordinate point Pc=[XcYcZc]TThere are the following transformations:
Figure FDA0002535661140000035
Figure FDA0002535661140000036
where g () is the camera distortion transformation, fx、fy、Cx、CyRespectively, the focal length in the x-axis direction, the focal length in the y-direction, the principal point in the x-direction and the principal point in the y-direction of the camera lens, k1、k2、k3Respectively, the radial distortion parameter, p, of the lens1、p2As a parameter of the tangential distortion,
Figure FDA0002535661140000041
Pcas points in the image space coordinate system, Xc、Yc、ZcIs a coordinate value of Pc; xcgIs an X component, Y, in a homogeneous coordinate system in image spacecgIs the Y component in the homogeneous coordinate system of the image space;
(6-2)
Figure FDA0002535661140000042
representing the change of the lidar coordinate system to the camera coordinate system, which is an exponential mapping from SE (3) lie algebraic space to SE (3) lie group space, where
Figure FDA0002535661140000043
Se (3) lie algebra, which can be expressed as a rotation vector and a translation vector;
Figure FDA0002535661140000044
Figure FDA0002535661140000045
wherein R is a rotation matrix from a laser radar coordinate system to a camera coordinate system, and t is a translation matrix from the laser radar coordinate system to the camera coordinate system;
Figure FDA0002535661140000046
is the amount of rotation of the se (3) space,rho is the translation amount of se (3) space;
(6-3) based on the formulas (6) to (11) and the definition of the energy loss function of the distance conversion field, as shown in the formula (12), the external parameter of the laser radar relative to the camera is obtained;
Figure FDA0002535661140000047
ξ in formula (12)*For the outer reference in optimized se (3) space, M is the number of observation samples, NjThe number of checkerboard points for the jth pair of observations,
Figure FDA0002535661140000048
checkerboard mask distance transform distribution for jth pair observation, WijWeight, Q, for the ith checkerboard point of the jth pair observationiIs the ith point in the checkerboard point cloud for the jth observation.
CN202010532027.0A 2020-06-11 2020-06-11 End-to-end camera and laser radar external parameter calibration method based on corresponding mask Active CN111640158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010532027.0A CN111640158B (en) 2020-06-11 2020-06-11 End-to-end camera and laser radar external parameter calibration method based on corresponding mask

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010532027.0A CN111640158B (en) 2020-06-11 2020-06-11 End-to-end camera and laser radar external parameter calibration method based on corresponding mask

Publications (2)

Publication Number Publication Date
CN111640158A true CN111640158A (en) 2020-09-08
CN111640158B CN111640158B (en) 2023-11-10

Family

ID=72330689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010532027.0A Active CN111640158B (en) 2020-06-11 2020-06-11 End-to-end camera and laser radar external parameter calibration method based on corresponding mask

Country Status (1)

Country Link
CN (1) CN111640158B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233187A (en) * 2020-10-20 2021-01-15 深圳无境智能机器人有限公司 Convenient and stable rgbd camera external parameter calibration method
CN112991372A (en) * 2021-04-21 2021-06-18 聚时科技(江苏)有限公司 2D-3D camera external parameter calibration method based on polygon matching
CN113256740A (en) * 2021-06-29 2021-08-13 湖北亿咖通科技有限公司 Calibration method of radar and camera, electronic device and storage medium
CN113298941A (en) * 2021-05-27 2021-08-24 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision
CN113589263A (en) * 2021-08-06 2021-11-02 北京易航远智科技有限公司 Multi-homologous sensor combined calibration method and system
CN114387347A (en) * 2021-10-26 2022-04-22 浙江智慧视频安防创新中心有限公司 Method and device for determining external parameter calibration, electronic equipment and medium
CN114627275A (en) * 2022-03-29 2022-06-14 南京航空航天大学 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data
CN117830439A (en) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Multi-camera system pose calibration method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017109039A1 (en) * 2017-04-27 2018-10-31 Sick Ag Method for calibrating a camera and a laser scanner
US20180356831A1 (en) * 2017-06-13 2018-12-13 TuSimple Sparse image point correspondences generation and correspondences refinement method for ground truth static scene sparse flow generation
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
CN109754438A (en) * 2019-01-15 2019-05-14 中国科学技术大学 Near infrared camera caliberating device, calibration point extracting method and system under specific band
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
US20190391244A1 (en) * 2018-06-25 2019-12-26 Ricoh Company, Ltd. Distance-measuring apparatus, mobile object, distance-measuring method, and distance-measuring system
CN111243032A (en) * 2020-01-10 2020-06-05 大连理工大学 Full-automatic checkerboard angular point detection method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017109039A1 (en) * 2017-04-27 2018-10-31 Sick Ag Method for calibrating a camera and a laser scanner
US20180356831A1 (en) * 2017-06-13 2018-12-13 TuSimple Sparse image point correspondences generation and correspondences refinement method for ground truth static scene sparse flow generation
CN109270534A (en) * 2018-05-07 2019-01-25 西安交通大学 A kind of intelligent vehicle laser sensor and camera online calibration method
US20190391244A1 (en) * 2018-06-25 2019-12-26 Ricoh Company, Ltd. Distance-measuring apparatus, mobile object, distance-measuring method, and distance-measuring system
CN109754438A (en) * 2019-01-15 2019-05-14 中国科学技术大学 Near infrared camera caliberating device, calibration point extracting method and system under specific band
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion
CN111243032A (en) * 2020-01-10 2020-06-05 大连理工大学 Full-automatic checkerboard angular point detection method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DIMITRIOS S. ALEXIADIS 等: "Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors" *
JUN ZHANG 等: "A Two-step Method for Extrinsic Calibration between a Sparse 3D LiDAR and a Thermal Camera" *
JURAJ PERŠIĆ 等: "Extrinsic 6DoF calibration of a radar–LiDAR–camera system enhanced by radar cross section estimates evaluation" *
艾裕丰 等: "基于亚像素边缘的棋盘格的角点检测" *
许小徐 等: "智能汽车激光雷达和相机数据融合系统标定" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233187A (en) * 2020-10-20 2021-01-15 深圳无境智能机器人有限公司 Convenient and stable rgbd camera external parameter calibration method
CN112233187B (en) * 2020-10-20 2022-06-03 深圳无境智能机器人有限公司 Convenient and stable rgbd camera external parameter calibration method
CN112991372A (en) * 2021-04-21 2021-06-18 聚时科技(江苏)有限公司 2D-3D camera external parameter calibration method based on polygon matching
CN113298941A (en) * 2021-05-27 2021-08-24 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision
CN113298941B (en) * 2021-05-27 2024-01-30 广州市工贸技师学院(广州市工贸高级技工学校) Map construction method, device and system based on laser radar aided vision
CN113256740A (en) * 2021-06-29 2021-08-13 湖北亿咖通科技有限公司 Calibration method of radar and camera, electronic device and storage medium
CN113589263A (en) * 2021-08-06 2021-11-02 北京易航远智科技有限公司 Multi-homologous sensor combined calibration method and system
CN113589263B (en) * 2021-08-06 2023-10-31 北京易航远智科技有限公司 Method and system for jointly calibrating multiple homologous sensors
CN114387347A (en) * 2021-10-26 2022-04-22 浙江智慧视频安防创新中心有限公司 Method and device for determining external parameter calibration, electronic equipment and medium
CN114387347B (en) * 2021-10-26 2023-09-19 浙江视觉智能创新中心有限公司 Method, device, electronic equipment and medium for determining external parameter calibration
CN114627275A (en) * 2022-03-29 2022-06-14 南京航空航天大学 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data
CN114627275B (en) * 2022-03-29 2022-11-29 南京航空航天大学 Whole machine measurement point cloud fusion method based on multi-source heterogeneous data
CN117830439A (en) * 2024-03-05 2024-04-05 南昌虚拟现实研究院股份有限公司 Multi-camera system pose calibration method and device

Also Published As

Publication number Publication date
CN111640158B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN111640158A (en) End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN107063228B (en) Target attitude calculation method based on binocular vision
Fan et al. Road surface 3D reconstruction based on dense subpixel disparity map estimation
CN112767490B (en) Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN107481284A (en) Method, apparatus, terminal and the system of target tracking path accuracy measurement
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
Munoz-Banon et al. Targetless camera-lidar calibration in unstructured environments
CN108921864A (en) A kind of Light stripes center extraction method and device
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN103727930A (en) Edge-matching-based relative pose calibration method of laser range finder and camera
CN113744351B (en) Underwater structure light measurement calibration method and system based on multi-medium refraction imaging
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN113834625A (en) Aircraft model surface pressure measuring method and system
Lin et al. The initial study of LLS-based binocular stereo-vision system on underwater 3D image reconstruction in the laboratory
CN112525106B (en) Three-phase machine cooperative laser-based 3D detection method and device
CN113674218A (en) Weld characteristic point extraction method and device, electronic equipment and storage medium
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
CN112017259A (en) Indoor positioning and image building method based on depth camera and thermal imager
CN109815966A (en) A kind of mobile robot visual odometer implementation method based on improvement SIFT algorithm
CN116563377A (en) Mars rock measurement method based on hemispherical projection model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant