CN116433737A - Method and device for registering laser radar point cloud and image and intelligent terminal

Info

Publication number
CN116433737A
Authority
CN
China
Legal status
Pending
Application number
CN202310466475.9A
Other languages
Chinese (zh)
Inventor
刘博
倪洪印
王健
王莹
Current Assignee
Jilin University
Northeast Electric Power University
Original Assignee
Jilin University
Northeast Electric Power University
Application filed by Jilin University and Northeast Electric Power University
Priority to CN202310466475.9A
Publication of CN116433737A

Links

Images

Classifications

    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T5/70 Denoising; Smoothing
    • G06T5/80 Geometric correction
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration, using statistical methods
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a method for registering a vehicle-mounted laser radar point cloud and an image, which comprises the following steps: acquiring point cloud data collected by a vehicle-mounted laser radar and image data collected by a camera at the same time and in the same space; obtaining three-dimensional point cloud features of the checkerboard boundary according to the point cloud data; obtaining two-dimensional features of the checkerboard boundary according to the image data; registering the three-dimensional point cloud features against the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix; and carrying out point cloud registration according to the external calibration matrix. The invention automatically processes the different-dimensional data measured by the laser radar and the camera, applies error-reduction processing at each algorithmic stage, and achieves high-precision registration by means of the checkerboard.

Description

Method and device for registering laser radar point cloud and image and intelligent terminal
Technical Field
The invention relates to the field of vision measurement, in particular to a method and a device for registering laser radar point cloud and an image and an intelligent terminal.
Background
Cameras and lidar provide complementary information about the environment: the camera captures color, texture and appearance, while the lidar provides three-dimensional structural information. They are therefore commonly used together to perceive an environment, and the external parameters, consisting of a rotation and a translation, are a prerequisite for fusing the information from the two sensors. With the widespread use of unmanned machinery, vehicles equipped with lidar and cameras are increasing rapidly. The point cloud scenes of autonomous-driving perception tasks are diverse, the perception tasks themselves are complex, and their accuracy is determined by the point cloud attribute information, so an effective and efficient external calibration algorithm is needed for large-scale commercial application.
Since the laser point cloud is three-dimensional data while the image is two-dimensional, the two differ in dimensionality, and the lidar point cloud data and the camera image therefore need to be registered. Registering the lidar against the camera can be cast as converting the three-dimensional laser point cloud into the two-dimensional data of the image: a high-precision rigid-body transformation matrix is solved and the three-dimensional point cloud is projected onto the two-dimensional plane. However, existing methods for joint calibration require alignment points to be selected manually on the checkerboard, which is cumbersome, and they lack a system pipeline that runs automatically from the input alone. They also fail to handle the influence of the camera's field of view and distortion on the result, and, even when registration can be completed by sampling several groups of data, they lack an error evaluation of the registration quality. Current registration methods also include converting the two-dimensional image data into three dimensions; however, since an image carries no depth information, and even pictures taken with a depth camera have depth of limited accuracy, it is difficult to restore an image to three-dimensional data accurately.
Accordingly, the prior art is still in need of improvement and development.
Disclosure of Invention
The invention aims to address the defects of the prior art by providing a vehicle-mounted laser radar point cloud and image registration method, so as to solve the problems that manual registration is cumbersome, that error evaluation of the registration quality is lacking, and that the accuracy of depth information is low.
The technical scheme adopted for solving the technical problems is as follows:
in a first aspect, the present invention provides a vehicle-mounted lidar point cloud and image registration method, where the method includes:
acquiring point cloud data acquired by a vehicle-mounted laser radar and image data acquired by a camera under the same time and space;
obtaining three-dimensional point cloud characteristics of the checkerboard boundary according to the point cloud data;
obtaining checkerboard boundary two-dimensional characteristics according to the image data;
registering according to the three-dimensional point cloud features of the checkerboard boundary and the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix;
and carrying out point cloud registration according to the external calibration matrix.
In one implementation manner, after the point cloud data collected by the vehicle-mounted laser radar and the image data collected by the camera under the same time and space are obtained, the method further includes:
performing data preprocessing on the point cloud data and the image data to obtain processed point cloud data and processed image data; wherein the data preprocessing includes timestamp synchronization and de-skew processing.
In one implementation manner, the obtaining the three-dimensional point cloud feature of the checkerboard boundary according to the point cloud data includes:
Removing noise in the point cloud data by using a method based on neighborhood density distribution to obtain a noise-free point cloud file frame;
positioning the noiseless point cloud file frame by adopting a clustering method to obtain checkerboard point cloud clusters;
fitting the optimal plane of the checkerboard point cloud cluster by adopting a RANSAC method to obtain the optimal point cloud plane of the checkerboard;
and detecting the boundary of the optimal point cloud plane of the checkerboard to obtain the boundary three-dimensional point cloud features and the three-dimensional corner points of the checkerboard.
In one implementation manner, the fitting the optimal plane of the tessellation point cloud cluster by using a RANSAC method to obtain a tessellation optimal point cloud plane includes:
the parameter constraint model is constructed as follows:
P'_c = { (x_0, y_0, z_0) ∈ P_c : |A x_0 + B y_0 + C z_0 + D| / sqrt(A^2 + B^2 + C^2) < ε }

wherein A, B, C and D are estimated parameters of the fitted plane, the point set P_c is the checkerboard point cloud cluster, P'_c is the first optimal plane, and ε is the iteration threshold;

shrinking the threshold ε and repeating the fitting and filtering processes according to the parameter constraint model; when all the point clouds are within the iteration threshold ε, projecting the noise point clouds onto an ideal plane by using a projection transformation model to obtain a second optimal plane:

P''_c = { (x_0 t, y_0 t, z_0 t) | (x_0, y_0, z_0) ∈ P'_c }

wherein t = -D / (A x_0 + B y_0 + C z_0);
Dividing the second optimal plane into a plurality of rectangular grids, and randomly sampling point clouds in each rectangular grid to obtain the checkerboard optimal point cloud plane.
In one implementation, the obtaining the two-dimensional characteristics of the checkerboard boundary according to the image data includes:
detecting the inner corner points of the checkerboard through the element feature points, and locating the checkerboard through similarity to obtain the two-dimensional features of the checkerboard boundary;
detecting the image data through a chessboard corner detection algorithm to obtain two-dimensional corner points;

and ordering the two-dimensional corner points to obtain an index relation.
In one implementation, the registering according to the three-dimensional point cloud feature of the checkerboard boundary and the two-dimensional feature of the checkerboard boundary to obtain an external calibration matrix includes:
expanding the point cloud data of the three-dimensional point cloud features of the checkerboard boundary to a two-dimensional plane to obtain two-dimensional point cloud features;
according to the index relation, obtaining an association relation of three-dimensional-two-dimensional corner pairs of the checkerboard boundary;
and registering the two-dimensional point cloud features and the checkerboard boundary two-dimensional features in each frame of data according to the association relation to obtain an external calibration matrix.
In one implementation manner, the registering the two-dimensional point cloud features and the checkerboard boundary two-dimensional features in each frame of data according to the association relationship to obtain an external calibration matrix includes:
According to the pose relation between a laser radar coordinate system and a camera coordinate system, solving the three-dimensional corner points and the two-dimensional corner points in any frame of data by adopting a RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix;
according to the initial solution matrix, carrying out three-dimensional-two-dimensional projection on each frame of data to obtain two-dimensional chessboard estimation;
taking the two-dimensional features of the checkerboard boundary as a pseudo-true value, calculating the re-projection error from the two-dimensional chessboard estimation and the pseudo-true value, and setting an error target threshold;
comparing the re-projection error with the error target threshold, if the re-projection error is larger than the error target threshold, deleting the two-dimensional chessboard estimation corresponding to the re-projection error, and updating the estimated initial solution matrix;
and re-executing the step of solving the three-dimensional corner points and the two-dimensional corner points in any frame of data with the RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix, until the re-projection error in each frame of data is smaller than or equal to the error target threshold, so as to obtain the external calibration matrix.
In a second aspect, an embodiment of the present invention further provides a vehicle-mounted laser radar point cloud and image registration apparatus, where the apparatus includes:
The data and image acquisition module is used for acquiring point cloud data acquired by the vehicle-mounted laser radar and image data acquired by the camera under the same time and space;
the three-dimensional point cloud characteristic acquisition module is used for acquiring three-dimensional point cloud characteristics of the checkerboard boundary according to the point cloud data;
the two-dimensional feature acquisition module is used for acquiring two-dimensional features of the checkerboard boundary according to the image data;
the external calibration matrix acquisition module is used for registering according to the three-dimensional point cloud features of the checkerboard boundary and the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix;
and the registration module is used for carrying out point cloud registration according to the external calibration matrix.
In a third aspect, an embodiment of the present invention further provides an intelligent terminal, where the intelligent terminal includes a memory, a processor, and a vehicle-mounted lidar point cloud and image registration program stored in the memory and capable of running on the processor, where the steps of the vehicle-mounted lidar point cloud and image registration method according to any one of the above are implemented when the processor executes the vehicle-mounted lidar point cloud and image registration program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium stores a vehicle-mounted lidar point cloud and image registration program, where the vehicle-mounted lidar point cloud and image registration program, when executed by a processor, implements the steps of the vehicle-mounted lidar point cloud and image registration method according to any one of the above.
The beneficial effects are that: compared with the prior art, the invention provides a method for registering the point cloud and the image of a vehicle-mounted laser radar. First, a number of homonymous point pairs between the checkerboard boundary features in the laser point cloud data and in the image data at the same time and in the same space are selected for iterative registration, the three-dimensional point cloud data and the two-dimensional image data being synchronized into frames through unified space and time. Then, based on these homonymous point pairs, a RANSAC-based PnP algorithm is adopted to obtain the external calibration matrix, which is the basis for automatic registration; erroneous data are adjusted by iteratively updating the external calibration matrix. Finally, point cloud registration is carried out through the external calibration matrix, realizing automatic, high-precision registration of the panoramic image even when the position and attitude parameters are unknown.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a vehicle-mounted laser radar point cloud and image registration method provided by an embodiment of the invention.
Fig. 2 is a schematic diagram of a RANSAC method according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a coordinate system correspondence relationship according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of a vehicle-mounted laser radar point cloud and image registration device provided by an embodiment of the invention.
Fig. 5 is a schematic block diagram of an internal structure of an intelligent terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and effects of the present invention clearer and more specific, the present invention will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Multi-sensor registration is indispensable for robotics and autonomous-driving systems to perceive the surrounding environment accurately. Among registrations of different sensor types, the most common is the combination of a lidar and an optical camera, especially in vision-based odometry and mapping tasks and in object detection and tracking tasks. The perception task uses the various sensors (lidar/camera) installed on the vehicle to sense the surroundings at all times while the vehicle is running, collect data, and perform systematic operation and analysis. Because the laser point cloud is three-dimensional data while the image is two-dimensional, the two differ in dimensionality. The invention provides a method and system for registering the point cloud and image of a vehicle-mounted laser radar, which automatically applies error-reduction processing of several algorithms to the different-dimensional data measured by the lidar and the camera, and then registers them with high precision by means of a checkerboard.
Exemplary method
The embodiment provides a method for registering point cloud and images of a vehicle-mounted laser radar. As shown in fig. 1, the method comprises the steps of:
step S100, acquiring point cloud data acquired by a vehicle-mounted laser radar and image data acquired by a camera under the same time and space;
in particular, cameras and lidars provide supplemental information of the environment. The camera captures color, texture and appearance information and the lidar provides three-dimensional structural information of the environment. Thus, they are commonly used to perceive an environment. External parameters consisting of rotation and translation are a prerequisite for fusing information from both sensors. The embodiment reads the point cloud and the image file corresponding to the same time and space and the camera internal reference matrix.
Specifically, a lidar and a camera are mounted on a vehicle, and a real scene is scanned to capture multiple groups of separately acquired point clouds and image data. When collecting the point cloud data, the calibration board is placed 5-10 meters from the sensor, tilted by 45 degrees and placed diagonally, with the board plane as perpendicular to the sensor as possible, so that more of the board's area is exposed to the scan lines of the lidar sensor and more of the point cloud falls on the calibration board. The front lattice points must not be occluded, and the calibration board must be free of occlusion, reflection and shadow. The data between the lidar and the camera sensor must be time-synchronized, i.e., the image and the point cloud belong to the same frame; data are captured from both sensors simultaneously, without motion-blur effects.
In one implementation, after step S100 in this embodiment, the method further includes:
performing data preprocessing on the point cloud data and the image data to obtain processed point cloud data and processed image data; wherein the data preprocessing includes timestamp synchronization and de-skew processing.
Specifically, in the multi-frame point cloud data and image data acquired in this embodiment, noise points and outliers exist when the lidar collects the point cloud data to be registered with the checkerboard, i.e., point clouds within a certain range overlap and stack, so file preprocessing must be performed on the acquired file data to be registered. The file preprocessing comprises randomly extracting a preset number of frames of random data and, for each frame, adjusting the point cloud image and the camera image to be consistent; this consistency requires timestamp synchronization, so that the point cloud and image data at the same time and in the same space can be registered.
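As an illustration of the timestamp synchronization, the following is a minimal sketch of nearest-timestamp frame pairing; the (timestamp, data) tuple layout and the 0.05-second tolerance are illustrative assumptions, not values from this embodiment.

```python
def synchronize_frames(cloud_frames, image_frames, tolerance=0.05):
    """Pair every point cloud frame with the image nearest in time.

    cloud_frames and image_frames are lists of (timestamp_seconds, data)
    tuples sorted by timestamp; pairs further apart than `tolerance`
    seconds are dropped, so only same-time frames remain for registration.
    """
    pairs = []
    if not image_frames:
        return pairs
    j = 0
    for t_cloud, cloud in cloud_frames:
        # Advance j to the last image whose timestamp does not exceed t_cloud.
        while j + 1 < len(image_frames) and image_frames[j + 1][0] <= t_cloud:
            j += 1
        # The nearest image is then either image j or image j + 1.
        t_img, img = min(image_frames[j:j + 2], key=lambda f: abs(f[0] - t_cloud))
        if abs(t_img - t_cloud) <= tolerance:
            pairs.append((cloud, img))
    return pairs
```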
Specifically, the images of the preset number of frames are de-distorted. The image-plane coordinates of a spatial point, biased by lens distortion and the like, can be expressed by the radial-tangential distortion equations:

x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y

where r^2 = x^2 + y^2, x and y are the actual imaging point coordinates, x' and y' are the expected imaging point coordinates, and k_1, k_2, k_3, p_1 and p_2 are the distortion parameters of the camera model.
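As a sketch of this de-distortion step, the snippet below removes lens distortion with OpenCV, whose model uses the same radial (k_1, k_2, k_3) and tangential (p_1, p_2) parameters; the intrinsic matrix and distortion values here are placeholders rather than calibrated values from this embodiment.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion parameters from a prior camera
# calibration; OpenCV orders the distortion vector as (k1, k2, p1, p2, k3).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.28, 0.07, 1e-4, -2e-4, 0.0])

img = cv2.imread("frame_0000.png")
undistorted = cv2.undistort(img, K, dist)  # maps actual (x, y) to expected (x', y')
cv2.imwrite("frame_0000_undistorted.png", undistorted)
```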
Step S200, obtaining the three-dimensional point cloud features of the checkerboard boundary according to the point cloud data;

Specifically, starting from converting the three-dimensional laser point cloud data into the two-dimensional data of the image, this embodiment derives, through analysis, the joint calibration of the lidar and the camera by means of the checkerboard.
In one implementation, the step S200 in this embodiment includes the following steps:
step S201, removing noise in the point cloud data by using a method based on domain density distribution to obtain a noiseless point cloud file frame;
specifically, in this embodiment, random point cloud data of a preset frame number are respectively and correspondingly analyzed, and a noise-free point cloud file of the density distribution in the Fu Gedian cloud field is obtained through calculation;
the calculation of the density distribution of the point cloud field comprises the following steps:
m-frame point cloud: q= { Q 1 ,Q 2 ,...,Q m -a }; the number of point clouds in a single frame file is n: (ith frame) i εm, Q i ={q 1 ,q 2 ,...,q n -a }; single point cloud q x (x ε n) domain threshold: k; scaling factor: sc;
step one: for any t.epsilon.n,q t ∈Q i Wherein in a single point cloud q t Calculating all point clouds and q within the threshold k range of the Euclidean distance field t Average value Ave of euclidean distance of (a);
step two: for single frame point cloud Q i ={q 1 ,q 2 ,...,q n ' note AVE i ={Ave 1 ,Ave 2 ,...,Ave n };
Step three: for Q i Te=aave+Sc STDave, where Aave is { Ave 1 ,Ave 2 ,...,Ave n Average value of STDave { Ave }, STDave }, ave 1 ,Ave 2 ,...,Ave n Standard deviation of }.
Figure BDA0004202525260000091
Figure BDA0004202525260000092
Step four: for any t.epsilon.n, q t ∈Q i If it Ave<Te from Q i Removal of q t
Step five: returning to the step until all the m-frame point clouds are processed.
The noise point cloud data in each incoming frame are removed by the neighborhood-density-distribution method so that the density distribution of the points becomes more uniform, thereby obtaining noise-free point cloud frames from the point cloud frame group.
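A minimal numpy/scipy sketch of steps one to five, under the assumption that the "threshold k range" neighborhood is taken as the k nearest neighbors of each point:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_noise(points, k=20, sc=1.0):
    """Neighborhood-density outlier removal for one frame Q_i.

    points: (n, 3) array. For each point q_t, Ave_t is the mean Euclidean
    distance to its k neighbors; points with Ave_t above
    Te = Aave + Sc * STDave are removed as noise.
    """
    tree = cKDTree(points)
    # Query k + 1 neighbors because the nearest neighbor of each point
    # is the point itself (distance 0); drop that first column.
    dists, _ = tree.query(points, k=k + 1)
    ave = dists[:, 1:].mean(axis=1)            # steps one/two: Ave_t per point
    te = ave.mean() + sc * ave.std()           # step three: Te
    return points[ave <= te]                   # step four: filter out noise

# Step five: apply the filter to every one of the m frames.
# frames = [remove_noise(q) for q in frames]
```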
Step S202, positioning the noiseless point cloud file frame by adopting a clustering method to obtain checkerboard point cloud clusters;
step S203, fitting the optimal plane of the checkerboard point cloud cluster by adopting a RANSAC method to obtain the checkerboard optimal point cloud plane;
and step S204, detecting the boundary of the optimal point cloud plane of the checkerboard to obtain the boundary three-dimensional point cloud features and the three-dimensional corner points of the checkerboard.
Specifically, candidate clusters that may contain the checkerboard are detected, and the clusters are ranked by a standard target similarity metric L that measures the difference between a cluster and the calibration target; only the cluster with the smallest difference is kept as the located checkerboard. Here n is the number of candidate checkerboard point cloud clusters, L_1 is the similarity measure on the point cloud positions within a checkerboard point cloud cluster, and L_2 is the similarity measure on the point cloud intensities within the cluster. Considering that, after the checkerboard is segmented from the background, background points may still be connected to the checkerboard, these points are deleted first. The upper and lower boundaries of the checkerboard boundary points must be determined along the z-axis of the lidar coordinate system; the lower boundary is calculated based on the width distribution of the checkerboard points along the z-axis and the checkerboard size constraint. Because of systematic measurement errors, the point cloud scatters about the plane; in the ideal case all points would fall on the checkerboard plane, so it is difficult to estimate accurate boundary coordinates directly. Therefore, an iterative refinement method is designed that fits the point cloud to the ideal plane in which the checkerboard lies. In each iteration, this embodiment first fits the best plane using a RANSAC-based method, as shown in fig. 2, and preserves the point cloud coordinates whose point-to-plane distance is less than the threshold.
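The sketch below illustrates the cluster location of step S202 and the similarity-based selection. DBSCAN stands in for the unspecified clustering method, and the L_1/L_2 terms are simplified placeholders invented for illustration, since the embodiment does not give their exact form:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def locate_checkerboard(points, intensities, board_diag=1.0):
    """Cluster the noise-free cloud and keep the cluster most similar to the
    calibration target; board_diag is the known board diagonal in meters."""
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
    best_mask, best_score = None, np.inf
    for label in set(labels) - {-1}:            # -1 marks DBSCAN noise points
        mask = labels == label
        cluster = points[mask]
        # L1 placeholder: how far the cluster's extent is from the board size.
        extent = np.linalg.norm(cluster.max(axis=0) - cluster.min(axis=0))
        l1 = abs(extent - board_diag)
        # L2 placeholder: a checkerboard returns roughly half high- and half
        # low-intensity points, so penalise deviation from a 50/50 split.
        high = intensities[mask] > intensities[mask].mean()
        l2 = abs(high.mean() - 0.5)
        score = l1 + l2
        if score < best_score:
            best_mask, best_score = mask, score
    if best_mask is None:
        raise ValueError("no candidate checkerboard cluster found")
    return points[best_mask]
```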
In one implementation, step S203 in this embodiment includes the following steps:
step S2031, constructing a parameter constraint model, which is:
Figure BDA0004202525260000102
Wherein A, B, C and D are estimated parameters of the fitting plane. Point set P c Is checkerboard point Yun Cu, P' c Epsilon is the threshold value of iteration and is the first optimal plane;
step S2032, shrinking the threshold epsilon and repeating the fitting and filtering processes according to the parameter constraint model, when all the point clouds are within the iterative threshold epsilon, projecting the noise point clouds onto an ideal plane by using a projective transformation model, so as to obtain a second optimal plane as follows:
P‘’ c ={(x 0 t,y 0 t,z 0 t)|(x 0 ,y 0 ,z 0 )∈P’ c },
wherein t= -D/(Ax) 0 +By 0+ Cz 0 );
Step S2033, dividing the second optimal plane into a plurality of rectangular grids, and randomly sampling the point clouds in each rectangular grid to obtain the checkerboard optimal point cloud plane.
Specifically, as shown in fig. 2, the RANSAC method samples four groups of point pairs from the initial data set and performs the corresponding fitting through the parameter constraint model. Provided the critical threshold of the mathematical model is not exceeded, the inlier set meeting the threshold requirement is added to the largest inlier set:

P'_c = { (x_0, y_0, z_0) ∈ P_c : |A x_0 + B y_0 + C z_0 + D| / sqrt(A^2 + B^2 + C^2) < ε }

where A, B, C and D are estimated parameters of the fitted plane. The point set P_c is the original checkerboard point cloud cluster; P'_c is closer to the ideal checkerboard plane than P_c, so the threshold ε is shrunk and the fitting and filtering process is repeated.

The computation stops when all point clouds are within the threshold ε of the current iteration. According to the ranging principle of lidar, the range measurement of a laser beam may contain an error in the radial direction (from the obstacle to the center of the sensor), while the error in the azimuthal direction can be ignored. Based on this, the noise point cloud is projected onto the ideal plane using the projective transformation model:

P''_c = { (x_0 t, y_0 t, z_0 t) | (x_0, y_0, z_0) ∈ P'_c }

where t = -D / (A x_0 + B y_0 + C z_0).

To reduce the effect of point cloud non-uniformity, P''_c is divided into a number of grids, and random sampling is then applied to each grid to keep the point cloud density within a uniform range.
The checkerboard point cloud is divided into a number of rectangular grids, the point cloud within each grid is randomly sampled, and all the above steps are repeated. By merging the sampled grids, this embodiment obtains a checkerboard point cloud of uniform density. This reduces the influence of high-density areas in the later corner fitting process and yields a better overall feature point detection result.
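A minimal numpy sketch of steps S2031 to S2033, under the simplifying assumption that the resampling grid is laid out on the x/y axes (a full implementation would grid along the fitted board plane's own axes):

```python
import numpy as np

def ransac_plane(points, eps, iters=500, seed=0):
    """Fit A x + B y + C z + D = 0 by RANSAC; return (normal, D) and inliers."""
    rng = np.random.default_rng(seed)
    best_in, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                             # degenerate (collinear) sample
        n = n / np.linalg.norm(n)                # unit normal (A, B, C)
        d = -n @ p0                              # D
        inliers = np.abs(points @ n + d) < eps   # the parameter constraint model
        if best_in is None or inliers.sum() > best_in.sum():
            best_in, best_model = inliers, (n, d)
    return best_model, points[best_in]

def project_to_plane(points, model):
    """Radial projection onto the plane: p -> t * p with t = -D / (n . p)."""
    n, d = model
    t = -d / (points @ n)
    return points * t[:, None]

def grid_resample(points, cell=0.05, per_cell=10, seed=0):
    """Split the (already planar) cloud into square cells and keep at most
    per_cell random points per cell to even out the density."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points[:, :2] / cell).astype(int)
    kept = []
    for key in np.unique(keys, axis=0):
        idx = np.flatnonzero((keys == key).all(axis=1))
        if len(idx) > per_cell:
            idx = rng.choice(idx, per_cell, replace=False)
        kept.append(points[idx])
    return np.vstack(kept)

# Iterative refinement: refit with a shrinking eps, then project and resample.
# model, pts = ransac_plane(board_cloud, eps=0.05)   # first optimal plane P'_c
# pts = project_to_plane(pts, model)                 # second optimal plane P''_c
# board = grid_resample(pts)                         # uniform-density board plane
```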
Step S300, obtaining two-dimensional characteristics of the checkerboard boundary according to the image data;
in one implementation, the step S300 in this embodiment includes the following steps:
step S301, detecting internal angular points of the checkerboard through element characteristic points, and positioning the checkerboard through similarity to obtain two-dimensional characteristics of the boundary of the checkerboard;
Step S302, detecting the image data through a chessboard angular point detection algorithm to obtain two-dimensional angular points;
step S303, ordering the two-dimensional angular points to obtain an index relation.
Specifically, the present embodiment performs two-dimensional corner detection on the image data. A chessboard corner detection algorithm detects the two-dimensional corners from the images, where the image files correspond one-to-one to the lidar files processed by the point cloud neighborhood density distribution. Since the corners are symmetrically distributed along the diagonal directions of the checkerboard, the order of the corners detected in the image and in the point cloud may be ambiguous; the present embodiment therefore reorders the detected corner points, indexing them from the lower-left corner.
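A sketch of this detection-and-reordering step with OpenCV; the inner-corner pattern size (7, 5) is a placeholder, and the lower-left reordering assumes the board is roughly axis-aligned in the image:

```python
import cv2
import numpy as np

gray = cv2.imread("frame_0000_undistorted.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(gray, (7, 5))
assert found, "checkerboard not detected"

# Refine the detected corners to sub-pixel accuracy.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
corners = corners.reshape(-1, 2)

# Reindex from the lower-left corner: image y grows downward, so sort by
# descending row first (bottom of the image), then by ascending column.
order = np.lexsort((corners[:, 0], -corners[:, 1]))
corners = corners[order]
```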
Step S400, registering according to the three-dimensional point cloud features of the checkerboard boundary and the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix;
in one implementation, the step S400 in this embodiment includes the following steps:
step S401, expanding the point cloud data of the three-dimensional point cloud features of the checkerboard boundary to a two-dimensional plane to obtain two-dimensional point cloud features;
step S402, according to the index relation, obtaining the association relation of the three-dimensional-two-dimensional corner pairs of the checkerboard boundary;
And step S403, registering the two-dimensional point cloud features and the checkerboard boundary two-dimensional features in each frame of data according to the association relation to obtain an external calibration matrix.
Specifically, in this embodiment, the association relation of the three-dimensional/two-dimensional corner pairs of the four groups of checkerboard boundaries is determined according to the index relation, and the point cloud and the image in each frame of data are registered according to this association relation. As shown in fig. 2, the principle of the joint calibration of the lidar and the camera is that the checkerboard target in the same scene is transformed by a rotation-translation matrix from the lidar coordinate system and projected onto the two-dimensional image in the camera coordinate system; this projection process can be interpreted as an imaging transformation.
Specifically, as shown in fig. 3, the world coordinate system is formed by the X_w, Y_w and Z_w axes. Because the camera can be placed anywhere in the real scene, a reference coordinate system must be chosen in the real scene to represent the orientation of the camera, and this reference system is then used to describe the objects in the real scene.

The world coordinate system can thus be regarded directly as the lidar coordinate system, i.e., the world coordinate system coincides with the origin of the lidar coordinate system, with (X_w, Y_w, Z_w) as the point cloud coordinates. The relationship between the world coordinate system and the camera coordinate system is:

[X_c, Y_c, Z_c, 1]^T = [ R  t ; O^T  1 ] · [X_w, Y_w, Z_w, 1]^T = M_2 · [X_w, Y_w, Z_w, 1]^T

where P(x, y, z) = (X_c, Y_c, Z_c) is a point in space in the camera coordinate system, R is a 3×3 orthonormal rotation matrix, t is a three-dimensional translation vector, O = (0, 0, 0)^T, and M_2 is the 4×4 external calibration matrix.
In one implementation, step S403 in this embodiment includes the following steps:
step S4031, according to the pose relation between the laser radar coordinate system and the camera coordinate system, solving the three-dimensional corner points and the two-dimensional corner points in any frame of data by adopting a RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix;
step S4032, performing three-dimensional-two-dimensional projection on each frame of data according to the initial solution matrix to obtain two-dimensional chessboard estimation;
step S4033, taking the two-dimensional features of the checkerboard boundary as a pseudo-true value, calculating the re-projection error from the two-dimensional chessboard estimation and the pseudo-true value, and setting an error target threshold;
step S4034, comparing the re-projection error with the error target threshold, if the re-projection error is greater than the error target threshold, deleting the two-dimensional chessboard estimation corresponding to the re-projection error, and updating the estimated initial solution matrix;
and step S4035, re-executing the step of solving the three-dimensional corner points and the two-dimensional corner points in any frame of data with the RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix, until the re-projection error in each frame of data is smaller than or equal to the error target threshold, thereby obtaining the external calibration matrix.
Specifically, the pose relation between the lidar coordinate system and the camera coordinate system is calculated and solved through calibration with the checkerboard, and the external parameters are solved iteratively:

Z_c · [u, v, 1]^T = M_1 M_2 · [X_w, Y_w, Z_w, 1]^T = M · [X_w, Y_w, Z_w, 1]^T

where u and v are the pixel coordinates and X_w, Y_w, Z_w are the point cloud coordinates in the lidar coordinate system; within the camera intrinsic parameter matrix M_1, f/dx is the scale factor on the u-axis and f/dy is the scale factor on the v-axis; M_2 is the rotation-translation matrix, with R the rotation matrix from the lidar coordinate system to the camera coordinate system and t the translation vector; and M is the 3×4 projection matrix composed of the camera intrinsics and the rotation-translation matrix.
Because of the camera's perspective projection, the distance at which the checkerboard is placed may affect the scale of the re-projected points (e.g., the re-projection error is smaller than its true value when the board is placed too far away), so this embodiment rescales the re-projection by distance normalization to reduce this bias. The specific steps are as follows:
step one:
firstly, mapping point cloud data under a laser radar coordinate system onto a pixel plane to obtain plane data. Meanwhile, the checkerboard targets identified under the camera coordinates are matched with the checkerboard point clouds identified under the radar coordinate system, and meanwhile, as the fov field of view of the laser radar is large and the camera is small, the point clouds which do not fall into the image resolution are filtered out. Specifically, three-dimensional point cloud data under a laser radar coordinate system are unfolded to a two-dimensional plane, point clouds outside the image plane are filtered, and the unconverted coordinates under a preset coordinate system are saved, namely, the radial distance coordinates are ignored.
Step two:
considering limited samples of checkerboard measurement, for the obtained three-dimensional-two-dimensional angular point index relation pair, adopting PnP based on RANSCA to obtain an initial solution matrix E 0 The method comprises the steps of carrying out a first treatment on the surface of the The process is as follows:
(1) First, in the known camera internal intrinsic parameters (M 1 ) Under the condition of (1), three groups of association point pairs are selected to be substituted into the following formula, and the solving part can be converted into the pair E 0 Solving a matrix:
Figure BDA0004202525260000151
wherein X is c 、Y c 、Z c Is the point cloud coordinates in the camera coordinate system.
Then, based on the calculated E_0 matrix, three-dimensional-two-dimensional projection is performed on the other sample frames.
The projection error is calculated as:

err = (1/n) Σ_{i=1}^{n} sqrt( δx_i^2 + δy_i^2 + δz_i^2 ) < τ

where δx_i = X_ci - X_ci', δy_i = Y_ci - Y_ci', δz_i = Z_ci - Z_ci'; (X_ci, Y_ci, Z_ci) are the actual coordinates, (X_ci', Y_ci', Z_ci') are the corresponding projected imaging point coordinates, δx_i, δy_i and δz_i are the error components, n is the number of projection point pairs, and τ is the error target value.
Based on the camera intrinsic matrix M_1 and the estimated extrinsic matrix M_2, the three-dimensional point cloud of the chessboard estimate is projected into two-dimensional space. The two-dimensional feature points from the image are then treated as pseudo-true values to calculate the re-projection error (i.e., the difference between the projected values of the three-dimensional point cloud and the two-dimensional image points).
Step three:
deleting the sample with higher reprojection error, and recalculating E 0 And (3) returning to the second step, and continuously repeating the PnP solving and re-projection evaluating processes until all errors are lower than the target value ≡. Representing the final external calibration matrix as M 2
And S500, carrying out point cloud registration according to the external calibration matrix.
In summary, this embodiment processes abnormal samples based on the point cloud neighborhood density distribution and improves the accuracy of the sample data. Sampling too few target placements leads to a lack of constraints when solving the PnP problem, so the factors affecting the re-projection error are taken into account: better performance is obtained by adding samples with different target distances and poses, and the error remains stable with as few as 5-6 poses. With the variables optimized, the minimum of the similarity metric L and the minimum of the re-projection error correspond to similar regions, which means the optimal external parameters can be solved accordingly and the accuracy is improved.
It should be noted that the method of this embodiment covers obtaining point cloud data not only from a lidar but also from a semi-solid-state lidar.
Exemplary apparatus
As shown in fig. 4, the present embodiment further provides a vehicle-mounted laser radar point cloud and image registration apparatus, where the apparatus includes:
the data and image acquisition module 10 is used for acquiring point cloud data acquired by the vehicle-mounted laser radar and image data acquired by the camera under the same time and space;
The three-dimensional point cloud feature acquisition module 20 is configured to obtain three-dimensional point cloud features of the checkerboard boundary according to the point cloud data;
a two-dimensional feature acquisition module 30, configured to obtain a checkerboard boundary two-dimensional feature according to the image data;
an external calibration matrix acquisition module 40, configured to register according to the three-dimensional point cloud feature of the checkerboard boundary and the two-dimensional feature of the checkerboard boundary to obtain an external calibration matrix;
and the registration module 50 is used for carrying out point cloud registration according to the external calibration matrix.
In one implementation manner, the vehicle-mounted laser radar point cloud and image registration device further comprises:
the preprocessing unit is used for carrying out data preprocessing on the point cloud data and the image data to obtain processed point cloud data and processed image data; wherein the data preprocessing includes timestamp synchronization and de-skew processing.
In one implementation, the three-dimensional point cloud feature acquisition module 20 includes:
the noiseless point cloud file frame acquisition unit is used for removing noise in the point cloud data by using a method based on field density distribution to obtain a noiseless point cloud file frame;
the checkerboard point cloud cluster acquisition unit is used for positioning the noiseless point cloud file frame by adopting a clustering method to obtain a checkerboard point cloud cluster;
The checkerboard optimal point cloud plane acquisition unit is used for fitting the optimal plane of the checkerboard point cloud cluster by adopting a RANSAC method to obtain the checkerboard optimal point cloud plane;
the three-dimensional point cloud characteristic acquisition unit is used for detecting the boundary of the optimal point cloud plane of the checkerboard to obtain the boundary three-dimensional point cloud characteristic and the three-dimensional angular point of the checkerboard.
In one implementation, the checkerboard optimal point cloud plane acquisition unit includes:
the modeling subunit is used for constructing a parameter constraint model as follows:
P'_c = { (x_0, y_0, z_0) ∈ P_c : |A x_0 + B y_0 + C z_0 + D| / sqrt(A^2 + B^2 + C^2) < ε }

wherein A, B, C and D are estimated parameters of the fitted plane, the point set P_c is the checkerboard point cloud cluster, P'_c is the first optimal plane, and ε is the iteration threshold;

and the fitting subunit is used for shrinking the threshold ε and repeating the fitting and filtering processes according to the parameter constraint model; when all the point clouds are within the iteration threshold ε, the projection transformation model is used to project the noise point clouds onto an ideal plane, obtaining the second optimal plane:

P''_c = { (x_0 t, y_0 t, z_0 t) | (x_0, y_0, z_0) ∈ P'_c }

wherein t = -D / (A x_0 + B y_0 + C z_0);
The checkerboard optimal point cloud plane acquisition subunit is used for dividing the second optimal plane into a plurality of rectangular grids and randomly sampling the point cloud in each rectangular grid to obtain the checkerboard optimal point cloud plane.
In one implementation, the two-dimensional feature acquisition module 30 of the present embodiment includes:
the two-dimensional characteristic acquisition unit of the checkerboard boundary is used for detecting the angular points in the checkerboard through the element characteristic points and positioning the checkerboard through the similarity to obtain the two-dimensional characteristic of the checkerboard boundary;
the two-dimensional angular point acquisition unit is used for detecting the image data through a chessboard angular point detection algorithm to obtain two-dimensional angular points;
and the index relation acquisition unit is used for sequencing the two-dimensional angular points to obtain an index relation.
In one implementation, the external calibration matrix acquisition module 40 of this embodiment includes:
the two-dimensional point cloud characteristic acquisition unit is used for expanding the point cloud data of the three-dimensional point cloud characteristics of the checkerboard boundary to a two-dimensional plane to obtain two-dimensional point cloud characteristics;
the association relation acquisition unit is used for acquiring the association relation of the three-dimensional-two-dimensional corner pairs of the checkerboard boundary according to the index relation;
and the external calibration matrix acquisition unit is used for registering the two-dimensional point cloud features and the checkerboard boundary two-dimensional features in each frame of data according to the association relation to obtain an external calibration matrix.
In one implementation manner, the external calibration matrix acquisition unit of this embodiment includes:
An initial solution matrix acquisition subunit, configured to solve, according to the pose relation between the laser radar coordinate system and the camera coordinate system, the three-dimensional corner points and the two-dimensional corner points in any frame of data by adopting a RANSAC-based PnP pose estimation algorithm, so as to obtain an initial solution matrix;
a two-dimensional chessboard estimation acquisition subunit, configured to perform three-dimensional-two-dimensional projection on each frame of data according to the initial solution matrix, to obtain a two-dimensional chessboard estimation;
the reprojection error calculation subunit is used for taking the two-dimensional characteristics of the checkerboard boundary as a pseudo-true value, obtaining a calculated reprojection error according to the two-dimensional checkerboard estimation and the pseudo-true value, and setting an error target threshold;
an initial solution matrix updating subunit, configured to compare the reprojection error with the error target threshold, and if the reprojection error is greater than the error target threshold, delete a two-dimensional chessboard estimate corresponding to the reprojection error, and update an estimated initial solution matrix;
and the iteration subunit is used for re-executing the solving of the three-dimensional corner points and the two-dimensional corner points in any frame of data with the RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix, until the re-projection error in each frame of data is smaller than or equal to the error target threshold, obtaining the external calibration matrix.
Based on the above embodiments, the present invention further provides an intelligent terminal, a functional block diagram of which is shown in fig. 5. The intelligent terminal comprises a processor, a memory, a network interface, a display screen and a temperature sensor connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal comprises a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the intelligent terminal communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the vehicle-mounted laser radar point cloud and image registration method. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display, and the temperature sensor of the intelligent terminal is arranged inside the intelligent terminal in advance to detect the operating temperature of the internal devices.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 5 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the intelligent terminal to which the present inventive arrangements are applied, and that a particular intelligent terminal may include more or less components than those shown, or may combine some of the components, or may have a different arrangement of components.
In one embodiment, an intelligent terminal is provided, the intelligent terminal includes a memory, a processor, and a vehicle-mounted laser radar point cloud and image registration program stored in the memory and capable of running on the processor, and when the processor executes the vehicle-mounted laser radar point cloud and image registration program, the following operation instructions are realized:
acquiring point cloud data acquired by a vehicle-mounted laser radar and image data acquired by a camera under the same time and space;
obtaining three-dimensional point cloud characteristics of the checkerboard boundary according to the point cloud data;
obtaining checkerboard boundary two-dimensional characteristics according to the image data;
registering according to the three-dimensional point cloud features of the checkerboard boundary and the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix;
and carrying out point cloud registration according to the external calibration matrix.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the invention discloses a method for registering a vehicle-mounted laser radar point cloud and an image, which comprises the following steps: acquiring point cloud data collected by a vehicle-mounted laser radar and image data collected by a camera at the same time and in the same space; obtaining three-dimensional point cloud features of the checkerboard boundary according to the point cloud data; obtaining two-dimensional features of the checkerboard boundary according to the image data; registering the three-dimensional point cloud features against the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix; and carrying out point cloud registration according to the external calibration matrix. The invention automatically processes the different-dimensional data measured by the laser radar and the camera, applies error-reduction processing at each algorithmic stage, and achieves high-precision registration by means of the checkerboard.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for registering a point cloud of a vehicle-mounted lidar with an image, the method comprising:
acquiring point cloud data acquired by a vehicle-mounted laser radar and image data acquired by a camera under the same time and space;
obtaining three-dimensional point cloud characteristics of the checkerboard boundary according to the point cloud data;
obtaining checkerboard boundary two-dimensional characteristics according to the image data;
registering according to the three-dimensional point cloud features of the checkerboard boundary and the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix;
and carrying out point cloud registration according to the external calibration matrix.
2. The method for registering a point cloud and an image of a vehicle-mounted laser radar according to claim 1, wherein after acquiring the point cloud data acquired by the vehicle-mounted laser radar and the image data acquired by the camera under the same time and space, the method further comprises:
performing data preprocessing on the point cloud data and the image data to obtain processed point cloud data and processed image data; wherein the data preprocessing includes timestamp synchronization and de-skew processing.
3. The method for registering a vehicle-mounted lidar point cloud with an image according to claim 2, wherein obtaining the three-dimensional point cloud features of the checkerboard boundary according to the point cloud data comprises:
removing noise from the point cloud data using a method based on neighborhood density distribution to obtain a noise-free point cloud frame;
locating the checkerboard in the noise-free point cloud frame by a clustering method to obtain a checkerboard point cloud cluster;
fitting the optimal plane of the checkerboard point cloud cluster by a RANSAC method to obtain the optimal checkerboard point cloud plane; and
detecting the boundary of the optimal checkerboard point cloud plane to obtain the three-dimensional point cloud features of the boundary and the three-dimensional corner points of the checkerboard.
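By way of illustration only, the first three steps of this claim can be sketched with the Open3D library, an assumed choice: statistical outlier removal stands in for the neighborhood-density denoising, DBSCAN for the clustering, and Open3D's built-in RANSAC for the plane fit. All parameter values are illustrative assumptions, and the boundary-detection step is not shown.

```python
import numpy as np
import open3d as o3d

def checkerboard_plane(pcd: o3d.geometry.PointCloud):
    # 1. Denoise: drop points whose neighborhoods are sparse.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # 2. Cluster and keep the largest cluster as the checkerboard candidate.
    labels = np.asarray(pcd.cluster_dbscan(eps=0.1, min_points=10))
    if labels.size == 0 or labels.max() < 0:
        raise ValueError("no cluster found")
    biggest = np.bincount(labels[labels >= 0]).argmax()
    board = pcd.select_by_index(np.where(labels == biggest)[0])
    # 3. RANSAC plane fit on the checkerboard cluster.
    (a, b, c, d), inliers = board.segment_plane(
        distance_threshold=0.01, ransac_n=3, num_iterations=1000)
    return board.select_by_index(inliers), (a, b, c, d)
```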
4. The method for registering a vehicle-mounted lidar point cloud with an image according to claim 3, wherein fitting the optimal plane of the checkerboard point cloud cluster by a RANSAC method to obtain the optimal checkerboard point cloud plane comprises:
constructing a parameter constraint model:

P′_c = { (x, y, z) ∈ P_c : |Ax + By + Cz + D| / √(A² + B² + C²) < ε }

wherein A, B, C, and D are the estimated parameters of the fitted plane, the point set P_c is the checkerboard point cloud cluster, P′_c is the first optimal plane, and ε is the iteration threshold;
shrinking the threshold ε and repeating the fitting and filtering according to the parameter constraint model; and, when all remaining points lie within the iteration threshold ε, projecting the points onto the ideal plane using a projection transformation model to obtain a second optimal plane:

P″_c = { (x₀·t, y₀·t, z₀·t) | (x₀, y₀, z₀) ∈ P′_c }

wherein t = −D / (Ax₀ + By₀ + Cz₀); and
dividing the second optimal plane into a plurality of rectangular grids and randomly sampling the point cloud in each rectangular grid to obtain the optimal checkerboard point cloud plane.
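The iterative refinement of this claim can be sketched in NumPy as follows. For brevity a least-squares fit stands in for the RANSAC refit inside the loop; the starting threshold, shrink factor, and stopping floor are illustrative assumptions. The final step applies the projection model t = −D / (Ax₀ + By₀ + Cz₀) from the claim; the grid-sampling step is omitted.

```python
import numpy as np

def refine_plane(points, eps=0.05, shrink=0.7, eps_min=0.005):
    """Iteratively shrink the inlier threshold, refit, then project onto the plane."""
    P = np.asarray(points, dtype=float)
    while True:
        centroid = P.mean(axis=0)
        _, _, vt = np.linalg.svd(P - centroid)   # least-squares plane fit
        n = vt[-1]                               # unit normal (A, B, C)
        d = -n @ centroid                        # plane offset D
        dist = np.abs(P @ n + d)                 # point-to-plane distances
        inliers = dist < eps
        if inliers.all() or eps <= eps_min or inliers.sum() < 3:
            break
        P = P[inliers]                           # filter outliers
        eps *= shrink                            # shrink the threshold
    t = -d / (P @ n)                             # projection transformation model
    return P * t[:, None], (n[0], n[1], n[2], d)

# Example: noisy samples of the plane z = 1 are projected onto it along
# rays through the origin, so every returned point satisfies z = 1 exactly.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                       np.ones(200) + rng.normal(0, 0.01, 200)])
flat, plane = refine_plane(pts)
```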
5. The method for registering a vehicle-mounted lidar point cloud with an image according to claim 4, wherein obtaining the two-dimensional features of the checkerboard boundary according to the image data comprises:
detecting the inner corner points of the checkerboard through element feature points, and locating the checkerboard through similarity to obtain the two-dimensional features of the checkerboard boundary;
detecting the image data with a checkerboard corner detection algorithm to obtain two-dimensional corner points; and
sorting the two-dimensional corner points to obtain an index relationship.
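By way of illustration, OpenCV's standard checkerboard detector covers the corner detection and ordering of this claim: findChessboardCorners returns the inner corners in a consistent row-major order, which can serve as the index relationship. The (7, 5) inner-corner pattern size is an assumed example.

```python
import cv2

def detect_corners(image_bgr, pattern=(7, 5)):
    """Detect and sub-pixel-refine checkerboard inner corners, in row-major order."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    # Refine the detections to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, winSize=(5, 5), zeroZone=(-1, -1),
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    return corners.reshape(-1, 2)  # ordered (u, v) corner array

# Usage (hypothetical file name):
# corners_2d = detect_corners(cv2.imread("board.png"))
```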
6. The method for registering a vehicle-mounted lidar point cloud with an image according to claim 5, wherein registering the three-dimensional point cloud features of the checkerboard boundary with the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix comprises:
unfolding the point cloud data of the three-dimensional point cloud features of the checkerboard boundary onto a two-dimensional plane to obtain two-dimensional point cloud features;
obtaining the association of three-dimensional-two-dimensional corner pairs of the checkerboard boundary according to the index relationship; and
registering the two-dimensional point cloud features with the two-dimensional features of the checkerboard boundary in each frame of data according to the association to obtain the external calibration matrix.
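A minimal sketch of this claim's first two steps, under the assumption that both corner sets have already been sorted into the same row-major order: the unfolding is shown as a change of basis onto two in-plane axes of the fitted plane (axes chosen here for illustration), after which the association is an index-aligned pairing.

```python
import numpy as np

def unfold_to_plane(points_3d, normal):
    """Express 3D points in 2D coordinates on the fitted checkerboard plane."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])             # first in-plane axis
    if np.linalg.norm(u) < 1e-8:                 # normal parallel to z: use x
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                           # second in-plane axis
    P = np.asarray(points_3d, dtype=float)
    return np.stack([P @ u, P @ v], axis=1)

def associate(corners_3d_sorted, corners_2d_sorted):
    """Index-aligned 3D-2D corner pairs (both inputs share the same sort order)."""
    return list(zip(corners_3d_sorted, corners_2d_sorted))
```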
7. The method for registering a vehicle-mounted lidar point cloud with an image according to claim 6, wherein registering the two-dimensional point cloud features with the two-dimensional features of the checkerboard boundary in each frame of data according to the association to obtain the external calibration matrix comprises:
solving the three-dimensional corner points and the two-dimensional corner points in any frame of data with a RANSAC-based PnP pose estimation algorithm, according to the pose relationship between the lidar coordinate system and the camera coordinate system, to obtain an initial solution matrix;
performing three-dimensional-to-two-dimensional projection on each frame of data according to the initial solution matrix to obtain a two-dimensional checkerboard estimate;
taking the two-dimensional features of the checkerboard boundary as a pseudo ground truth, calculating a reprojection error from the two-dimensional checkerboard estimate and the pseudo ground truth, and setting an error target threshold;
comparing the reprojection error with the error target threshold; if the reprojection error is greater than the error target threshold, deleting the two-dimensional checkerboard estimate corresponding to that reprojection error and updating the initial solution matrix; and
re-executing the step of solving the three-dimensional corner points and the two-dimensional corner points in any frame of data with the RANSAC-based PnP pose estimation algorithm to obtain an initial solution matrix, until the reprojection error in each frame of data is less than or equal to the error target threshold, thereby obtaining the external calibration matrix.
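By way of illustration, the outer refinement loop of this claim can be sketched with OpenCV's RANSAC PnP solver: frames whose mean reprojection error exceeds the target threshold are deleted and the extrinsic is re-estimated until every remaining frame meets the target. The camera matrix K, distortion coefficients, threshold value, and minimum frame count are assumptions supplied for illustration.

```python
import cv2
import numpy as np

def calibrate(frames, K, dist, err_thresh=2.0):
    """frames: list of (pts3d Nx3, pts2d Nx2) corner pairs, one entry per frame."""
    frames = list(frames)
    while True:
        pts3d = np.vstack([f[0] for f in frames]).astype(np.float64)
        pts2d = np.vstack([f[1] for f in frames]).astype(np.float64)
        ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, pts2d, K, dist)
        if not ok:
            raise RuntimeError("PnP failed")
        # Mean reprojection error per frame against the pseudo ground truth.
        errors = []
        for p3, p2 in frames:
            proj, _ = cv2.projectPoints(p3.astype(np.float64), rvec, tvec, K, dist)
            errors.append(np.linalg.norm(proj.reshape(-1, 2) - p2, axis=1).mean())
        worst = int(np.argmax(errors))
        if errors[worst] <= err_thresh or len(frames) <= 3:
            break                       # every remaining frame meets the target
        del frames[worst]               # drop the worst frame and re-solve
    R, _ = cv2.Rodrigues(rvec)
    return np.hstack([R, tvec.reshape(3, 1)])  # 3x4 external calibration matrix
```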
8. A device for registering a vehicle-mounted lidar point cloud with an image, the device comprising:
a data and image acquisition module for acquiring point cloud data collected by a vehicle-mounted lidar and image data collected by a camera at the same time and in the same space;
a three-dimensional point cloud feature acquisition module for obtaining three-dimensional point cloud features of the checkerboard boundary according to the point cloud data;
a two-dimensional feature acquisition module for obtaining two-dimensional features of the checkerboard boundary according to the image data;
an external calibration matrix acquisition module for registering the three-dimensional point cloud features of the checkerboard boundary with the two-dimensional features of the checkerboard boundary to obtain an external calibration matrix; and
a registration module for performing point cloud registration according to the external calibration matrix.
9. An intelligent terminal, comprising a memory, a processor, and a vehicle-mounted lidar point cloud and image registration program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the method for registering a vehicle-mounted lidar point cloud with an image according to any one of claims 1-7.
10. A computer-readable storage medium, having stored thereon a vehicle-mounted lidar point cloud and image registration program which, when executed by a processor, implements the steps of the method for registering a vehicle-mounted lidar point cloud with an image according to any one of claims 1-7.
CN202310466475.9A 2023-04-26 2023-04-26 Method and device for registering laser radar point cloud and image and intelligent terminal Pending CN116433737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310466475.9A CN116433737A (en) 2023-04-26 2023-04-26 Method and device for registering laser radar point cloud and image and intelligent terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310466475.9A CN116433737A (en) 2023-04-26 2023-04-26 Method and device for registering laser radar point cloud and image and intelligent terminal

Publications (1)

Publication Number Publication Date
CN116433737A 2023-07-14

Family

ID=87084006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310466475.9A Pending CN116433737A (en) 2023-04-26 2023-04-26 Method and device for registering laser radar point cloud and image and intelligent terminal

Country Status (1)

Country Link
CN (1) CN116433737A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115143A (en) * 2023-10-16 2023-11-24 深圳市壹倍科技有限公司 Method and device for detecting wafer core particle, computer equipment and storage medium
CN117115143B (en) * 2023-10-16 2024-01-02 深圳市壹倍科技有限公司 Method and device for detecting wafer core particle, computer equipment and storage medium
CN117329971A (en) * 2023-12-01 2024-01-02 海博泰科技(青岛)有限公司 Compartment balance detection method and system based on three-dimensional laser radar
CN117372987A (en) * 2023-12-08 2024-01-09 山东高速工程检测有限公司 Road three-dimensional data processing method and device, storage medium and electronic equipment
CN117372987B (en) * 2023-12-08 2024-01-30 山东高速工程检测有限公司 Road three-dimensional data processing method and device, storage medium and electronic equipment
CN117452392A (en) * 2023-12-26 2024-01-26 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system
CN117452392B (en) * 2023-12-26 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Radar data processing system and method for vehicle-mounted auxiliary driving system
CN117470254A (en) * 2023-12-28 2024-01-30 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Vehicle navigation system and method based on radar service
CN117470254B (en) * 2023-12-28 2024-03-08 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Vehicle navigation system and method based on radar service
CN118429402A (en) * 2024-07-04 2024-08-02 哈尔滨工业大学(深圳)(哈尔滨工业大学深圳科技创新研究院) Automatic registration method, system and terminal for radar space based on image feature learning

Similar Documents

Publication Publication Date Title
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
CN111179358B (en) Calibration method, device, equipment and storage medium
CN109035320B (en) Monocular vision-based depth extraction method
CN109737874B (en) Object size measuring method and device based on three-dimensional vision technology
CN106408609B (en) A kind of parallel institution end movement position and posture detection method based on binocular vision
CN110969668A (en) Stereoscopic calibration algorithm of long-focus binocular camera
US20220036589A1 (en) Multispectral camera external parameter self-calibration algorithm based on edge features
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
WO2019232793A1 (en) Two-camera calibration method, electronic device and computer-readable storage medium
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
Eichhardt et al. Affine correspondences between central cameras for rapid relative pose estimation
CN112257713A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112381847A (en) Pipeline end head space pose measuring method and system
CN111627070B (en) Method, device and storage medium for calibrating rotation shaft
CN111445513A (en) Plant canopy volume obtaining method and device based on depth image, computer equipment and storage medium
CN113589263B (en) Method and system for jointly calibrating multiple homologous sensors
CN110458951B (en) Modeling data acquisition method and related device for power grid pole tower
CN115631245A (en) Correction method, terminal device and storage medium
Tagoe et al. Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications
CN113610086A (en) Reading correction method and device for vertical-scale pointer instrument and terminal equipment
CN114779272A (en) Laser radar odometer method and device with enhanced vertical constraint
Paudel et al. 2D–3D synchronous/asynchronous camera fusion for visual odometry
CN114299477A (en) Vehicle vision positioning method, system, equipment and readable storage medium
CN117523010B (en) Method and device for determining camera pose of vehicle, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination