CN117456146B - Laser point cloud splicing method, device, medium and equipment - Google Patents

Laser point cloud splicing method, device, medium and equipment

Info

Publication number
CN117456146B
CN117456146B
Authority
CN
China
Prior art keywords
point cloud
laser
coordinates
absolute
laser point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311764435.9A
Other languages
Chinese (zh)
Other versions
CN117456146A (en)
Inventor
闫臻
吴俊
毛勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huajian Technology Shenzhen Co ltd
Original Assignee
Huajian Technology Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huajian Technology Shenzhen Co ltd filed Critical Huajian Technology Shenzhen Co ltd
Priority to CN202311764435.9A
Publication of CN117456146A
Application granted
Publication of CN117456146B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a laser point cloud stitching method, device, medium and equipment. The method acquires collected absolute positioning coordinates, the laser point cloud of a target object, and panoramic images. It first determines the absolute laser radar coordinates at the different acquisition positions and the absolute coordinates of the laser point clouds at those positions, based on the absolute positioning coordinates and the calibration parameters between the laser radar and the absolute positioning equipment, and performs coarse matching and coarse stitching of the laser point clouds. It then extracts the visual point cloud from the panoramic images, registers the laser point cloud with the visual point cloud, and determines the absolute coordinates of the visual point cloud from the absolute coordinates of the laser point cloud. Finally, it extracts feature points from the laser and visual point clouds of the common area, matches the laser and visual feature points, and uses the matching result to achieve fine stitching. By exploiting absolute coordinate information together with feature matching of the visual point cloud, the invention effectively improves stitching precision and efficiency in environments with weak laser features.

Description

Laser point cloud splicing method, device, medium and equipment
Technical Field
The invention relates to the technical field of laser point cloud processing, and in particular to a laser point cloud stitching method, device, medium and equipment.
Background
Laser simultaneous localization and mapping (SLAM) is widely used in fields such as autonomous driving, unmanned aerial vehicles, augmented reality, and robot navigation. In laser SLAM mapping, long acquisition sessions accumulate temporal drift, so laser point cloud data must be collected in blocks and then stitched together. Moreover, because SLAM works in relative coordinates, the same points in different data sets carry different absolute coordinates, and the data sets therefore need to be registered.
Conventional laser point cloud stitching algorithms typically register data sets manually or by means of laser or visual feature points. These methods are computationally intensive and inefficient, and regions with weak laser or visual texture degrade matching accuracy and quality, causing mismatches when large-scene data are processed.
Disclosure of Invention
Based on the above, it is necessary to provide a laser point cloud splicing method, device, medium and equipment, so as to solve the problems of low efficiency, poor matching precision and poor effect in processing large scene data.
A laser point cloud stitching method, the method comprising:
acquiring the acquired absolute positioning coordinates, laser point clouds of a target object and a panoramic image;
determining absolute laser radar coordinates of the laser radar at different acquisition positions and absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and calibration parameters between the laser radar and the absolute positioning equipment, and performing rough matching and rough splicing on the laser point cloud based on the absolute laser radar coordinates;
extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common characteristic points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud;
in the common area of two adjacent data sets k and j, extracting geometric feature points in the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching result, and performing fine stitching.
In one embodiment, extracting geometric feature points in a laser point cloud includes:

dividing the laser point cloud into a preset number of segments, and calculating the curvature of each segment of the laser point cloud; the curvature is calculated as

$$c_i = \frac{1}{|S| \cdot \|X_i\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_i - X_j \right) \right\|$$

where $X_i$ denotes the coordinates of the target point $i$, $S$ denotes the set of neighboring laser points within a preset range of the target point, and $X_j$ denotes the coordinates of a nearby point $j$;

and sorting each segment of the laser point cloud by curvature, selecting a preset number N of points with the largest curvature among the non-ground points as the edge points of the target object's geometric feature points, and selecting a preset number N of points with the smallest curvature as the plane points of the target object's geometric feature points.
In one embodiment, feature matching is performed on the geometric feature points in the laser point cloud as follows:

for each edge point $p^{k}_{e}$ of the current data set, a pair of matching points $p^{j}_{1}, p^{j}_{2}$ is searched in the point cloud of the other data set, and a constraint minimizing the point-to-line distance is established:

$$d_e = \frac{\left\| \left( p^{k}_{e} - p^{j}_{1} \right) \times \left( p^{k}_{e} - p^{j}_{2} \right) \right\|}{\left\| p^{j}_{1} - p^{j}_{2} \right\|}$$

where $k$ denotes the current data set and $j$ denotes the other data set;

for each plane point $p^{k}_{p}$ of the current data set, three matching points $p^{j}_{1}, p^{j}_{2}, p^{j}_{3}$ are searched in the point cloud of the other data set, and a constraint minimizing the point-to-plane distance is established:

$$d_p = \frac{\left| \left( p^{k}_{p} - p^{j}_{1} \right) \cdot \left( \left( p^{j}_{1} - p^{j}_{2} \right) \times \left( p^{j}_{1} - p^{j}_{3} \right) \right) \right|}{\left\| \left( p^{j}_{1} - p^{j}_{2} \right) \times \left( p^{j}_{1} - p^{j}_{3} \right) \right\|}$$

a target equation is then constructed over the geometric feature points:

$$\min_{T_r} \left( \sum d_e + \sum d_p \right)$$

where $T_r$ is the pose transformation matrix between the two data sets;

and the target equation is iteratively optimized with the Levenberg-Marquardt algorithm to match the geometric feature points between the two data sets k and j, yielding a first pose transformation matrix Tr between the two data sets.
In one embodiment, extracting the geometric feature points and texture feature points in the visual point cloud and performing feature matching on them includes:

extracting geometric feature points and texture feature points, respectively, from the visual point cloud of the common region of the adjacent data sets k and j, and performing feature matching with a distance-based feature matching algorithm to obtain a second pose transformation matrix Tc between the adjacent data sets k and j.
In one embodiment, determining the comprehensive pose transformation matrix according to the matching result and performing fine stitching includes:

calculating the comprehensive pose transformation matrix T from the first pose transformation matrix Tr and the second pose transformation matrix Tc as

$$T = a \cdot T_r + b \cdot T_c, \qquad a + b = 1$$

where $a$ and $b$ are weighting parameters;

and performing point cloud fine stitching on the laser point clouds of the adjacent data sets k and j based on the pose transformation matrix T, calculating and aggregating the stitching errors of same-name points in the common area, and adjusting the parameters $a$ and $b$ so that the root mean square error rms of the stitching error is minimized.
In one embodiment, determining the absolute laser radar coordinates of the laser radar at the different acquisition positions and the absolute coordinates of the laser point cloud at the different acquisition positions based on the absolute positioning coordinates and the calibration parameters between the laser radar and the absolute positioning equipment, and performing coarse matching and coarse stitching on the laser point cloud based on the absolute laser radar coordinates, includes:

in the laser point cloud set at each acquisition position, converting the relative coordinates of the laser radar and the relative coordinates of the laser point cloud into the corresponding absolute laser radar coordinates and absolute coordinates in the world coordinate system, based on the absolute positioning coordinates synchronously acquired during acquisition of the laser point cloud and the calibration parameters between the laser radar and the absolute positioning equipment, so as to obtain the absolute laser radar coordinates and absolute coordinates at the different acquisition positions;
and carrying out feature extraction, feature matching, coordinate transformation and point cloud splicing based on the absolute laser radar coordinates at different acquisition positions and the laser point clouds acquired at the corresponding acquisition positions so as to realize coarse matching and coarse splicing.
In one embodiment, the method further comprises:
when the absolute positioning equipment is calibrated, acquiring acquired absolute positioning coordinates as reference data, performing internal reference calibration of the absolute positioning equipment by using a calibration tool, estimating a rotation matrix between an antenna of the absolute positioning equipment and a preset fixed point, and performing error correction and parameter adjustment to realize external reference calibration of the absolute positioning equipment;
when the laser radar is calibrated, scanning data of the laser radar on a known calibration target are acquired at a known position and a known gesture, real world coordinates of the calibration target are recorded, internal reference calibration of the laser radar is carried out by using a calibration tool, the position and the direction of the laser radar relative to a preset fixed point are estimated, and error correction and parameter adjustment are carried out to realize external reference calibration of the laser radar;
when the panoramic camera is calibrated, acquiring a known calibration target image, recording real world coordinates of the known calibration target, performing internal parameter calibration of the panoramic camera by using a calibration tool, estimating the position and the direction of the panoramic camera relative to a preset fixed point, and performing error correction and parameter adjustment to realize external parameter calibration of the panoramic camera.
A laser point cloud stitching device, the device comprising:
the data acquisition module is used for acquiring the acquired absolute positioning coordinates, the laser point cloud of the target object and the panoramic image;
the coarse matching and coarse splicing module is used for determining the absolute laser radar coordinates of the laser radar at different acquisition positions and the absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and the calibration parameters between the laser radar and the absolute positioning equipment, and performing coarse matching and coarse splicing on the laser point cloud based on the absolute laser radar coordinates;
extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common characteristic points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud;
the fine matching and fine splicing module is used for, in the common area of the two adjacent data sets k and j, extracting geometric feature points in the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching result, and performing fine stitching.
A computer readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the laser point cloud stitching method described above.
A laser point cloud stitching device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the laser point cloud stitching method described above.
the invention provides a laser point cloud splicing method, a device, a medium and equipment, which comprise the steps of acquiring acquired absolute positioning coordinates, and a laser point cloud and a panoramic image of a target object; the method comprises the steps of firstly determining absolute laser radar coordinates of a laser radar at different acquisition positions and absolute coordinates of laser point clouds at different acquisition positions based on absolute positioning coordinates and calibration parameters between the laser radar and absolute positioning equipment, and carrying out rough matching and rough splicing on the laser point clouds. And extracting the visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud, and determining the absolute coordinates of the visual point cloud according to the absolute coordinates of the laser point cloud. And finally, extracting characteristic points in laser and visual laser point clouds of the public area, performing laser and visual characteristic point matching, and realizing fine splicing by using a matching result. Because absolute coordinate information is utilized, and meanwhile, feature matching of the visual point cloud is combined, the splicing precision under the laser weak feature environment is improved, and the efficiency and accuracy of laser point cloud splicing are effectively improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flow chart of a laser point cloud stitching method;
FIG. 2 is a schematic structural diagram of a laser point cloud splicing device;
FIG. 3 is a block diagram of a laser point cloud splicing apparatus.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
As shown in FIG. 1, which is a flow chart of the laser point cloud stitching method in an embodiment, the method includes the following steps:
s101, acquiring the acquired absolute positioning coordinates, and the laser point cloud and panoramic image of the target object.
Specifically, the absolute positioning coordinates may be data acquired with Real-Time Kinematic (RTK) technology, a high-precision Global Navigation Satellite System (GNSS) technique. RTK provides high-precision position information for the absolute positioning equipment by differencing observations between a reference station and a mobile station.
A laser point cloud is a collection of a large number of points, each having three-dimensional coordinates (X, Y, Z) and possibly attribute information (e.g., color, intensity, etc.). The laser point cloud is scanned and acquired by a laser radar and is used for representing a three-dimensional structure of a scene.
Panoramic images are images that cover the entire visible scene, typically including all viewing angles in the horizontal and vertical directions, taken by a panoramic camera and stitched together.
In a specific embodiment, before the data is acquired, calibration is further performed on the acquisition device to ensure accuracy of the data:
when the absolute positioning device is calibrated, a vehicle or a robot is parked at a known position, acquired absolute positioning coordinates are obtained as reference data, and calibration tools (such as a calibration kit in ROS) are used for internal reference calibration of the absolute positioning device, including receiver position, antenna mounting position and attitude information. And simultaneously estimating a rotation matrix between an antenna of the absolute positioning device and a preset fixed point (such as a vehicle base), and performing error correction and parameter adjustment to realize external parameter calibration of the absolute positioning device.
When the laser radar is calibrated, scanning data of the laser radar on known calibration targets are acquired at known positions and attitudes, real world coordinates of the calibration targets are recorded, and internal reference calibration of the laser radar, including angular resolution and offset parameters of the laser radar, is performed by using a calibration tool. And simultaneously estimating the position and the direction of the laser radar relative to a preset fixed point (such as a vehicle base), and carrying out error correction and parameter adjustment so as to realize external parameter calibration of the laser radar.
When the panoramic camera is calibrated, a known calibration target image is acquired, real world coordinates of the known calibration target are recorded, internal parameter calibration of the panoramic camera is carried out by using a calibration tool (such as an OpenCV camera calibration tool), the position and the direction of the panoramic camera relative to a preset fixed point (such as a vehicle base) are estimated, and error correction and parameter adjustment are carried out to realize external parameter calibration of the panoramic camera.
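As an illustration of the camera-calibration step above, the following is a minimal sketch using OpenCV's chessboard calibration tools (OpenCV being the calibration tool the text itself mentions). The image folder, board geometry, and square size are illustrative assumptions, and a real panoramic or fisheye lens would call for OpenCV's fisheye or omnidirectional model rather than the pinhole model shown here:

    # Hedged sketch: pinhole intrinsic calibration from chessboard images.
    # Folder path, board size, and square size are illustrative assumptions.
    import glob
    import cv2
    import numpy as np

    BOARD = (9, 6)      # inner corners per row and column (assumed)
    SQUARE = 0.025      # chessboard square edge in metres (assumed)

    # 3-D corner template on the planar calibration target (Z = 0).
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

    obj_points, img_points, size = [], [], None
    for path in glob.glob("calib/*.png"):        # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]                  # (width, height)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # K: intrinsic matrix; dist: distortion coefficients; rvecs/tvecs give
    # the target pose per view and can seed the extrinsic estimation.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    print("reprojection RMS:", rms)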
In one implementation, after the required acquisition data is acquired, the acquired laser point cloud is further preprocessed, including outlier removal and downsampling, so that the data size is reduced and the matching efficiency is improved.
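For illustration, a minimal preprocessing sketch is given below, assuming the open-source Open3D library; the input file name, voxel size, and outlier thresholds are illustrative assumptions rather than values prescribed by the method:

    # Hedged sketch: outlier removal + voxel downsampling with Open3D.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan_block_01.pcd")   # hypothetical input

    # Statistical outlier removal: drop points whose mean neighbour
    # distance deviates strongly from the cloud-wide average.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Voxel-grid downsampling shrinks the data volume before matching.
    pcd = pcd.voxel_down_sample(voxel_size=0.05)

    o3d.io.write_point_cloud("scan_block_01_clean.pcd", pcd)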
S102, determining the absolute laser radar coordinates of the laser radar at different acquisition positions and the absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and calibration parameters between the laser radar and the absolute positioning equipment, and performing rough matching and rough splicing on the laser point cloud based on the absolute laser radar coordinates.
In one embodiment, within the laser point cloud at each acquisition location, the relative coordinates of the laser radar and the relative coordinates of the laser point cloud are converted into the corresponding absolute laser radar coordinates and absolute coordinates in the world coordinate system, based on the absolute positioning coordinates synchronously acquired when the laser point cloud is collected and the calibration parameters between the laser radar and the absolute positioning equipment, so as to obtain the absolute laser radar coordinates and absolute coordinates at the different acquisition positions. The purpose is to ensure that the laser point clouds and laser radar poses from different acquisition positions can be accurately aligned in the same global coordinate system, providing a reliable basis for the subsequent steps.
And carrying out feature extraction, feature matching, coordinate transformation and point cloud splicing based on the absolute laser radar coordinates at different acquisition positions and the laser point clouds acquired at the corresponding acquisition positions so as to realize coarse matching and coarse splicing.
After the absolute positioning coordinates are introduced, the absolute laser radar coordinates and the absolute coordinates of the laser point cloud at the different acquisition positions are available in the world coordinate system, so all data sets share a unified absolute coordinate frame even though they were acquired during different time periods. Once the point clouds are laid out in the world coordinate system, their mutual arrangement approximately corresponds to real space, so coarse matching and coarse stitching can be achieved quickly.
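The following NumPy sketch illustrates this coarse step; the variable names and the two-stage extrinsic chain (lidar frame to positioning-device frame to world frame) are illustrative assumptions about how the calibration parameters might be organized:

    # Hedged sketch: express each block's points in the shared world frame.
    import numpy as np

    def to_world(points_lidar, R_ext, t_ext, R_att, t_rtk):
        """Map (N, 3) lidar-frame points to world coordinates.

        R_ext, t_ext : lidar -> positioning-device extrinsic calibration
        R_att        : device attitude in the world frame (e.g. from heading)
        t_rtk        : absolute RTK position of the device antenna
        """
        p_dev = points_lidar @ R_ext.T + t_ext    # lidar -> device frame
        return p_dev @ R_att.T + t_rtk            # device -> world frame

    def coarse_stitch(blocks):
        """Coarse stitching: concatenate all blocks in the world frame.
        blocks is a hypothetical list of
        (points, R_ext, t_ext, R_att, t_rtk) tuples."""
        return np.vstack([to_world(*blk) for blk in blocks])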
S103, extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common feature points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud.
In this step, representative and distinctive points are extracted from the two point clouds as common feature points that express the correspondence between them. These common feature points may be geometric features such as edges, corner points, and curvature extrema. The optimal rotation and translation are then sought so that the common feature points of the two point clouds coincide as closely as possible, thereby aligning and fusing the clouds. Finally, the transformation matrix obtained from registration converts the relative coordinates of the visual point cloud into absolute coordinates, i.e., coordinates in the same system as the laser point cloud. This ensures the consistency and accuracy of the two point clouds.
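A standard closed-form realization of this "optimal rotation and translation" search over paired common feature points is the Kabsch/Umeyama SVD solution; the sketch below illustrates that general technique under the assumption of already-paired (N, 3) feature arrays, and is not claimed to be the patent's exact procedure:

    # Hedged sketch: rigid alignment of paired feature points via SVD.
    import numpy as np

    def rigid_align(src, dst):
        """Best-fit R, t with R @ src[i] + t ~= dst[i] (Kabsch/Umeyama)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                           # reflection-safe rotation
        t = c_dst - R @ c_src
        return R, t

    def visual_to_absolute(vis_pts, R, t):
        """Apply the registration transform so the visual point cloud
        lands in the same absolute frame as the laser point cloud."""
        return vis_pts @ R.T + t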
S104, in the common area of two adjacent data sets k and j, extracting geometric feature points from the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points from the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching results, and performing fine stitching.
In a specific embodiment, the step of extracting geometric feature points in the laser point cloud specifically includes the following. To ensure a uniform distribution of feature points, each frame of the laser point cloud is divided into a preset number of segments (the specific number can be set according to actual requirements), and the curvature of each segment of the laser point cloud is calculated as

$$c_i = \frac{1}{|S| \cdot \|X_i\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_i - X_j \right) \right\|$$

where $X_i$ denotes the coordinates of the target point $i$, $S$ denotes the set of neighboring laser points within a preset range of the target point, and $X_j$ denotes the coordinates of a nearby point $j$.
Each segment of the laser point cloud is then sorted by curvature, and each point is checked for a ground-point label: among the non-ground points, a preset number N of points with the largest curvature are selected as the edge points of the target object's geometric feature points, and a preset number N of points with the smallest curvature are selected as the plane points of the target object's geometric feature points.
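The sketch below illustrates the curvature computation and the edge/plane selection described above, assuming one scan line stored as an (N, 3) NumPy array, a boolean ground mask produced by an earlier step, and illustrative values for the neighborhood half-window and the per-segment count N:

    # Hedged sketch: curvature per point and edge/plane feature selection.
    import numpy as np

    def curvature(scan, half_window=5):
        """c_i = ||sum_j (X_i - X_j)|| / (|S| * ||X_i||), with S the
        neighbours within +/- half_window along the scan line."""
        n = len(scan)
        c = np.full(n, np.nan)
        for i in range(half_window, n - half_window):
            nbrs = np.r_[scan[i - half_window:i],
                         scan[i + 1:i + 1 + half_window]]
            diff = (scan[i] - nbrs).sum(axis=0)
            c[i] = np.linalg.norm(diff) / (len(nbrs) * np.linalg.norm(scan[i]))
        return c

    def select_features(scan, ground_mask, n_pts=4):
        """Largest-curvature non-ground points become edge points;
        smallest-curvature points become plane points."""
        c = curvature(scan)
        order = np.argsort(c)                        # NaNs sort last
        valid = order[~np.isnan(c[order])]
        non_ground = valid[~ground_mask[valid]]
        edge_idx = non_ground[-n_pts:]               # max curvature
        plane_idx = valid[:n_pts]                    # min curvature
        return scan[edge_idx], scan[plane_idx]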
In a specific embodiment, feature matching of the geometric feature points in the laser point cloud proceeds as follows. First, a point-to-line feature matching method is applied to the edge points: for each edge point $p^{k}_{e}$ of the current data set, a pair of matching points $p^{j}_{1}, p^{j}_{2}$ is searched in the other data set, and a constraint minimizing the point-to-line distance is established:

$$d_e = \frac{\left\| \left( p^{k}_{e} - p^{j}_{1} \right) \times \left( p^{k}_{e} - p^{j}_{2} \right) \right\|}{\left\| p^{j}_{1} - p^{j}_{2} \right\|}$$

where $k$ denotes the current data set and $j$ denotes the other data set.

Then, a point-to-plane feature matching method is applied to the plane points: for each plane point $p^{k}_{p}$ of the current data set, three matching points $p^{j}_{1}, p^{j}_{2}, p^{j}_{3}$ are searched in the other data set, and a constraint minimizing the point-to-plane distance is established:

$$d_p = \frac{\left| \left( p^{k}_{p} - p^{j}_{1} \right) \cdot \left( \left( p^{j}_{1} - p^{j}_{2} \right) \times \left( p^{j}_{1} - p^{j}_{3} \right) \right) \right|}{\left\| \left( p^{j}_{1} - p^{j}_{2} \right) \times \left( p^{j}_{1} - p^{j}_{3} \right) \right\|}$$

Further, a target equation is constructed over the geometric feature points:

$$\min_{T_r} \left( \sum d_e + \sum d_p \right)$$

where $T_r$ is the pose transformation between the two data sets.

Finally, the target equation is iteratively optimized with the Levenberg-Marquardt (L-M) algorithm to match the geometric feature points between the two data sets k and j, yielding a first pose transformation matrix Tr between the two data sets. The L-M algorithm is an optimization method for nonlinear least-squares problems, used for parameter estimation and fitting; it combines the advantages of gradient descent and the Gauss-Newton method and can solve nonlinear optimization problems efficiently.
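As an illustration of this optimization, the sketch below assembles the point-to-line and point-to-plane residuals and minimizes them with SciPy's Levenberg-Marquardt solver; the six-parameter pose representation (rotation vector plus translation) and the pre-matched correspondence lists are illustrative assumptions:

    # Hedged sketch: L-M refinement of the first pose transform Tr.
    # Assumes at least six correspondences (method="lm" needs m >= n).
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def point_to_line(p, a, b):
        """Distance from p to the line through matched edge points a, b."""
        return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(a - b)

    def point_to_plane(p, a, b, c):
        """Distance from p to the plane through matched points a, b, c."""
        n = np.cross(b - a, c - a)
        return abs(np.dot(p - a, n)) / np.linalg.norm(n)

    def residuals(x, edge_pairs, plane_triples):
        """x = [rx, ry, rz, tx, ty, tz]: transform dataset-k features and
        score their distances against the dataset-j matches."""
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = [point_to_line(R @ p + t, a, b) for p, (a, b) in edge_pairs]
        res += [point_to_plane(R @ p + t, a, b, c)
                for p, (a, b, c) in plane_triples]
        return np.array(res)

    def solve_tr(edge_pairs, plane_triples):
        sol = least_squares(residuals, np.zeros(6), method="lm",
                            args=(edge_pairs, plane_triples))
        Tr = np.eye(4)
        Tr[:3, :3] = Rotation.from_rotvec(sol.x[:3]).as_matrix()
        Tr[:3, 3] = sol.x[3:]
        return Tr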
In a specific embodiment, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them includes: extracting geometric feature points and texture feature points, respectively, from the visual point cloud of the common region of adjacent data sets k and j, and applying a distance-based feature matching algorithm to obtain a second pose transformation matrix Tc between the adjacent data sets k and j.
In this step, geometric feature points and texture feature points are extracted separately from the visual point cloud of the common area, and the extracted feature points are matched with a distance-based feature matching algorithm. Distance-based feature matching judges the similarity between feature points by the distance between their descriptor vectors; common choices are the Euclidean distance, the Hamming distance, and the cosine distance. Feature matching yields a second pose transformation matrix Tc between adjacent data sets k and j, representing the rotation and translation from data set k to data set j.
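A minimal sketch of such distance-based matching is shown below, using the Euclidean distance between descriptor vectors together with a ratio test to discard ambiguous candidates; the descriptor arrays and the ratio threshold are illustrative assumptions:

    # Hedged sketch: nearest-neighbour descriptor matching with ratio test.
    import numpy as np
    from scipy.spatial import cKDTree

    def match_descriptors(desc_k, desc_j, ratio=0.8):
        """Return (i, j) index pairs whose nearest neighbour is clearly
        better than the second-nearest (Lowe-style ratio test)."""
        tree = cKDTree(desc_j)
        dists, idx = tree.query(desc_k, k=2)        # two closest candidates
        keep = dists[:, 0] < ratio * dists[:, 1]    # unambiguous matches only
        return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])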
In a specific embodiment, determining the comprehensive pose transformation matrix according to the matching result and performing fine stitching includes:

calculating the comprehensive pose transformation matrix T from the first pose transformation matrix Tr and the second pose transformation matrix Tc as

$$T = a \cdot T_r + b \cdot T_c, \qquad a + b = 1$$

where $a$ and $b$ are weighting parameters;

and performing point cloud fine stitching on the laser point clouds of the adjacent data sets k and j based on the pose transformation matrix T, calculating and aggregating the stitching errors of same-name points in the common area, and adjusting the parameters $a$ and $b$ so that the root mean square error rms of the stitching error is minimized.
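The sketch below illustrates the weight adjustment as a simple one-dimensional sweep over a (with b = 1 - a), scoring each candidate T by the rms error over same-name points of the common area. Note that linearly blending two rigid transforms is only approximately rigid, so a practical implementation might re-orthonormalize the rotation part (an extra step this text does not specify):

    # Hedged sketch: choose the weights of T = a*Tr + b*Tc by rms error.
    import numpy as np

    def stitch_rms(T, pts_k, pts_j):
        """RMS distance between same-name point pairs after mapping the
        dataset-k points with homogeneous transform T."""
        mapped = pts_k @ T[:3, :3].T + T[:3, 3]
        return np.sqrt(np.mean(np.sum((mapped - pts_j) ** 2, axis=1)))

    def fuse_transforms(Tr, Tc, pts_k, pts_j, steps=101):
        """Sweep a in [0, 1] with b = 1 - a and keep the T with minimal
        rms stitching error over the common-area correspondences."""
        best_T, best_rms = None, np.inf
        for a in np.linspace(0.0, 1.0, steps):
            T = a * Tr + (1.0 - a) * Tc
            rms = stitch_rms(T, pts_k, pts_j)
            if rms < best_rms:
                best_T, best_rms = T, rms
        return best_T, best_rms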
The laser point cloud stitching method described above thus acquires the collected absolute positioning coordinates, the laser point cloud of the target object, and the panoramic images. It first determines the absolute laser radar coordinates at the different acquisition positions and the absolute coordinates of the laser point clouds at those positions, based on the absolute positioning coordinates and the calibration parameters between the laser radar and the absolute positioning equipment, and performs coarse matching and coarse stitching of the laser point clouds. It then extracts the visual point cloud from the panoramic images, registers the laser point cloud with the visual point cloud, and determines the absolute coordinates of the visual point cloud from the absolute coordinates of the laser point cloud. Finally, it extracts feature points from the laser and visual point clouds of the common area, matches the laser and visual feature points, and uses the matching result to achieve fine stitching. Because absolute coordinate information is exploited together with feature matching of the visual point cloud, stitching precision in environments with weak laser features is improved, and the efficiency and accuracy of laser point cloud stitching are effectively enhanced.
In one embodiment, as shown in fig. 2, a laser point cloud splicing apparatus is provided, which includes:
the data acquisition module 201 is used for acquiring the acquired absolute positioning coordinates, the laser point cloud of the target object and the panoramic image;
the coarse matching and coarse splicing module 202 is configured to determine an absolute laser radar coordinate of the laser radar at different acquisition positions and an absolute coordinate of the laser point cloud at different acquisition positions based on the absolute positioning coordinate and a calibration parameter between the laser radar and the absolute positioning device, and perform coarse matching and coarse splicing on the laser point cloud based on the absolute laser radar coordinate;
the fine matching and fine stitching module 203 is configured to extract a visual point cloud in the panoramic image, register the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common feature points of the laser point cloud and the visual point cloud, and determine an absolute coordinate of the visual point cloud according to an absolute coordinate of the laser point cloud;
in the common area of the two adjacent data sets k and j, extracting geometric feature points in the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching result, and performing fine stitching.
FIG. 3 shows an internal block diagram of a laser point cloud stitching device in one embodiment. As shown in FIG. 3, the laser point cloud stitching device includes a processor, a memory, and a network interface connected through a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the device stores an operating system and may also store a computer program which, when executed by the processor, enables the processor to realize the laser point cloud stitching method. The internal memory may likewise store a computer program which, when executed by the processor, causes the processor to perform the laser point cloud stitching method. Those skilled in the art will appreciate that the structure shown in FIG. 3 is merely a block diagram of the portion of the structure related to the present application and does not limit the laser point cloud stitching device to which the present application is applied; a specific device may include more or fewer components than shown, combine certain components, or arrange the components differently.
A computer readable storage medium storing a computer program which when executed by a processor performs the steps of: determining absolute laser radar coordinates of the laser radar at different acquisition positions and absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and calibration parameters between the laser radar and the absolute positioning equipment, and performing coarse matching and coarse stitching on the laser point cloud based on the absolute laser radar coordinates; extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common feature points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud; and, in the common area of two adjacent data sets k and j, extracting geometric feature points in the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching result, and performing fine stitching.
A laser point cloud stitching device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program: determining absolute laser radar coordinates of the laser radar at different acquisition positions and absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and calibration parameters between the laser radar and the absolute positioning equipment, and performing coarse matching and coarse stitching on the laser point cloud based on the absolute laser radar coordinates; extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common feature points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud; and, in the common area of two adjacent data sets k and j, extracting geometric feature points in the laser point cloud and performing feature matching on them, extracting geometric feature points and texture feature points in the visual point cloud and performing feature matching on them, determining a comprehensive pose transformation matrix according to the matching result, and performing fine stitching.
It should be noted that the above laser point cloud splicing method, device, apparatus and computer readable storage medium belong to a general inventive concept, and the content in the embodiments of the laser point cloud splicing method, device, apparatus and computer readable storage medium may be mutually applicable.
Those skilled in the art will appreciate that all or part of the methods in the above embodiments may be implemented by a computer program stored in a non-transitory computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the present application, which are described in more detail and are not thereby to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (9)

1. A laser point cloud stitching method, the method comprising:
acquiring the acquired absolute positioning coordinates, laser point clouds of a target object and a panoramic image;
determining absolute laser radar coordinates of the laser radar at different acquisition positions and absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and calibration parameters between the laser radar and the absolute positioning equipment, and performing rough matching and rough splicing on the laser point cloud based on the absolute laser radar coordinates;
extracting a visual point cloud in the panoramic image, registering the laser point cloud and the visual point cloud based on calibration parameters between the laser radar and the panoramic camera and common characteristic points of the laser point cloud and the visual point cloud, and determining absolute coordinates of the visual point cloud according to absolute coordinates of the laser point cloud;
extracting geometric feature points in the laser point cloud and carrying out feature matching on the geometric feature points in the laser point cloud in a common area of two adjacent data sets k and j, extracting geometric feature points and texture feature points in the visual point cloud and carrying out feature matching on the geometric feature points and the texture feature points in the visual point cloud, determining a comprehensive pose transformation matrix according to a matching result, and carrying out fine stitching;
the comprehensive pose transformation matrix T is expressed as:
in the above, tr is a first pose transformation matrix, tc is a second pose transformation matrix,
2. the method of claim 1, wherein extracting geometric feature points in the laser point cloud comprises:
dividing the laser point cloud into a preset number of segments, and calculating the curvature of each segment of the laser point cloud; the curvature is calculated as

$$c_i = \frac{1}{|S| \cdot \|X_i\|} \left\| \sum_{j \in S,\, j \neq i} \left( X_i - X_j \right) \right\|$$

in the above, $X_j$ denotes the coordinates of a neighboring laser point $j$ within a preset range of the target point, $X_i$ denotes the coordinates of the target point $i$, and $S$ denotes the set of such neighboring points;
and sequencing each section of laser point cloud according to the curvature, selecting preset N points with the maximum curvature from the non-ground points as edge points in the geometric feature points of the target object, and selecting preset N points with the minimum curvature as plane points in the geometric feature points of the target object.
3. The method of claim 2, wherein extracting and feature matching geometric feature points and texture feature points in the visual point cloud comprises:
and respectively extracting geometric feature points and texture feature points from the visual point cloud of the common region of the adjacent data sets k and j, and carrying out feature matching by adopting a distance-based feature matching algorithm to obtain a second pose transformation matrix Tc between the adjacent data sets k and j.
4. A method according to claim 3, wherein determining the comprehensive pose transformation matrix according to the matching result and performing fine stitching comprises:

performing point cloud fine stitching on the laser point clouds of the adjacent data sets k and j based on the pose transformation matrix T, calculating and aggregating the stitching errors of same-name point clouds in the common area, and adjusting the parameters a and b so that the root mean square error rms of the stitching error is minimized.
5. The method of claim 1, wherein determining the absolute lidar coordinates of the lidar at the different acquisition locations and the absolute coordinates of the laser point cloud at the different acquisition locations based on the absolute positioning coordinates and the calibration parameters between the lidar and the absolute positioning device, and performing coarse matching and coarse stitching on the laser point cloud based on the absolute lidar coordinates, comprises:

in the laser point cloud set at each acquisition position, converting the relative coordinates of the lidar and the relative coordinates of the laser point cloud into the corresponding absolute lidar coordinates and absolute coordinates under the world coordinate system, based on the absolute positioning coordinates synchronously acquired during acquisition of the laser point cloud and the calibration parameters between the lidar and the absolute positioning equipment, so as to obtain the absolute lidar coordinates and absolute coordinates at different acquisition positions;
and carrying out feature extraction, feature matching, coordinate transformation and point cloud splicing based on the absolute laser radar coordinates at different acquisition positions and the laser point clouds acquired at the corresponding acquisition positions so as to realize coarse matching and coarse splicing.
6. The method according to claim 1, characterized in that the method further comprises:
when the absolute positioning equipment is calibrated, acquiring acquired absolute positioning coordinates as reference data, performing internal reference calibration of the absolute positioning equipment by using a calibration tool, estimating a rotation matrix between an antenna of the absolute positioning equipment and a preset fixed point, and performing error correction and parameter adjustment to realize external reference calibration of the absolute positioning equipment;
when the laser radar is calibrated, scanning data of the laser radar on a known calibration target are acquired at a known position and a known gesture, real world coordinates of the calibration target are recorded, internal reference calibration of the laser radar is carried out by using a calibration tool, the position and the direction of the laser radar relative to a preset fixed point are estimated, and error correction and parameter adjustment are carried out to realize external reference calibration of the laser radar;
when the panoramic camera is calibrated, acquiring a known calibration target image, recording real world coordinates of the known calibration target, performing internal parameter calibration of the panoramic camera by using a calibration tool, estimating the position and the direction of the panoramic camera relative to a preset fixed point, and performing error correction and parameter adjustment to realize external parameter calibration of the panoramic camera;
the comprehensive pose transformation matrix T is expressed as:
in the above, tr is a first pose transformation matrix, tc is a second pose transformation matrix,
7. a laser point cloud stitching device, the device comprising:
the data acquisition module is used for acquiring the acquired absolute positioning coordinates, the laser point cloud of the target object and the panoramic image;
the coarse matching and coarse splicing module is used for determining the absolute laser radar coordinates of the laser radar at different acquisition positions and the absolute coordinates of the laser point cloud at different acquisition positions based on the absolute positioning coordinates and the calibration parameters between the laser radar and the absolute positioning equipment, and performing coarse matching and coarse splicing on the laser point cloud based on the absolute laser radar coordinates;
the fine matching and fine splicing module is used for extracting visual point clouds in the panoramic image, registering the laser point clouds and the visual point clouds based on calibration parameters between the laser radar and the panoramic camera and common characteristic points of the laser point clouds and the visual point clouds, and determining absolute coordinates of the visual point clouds according to absolute coordinates of the laser point clouds;
extracting geometric feature points in the laser point cloud and carrying out feature matching on the geometric feature points in the laser point cloud in a common area of two adjacent data sets k and j, extracting geometric feature points and texture feature points in the visual point cloud and carrying out feature matching on the geometric feature points and the texture feature points in the visual point cloud, determining a comprehensive pose transformation matrix according to a matching result, and carrying out fine stitching;
the comprehensive pose transformation matrix T is expressed as:
in the above, tr is a first pose transformation matrix, tc is a second pose transformation matrix,
8. a computer readable storage medium, characterized in that a computer program is stored, which, when being executed by a processor, causes the processor to perform the steps of the method according to any of claims 1 to 6.
9. A laser point cloud stitching device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 6.
CN202311764435.9A 2023-12-21 2023-12-21 Laser point cloud splicing method, device, medium and equipment Active CN117456146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311764435.9A CN117456146B (en) 2023-12-21 2023-12-21 Laser point cloud splicing method, device, medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311764435.9A CN117456146B (en) 2023-12-21 2023-12-21 Laser point cloud splicing method, device, medium and equipment

Publications (2)

Publication Number Publication Date
CN117456146A CN117456146A (en) 2024-01-26
CN117456146B (en) 2024-04-12

Family

ID=89593234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311764435.9A Active CN117456146B (en) 2023-12-21 2023-12-21 Laser point cloud splicing method, device, medium and equipment

Country Status (1)

Country Link
CN (1) CN117456146B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107093210A (en) * 2017-04-20 2017-08-25 北京图森未来科技有限公司 A kind of laser point cloud mask method and device
CN110473239A (en) * 2019-08-08 2019-11-19 刘秀萍 A kind of high-precision point cloud registration method of 3 D laser scanning
CN114140539A (en) * 2021-11-30 2022-03-04 建科公共设施运营管理有限公司 Method and device for acquiring position of indoor object
CN116152310A (en) * 2022-11-28 2023-05-23 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2594111B (en) * 2019-12-18 2023-06-07 Motional Ad Llc Camera-to-LiDAR calibration and validation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107093210A (en) * 2017-04-20 2017-08-25 北京图森未来科技有限公司 A kind of laser point cloud mask method and device
CN110473239A (en) * 2019-08-08 2019-11-19 刘秀萍 A kind of high-precision point cloud registration method of 3 D laser scanning
CN114140539A (en) * 2021-11-30 2022-03-04 建科公共设施运营管理有限公司 Method and device for acquiring position of indoor object
CN116152310A (en) * 2022-11-28 2023-05-23 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Point cloud registration method, system, equipment and storage medium based on multi-source fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A coarse-to-fine point cloud registration algorithm based on geometric features; Hu Jiatao et al.; Science Technology and Engineering; 2020-02-18 (No. 5); pp. 1948-1951 *
Hu Jiatao; Wu Xiaohong; He Xiaohai; Wang Zhengyong; Gong Jian. A coarse-to-fine point cloud registration algorithm based on geometric features. Science Technology and Engineering. 2020, (No. 05). *

Also Published As

Publication number Publication date
CN117456146A (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
Kim et al. Automatic satellite image registration by combination of matching and random sample consensus
US9799139B2 (en) Accurate image alignment to a 3D model
US8059887B2 (en) System and method for providing mobile range sensing
CN105300362B (en) A kind of photogrammetric survey method applied to RTK receiver
Moussa et al. An automatic procedure for combining digital images and laser scanner data
CN109949232B (en) Image and RTK combined measurement method, system, electronic equipment and medium
CN116295279A (en) Unmanned aerial vehicle remote sensing-based building mapping method and unmanned aerial vehicle
CN112767461A (en) Automatic registration method for laser point cloud and sequence panoramic image
Liu et al. A novel adjustment model for mosaicking low-overlap sweeping images
JP2024527156A System and method for optimal transport and epipolar geometry based image processing
CN113947638A (en) Image orthorectification method for fisheye camera
KR20230003803A (en) Automatic calibration through vector matching of the LiDAR coordinate system and the camera coordinate system
Kang et al. An automatic mosaicking method for building facade texture mapping using a monocular close-range image sequence
Zhang et al. An overlap-free calibration method for LiDAR-camera platforms based on environmental perception
Habib et al. Linear features in photogrammetric activities
CN117456146B (en) Laser point cloud splicing method, device, medium and equipment
US20230196601A1 (en) Apparatuses and methods for determining the volume of a stockpile
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
CN116137039A (en) Visual and laser sensor external parameter correction method and related equipment
Verykokou et al. Exterior orientation estimation of oblique aerial imagery using vanishing points
Shin et al. Algorithms for multi‐sensor and multi‐primitive photogrammetric triangulation
CN114255457A (en) Same-airplane image direct geographic positioning method and system based on airborne LiDAR point cloud assistance
Elaksher Automatic line matching across multiple views based on geometric and radiometric properties
Berveglieri et al. Multi-scale matching for the automatic location of control points in large scale aerial images using terrestrial scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant