CN115690380B - Registration method and system - Google Patents

Registration method and system


Publication number
CN115690380B
CN115690380B (application CN202211409046.XA)
Authority
CN
China
Prior art keywords
registration
image
unit
point
iteration
Prior art date
Legal status
Active
Application number
CN202211409046.XA
Other languages
Chinese (zh)
Other versions
CN115690380A (en)
Inventor
罗再谦
向煜
张俊
李兵
华媛媛
黄志�
黄令
韩�熙
黄国洪
毛阆
孟云豪
曹欣
钟敏
周林秋
吴琢珺
Current Assignee
CHONGQING CYBERCITY SCI-TECH CO LTD
Original Assignee
CHONGQING CYBERCITY SCI-TECH CO LTD
Priority date
Filing date
Publication date
Application filed by CHONGQING CYBERCITY SCI-TECH CO LTD filed Critical CHONGQING CYBERCITY SCI-TECH CO LTD
Priority to CN202211409046.XA priority Critical patent/CN115690380B/en
Priority to PCT/CN2022/131556 priority patent/WO2024098428A1/en
Publication of CN115690380A publication Critical patent/CN115690380A/en
Application granted granted Critical
Publication of CN115690380B publication Critical patent/CN115690380B/en


Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/00: Image enhancement or restoration
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of point cloud data, and in particular to a registration method and system. The method comprises: performing lens distortion correction on a digital camera, obtaining its internal and external parameters, and correcting distortion in the extracted image data; denoising and simplifying the three-dimensional laser point cloud; using POS information data to obtain the timestamps of the image data shot by the digital camera and of the three-dimensional lidar scanning point cloud at the same moment, extracting initial registration values corresponding to the image data and the laser point cloud from the POS data, and performing initial registration based on a collinearity equation model; and then, taking the high-precision registration value as the initial value of the iterative closest point algorithm, performing iterative closest point registration between the image-data point cloud and the laser point cloud to obtain high-precision registration. Through these two registration stages the method achieves high-precision registration of the laser point cloud with the image data, addressing the low registration accuracy and numerous constraints of current approaches.

Description

Registration method and system
Technical Field
The invention relates to the technical field of point cloud data, in particular to a registration method and a registration system.
Background
With the development of geospatial information technology, the acquisition and analysis of spatial data that accurately describe three-dimensional spatial information have become the foundation of geospatial information research. Traditional methods of acquiring three-dimensional data are time-consuming and have low sampling density, so the acquired data cannot accurately and completely describe a spatial object. In addition, spatial objects and their surroundings are complex, and accurately and comprehensively acquiring the corresponding spatial data is one of the main difficulties in current spatial information science. Terrestrial three-dimensional laser scanning provides a non-destructive, high-resolution three-dimensional measurement means that can make up for the shortcomings of traditional measurement techniques.
Three-dimensional laser scanning, also known as lidar (Light Detection and Ranging, LIDAR), has developed rapidly over the past decade or more as a new surveying and mapping technology. Owing to its high scanning speed and the high density of coordinate points obtained, it has broad application prospects in topographic surveying, reverse engineering, heritage conservation and other fields, and has gradually become an important means of acquiring three-dimensional city models; the technology has been called another revolution in the surveying and mapping field after GPS. According to the carrier, three-dimensional laser scanning is divided into airborne, vehicle-mounted and terrestrial laser scanning. The data acquisition in this work is mainly completed with a terrestrial three-dimensional laser scanner, whose data have their own characteristics compared with point clouds acquired by other means. In terrestrial laser scanning, because measurement equipment and the environment are limited, the point cloud obtained from a single measurement often covers only part of the surface of the scanned object, and complete point cloud data must be assembled through multi-station measurement. The local point clouds therefore need to be stitched and registered; effective and accurate registration provides good basic data for subsequent three-dimensional modelling of the point cloud.
Because of occlusions in the scanning environment, point clouds obtained by terrestrial three-dimensional laser scanning often contain areas that cannot be measured, producing point cloud holes and losing information in those local areas. These holes not only prevent the three-dimensional model of the scanned object from being visualized completely, but also affect parameter extraction after modelling. Since close-range images are relatively convenient to acquire, more flexible and cheaper, a digital camera can be used alongside the scanner to take supplementary photographs from different angles. In this way the advantages of the two sensors are fully exploited and an overall view of the object is obtained. However, multi-source data fusion has long been a difficulty in data processing: the problem of finding corresponding features between the two types of data must be overcome, and the information in their overlapping portions must be used to bring both into a unified coordinate system through a registration operation that geometrically aligns the respective elements. Existing fusion methods focus mainly on extracting common geometric features from the point cloud and the images and rely on the collinearity equation based on the central projection principle; this is in effect still a registration between two and three dimensions, with low accuracy and many constraints.
Therefore, a registration method needs to be developed to meet the needs of current post-acquisition data processing.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a registration method to solve the problems of low accuracy and numerous constraints in registering laser point clouds with image data.
In order to solve the problems, the invention adopts the following technical scheme:
in one aspect, the present invention provides a registration method comprising:
Performing lens distortion correction on the digital camera to obtain internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
denoising and simplifying the three-dimensional laser point cloud;
the method comprises the steps of using position, posture and time information data in a POS sensor, acquiring a timestamp of image data shot by a digital camera and three-dimensional laser radar scanning point cloud at the same moment according to a time synchronization principle, extracting a registration initial value corresponding to the image data and the laser point cloud from the POS data on the basis of the same timestamp, and carrying out initial registration based on a collineation equation model;
and (3) improving the precision of the initial registration value by using a weighted iteration method based on Robust estimation, determining a high-precision registration value, combining an iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm, and carrying out iterative closest point registration by using an image data point cloud and a laser point cloud to obtain the high-precision registration.
Further, the correcting the lens distortion of the digital camera, obtaining the internal and external parameters of the digital camera, and correcting the distortion of the extracted key frame of the image data, includes:
fixing the checkerboard image and the digital camera, and acquiring a plurality of images from different directions;
defining the upper right corner of the checkerboard image as the image coordinate origin, obtaining the number of squares on the checkerboard from the length and width of the image, and detecting corner points;
initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
and calculating distortion mapping based on the internal parameters and the distortion coefficients, and correcting the image data.
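The distortion-correction step above can be illustrated with the radial (Brown) distortion model. The sketch below applies forward radial distortion to normalized image coordinates and inverts it by fixed-point iteration; the coefficients k1, k2, the function names and the test values are assumptions for illustration, and a real calibration pipeline would use the estimated coefficients and the full tangential terms as well.

```python
import numpy as np

def distort(xn, yn, k1, k2):
    # Forward radial distortion (Brown model, radial terms only)
    r2 = xn ** 2 + yn ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return xn * factor, yn * factor

def undistort(xd, yd, k1, k2, iters=20):
    # Invert the distortion by fixed-point iteration: re-evaluate the
    # distortion factor at the current estimate and divide it out.
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn ** 2 + yn ** 2
        factor = 1.0 + k1 * r2 + k2 * r2 ** 2
        xn, yn = xd / factor, yd / factor
    return xn, yn

# Round-trip check with illustrative coefficients
xd, yd = distort(0.3, -0.2, k1=-0.1, k2=0.02)
xu, yu = undistort(xd, yd, k1=-0.1, k2=0.02)
```

The same inverse mapping, evaluated for every pixel, is what a distortion-correction (remap) step applies to the extracted image data.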
Further, the denoising and simplifying the three-dimensional laser point cloud includes:
calculating the distance from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
calculating the overall average distance and variance in the point cloud according to the distance;
taking the point with the distance larger than or equal to the sum of the overall average distance and the standard deviation multiple as a noise point, and removing the noise point;
wherein t = u + m×σ, u is the overall average distance, m is the standard-deviation multiple, σ is the standard deviation (the square root of the variance computed above), and u_i is the distance from point i to its neighbours;
setting the minimum voxel scale, putting the original point cloud into a three-dimensional grid, replacing each minimum voxel containing a plurality of points with one point, and completing voxel downsampling.
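A minimal sketch of the two processing steps above, assuming a small cloud and a brute-force neighbour search (a real implementation would use a k-d tree): statistical outlier removal with the threshold t = u + m×σ, followed by voxel downsampling that replaces the points in each voxel with their centroid. The data and parameter values are illustrative.

```python
import numpy as np

def remove_outliers(points, k=3, m=1.0):
    """Drop points whose mean distance to their k nearest neighbours
    is at or above the threshold t = u + m * sigma."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)              # ignore self-distance
    knn = np.sort(d, axis=1)[:, :k]          # k nearest-neighbour distances
    u_i = knn.mean(axis=1)                   # mean neighbour distance per point
    t = u_i.mean() + m * u_i.std()           # threshold t = u + m*sigma
    return points[u_i < t]

def voxel_downsample(points, voxel):
    """Replace all points falling in each voxel with their centroid."""
    keys = np.floor(points / voxel).astype(int)
    cells = {}
    for key, p in zip(map(tuple, keys), points):
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(v, axis=0) for v in cells.values()])

# Four clustered points plus one gross outlier
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                [0.1, 0.1, 0.0], [5.0, 5.0, 5.0]])
clean = remove_outliers(pts, k=3, m=1.0)
down = voxel_downsample(clean, voxel=1.0)
```

The outlier's mean neighbour distance far exceeds the threshold, so it is removed, and the remaining cluster collapses to a single voxel centroid.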
Further, the method for acquiring the time stamp of the image data shot by the digital camera and the three-dimensional laser radar scanning point cloud at the same moment by using the position, the gesture and the time information data in the POS sensor according to the time synchronization principle, extracting the registration initial value corresponding to the image data and the laser point cloud from the POS data on the basis of the same time stamp, and carrying out initial registration based on a collineation equation model comprises the following steps:
determining the relation between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image points as
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional lidar coordinate system; and R_IH and C_IH denote, respectively, the rotation matrix and translation vector between the image-space auxiliary coordinate system and the three-dimensional lidar coordinate system;
obtaining the conversion relation between the image point image space auxiliary coordinates of the image data key frame and the WGS-84 coordinate system thereof:
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
based on the conversion relation, a collinearity equation is obtained as follows:
x - x0 = -f · [a1(X - X_C) + b1(Y - Y_C) + c1(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
y - y0 = -f · [a2(X - X_C) + b2(Y - Y_C) + c2(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
wherein: x, y are the image-plane coordinates of the image point; x0, y0 and f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the direction cosines derived from the three angular exterior orientation elements of the image;
and performing primary registration on the laser point cloud and the image data through the collineation equation.
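The collinearity projection used for the primary registration can be sketched as follows, assuming one common rotation-angle convention; the function names and the test values are illustrative, not taken from the patent.

```python
import numpy as np

def rotation_matrix(phi, omega, kappa):
    # Rotation built from three angular orientation elements
    # (one common photogrammetric convention, assumed here).
    Rphi = np.array([[np.cos(phi), 0, -np.sin(phi)],
                     [0, 1, 0],
                     [np.sin(phi), 0, np.cos(phi)]])
    Rom = np.array([[1, 0, 0],
                    [0, np.cos(omega), -np.sin(omega)],
                    [0, np.sin(omega), np.cos(omega)]])
    Rka = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                    [np.sin(kappa), np.cos(kappa), 0],
                    [0, 0, 1]])
    return Rphi @ Rom @ Rka

def collinearity_project(P, C, R, f, x0=0.0, y0=0.0):
    """Project object-space point P through camera centre C using the
    collinearity equations; the rows of R hold the direction cosines."""
    dX = P - C
    x = x0 - f * (R[0] @ dX) / (R[2] @ dX)
    y = y0 - f * (R[1] @ dX) / (R[2] @ dX)
    return x, y

# Sanity check: with identity rotation and the camera 10 m above the
# origin, the origin projects to the principal point, and an off-axis
# point lands at f * X / 10.
R = rotation_matrix(0.0, 0.0, 0.0)
C = np.array([0.0, 0.0, 10.0])
x_pp, y_pp = collinearity_project(np.array([0.0, 0.0, 0.0]), C, R, f=0.05)
x_off, y_off = collinearity_project(np.array([1.0, 0.0, 0.0]), C, R, f=0.05)
```

Projecting each laser point this way and comparing against the image measurements is what drives the initial registration.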
Further, improving the accuracy of the initial registration value by using a weighted iteration method based on robust estimation, determining a high-precision registration value, and, with the high-precision registration value as the initial value of the iterative closest point algorithm, performing iterative closest point registration between the image-data point cloud and the laser point cloud to obtain high-precision registration, comprises the following steps:
S401, starting with least-squares iteration based on the error equation of the collinearity equation, the weight matrix of the POS position and attitude data being taken as the identity matrix in the first iteration;
S402, using the residuals of the first iteration to calculate the weight function for the second iteration, and performing the improved Danish method weight-selection iteration;
S403, calculating the unit-weight error; if it does not meet the condition, looping back to S402 until the precision meets the requirement;
S404, according to the error equation of the collinearity equation, taking the third iteration as the demarcation point; the iteration terminates when the difference between two successive corrections to the angular orientation elements of the image-data key frame is less than 0.1′.
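The weight-selection iteration can be illustrated on a simple linear model. The sketch below is a generic Danish-style iteratively reweighted least squares: the first iteration uses the identity weight matrix, and observations whose normalized residual exceeds a cutoff c are downweighted exponentially. The cutoff value, the synthetic line-fit data and the exact weight function are assumptions, not the patent's formulas.

```python
import numpy as np

def robust_weighted_lsq(A, l, c=1.5, iters=10):
    """Iteratively reweighted least squares with a Danish-style weight
    function (assumed form): unit weights on the first iteration, then
    exponential downweighting of large normalized residuals."""
    m, n = A.shape
    w = np.ones(m)                                   # first iteration: identity weights
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        v = l - A @ x                                # residuals
        s0 = np.sqrt((w * v ** 2).sum() / max(m - n, 1))
        s0 = max(s0, 1e-12)                          # guard against exact fits
        u = np.abs(v) / s0                           # normalized residuals
        w = np.where(u <= c, 1.0, np.exp(-(u - c)))  # Danish-style weights
    return x, w

# Line fit y = 2x + 1 with one gross error at x = 3
xs = np.arange(6, dtype=float)
A = np.column_stack([xs, np.ones_like(xs)])
l = 2.0 * xs + 1.0
l[3] += 20.0
x_hat, w = robust_weighted_lsq(A, l)
```

After a few reweighting rounds the gross observation carries a near-zero weight and the estimate returns to the uncontaminated solution, which is the behaviour the weight-selection iteration exploits when refining the POS-derived registration values.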
In another aspect, the invention provides a registration system comprising an image data correction module, a denoising and simplifying processing module, an initial registration module and a high-precision registration module;
the image data correction module is used for correcting lens distortion of the digital camera, acquiring internal and external parameters of the digital camera and correcting the distortion of the extracted image data;
the denoising simplifying processing module is used for denoising and simplifying the three-dimensional laser point cloud;
the initial registration module is used for acquiring a time stamp of the image data shot by the digital camera and the three-dimensional laser radar scanning point cloud at the same moment according to a time synchronization principle by using position, posture and time information data in the POS sensor, extracting registration initial values corresponding to the image data and the laser point cloud from the POS data on the basis of the same time stamp, and carrying out initial registration based on a collineation equation model;
the high-precision registration module is used for improving the accuracy of the initial registration value by using a weighted iteration method based on robust estimation to determine a high-precision registration value, and then, taking the high-precision registration value as the initial value of the iterative closest point algorithm, performing iterative closest point registration between the image-data point cloud and the laser point cloud to obtain high-precision registration.
Further, the image data correction module comprises an image acquisition unit, a corner detection unit, a calculation unit and a correction unit:
the image acquisition unit is used for fixing the checkerboard image and the digital camera and acquiring a plurality of images from different directions;
the corner detection unit is used for determining the upper right corner of the checkered image as an image coordinate origin, acquiring the number of squares on the checkered image according to the length and the width of the image, and detecting the corner;
the calculating unit is used for initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
the correction unit is used for calculating distortion mapping based on the internal parameters and the distortion coefficients and correcting the image data.
Further, the denoising and simplifying processing module comprises a distance calculating unit, an overall average distance and variance calculating unit, a removing unit and a downsampling unit;
the distance calculation unit is used for calculating the distance from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
the integral average distance and variance calculation unit is used for calculating the integral average distance and variance in the point cloud according to the distance;
the removing unit is used for regarding the point with the distance larger than or equal to the sum of the overall average distance and the standard deviation multiple as a noise point and removing the noise point;
wherein t = u + m×σ, u is the overall average distance, m is the standard-deviation multiple, σ is the standard deviation (the square root of the variance computed above), and u_i is the distance from point i to its neighbours;
the downsampling unit is used for setting the minimum voxel scale, placing the original point cloud into the three-dimensional grid, replacing each minimum voxel containing a plurality of points with one point, and completing voxel downsampling.
Further, the initial registration module comprises a coordinate relation determination unit, a coordinate conversion unit, a collineation equation determination unit and an initial registration unit;
the coordinate relation determining unit is used for determining that the relation between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image points is
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional lidar coordinate system; and R_IH and C_IH denote, respectively, the rotation matrix and translation vector between the image-space auxiliary coordinate system and the three-dimensional lidar coordinate system;
the coordinate conversion unit is used for obtaining the conversion relation between the image point image space auxiliary coordinate of the image data key frame and the WGS-84 coordinate system thereof:
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
the collinearity equation determining unit is configured to obtain, based on the conversion relation, the collinearity equation as follows:
x - x0 = -f · [a1(X - X_C) + b1(Y - Y_C) + c1(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
y - y0 = -f · [a2(X - X_C) + b2(Y - Y_C) + c2(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
wherein: x, y are the image-plane coordinates of the image point; x0, y0 and f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the direction cosines derived from the three angular exterior orientation elements of the image;
and the primary registration unit is used for carrying out primary registration on the laser point cloud and the image data through the collineation equation.
Further, the high-precision registration module comprises a first iteration unit, a weight selection iteration unit, a judging unit and an iteration termination unit;
the first iteration unit is used for starting with least square iteration according to an error equation of the collineation equation, and the first iteration uses a weight matrix of the POS position and posture data value as a unit matrix;
the weight selecting iteration unit is used for applying the residual error of the first iteration, calculating the weight function of the second iteration and carrying out improved danish method weight selecting iteration;
the judging unit is used for calculating the unit-weight error; if it does not meet the condition, the weight-selection iteration is repeated until the precision meets the requirement;
the iteration termination unit is used for taking the third iteration as the demarcation point according to the error equation of the collinearity equation; the iteration terminates when the difference between two successive corrections to the angular orientation elements of the image-data key frame is less than 0.1′.
The invention has the following beneficial effects: by combining the iterative closest point algorithm and using the high-precision registration value as its initial value, the accuracy of the initial registration value is improved, and iterative closest point registration between the image-data point cloud and the laser point cloud yields high-precision registration, solving the problem that target data cannot be acquired comprehensively and conveniently when laser point cloud data or image data are used alone.
Drawings
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings, in which:
fig. 1 is a schematic flow chart of a registration method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a registration system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples.
It should be noted that these examples are only for illustrating the present invention, and not for limiting the present invention, and simple modifications of the method under the premise of the inventive concept are all within the scope of the claimed invention.
Referring to FIG. 1, a registration method is provided, comprising:
S100, performing lens distortion correction on a digital camera to obtain internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
s200, denoising and simplifying the three-dimensional laser point cloud;
s300, acquiring a time stamp of the image data shot by the digital camera and the three-dimensional laser radar scanning point cloud at the same moment according to a time synchronization principle by using position, posture and time information data in the POS sensor, extracting a registration initial value corresponding to the image data and the laser point cloud from the POS data on the basis of the same time stamp, and carrying out initial registration based on a collineation equation model;
s400, improving the accuracy of the initial registration value by using a weighted iteration method based on Robust estimation, determining a high-accuracy registration value, combining an iterative closest point algorithm, taking the high-accuracy registration value as the initial value of the iterative closest point algorithm, and performing iterative closest point registration by using an image data point cloud and a laser point cloud to obtain the high-accuracy registration.
As an embodiment, the S100 may be:
s101, fixing between a checkerboard image and a digital camera, and acquiring a plurality of images from different directions;
s102, determining the upper right corner of the checkered image as an image coordinate origin, acquiring the number of squares on the checkered image according to the length and the width of the image, and detecting corner points;
s103, initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
s104, calculating distortion mapping based on the internal parameters and the distortion coefficients, and correcting the image data.
As an embodiment, the S200 includes:
s201, calculating the distance from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
s202, calculating the overall average distance and variance in the point cloud according to the distance;
s203, regarding the point with the distance larger than or equal to the sum of the integral average distance and the standard deviation times sigma as a noise point, and removing the noise point;
wherein t = u + m×σ, u is the overall average distance, m is the standard-deviation multiple, σ is the standard deviation (the square root of the variance computed above), and u_i is the distance from point i to its neighbours;
s204, setting a minimum voxel scale, putting the original point cloud into a three-dimensional grid, replacing each minimum voxel containing a plurality of points with one point, and completing voxel downsampling.
As an embodiment, the S300 includes:
s301, determining the relation between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image points as
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional lidar coordinate system; and R_IH and C_IH denote, respectively, the rotation matrix and translation vector between the image-space auxiliary coordinate system and the three-dimensional lidar coordinate system;
s302, obtaining a conversion relation between an image point image space auxiliary coordinate of the image data key frame and a WGS-84 coordinate system of the image point image space auxiliary coordinate:
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
s303, based on the conversion relation, obtaining a collinearity equation as follows:
x - x0 = -f · [a1(X - X_C) + b1(Y - Y_C) + c1(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
y - y0 = -f · [a2(X - X_C) + b2(Y - Y_C) + c2(Z - Z_C)] / [a3(X - X_C) + b3(Y - Y_C) + c3(Z - Z_C)]
wherein: x, y are the image-plane coordinates of the image point; x0, y0 and f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the direction cosines derived from the three angular exterior orientation elements of the image;
s304, performing primary registration on the laser point cloud and the image data through the collineation equation.
As an embodiment, the S400 includes:
the weight-selection iteration may be set up as follows:
the weight update formula is:
p_(t+1) = p_t · f(ũ), t = 0, 1, 2, …
the normalized residual is used as the argument of the weight function, the normalized residual being:
ũ_j = u_j / (m_0 · sqrt((Q_uu·P)_jj)), j = 1, 2, …, m
where ũ_j is the normalized residual, u_j is the residual of the j-th observation, m is the number of observations, m_0 is the unit-weight standard deviation of the observations, and (Q_uu·P)_jj are the main-diagonal elements of
Q_uu·P = E - A·Q_xx·A^T·P
where E is the identity matrix and A is the coefficient matrix of the observation equations;
the weight function of the weight-selection iteration involves the prior weight factor P_0;
the error equation of the collinearity equation is then linearized, and the iteration proceeds as follows:
S401, starting with least-squares iteration based on the error equation of the collinearity equation, the weight matrix of the POS position and attitude data being taken as the identity matrix in the first iteration;
S402, using the residuals of the first iteration to calculate the weight function for the second iteration, and performing the improved Danish method weight-selection iteration;
S403, calculating the unit-weight error; if it does not meet the condition, looping back to S402 until the precision meets the requirement;
S404, according to the error equation of the collinearity equation, taking the third iteration as the demarcation point; the iteration terminates when the difference between two successive corrections to the angular orientation elements of the image-data key frame is less than 0.1′.
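The final refinement stage described above is completed by iterative closest point registration. A standard point-to-point variant is sketched below with nearest-neighbour correspondences and an SVD-based (Kabsch) rigid update; given a good initial value, as produced by the coarse registration, it converges to the aligned pose. All names and the synthetic data are illustrative, not the patent's implementation.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # guard against reflections
    return R, cq - R @ cp

def icp(source, target, iters=10):
    """Point-to-point ICP: nearest-neighbour matching, then a rigid update."""
    src = source.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]       # closest target point per source point
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src

# Target cloud, and a source cloud offset by a small known rigid motion
target = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                   [1, 1, 0], [1, 0, 1], [0, 1, 1], [2, 1, 0]], dtype=float)
th = 0.05                                            # small rotation about z (radians)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
source = target @ R_true.T + np.array([0.05, -0.03, 0.02])
aligned = icp(source, target)
```

Because the initial offset is small relative to the point spacing, the first nearest-neighbour matching already pairs true correspondents, which is exactly why the high-precision registration value matters as the ICP initial value.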
Referring to fig. 2, a registration system includes an image data correction module 100, a denoising and simplifying processing module 200, an initial registration module 300, and a high-precision registration module 400;
the image data correction module 100 is configured to perform lens distortion correction on the digital camera, obtain internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
the denoising and simplifying processing module 200 is used for denoising and simplifying the three-dimensional laser point cloud;
the initial registration module 300 is configured to obtain, according to a time synchronization principle, a timestamp of the image data shot by the digital camera and the three-dimensional laser radar scanning point cloud at the same moment by using position, posture and time information data in the POS sensor, extract, on the basis of the same timestamp, a registration initial value corresponding to the image data and the laser point cloud from the POS data, and perform initial registration based on a collineation equation model;
the high-precision registration module 400 is configured to improve the accuracy of the initial registration value by using a weighted iteration method based on robust estimation to determine a high-precision registration value, and then, taking the high-precision registration value as the initial value of the iterative closest point algorithm, to perform iterative closest point registration between the image-data point cloud and the laser point cloud to obtain high-precision registration.
As an embodiment, the image data correction module 100 includes an image acquisition unit 101, a corner detection unit 102, a calculation unit 103, and a correction unit 104:
the image acquisition unit 101 is configured to fix the checkerboard image and the digital camera and to acquire a plurality of images from different directions;
the corner detection unit 102 is configured to determine an upper right corner of the checkered image as an origin of coordinates of the image, obtain the number of squares on the checkered image according to the length and width of the image, and perform corner detection;
the calculating unit 103 is used for initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
the correction unit 104 is configured to calculate a distortion map based on the internal parameter and the distortion coefficient, and correct the image data.
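As an illustration of the correction unit's "distortion mapping" step, the following numpy sketch builds a radial-distortion map from intrinsics (fx, fy, cx, cy) and two coefficients k1, k2, then remaps an image through it. The two-coefficient Brown radial model and the nearest-neighbour remap are simplifying assumptions for demonstration; the patent does not specify the model, and the parameters would come from the checkerboard calibration described above.

```python
import numpy as np

def distortion_map(w, h, K, k1, k2):
    """For every corrected pixel (u, v), the distorted source coordinate it
    should be sampled from, under a two-term radial model (an assumption)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx                  # normalised image coordinates
    y = (v - cy) / fy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale * fx + cx, y * scale * fy + cy

def undistort(img, map_x, map_y):
    """Nearest-neighbour remap of img through the distortion mapping."""
    h, w = img.shape[:2]
    xi = np.clip(np.rint(map_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(map_y).astype(int), 0, h - 1)
    return img[yi, xi]
```

With k1 = k2 = 0 the map is the identity, which gives a simple sanity check before applying real calibration output.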
As an implementation manner, the denoising and simplifying processing module 200 includes a distance calculating unit 201, an ensemble average distance and variance calculating unit 202, a removing unit 203, and a downsampling unit 204;
the distance calculating unit 201 is configured to calculate distances from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
the overall average distance and variance calculating unit 202 is configured to calculate an overall average distance and variance in the point cloud according to the distance;
the removing unit 203 is configured to take a point whose distance is greater than or equal to the threshold T (the sum of the overall average distance and the standard-deviation multiple times σ) as a noise point, and to remove the noise point;
where T = u + m × σ, u is the overall average distance, m is the standard-deviation multiple, σ is the variance, and u_i is the distance of point i; a point with u_i ≥ T is treated as noise;
the downsampling unit 204 is configured to set a minimum voxel scale, place the original point cloud into a three-dimensional grid, replace each minimum voxel containing a plurality of points with one point, and complete voxel downsampling.
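The denoising and voxel-downsampling steps of module 200 can be sketched in numpy as follows. This is illustrative only: the neighbour count k, multiple m, and brute-force distance computation are assumptions, and σ follows the document's naming (it calls the spread term the "variance").

```python
import numpy as np

def remove_outliers(points, k=6, m=1.0):
    """Statistical outlier removal: mean distance u_i of each point to its k
    nearest neighbours; points with u_i >= T = u + m*sigma are noise."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    ui = d[:, 1:k + 1].mean(axis=1)    # skip the zero self-distance
    T = ui.mean() + m * ui.std()
    return points[ui < T]

def voxel_downsample(points, voxel):
    """Voxel-grid downsampling: replace all points falling in the same
    minimum voxel by their centroid."""
    keys = np.floor(points / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out
```

A real pipeline would use a spatial index for the neighbour search; the threshold logic and voxel replacement mirror the units described above.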
As an embodiment, the initial registration module 300 includes a coordinate relation determination unit 301, a coordinate conversion unit 302, a collineation equation determination unit 303, and an initial registration unit 304;
the coordinate relation determining unit 301 is configured to determine a relation between a three-dimensional laser radar coordinate of the laser point cloud and an image space auxiliary coordinate of the same-name image point as follows:
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional laser radar coordinate system; R_IH and C_IH respectively denote the rotation matrix and the translation variable between the image-space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
the coordinate conversion unit 302 is configured to obtain a conversion relationship between the auxiliary coordinates of the image point image space of the key frame of the image data and the WGS-84 coordinate system thereof:
C_IW = T_KW + R_KW · T_HK + R_KW · R_HK · C_IH
the collinearity equation determining unit 303 is configured to derive, based on the conversion relation, a collinearity equation as follows:
x = x_0 - f · [a_1(X - X_C) + b_1(Y - Y_C) + c_1(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
y = y_0 - f · [a_2(X - X_C) + b_2(Y - Y_C) + c_2(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
where: x, y are the image-plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; a_i, b_i, c_i (i = 1, 2, 3) are the elements of the rotation matrix formed from the angular exterior orientation elements of the image;
the primary registration unit 304 is configured to perform primary registration on the laser point cloud and the image data through the collinearity equation.
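The projection performed by the primary registration unit can be sketched directly from the collinearity equations. This is the standard photogrammetric form; the sign convention of the denominator and the row layout of the rotation matrix are assumptions that vary between conventions.

```python
import numpy as np

def collinearity_project(P, Xc, R, f, x0=0.0, y0=0.0):
    """Image-plane coordinates (x, y) of object point P seen from camera
    centre Xc, with rotation matrix R = [[a1,b1,c1],[a2,b2,c2],[a3,b3,c3]],
    focal length f and principal point (x0, y0)."""
    dX = np.asarray(P, dtype=float) - np.asarray(Xc, dtype=float)
    den = R[2] @ dX                    # a3(X-Xc) + b3(Y-Yc) + c3(Z-Zc)
    x = x0 - f * (R[0] @ dX) / den
    y = y0 - f * (R[1] @ dX) / den
    return x, y
```

Evaluating this for each laser point with the POS-derived exterior orientation gives the image-plane locations used for the initial registration.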
As an implementation manner, the high-precision registration module 400 includes a first iteration unit 401, a weighting iteration unit 402, a determination unit 403, and an iteration termination unit 404;
the first iteration unit 401 is configured to begin with a least-squares iteration according to the error equation of the collinearity equation, the first iteration using the identity matrix as the weight matrix of the POS position and attitude data values;
the weight-selection iteration unit 402 is configured to apply the residuals of the first iteration, calculate the weight function for the second iteration, and perform the improved Danish-method weight-selection iteration;
the determining unit 403 is configured to calculate the unit-weight error and, if it does not meet the condition, loop back to the weight-selection iteration unit 402 until the accuracy meets the requirement;
the iteration termination unit 404 is configured to take the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between two successive computations of the angular-element corrections of the image-data key frame is less than 0.1′.
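The weight-selection loop above is an iteratively reweighted least squares. The sketch below applies a generic Danish-style scheme to a toy linear problem rather than the collinearity error equation; the constant c, the exact weight-function form, and the convergence test are assumptions, and the first pass uses the identity weight matrix as described.

```python
import numpy as np

def danish_irls(A, l, c=2.0, iters=10, tol=1e-10):
    """Robust weighted least squares for observations l = A x + v.
    First iteration: identity weights; later iterations down-weight large
    residuals with a Danish-style weight w = exp(-(|v|/(c*s0))**2)."""
    n, p = A.shape
    w = np.ones(n)
    x = np.zeros(p)
    for _ in range(iters):
        W = np.diag(w)
        x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        v = l - A @ x_new              # residuals of this pass
        # unit-weight error (floored to avoid division by zero at convergence)
        s0 = max(np.sqrt((w * v * v).sum() / max(n - p, 1)), 1e-12)
        arg = np.minimum(np.abs(v) / (c * s0), 30.0)
        w = np.where(np.abs(v) <= c * s0, 1.0, np.exp(-arg * arg))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The effect is that gross errors (e.g. a bad POS observation) are driven toward zero weight, so the refined registration values are much less biased than a plain least-squares fit.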
Finally, it is noted that the above-mentioned embodiments illustrate rather than limit the invention, and it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A registration method, comprising:
Performing lens distortion correction on the digital camera to obtain internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
denoising and simplifying the three-dimensional laser point cloud;
using position, attitude and time information data in a POS sensor to acquire, according to a time-synchronization principle, the timestamp at which the digital camera captured the image data and the three-dimensional laser radar scanned the point cloud at the same moment, extracting, on the basis of the same timestamp, initial registration values corresponding to the image data and the laser point cloud from the POS data, and performing initial registration based on a collinearity-equation model;
improving the precision of the initial registration values by a weighted iteration method based on robust estimation to determine high-precision registration values, and, in combination with an iterative closest point algorithm, using the high-precision registration values as the initial values of the iterative closest point algorithm and performing iterative-closest-point registration with the image-data point cloud and the laser point cloud, thereby obtaining a high-precision registration;
wherein said using position, attitude and time information data in the POS sensor to acquire, according to the time-synchronization principle, the timestamp at which the digital camera captured the image data and the three-dimensional laser radar scanned the point cloud at the same moment, extracting, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud from the POS data, and performing initial registration based on a collinearity-equation model comprises:
determining the relation between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image points as
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional laser radar coordinate system; R_IH and C_IH respectively denote the rotation matrix and the translation variable between the image-space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
obtaining the conversion relation between the image point image space auxiliary coordinates of the image data key frame and the WGS-84 coordinate system thereof:
C_IW = T_KW + R_KW · T_HK + R_KW · R_HK · C_IH
where [X_W Y_W Z_W] are the coordinates in the WGS-84 coordinate system; R_IW and C_IW denote the rotation matrix and the translation variable between the image-space auxiliary coordinate system and the WGS-84 coordinate system; R_KW and T_KW respectively denote the rotation matrix and the translation between the inertial-navigation coordinate system and the WGS-84 coordinate system; R_HW and T_HW respectively denote the rotation matrix and the translation between the three-dimensional laser radar coordinate system and the WGS-84 coordinate system; R_HK and T_HK respectively denote the rotation matrix and the translation between the inertial-navigation coordinate system and the three-dimensional laser radar coordinate system;
based on the conversion relation, a collinearity equation is obtained as follows:
x = x_0 - f · [a_1(X - X_C) + b_1(Y - Y_C) + c_1(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
y = y_0 - f · [a_2(X - X_C) + b_2(Y - Y_C) + c_2(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
where: x, y are the image-plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; a_i, b_i, c_i (i = 1, 2, 3) are the elements of the rotation matrix formed from the angular exterior orientation elements of the image;
and performing primary registration on the laser point cloud and the image data through the collineation equation.
2. The registration method according to claim 1, wherein the performing lens distortion correction on the digital camera, obtaining internal and external parameters of the digital camera, and performing distortion correction on the extracted key frame of the image data, includes:
fixing the checkerboard image and the digital camera, and acquiring a plurality of images from different directions;
defining the upper-right corner of the checkerboard image as the origin of the image coordinates, obtaining the number of squares on the checkerboard from the length and width of the image, and detecting the corner points;
initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
and calculating distortion mapping based on the internal parameters and the distortion coefficients, and correcting the image data.
3. The registration method according to claim 1, wherein the denoising and compacting process for the three-dimensional laser point cloud includes:
calculating the distance from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
calculating the overall average distance and variance in the point cloud according to the distance;
taking a point whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times the variance as a noise point, and removing the noise point;
setting the minimum voxel scale, putting the original point cloud into a three-dimensional grid, replacing each minimum voxel containing a plurality of points with one point, and completing voxel downsampling.
4. The registration method according to claim 1, wherein the step of using a weighted iteration method based on Robust estimation to improve the accuracy of initial registration values, determining high-accuracy registration values, using the high-accuracy registration values as initial values of an iterative closest point algorithm in combination with the iterative closest point algorithm, performing iterative closest point registration using an image data point cloud and a laser point cloud, and obtaining high-accuracy registration includes:
S401, starting with a least-squares iteration according to the error equation of the collinearity equation, the first iteration using the identity matrix as the weight matrix of the POS position and attitude data values;
S402, applying the residuals of the first iteration, calculating the weight function for the second iteration, and performing the improved Danish-method weight-selection iteration;
S403, calculating the unit-weight error and, if it does not meet the condition, looping back to S402 until the accuracy meets the requirement;
S404, according to the error equation of the collinearity equation, taking the third iteration as the demarcation point, the condition for terminating the iteration being that the difference between two successive computations of the angular-element corrections of the image-data key frame is less than 0.1′.
5. The registration system is characterized by comprising an image data correction module, a denoising simplifying processing module, an initial registration module and a high-precision registration module;
the image data correction module is used for correcting lens distortion of the digital camera, acquiring internal and external parameters of the digital camera and correcting the distortion of the extracted image data;
the denoising simplifying processing module is used for denoising and simplifying the three-dimensional laser point cloud;
the initial registration module is used for acquiring, by using position, attitude and time information data in the POS sensor and according to a time-synchronization principle, the timestamp at which the digital camera captured the image data and the three-dimensional laser radar scanned the point cloud at the same moment, extracting, on the basis of the same timestamp, initial registration values corresponding to the image data and the laser point cloud from the POS data, and performing initial registration based on a collinearity-equation model;
the high-precision registration module is used for improving the precision of the initial registration values by a weighted iteration method based on robust estimation to determine high-precision registration values, and, in combination with an iterative closest point algorithm, using the high-precision registration values as the initial values of the iterative closest point algorithm and performing iterative-closest-point registration with the image-data point cloud and the laser point cloud to obtain a high-precision registration;
the initial registration module comprises a coordinate relation determination unit, a coordinate conversion unit, a collineation equation determination unit and an initial registration unit;
the coordinate relation determining unit is used for determining that the relation between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image points is
[X_I Y_I Z_I]^T = R_IH · [X_H Y_H Z_H]^T + C_IH
where [X_I Y_I Z_I]^T are the image-space auxiliary coordinates of the image point; [X_H Y_H Z_H]^T are the coordinates of the laser point in the three-dimensional laser radar coordinate system; R_IH and C_IH respectively denote the rotation matrix and the translation variable between the image-space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
the coordinate conversion unit is used for obtaining the conversion relation between the image point image space auxiliary coordinate of the image data key frame and the WGS-84 coordinate system thereof:
C_IW = T_KW + R_KW · T_HK + R_KW · R_HK · C_IH
where [X_W Y_W Z_W] are the coordinates in the WGS-84 coordinate system; R_IW and C_IW denote the rotation matrix and the translation variable between the image-space auxiliary coordinate system and the WGS-84 coordinate system; R_KW and T_KW respectively denote the rotation matrix and the translation between the inertial-navigation coordinate system and the WGS-84 coordinate system; R_HW and T_HW respectively denote the rotation matrix and the translation between the three-dimensional laser radar coordinate system and the WGS-84 coordinate system; R_HK and T_HK respectively denote the rotation matrix and the translation between the inertial-navigation coordinate system and the three-dimensional laser radar coordinate system;
the collinearity equation determining unit is configured to obtain, based on the conversion relation, the collinearity equation as follows:
x = x_0 - f · [a_1(X - X_C) + b_1(Y - Y_C) + c_1(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
y = y_0 - f · [a_2(X - X_C) + b_2(Y - Y_C) + c_2(Z - Z_C)] / [a_3(X - X_C) + b_3(Y - Y_C) + c_3(Z - Z_C)]
where: x, y are the image-plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object-space coordinates of the camera centre; X, Y, Z are the object-space coordinates of the laser point; a_i, b_i, c_i (i = 1, 2, 3) are the elements of the rotation matrix formed from the angular exterior orientation elements of the image;
and the primary registration unit is used for carrying out primary registration on the laser point cloud and the image data through the collineation equation.
6. The registration system of claim 5, wherein the image data correction module comprises an image acquisition unit, a corner detection unit, a calculation unit, and a correction unit:
the image acquisition unit is used for fixing the checkerboard image relative to the digital camera and acquiring a plurality of images from different directions;
the corner detection unit is used for taking the upper-right corner of the checkerboard image as the origin of the image coordinates, obtaining the number of squares on the checkerboard from the length and width of the image, and detecting the corner points;
the calculating unit is used for initializing parameters and performing nonlinear optimization, and calculating internal parameters and distortion coefficients of the digital camera;
the correction unit is used for calculating distortion mapping based on the internal parameters and the distortion coefficients and correcting the image data.
7. The registration system of claim 5, wherein the denoising refinement processing module comprises a distance calculation unit, an ensemble average distance and variance calculation unit, a removal unit, and a downsampling unit;
the distance calculation unit is used for calculating the distance from each point in the three-dimensional laser point cloud to a plurality of immediately adjacent points;
the integral average distance and variance calculation unit is used for calculating the integral average distance and variance in the point cloud according to the distance;
the removing unit is used for taking a point whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times the variance as a noise point, and removing the noise point;
the downsampling unit is used for setting the minimum voxel scale, placing the original point cloud into the three-dimensional grid, replacing each minimum voxel containing a plurality of points with one point, and completing voxel downsampling.
8. The registration system of claim 5, wherein the high-precision registration module comprises a first iteration unit, a weighted iteration unit, a decision unit, and an iteration termination unit;
the first iteration unit is used for starting with a least-squares iteration according to the error equation of the collinearity equation, the first iteration using the identity matrix as the weight matrix of the POS position and attitude data values;
the weight-selection iteration unit is used for applying the residuals of the first iteration, calculating the weight function for the second iteration, and performing the improved Danish-method weight-selection iteration;
the judging unit is used for calculating the unit-weight error and, if it does not meet the condition, looping back to the weight-selection iteration unit until the accuracy meets the requirement;
the iteration termination unit is used for taking the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between two successive computations of the angular-element corrections of the image-data key frame is less than 0.1′.