WO2024098428A1 - Registration method and system - Google Patents

Registration method and system

Info

Publication number
WO2024098428A1
WO2024098428A1 PCT/CN2022/131556
Authority
WO
WIPO (PCT)
Prior art keywords
image
registration
point cloud
unit
image data
Prior art date
Application number
PCT/CN2022/131556
Other languages
English (en)
French (fr)
Inventor
罗再谦
向煜
张俊
李兵
华媛媛
黄志�
黄令
韩�熙
黄国洪
毛阆
孟云豪
曹欣
钟敏
周林秋
吴琢珺
Original Assignee
重庆数字城市科技有限公司
Priority date
Filing date
Publication date
Application filed by 重庆数字城市科技有限公司
Publication of WO2024098428A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present invention relates to the technical field of point cloud data, and in particular to a registration method and system.
  • 3D laser scanning is also known as Light Detection and Ranging (LIDAR).
  • 3D laser scanning is divided into airborne laser scanning, vehicle-mounted laser scanning, and ground 3D laser scanning. The data acquisition in this paper is mainly completed by ground 3D laser scanners. Compared with point cloud data obtained by other methods, this data has its own characteristics.
  • the point cloud data obtained by ground 3D laser scanning often contains various areas that cannot be measured, resulting in point cloud holes and loss of local area information. These holes not only make it impossible to fully visualize the 3D model of the scanned object, but also affect the extraction of parameters after modeling. Since close-range images are relatively convenient to obtain, with high flexibility and low cost, the scanner can use a digital camera to take supplementary photos from different angles. In this way, the advantages of the two sensors are fully utilized to obtain the full picture of the object. However, multi-source data fusion has always been a difficult point in the field of data processing.
  • the present invention aims to provide a registration method to solve the problems of low accuracy and many restrictions in the registration of laser point cloud and image data.
  • the present invention adopts the following technical solutions:
  • the present invention provides a registration method, comprising
  • the timestamps of the image data taken by the digital camera and the 3D laser radar scanning point cloud at the same time are obtained.
  • the initial registration value corresponding to the image data and the laser point cloud is extracted from the POS data, and the initial registration is performed based on the collinear equation model.
  • the weighted iteration method based on robust estimation is used to improve the accuracy of the initial registration value and determine a high-precision registration value, which is then combined with the iterative closest point algorithm.
  • the high-precision registration value is used as the initial value of the iterative closest point algorithm, and the image data point cloud and laser point cloud are used for iterative closest point registration to obtain high-precision registration.
  • the step of performing lens distortion correction on the digital camera, obtaining internal and external parameters of the digital camera, and performing distortion correction on the extracted key frames of the image data includes:
  • the chessboard target and the digital camera are kept fixed relative to each other, and multiple images are acquired from different directions;
  • the upper right corner of the chessboard image is set as the image coordinate origin, the number of squares on the chessboard is obtained according to the length and width of the image, and corner point detection is performed;
  • parameter initialization and nonlinear optimization are performed to calculate the intrinsic parameters and distortion coefficients of the digital camera;
  • a distortion map is calculated based on the intrinsic parameters and distortion coefficients to correct the image data.
  • the denoising and streamlining of the three-dimensional laser point cloud includes:
  • the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points is calculated;
  • the overall average distance and the variance of the point cloud are calculated from these distances;
  • points whose distance is greater than or equal to the sum of the overall average distance and the standard deviation multiple times σ are regarded as noise points and removed, where t = u + m*σ, u is the average distance, m is the standard deviation multiple, σ is the variance, and u_i is the distance;
  • the minimum voxel scale is set, the original point cloud is placed in a three-dimensional grid, and each minimum voxel containing multiple points is replaced by one of its points to complete voxel downsampling.
  • the position, posture and time information data in the POS sensor are used to obtain the timestamps of the image data taken by the digital camera and the three-dimensional laser radar scanning point cloud at the same time according to the time synchronization principle.
  • the initial registration value corresponding to the image data and the laser point cloud is extracted from the POS data, and the initial registration is performed based on the collinear equation model, including:
  • [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point
  • [X_H Y_H Z_H]^T is the coordinate of the laser point in the 3D laser radar coordinate system
  • R_IH and C_IH represent the rotation matrix and translation vector between the image space auxiliary coordinate system and the 3D laser radar coordinate system, respectively;
  • x, y are the image plane coordinates of the image point
  • x_0, y_0, f are the interior orientation elements of the image
  • X_C, Y_C, Z_C are the object space coordinates of the camera center
  • X, Y, Z are the object space coordinates of the laser point
  • a_i, b_i, c_i (i = 1, 2, 3) are the 3 exterior orientation elements of the image
  • the laser point cloud and image data are initially aligned using the collinearity equation.
  • the method of using the weighted iteration method based on robust estimation to improve the accuracy of the initial registration value, determining a high-precision registration value, and, combined with the iterative closest point algorithm, using the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration includes:
  • a least-squares iteration is started according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
  • the residuals of the first iteration are applied to calculate the weight function of the second iteration, and the improved Danish-method weight-selection iteration is performed;
  • the unit-weight mean error is calculated; if the condition is not met, the weight-selection iteration is repeated until the accuracy meets the requirement;
  • the third iteration is taken as the demarcation point, and the condition for terminating the iteration is that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
  • the present invention provides a registration system, including an image data correction module, a denoising and streamlining processing module, an initial registration module and a high-precision registration module;
  • the image data correction module is used to perform lens distortion correction on the digital camera, obtain internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
  • the denoising and streamlining processing module is used to perform denoising and streamlining processing on the three-dimensional laser point cloud
  • the initial registration module is used to use the position, posture and time information data in the POS sensor, and according to the principle of time synchronization, obtain the timestamps of the image data taken by the digital camera and the three-dimensional laser radar scanning point cloud at the same time, extract the initial registration value corresponding to the image data and the laser point cloud from the POS data on the basis of the same timestamp, and perform initial registration based on the collinear equation model;
  • the high-precision registration module is used to improve the accuracy of the initial registration value by using the weighted iteration method based on robust estimation, determine the high-precision registration value, combine iterative closest point algorithm, use the high-precision registration value as the initial value of the iterative closest point algorithm, use the image data point cloud and the laser point cloud to perform iterative closest point registration, and obtain high-precision registration.
  • the image data correction module includes an image acquisition unit, a corner point detection unit, a calculation unit and a correction unit:
  • the image acquisition unit is used to acquire multiple images from different directions with the chessboard target and the digital camera held fixed relative to each other;
  • the corner point detection unit is used to define the upper right corner of the chessboard image as the image coordinate origin, obtain the number of squares on the chessboard according to the length and width of the image, and perform corner point detection;
  • the calculation unit is used to perform parameter initialization and nonlinear optimization, and calculate the intrinsic parameters and distortion coefficients of the digital camera;
  • the correction unit is used to calculate the distortion map based on the intrinsic parameters and the distortion coefficients, and correct the image data.
  • the denoising and streamlining processing module includes a distance calculation unit, an overall average distance and variance calculation unit, a removal unit and a downsampling unit;
  • the distance calculation unit is used to calculate the distance from each point in the three-dimensional laser point cloud to the previous several adjacent points;
  • the overall average distance and variance calculation unit is used to calculate the overall average distance and variance in the point cloud according to the distance;
  • the removal unit is used to regard points whose distance is greater than or equal to the sum of the overall average distance and the standard deviation multiple times σ as noise points and remove them, where t = u + m*σ, u is the average distance, m is the standard deviation multiple, σ is the variance, and u_i is the distance;
  • the downsampling unit is used to set the minimum voxel scale, put the original point cloud into a three-dimensional grid, and replace each minimum voxel containing multiple points with one of the points to complete voxel downsampling.
  • the initial registration module includes a coordinate relationship determination unit, a coordinate conversion unit, a collinearity equation determination unit and an initial registration unit;
  • the coordinate relationship determination unit is used to determine the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image point:
  • [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point
  • [X_H Y_H Z_H]^T is the coordinate of the laser point in the 3D laser radar coordinate system
  • R_IH and C_IH represent the rotation matrix and translation vector between the image space auxiliary coordinate system and the 3D laser radar coordinate system, respectively;
  • the coordinate conversion unit is used to obtain the conversion relationship between the image point image space auxiliary coordinates of the key frame of the image data and its WGS-84 coordinate system:
  • the collinear equation determining unit is used to obtain the collinear equation based on the conversion relationship as follows:
  • x, y are the image plane coordinates of the image point
  • x_0, y_0, f are the interior orientation elements of the image
  • X_C, Y_C, Z_C are the object space coordinates of the camera center
  • X, Y, Z are the object space coordinates of the laser point
  • a_i, b_i, c_i (i = 1, 2, 3) are the 3 exterior orientation elements of the image
  • the initial registration unit is used to perform initial registration of the laser point cloud and the image data through the collinearity equation.
  • the high-precision registration module includes a first iteration unit, a weight-selection iteration unit, a determination unit, and an iteration termination unit;
  • the first iteration unit is used to start a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
  • the weight-selection iteration unit is used to apply the residuals of the first iteration, calculate the weight function of the second iteration, and perform the improved Danish-method weight-selection iteration;
  • the determination unit is used to calculate the unit-weight mean error; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
  • the iteration termination unit is used to take the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
  • the present invention combines the iterative closest point algorithm, uses the high-precision registration value as the initial value of the iterative closest point algorithm, improves the accuracy of the initial registration value, and performs iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration, thereby solving the problem that the target data cannot be acquired comprehensively and conveniently when only laser point cloud data or image data is used.
  • FIG. 1 is a schematic flow chart of a registration method according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of a registration system according to an embodiment of the present invention.
  • a registration method including
  • the S100 may be:
  • S101, the chessboard target and the digital camera are kept fixed relative to each other, and multiple images are acquired from different directions;
  • S102, the upper right corner of the chessboard image is defined as the image coordinate origin, the number of squares on the chessboard is obtained according to the length and width of the image, and corner point detection is performed;
  • S103, parameter initialization and nonlinear optimization are performed to calculate the intrinsic parameters and distortion coefficients of the digital camera;
  • S104, the distortion map is calculated based on the intrinsic parameters and the distortion coefficients, and the image data is corrected.
  • the S200 includes:
  • S201, the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points is calculated;
  • S202, the overall average distance and the variance of the point cloud are calculated from these distances;
  • S203, points whose distance is greater than or equal to the sum of the overall average distance and the standard deviation multiple times σ are regarded as noise points and removed, where t = u + m*σ, u is the average distance, m is the standard deviation multiple, σ is the variance, and u_i is the distance;
  • S204, the minimum voxel scale is set, the original point cloud is placed in a three-dimensional grid, and each minimum voxel containing multiple points is replaced by one of its points to complete voxel downsampling.
  • the S300 includes:
  • [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point
  • [X_H Y_H Z_H]^T is the coordinate of the laser point in the 3D laser radar coordinate system
  • R_IH and C_IH represent the rotation matrix and translation vector between the image space auxiliary coordinate system and the 3D laser radar coordinate system, respectively;
  • x, y are the image plane coordinates of the image point
  • x_0, y_0, f are the interior orientation elements of the image
  • X_C, Y_C, Z_C are the object space coordinates of the camera center
  • X, Y, Z are the object space coordinates of the laser point
  • a_i, b_i, c_i (i = 1, 2, 3) are the 3 exterior orientation elements of the image
  • the S400 includes:
  • the condition for the weight-selection iteration and the weight formula are applied, with the standardized residuals replacing the residuals as the parameter of the weight function, where E is the observation error vector, A is the observation equation coefficient matrix, and P_0 is the weight factor;
  • S401, a least-squares iteration is started according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
  • S402, the residuals of the first iteration are applied to calculate the weight function of the second iteration, and the improved Danish-method weight-selection iteration is performed;
  • S403, the unit-weight mean error is calculated; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
  • S404, the third iteration is taken as the demarcation point, and the condition for terminating the iteration is that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
  • a registration system which includes an image data correction module 100 , a denoising and streamlining processing module 200 , an initial registration module 300 , and a high-precision registration module 400 ;
  • the image data correction module 100 is used to perform lens distortion correction on the digital camera, obtain internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
  • the denoising and streamlining processing module 200 is used to perform denoising and streamlining processing on the three-dimensional laser point cloud;
  • the initial registration module 300 is used to use the position, posture and time information data in the POS sensor, and according to the time synchronization principle, obtain the timestamps of the image data taken by the digital camera and the three-dimensional laser radar scanning point cloud at the same time, extract the initial registration value corresponding to the image data and the laser point cloud from the POS data on the basis of the same timestamp, and perform initial registration based on the collinear equation model;
  • the high-precision registration module 400 is used to improve the accuracy of the initial registration value by using the weighted iteration method based on robust estimation, determine the high-precision registration value, combine iterative closest point algorithm, use the high-precision registration value as the initial value of the iterative closest point algorithm, use the image data point cloud and the laser point cloud to perform iterative closest point registration, and obtain high-precision registration.
  • the image data correction module 100 includes an image acquisition unit 101, a corner point detection unit 102, a calculation unit 103 and a correction unit 104:
  • the image acquisition unit 101 is used to acquire multiple images from different directions with the chessboard target and the digital camera held fixed relative to each other;
  • the corner point detection unit 102 is used to define the upper right corner of the chessboard image as the image coordinate origin, obtain the number of squares on the chessboard according to the length and width of the image, and perform corner point detection;
  • the calculation unit 103 is used to perform parameter initialization and nonlinear optimization, and calculate the intrinsic parameters and distortion coefficients of the digital camera;
  • the correction unit 104 is used to calculate the distortion map based on the intrinsic parameters and the distortion coefficients, and correct the image data.
  • the denoising and streamlining processing module 200 includes a distance calculation unit 201, an overall average distance and variance calculation unit 202, a removal unit 203 and a downsampling unit 204;
  • the distance calculation unit 201 is used to calculate the distance from each point in the three-dimensional laser point cloud to the previous several adjacent points;
  • the overall average distance and variance calculation unit 202 is used to calculate the overall average distance and variance in the point cloud according to the distance;
  • the removal unit 203 is used to regard points whose distance is greater than or equal to the sum of the overall average distance and the standard deviation multiple times σ as noise points and remove them, where t = u + m*σ, u is the average distance, m is the standard deviation multiple, σ is the variance, and u_i is the distance;
  • the downsampling unit 204 is used to set the minimum voxel scale, put the original point cloud into a three-dimensional grid, and replace each minimum voxel containing multiple points with one of the points to complete voxel downsampling.
  • the initial registration module 300 includes a coordinate relationship determination unit 301, a coordinate conversion unit 302, a collinear equation determination unit 303 and a preliminary registration unit 304;
  • the coordinate relationship determination unit 301 is used to determine the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the same-name image point as follows:
  • [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point
  • [X_H Y_H Z_H]^T is the coordinate of the laser point in the 3D laser radar coordinate system
  • R_IH and C_IH represent the rotation matrix and translation vector between the image space auxiliary coordinate system and the 3D laser radar coordinate system, respectively;
  • the coordinate conversion unit 302 is used to obtain the conversion relationship between the image point image space auxiliary coordinates of the key frame of the image data and its WGS-84 coordinate system:
  • the collinear equation determining unit 303 is used to obtain the collinear equation based on the conversion relationship:
  • x, y are the image plane coordinates of the image point
  • x_0, y_0, f are the interior orientation elements of the image
  • X_C, Y_C, Z_C are the object space coordinates of the camera center
  • X, Y, Z are the object space coordinates of the laser point
  • a_i, b_i, c_i (i = 1, 2, 3) are the 3 exterior orientation elements of the image
  • the initial registration unit 304 is used to perform initial registration of the laser point cloud and the image data through the collinearity equation.
  • the high-precision registration module 400 includes a first iteration unit 401, a weighted iteration unit 402, a determination unit 403 and an iteration termination unit 404;
  • the first iteration unit 401 is used to start a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data values taken as the identity matrix in the first iteration;
  • the weight selection iteration unit 402 is used to apply the residual of the first iteration to calculate the weight function of the second iteration and perform the improved Danish weight selection iteration;
  • the determination unit 403 is used to calculate the unit weighted mean error. If the condition is not met, loop back to S402 until the accuracy meets the requirement;
  • the iteration termination unit 404 is used to use the third iteration as a demarcation point according to the error equation of the collinearity equation, and the condition for iteration termination is that the difference between the correction values of the image angle elements of the key frame of the image data after two operations is 0.1′.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to the technical field of point cloud data, and in particular to a registration method and system. The method includes: performing lens distortion correction on a digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted image data; performing denoising and streamlining on the three-dimensional laser point cloud; using POS information data to obtain the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extracting from the POS data the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model; and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration. Through these two registration passes, the method achieves high-precision registration of the laser point cloud and the image data, solving the current problems of low registration accuracy and numerous restrictive conditions.

Description

Registration Method and System
Technical Field
The present invention relates to the technical field of point cloud data, and in particular to a registration method and system.
Background Art
As the development of geospatial information technology enters a new period, acquiring and analyzing spatial data that accurately describes three-dimensional spatial information has become the basis of geospatial information research. Traditional ways of acquiring three-dimensional data are not only time-consuming but also have low sampling density, so the collected data cannot describe spatial objects accurately and completely. In addition, spatial objects and their surroundings are complex and varied, and acquiring the corresponding spatial data accurately and from all directions has become one of the main difficulties in current spatial information science. The emergence of terrestrial three-dimensional laser scanning provides a non-destructive, high-resolution three-dimensional measurement technique that makes up for the shortcomings of traditional surveying methods.
Three-dimensional laser scanning, also known as LIDAR (Light Detection and Ranging), has developed rapidly as a new surveying and mapping technology over the past decade or so. Thanks to its fast scanning speed and the high density of the coordinate points it produces, it has broad application prospects in topographic surveying, reverse engineering, the protection of historical relics, and other fields, and has gradually become an important means of acquiring three-dimensional city data models; it is also regarded as another technological innovation in surveying and mapping following GPS. According to the carrier, three-dimensional laser scanning is divided into airborne laser scanning, vehicle-mounted laser scanning, and terrestrial three-dimensional laser scanning. The data acquisition in this work is mainly completed with terrestrial three-dimensional laser scanners, and such data has its own characteristics compared with point cloud data obtained by other means. During terrestrial laser scanning, limited by the measuring equipment and the environment, the point cloud data obtained in a single measurement usually covers only part of the surface of the scanned object, and complete point cloud data has to be obtained through multi-station measurements; these local point clouds therefore need to be stitched and registered, and effective, accurate registration provides good base data for subsequent three-dimensional point cloud modeling.
Because of occlusions in the scanning environment, the point cloud data acquired by terrestrial three-dimensional laser scanning often contains areas that cannot be measured, producing point cloud holes and losing local information. These holes not only prevent the three-dimensional model of the scanned object from being visualized completely, but also affect parameter extraction after modeling. Close-range images are relatively convenient to acquire, highly flexible, and low in cost, so the operator can take supplementary photographs from different angles with a digital camera; in this way the advantages of both sensors are fully exploited to capture the whole object. Multi-source data fusion, however, has always been a difficulty in data processing: corresponding features must be found between the two kinds of data, and the information in their overlap must be used to bring both data types into a unified coordinate system through registration so that all elements are geometrically aligned. Existing fusion methods focus mainly on extracting geometric features shared by the point cloud and the image and rely on the collinearity equation under the central projection principle; this is in essence still a registration between two and three dimensions, with low accuracy and many restrictive conditions.
Therefore, a further registration method still needs to be developed to meet the processing requirements of the data collected at present.
Summary of the Invention
In view of the above shortcomings of the prior art, the present invention aims to provide a registration method to solve the problems of low accuracy and many restrictive conditions in the registration of laser point clouds and image data.
To solve the above problems, the present invention adopts the following technical solutions:
In one aspect, the present invention provides a registration method, comprising:
performing lens distortion correction on a digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
performing denoising and streamlining on the three-dimensional laser point cloud;
using the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtaining the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment; on the basis of the same timestamp, extracting from the POS data the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model;
using the weight-selection iteration method based on robust estimation to improve the accuracy of the initial registration values and determine high-precision registration values, and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration.
Further, the step of performing lens distortion correction on the digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted key frames of the image data includes:
keeping the chessboard target and the digital camera fixed relative to each other and acquiring multiple images from different directions;
defining the upper right corner of the chessboard image as the image coordinate origin, obtaining the number of squares on the chessboard according to the length and width of the image, and performing corner detection;
performing parameter initialization and nonlinear optimization to calculate the intrinsic parameters and distortion coefficients of the digital camera;
calculating a distortion map based on the intrinsic parameters and distortion coefficients and correcting the image data.
Further, the denoising and streamlining of the three-dimensional laser point cloud includes:
calculating the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points;
calculating the overall average distance and the variance of the point cloud from these distances;
regarding points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times σ as noise points and removing them;
Figure PCTCN2022131556-appb-000001
where t = u + m*σ, u is the average distance, m is the standard-deviation multiple, σ is the variance, and u_i is the distance;
setting the minimum voxel scale, placing the original point cloud in a three-dimensional grid, and replacing each minimum voxel containing multiple points with one of its points to complete voxel downsampling.
Further, the step of using the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtaining the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extracting from the POS data, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model includes:
determining the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point as
Figure PCTCN2022131556-appb-000002
where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
obtaining the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system:
Figure PCTCN2022131556-appb-000003
Figure PCTCN2022131556-appb-000004
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
based on the conversion relationship, obtaining the collinearity equation as:
Figure PCTCN2022131556-appb-000005
where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
performing initial registration of the laser point cloud and the image data through the collinearity equation (the composition of the transform chain is illustrated in the sketch that follows).
Further, the step of using the weight-selection iteration method based on robust estimation to improve the accuracy of the initial registration values, determining high-precision registration values, and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration includes:
S401, starting a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
S402, applying the residuals of the first iteration, calculating the weight function of the second iteration, and performing the improved Danish-method weight-selection iteration;
S403, calculating the unit-weight mean error; if the condition is not met, looping back to S402 until the accuracy meets the requirement;
S404, according to the error equation of the collinearity equation, taking the third iteration as the demarcation point, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
In another aspect, the present invention provides a registration system, comprising an image data correction module, a denoising and streamlining processing module, an initial registration module, and a high-precision registration module;
the image data correction module is used to perform lens distortion correction on the digital camera, obtain the internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
the denoising and streamlining processing module is used to perform denoising and streamlining on the three-dimensional laser point cloud;
the initial registration module is used to use the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtain the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extract from the POS data, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud, and perform initial registration based on the collinearity equation model;
the high-precision registration module is used to improve the accuracy of the initial registration values using the weight-selection iteration method based on robust estimation, determine high-precision registration values, and, combined with the iterative closest point algorithm, take the high-precision registration value as the initial value of the iterative closest point algorithm and perform iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration.
Further, the image data correction module includes an image acquisition unit, a corner detection unit, a calculation unit, and a correction unit:
the image acquisition unit is used to acquire multiple images from different directions with the chessboard target and the digital camera held fixed relative to each other;
the corner detection unit is used to define the upper right corner of the chessboard image as the image coordinate origin, obtain the number of squares on the chessboard according to the length and width of the image, and perform corner detection;
the calculation unit is used to perform parameter initialization and nonlinear optimization and calculate the intrinsic parameters and distortion coefficients of the digital camera;
the correction unit is used to calculate the distortion map based on the intrinsic parameters and distortion coefficients and correct the image data.
Further, the denoising and streamlining processing module includes a distance calculation unit, an overall average distance and variance calculation unit, a removal unit, and a downsampling unit;
the distance calculation unit is used to calculate the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points;
the overall average distance and variance calculation unit is used to calculate the overall average distance and the variance of the point cloud from these distances;
the removal unit is used to regard points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times σ as noise points and remove them;
Figure PCTCN2022131556-appb-000006
where t = u + m*σ, u is the average distance, m is the standard-deviation multiple, σ is the variance, and u_i is the distance;
the downsampling unit is used to set the minimum voxel scale, place the original point cloud in a three-dimensional grid, and replace each minimum voxel containing multiple points with one of its points to complete voxel downsampling.
Further, the initial registration module includes a coordinate relationship determination unit, a coordinate conversion unit, a collinearity equation determination unit, and an initial registration unit;
the coordinate relationship determination unit is used to determine the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point as
Figure PCTCN2022131556-appb-000007
where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
the coordinate conversion unit is used to obtain the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system:
Figure PCTCN2022131556-appb-000008
Figure PCTCN2022131556-appb-000009
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
the collinearity equation determination unit is used to obtain the collinearity equation based on the conversion relationship as:
Figure PCTCN2022131556-appb-000010
where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
the initial registration unit is used to perform initial registration of the laser point cloud and the image data through the collinearity equation.
Further, the high-precision registration module includes a first iteration unit, a weight-selection iteration unit, a determination unit, and an iteration termination unit;
the first iteration unit is used to start a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
the weight-selection iteration unit is used to apply the residuals of the first iteration, calculate the weight function of the second iteration, and perform the improved Danish-method weight-selection iteration;
the determination unit is used to calculate the unit-weight mean error; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
the iteration termination unit is used to take the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
The beneficial effects of the present invention are as follows: the present invention is combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm, improving the accuracy of the initial registration values, and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration, thereby solving the problem that the target data cannot be acquired comprehensively and conveniently when only laser point cloud data or image data is used.
Brief Description of the Drawings
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow chart of a registration method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a registration system according to an embodiment of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to specific embodiments.
It should be noted that these embodiments are only intended to illustrate the present invention and not to limit it; simple improvements of the method made within the concept of the present invention all fall within the scope of protection claimed by the present invention.
Referring to FIG. 1, a registration method includes
S100, performing lens distortion correction on the digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
S200, performing denoising and streamlining on the three-dimensional laser point cloud;
S300, using the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtaining the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment; on the basis of the same timestamp, extracting from the POS data the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model;
S400, using the weight-selection iteration method based on robust estimation to improve the accuracy of the initial registration values and determine high-precision registration values, and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration.
As one possible embodiment, S100 may be implemented as follows (an illustrative calibration sketch is given after step S104):
S101, the chessboard target and the digital camera are kept fixed relative to each other, and multiple images are acquired from different directions;
S102, the upper right corner of the chessboard image is defined as the image coordinate origin, the number of squares on the chessboard is obtained according to the length and width of the image, and corner point detection is performed;
S103, parameter initialization and nonlinear optimization are performed to calculate the intrinsic parameters and distortion coefficients of the digital camera;
S104, the distortion map is calculated based on the intrinsic parameters and distortion coefficients, and the image data is corrected.
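As a rough illustration of steps S101 to S104, the sketch below runs a standard chessboard calibration with OpenCV and then remaps a key frame with the resulting distortion map. The board size, termination criteria, and file names are assumptions made for the example, and OpenCV places the image origin at the top-left corner rather than the top-right corner mentioned in S102, so this is only one possible realization rather than the patented implementation.

```python
# Illustrative chessboard calibration and key-frame undistortion (assumes OpenCV is available).
import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)  # inner corners per row and column; example value, not fixed by the invention
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("chessboard_*.jpg"):                     # S101: images from different directions
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)   # S102: corner detection
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# S103: parameter initialization and nonlinear optimization of intrinsics and distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# S104: build the distortion map once, then remap every extracted key frame.
map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K, gray.shape[::-1], cv2.CV_32FC1)
frame = cv2.imread("keyframe_0001.jpg")                        # hypothetical key-frame file
undistorted = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)
```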
As one possible embodiment, S200 includes the following (an illustrative outlier-removal and downsampling sketch follows step S204):
S201, the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points is calculated;
S202, the overall average distance and the variance of the point cloud are calculated from these distances;
S203, points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times σ are regarded as noise points and removed;
Figure PCTCN2022131556-appb-000011
where t = u + m*σ, u is the average distance, m is the standard-deviation multiple, σ is the variance, and u_i is the distance;
S204, the minimum voxel scale is set, the original point cloud is placed in a three-dimensional grid, and each minimum voxel containing multiple points is replaced by one of its points to complete voxel downsampling.
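A minimal sketch of S201 to S204 under the threshold t = u + m*σ is given below; it assumes NumPy and SciPy are available and uses example values for the neighbor count k, the multiple m, and the voxel size, none of which are fixed by the invention.

```python
# Illustrative statistical outlier removal (S201-S203) and voxel downsampling (S204).
import numpy as np
from scipy.spatial import cKDTree

def denoise_and_downsample(points, k=8, m=2.0, voxel_size=0.05):
    """points: (N, 3) array of laser points; k, m and voxel_size are example values."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)        # S201: distances to the k nearest neighbors
    u_i = dists[:, 1:].mean(axis=1)               # skip column 0 (distance of a point to itself)
    u, sigma = u_i.mean(), u_i.std()              # S202: overall average distance and spread
    keep = points[u_i < u + m * sigma]            # S203: drop points at or beyond t = u + m*sigma
    # S204: voxel grid; keep one representative point (the first encountered) per occupied voxel.
    voxel_idx = np.floor(keep / voxel_size).astype(np.int64)
    _, first = np.unique(voxel_idx, axis=0, return_index=True)
    return keep[np.sort(first)]

cloud = np.loadtxt("scan.xyz")                    # hypothetical x y z text file
cloud = denoise_and_downsample(cloud)
```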
As one possible embodiment, S300 includes the following (an illustrative projection sketch based on the collinearity equation follows step S304):
S301, the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point is determined as
Figure PCTCN2022131556-appb-000012
where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
S302, the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system is obtained:
Figure PCTCN2022131556-appb-000013
Figure PCTCN2022131556-appb-000014
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
S303, based on the conversion relationship, the collinearity equation is obtained as:
Figure PCTCN2022131556-appb-000015
where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
S304, the laser point cloud and the image data are initially registered through the collinearity equation.
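Since the collinearity equation itself appears only as an image in the publication, the sketch below uses the standard photogrammetric form of that equation to project laser points into a key frame whose exterior orientation has been taken from the POS record with the nearest timestamp. The rotation-matrix convention, the function names, and the inputs are assumptions for illustration only, not the claimed formula itself.

```python
# Illustrative timestamp matching and collinearity-equation projection for the initial registration.
import numpy as np

def nearest_pos_record(frame_time, pos_times):
    """Index of the POS record whose timestamp is closest to the image timestamp."""
    return int(np.argmin(np.abs(np.asarray(pos_times) - frame_time)))

def project_collinearity(points, R, camera_center, x0, y0, f):
    """Project object-space points (N, 3) to image-plane coordinates with the standard
    collinearity equations. R is the 3x3 rotation built from the angular exterior
    orientation elements (its columns correspond to a_i, b_i, c_i); camera_center is
    (X_C, Y_C, Z_C); x0, y0, f are the interior orientation elements."""
    d = points - camera_center                    # (X - X_C, Y - Y_C, Z - Z_C)
    num_x = d @ R[:, 0]
    num_y = d @ R[:, 1]
    den = d @ R[:, 2]                             # no visibility filtering in this sketch
    x = x0 - f * num_x / den
    y = y0 - f * num_y / den
    return np.column_stack([x, y])
```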
As one possible embodiment, S400 includes the following (a generic weight-selection iteration sketch follows step S404):
the condition for the weight-selection iteration may be:
Figure PCTCN2022131556-appb-000016
the weight formula is:
Figure PCTCN2022131556-appb-000017
the standardized residual is used in place of the residual as the parameter of the weight function; the standardized residual is:
Figure PCTCN2022131556-appb-000018
where
Figure PCTCN2022131556-appb-000019
is the standardized residual, u_j is the observation residual, m is the number of observations, m_0 is the standard deviation of the observations, and
Figure PCTCN2022131556-appb-000020
is the corresponding element on the main diagonal of Q_uuP, with
Q_uuP = E - A·Q_xx·A^T·P
where E is the observation error vector and A is the observation equation coefficient matrix;
the weight function of the weight-selection iteration is therefore:
Figure PCTCN2022131556-appb-000021
Figure PCTCN2022131556-appb-000022
where P_0 is the weight factor;
the error equation of the collinearity equation then becomes:
Figure PCTCN2022131556-appb-000023
S401, a least-squares iteration is started according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
S402, the residuals of the first iteration are applied, the weight function of the second iteration is calculated, and the improved Danish-method weight-selection iteration is performed;
S403, the unit-weight mean error is calculated; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
S404, according to the error equation of the collinearity equation, the third iteration is taken as the demarcation point, and the condition for terminating the iteration is that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
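Because the weight functions are published only as images, the loop below is a generic iteratively reweighted least-squares sketch in the spirit of S401 to S404: the first pass uses an identity weight matrix, later passes down-weight observations with large residuals using a Danish-style exponential weight, and iteration stops once successive corrections agree within a tolerance. The constant c, the tolerance, and the function interface are assumptions for illustration, not the claimed formulas.

```python
# Generic iteratively reweighted least-squares sketch in the spirit of S401-S404.
import numpy as np

def robust_adjustment(A, l_vec, max_iter=10, c=2.0, tol=0.1 / 60 * np.pi / 180):
    """A: design matrix of the linearized collinearity error equation; l_vec: misclosure vector.
    Returns the estimated corrections; tol roughly corresponds to 0.1' expressed in radians."""
    n, u = A.shape
    P = np.eye(n)                                  # S401: identity weight matrix on the first pass
    x_prev = np.zeros(u)
    for it in range(max_iter):
        x = np.linalg.solve(A.T @ P @ A, A.T @ P @ l_vec)    # weighted least-squares correction
        v = A @ x - l_vec                                     # residuals
        sigma0 = np.sqrt((v @ P @ v) / max(n - u, 1))         # S403: unit-weight mean error
        if it >= 2 and np.max(np.abs(x - x_prev)) < tol:      # S404: corrections have stabilized
            break
        # S402: Danish-style reweighting - unit weight for small residuals, exponential
        # down-weighting for large ones.
        w = np.where(np.abs(v) <= c * sigma0, 1.0,
                     np.exp(-(np.abs(v) / (c * sigma0)) ** 2))
        P = np.diag(w)
        x_prev = x
    return x
```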
Referring to FIG. 2, a registration system includes an image data correction module 100, a denoising and streamlining processing module 200, an initial registration module 300, and a high-precision registration module 400;
the image data correction module 100 is used to perform lens distortion correction on the digital camera, obtain the internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
the denoising and streamlining processing module 200 is used to perform denoising and streamlining on the three-dimensional laser point cloud;
the initial registration module 300 is used to use the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtain the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extract from the POS data, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud, and perform initial registration based on the collinearity equation model;
the high-precision registration module 400 is used to improve the accuracy of the initial registration values using the weight-selection iteration method based on robust estimation, determine high-precision registration values, and, combined with the iterative closest point algorithm, take the high-precision registration value as the initial value of the iterative closest point algorithm and perform iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration (an illustrative ICP refinement sketch follows).
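For the final refinement performed by the high-precision registration module 400, any ICP implementation that accepts an initial transformation can be seeded with the robustly estimated pose. The sketch below uses Open3D as one possible library; the file names, the correspondence threshold, and the placeholder initial pose are assumptions for illustration rather than the patented implementation.

```python
# Illustrative ICP refinement seeded with the pose from the weight-selection iteration.
import numpy as np
import open3d as o3d

def to_homogeneous(R, t):
    """Assemble a 4x4 transform from the coarse rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

image_cloud = o3d.io.read_point_cloud("image_points.ply")   # point cloud derived from the images
laser_cloud = o3d.io.read_point_cloud("laser_points.ply")   # denoised, downsampled laser scan

T_init = to_homogeneous(np.eye(3), np.zeros(3))             # placeholder for the coarse pose

result = o3d.pipelines.registration.registration_icp(
    image_cloud, laser_cloud,
    0.1,                                                     # max correspondence distance (example)
    T_init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
aligned = image_cloud.transform(result.transformation)      # apply the refined transformation
```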
As one possible embodiment, the image data correction module 100 includes an image acquisition unit 101, a corner detection unit 102, a calculation unit 103, and a correction unit 104:
the image acquisition unit 101 is used to acquire multiple images from different directions with the chessboard target and the digital camera held fixed relative to each other;
the corner detection unit 102 is used to define the upper right corner of the chessboard image as the image coordinate origin, obtain the number of squares on the chessboard according to the length and width of the image, and perform corner detection;
the calculation unit 103 is used to perform parameter initialization and nonlinear optimization and calculate the intrinsic parameters and distortion coefficients of the digital camera;
the correction unit 104 is used to calculate the distortion map based on the intrinsic parameters and distortion coefficients and correct the image data.
As one possible embodiment, the denoising and streamlining processing module 200 includes a distance calculation unit 201, an overall average distance and variance calculation unit 202, a removal unit 203, and a downsampling unit 204;
the distance calculation unit 201 is used to calculate the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points;
the overall average distance and variance calculation unit 202 is used to calculate the overall average distance and the variance of the point cloud from these distances;
the removal unit 203 is used to regard points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times σ as noise points and remove them;
Figure PCTCN2022131556-appb-000024
where t = u + m*σ, u is the average distance, m is the standard-deviation multiple, σ is the variance, and u_i is the distance;
the downsampling unit 204 is used to set the minimum voxel scale, place the original point cloud in a three-dimensional grid, and replace each minimum voxel containing multiple points with one of its points to complete voxel downsampling.
As one possible embodiment, the initial registration module 300 includes a coordinate relationship determination unit 301, a coordinate conversion unit 302, a collinearity equation determination unit 303, and an initial registration unit 304;
the coordinate relationship determination unit 301 is used to determine the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point as:
Figure PCTCN2022131556-appb-000025
where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
the coordinate conversion unit 302 is used to obtain the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system:
Figure PCTCN2022131556-appb-000026
Figure PCTCN2022131556-appb-000027
C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
the collinearity equation determination unit 303 is used to obtain the collinearity equation based on the conversion relationship as:
Figure PCTCN2022131556-appb-000028
where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
the initial registration unit 304 is used to perform initial registration of the laser point cloud and the image data through the collinearity equation.
As one possible embodiment, the high-precision registration module 400 includes a first iteration unit 401, a weight-selection iteration unit 402, a determination unit 403, and an iteration termination unit 404;
the first iteration unit 401 is used to start a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
the weight-selection iteration unit 402 is used to apply the residuals of the first iteration, calculate the weight function of the second iteration, and perform the improved Danish-method weight-selection iteration;
the determination unit 403 is used to calculate the unit-weight mean error; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
the iteration termination unit 404 is used to take the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described with reference to its preferred embodiments, those of ordinary skill in the art should understand that various changes may be made in form and detail without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims (10)

  1. A registration method, characterized by comprising:
    performing lens distortion correction on a digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted image data;
    performing denoising and streamlining on the three-dimensional laser point cloud;
    using the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtaining the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment; on the basis of the same timestamp, extracting from the POS data the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model;
    using the weight-selection iteration method based on robust estimation to improve the accuracy of the initial registration values and determine high-precision registration values, and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration.
  2. The registration method according to claim 1, characterized in that the step of performing lens distortion correction on the digital camera, obtaining the internal and external parameters of the digital camera, and performing distortion correction on the extracted key frames of the image data comprises:
    keeping the chessboard target and the digital camera fixed relative to each other and acquiring multiple images from different directions;
    defining the upper right corner of the chessboard image as the image coordinate origin, obtaining the number of squares on the chessboard according to the length and width of the image, and performing corner detection;
    performing parameter initialization and nonlinear optimization to calculate the intrinsic parameters and distortion coefficients of the digital camera;
    calculating a distortion map based on the intrinsic parameters and distortion coefficients and correcting the image data.
  3. The registration method according to claim 1, characterized in that the denoising and streamlining of the three-dimensional laser point cloud comprises:
    calculating the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points;
    calculating the overall average distance and the variance of the point cloud from these distances;
    regarding points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times the variance as noise points and removing them;
    setting the minimum voxel scale, placing the original point cloud in a three-dimensional grid, and replacing each minimum voxel containing multiple points with one of its points to complete voxel downsampling.
  4. The registration method according to claim 1, characterized in that
    the step of using the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtaining the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extracting from the POS data, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud, and performing initial registration based on the collinearity equation model comprises:
    determining the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point as
    Figure PCTCN2022131556-appb-100001
    where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
    obtaining the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system:
    Figure PCTCN2022131556-appb-100002
    Figure PCTCN2022131556-appb-100003
    C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
    based on the conversion relationship, obtaining the collinearity equation as:
    Figure PCTCN2022131556-appb-100004
    where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
    performing initial registration of the laser point cloud and the image data through the collinearity equation.
  5. The registration method according to claim 1, characterized in that the step of using the weight-selection iteration method based on robust estimation to improve the accuracy of the initial registration values, determining high-precision registration values, and, combined with the iterative closest point algorithm, taking the high-precision registration value as the initial value of the iterative closest point algorithm and performing iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration comprises:
    S401, starting a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
    S402, applying the residuals of the first iteration, calculating the weight function of the second iteration, and performing the improved Danish-method weight-selection iteration;
    S403, calculating the unit-weight mean error; if the condition is not met, looping back to S402 until the accuracy meets the requirement;
    S404, according to the error equation of the collinearity equation, taking the third iteration as the demarcation point, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
  6. A registration system, characterized by comprising an image data correction module, a denoising and streamlining processing module, an initial registration module, and a high-precision registration module;
    the image data correction module is used to perform lens distortion correction on the digital camera, obtain the internal and external parameters of the digital camera, and perform distortion correction on the extracted image data;
    the denoising and streamlining processing module is used to perform denoising and streamlining on the three-dimensional laser point cloud;
    the initial registration module is used to use the position, attitude, and time information data in the POS sensor and, according to the time synchronization principle, obtain the timestamps at which the image data captured by the digital camera and the point cloud scanned by the three-dimensional laser radar correspond to the same moment, extract from the POS data, on the basis of the same timestamp, the initial registration values corresponding to the image data and the laser point cloud, and perform initial registration based on the collinearity equation model;
    the high-precision registration module is used to improve the accuracy of the initial registration values using the weight-selection iteration method based on robust estimation, determine high-precision registration values, and, combined with the iterative closest point algorithm, take the high-precision registration value as the initial value of the iterative closest point algorithm and perform iterative closest point registration with the image data point cloud and the laser point cloud to obtain a high-precision registration.
  7. The registration system according to claim 6, characterized in that the image data correction module includes an image acquisition unit, a corner detection unit, a calculation unit, and a correction unit:
    the image acquisition unit is used to acquire multiple images from different directions with the chessboard target and the digital camera held fixed relative to each other;
    the corner detection unit is used to define the upper right corner of the chessboard image as the image coordinate origin, obtain the number of squares on the chessboard according to the length and width of the image, and perform corner detection;
    the calculation unit is used to perform parameter initialization and nonlinear optimization and calculate the intrinsic parameters and distortion coefficients of the digital camera;
    the correction unit is used to calculate the distortion map based on the intrinsic parameters and distortion coefficients and correct the image data.
  8. The registration system according to claim 6, characterized in that the denoising and streamlining processing module includes a distance calculation unit, an overall average distance and variance calculation unit, a removal unit, and a downsampling unit;
    the distance calculation unit is used to calculate the distance from each point in the three-dimensional laser point cloud to its first several nearest neighboring points;
    the overall average distance and variance calculation unit is used to calculate the overall average distance and the variance of the point cloud from these distances;
    the removal unit is used to regard points whose distance is greater than or equal to the sum of the overall average distance and the standard-deviation multiple times the variance as noise points and remove them;
    the downsampling unit is used to set the minimum voxel scale, place the original point cloud in a three-dimensional grid, and replace each minimum voxel containing multiple points with one of its points to complete voxel downsampling.
  9. The registration system according to claim 6, characterized in that the initial registration module includes a coordinate relationship determination unit, a coordinate conversion unit, a collinearity equation determination unit, and an initial registration unit;
    the coordinate relationship determination unit is used to determine the relationship between the three-dimensional laser radar coordinates of the laser point cloud and the image space auxiliary coordinates of the corresponding image point as
    Figure PCTCN2022131556-appb-100005
    where [X_I Y_I Z_I]^T is the image space auxiliary coordinate of the image point; [X_H Y_H Z_H]^T is the coordinate of the laser point in the three-dimensional laser radar coordinate system; and R_IH and C_IH respectively represent the rotation matrix and the translation vector between the image space auxiliary coordinate system and the three-dimensional laser radar coordinate system;
    the coordinate conversion unit is used to obtain the conversion relationship between the image space auxiliary coordinates of the image points of the image data key frame and the WGS-84 coordinate system:
    Figure PCTCN2022131556-appb-100006
    Figure PCTCN2022131556-appb-100007
    C_IW = T_KW + R_KW·T_HK + R_KW·R_HK·C_IH
    the collinearity equation determination unit is used to obtain the collinearity equation based on the conversion relationship as:
    Figure PCTCN2022131556-appb-100008
    where x, y are the image plane coordinates of the image point; x_0, y_0, f are the interior orientation elements of the image; X_C, Y_C, Z_C are the object space coordinates of the camera center; X, Y, Z are the object space coordinates of the laser point; and a_i, b_i, c_i (i = 1, 2, 3) are the three exterior orientation elements of the image;
    the initial registration unit is used to perform initial registration of the laser point cloud and the image data through the collinearity equation.
  10. The registration system according to claim 6, characterized in that the high-precision registration module includes a first iteration unit, a weight-selection iteration unit, a determination unit, and an iteration termination unit;
    the first iteration unit is used to start a least-squares iteration according to the error equation of the collinearity equation, with the weight matrix of the POS position and attitude data taken as the identity matrix in the first iteration;
    the weight-selection iteration unit is used to apply the residuals of the first iteration, calculate the weight function of the second iteration, and perform the improved Danish-method weight-selection iteration;
    the determination unit is used to calculate the unit-weight mean error; if the condition is not met, the process loops back to S402 until the accuracy meets the requirement;
    the iteration termination unit is used to take the third iteration as the demarcation point according to the error equation of the collinearity equation, the condition for terminating the iteration being that the difference between the corrections of the image angle elements of the image data key frame over two successive runs is 0.1′.
PCT/CN2022/131556 2022-11-11 2022-11-11 Registration method and system WO2024098428A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211409046.XA CN115690380B (zh) 2022-11-11 2022-11-11 Registration method and system
CN202211409046.X 2022-11-11

Publications (1)

Publication Number Publication Date
WO2024098428A1 true WO2024098428A1 (zh) 2024-05-16

Family

ID=85052974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/131556 WO2024098428A1 (zh) 2022-11-11 2022-11-11 一种配准方法及系统

Country Status (2)

Country Link
CN (1) CN115690380B (zh)
WO (1) WO2024098428A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118334016A (zh) * 2024-06-12 2024-07-12 青岛交通工程监理咨询有限公司 Highway engineering construction quality detection method

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117518197B (zh) * 2024-01-08 2024-03-26 太原理工大学 Contour marking method for underground coal mine excavation roadways
CN118196154B (zh) * 2024-04-02 2024-10-18 西南交通大学 Absolute pose registration method, apparatus, device and medium for regular rotationally symmetric vessels

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100973052B1 (ko) * 2009-04-24 2010-07-30 서울시립대학교 산학협력단 Automatic registration technique for digital aerial images using LiDAR data
CN109410256B (zh) * 2018-10-29 2021-10-15 北京建筑大学 Automatic high-precision registration method for point clouds and images based on mutual information
CN112465849B (zh) * 2020-11-27 2022-02-15 武汉大学 Registration method for UAV laser point clouds and sequential images

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100061601A1 (en) * 2008-04-25 2010-03-11 Michael Abramoff Optimal registration of multiple deformed images using a physical model of the imaging distortion
CN109087339A (zh) * 2018-06-13 2018-12-25 武汉朗视软件有限公司 Laser scanning point and image registration method
CN110458874A (zh) * 2019-07-17 2019-11-15 苏州博芮恩光电科技有限公司 Non-rigid image registration method and system
US20220254095A1 (en) * 2021-02-03 2022-08-11 Electronics And Telecommunications Research Institute Apparatus and method for searching for global minimum of point cloud registration error

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHA LUJIU; XIA YONGHUA; WANG BIN; YANG MINGLONG; PAN YIRONG; CHEN RUO; ZHU QI: "Registration Method of Video Image and Laser Point Cloud Under Moving Measurement", Laser & Optoelectronics Progress, vol. 59, no. 10, 1 January 2022 (2022-01-01), pages 234-243, XP093171511, ISSN: 1006-4125, DOI: 10.3788/LOP202259.1013001 *

Also Published As

Publication number Publication date
CN115690380A (zh) 2023-02-03
CN115690380B (zh) 2023-07-21

Similar Documents

Publication Publication Date Title
CN107316325B (zh) 一种基于图像配准的机载激光点云与影像配准融合方法
WO2024098428A1 (zh) 一种配准方法及系统
CN110021046B (zh) 相机与激光雷达组合传感器的外参数标定方法及系统
AU2011312140B2 (en) Rapid 3D modeling
CN102376089B (zh) 一种标靶校正方法及系统
CN109727278B (zh) 一种机载LiDAR点云数据与航空影像的自动配准方法
Rüther et al. A comparison of close-range photogrammetry to terrestrial laser scanning for heritage documentation
CN112465732A (zh) 一种车载激光点云与序列全景影像的配准方法
CN116433737A (zh) 一种激光雷达点云与图像配准的方法、装置及智能终端
CN112270698A (zh) 基于最邻近曲面的非刚性几何配准方法
CN112767461A (zh) 激光点云与序列全景影像自动配准方法
CN115326025B (zh) 一种用于海浪的双目影像测量与预测方法
CN112767459A (zh) 基于2d-3d转换的无人机激光点云与序列影像配准方法
CN117994463B (zh) 一种建设用地测绘方法及系统
CN110986888A (zh) 一种航空摄影一体化方法
CN115496783A (zh) 一种室内空间三维彩色点云生成方法
CN115423863A (zh) 相机位姿估计方法、装置及计算机可读存储介质
CN110631555A (zh) 基于二阶多项式无控制点区域网平差的历史影像正射纠方法
RU2692970C2 (ru) Способ калибровки видеодатчиков многоспектральной системы технического зрения
CN107941241B (zh) 一种用于航空摄影测量质量评价的分辨率板及其使用方法
CN109029379A (zh) 一种高精度小基高比立体测绘方法
Elhalawani et al. Implementation of close range photogrammetry using modern non-metric digital cameras for architectural documentation
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
CN115588127A (zh) 一种机载激光点云与多光谱影像融合的方法
CN112819900B (zh) 一种智能立体摄影内方位、相对定向和畸变系数标定方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964896

Country of ref document: EP

Kind code of ref document: A1