CN1312633C - Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data - Google Patents
Abstract
The invention relates to a method for automatic registration of multi-viewpoint laser scanning data of a large-scale three-dimensional scene. Its steps include: acquiring data by placing a laser rangefinder in front of the scene to be measured and adjusting it so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground; scanning the scene row by row and column by column to obtain the three-dimensional data of each viewpoint, with the scans of adjacent viewpoints required to maintain a 10%-20% overlap; extracting the structure of the measured object and performing feature extraction on its data; computing virtual features and constructing feature units; and performing coarse, fine and global registration, finally establishing a registration model. The invention solves the automatic registration of multi-viewpoint laser scanning data of large-scale three-dimensional scenes without any auxiliary equipment or manual participation, and is universally applicable, suiting both scenes of modern construction and complex large cultural relics and ancient buildings.
Description
Technical Field
The invention belongs to the field of three-dimensional data registration in computer information processing, and in particular relates to a method for automatic registration of multi-viewpoint laser scanning data of large-scale three-dimensional scenes.
Background Art
Three-dimensional data registration is a key technology in digitization and reverse engineering. With the emergence of medium- and long-range 3D laser scanning in particular, 3D data acquisition has been greatly simplified, promoting the development of related fields such as digital cities, digital navigation, virtual reality, urban planning and digital preservation of cultural relics; ground-based 3D laser scanning data has therefore attracted much attention. To obtain a complete geometric description of a large-scale 3D scene, laser scan data must be collected at dozens or even hundreds of viewpoint positions, which raises the registration problem of transforming the laser scan data of the different viewpoints into a single coordinate system.
Traditional research on multi-viewpoint 3D data registration mostly targets single objects; the most representative method is the iterative closest point (ICP) algorithm, which iteratively searches for corresponding points within the laser scan point sets of two viewpoints to compute their relative pose. Its shortcomings are that an initial transform estimate must be set manually, it easily falls into local minima, and its iteration is slow, making it unsuitable for large-scale registration. For large-scale scenes two registration methods are in common use: the target control point method, and methods based on auxiliary equipment such as GPS. The target control point method requires placing targets in the overlap region of adjacent viewpoints and locating them automatically by exploiting the laser's distinctive reflection from the target material; as long as the laser scans of two adjacent viewpoints share three or more control points of the same name, the coordinate transform between the viewpoints can be solved. If the scene contains materials with the same reflection characteristics as the targets, however, the targets are misidentified and registration fails; the method also demands substantial manual intervention, and in some environments targets cannot be placed at all. Methods based on GPS and similar auxiliary equipment obtain the pose parameters of the laser scanner at each viewpoint, but the equipment is expensive and the pose parameters are of low accuracy, so the registration result deviates from the actual situation. Hence there is an urgent need for an automatic multi-viewpoint registration method for large-scale 3D scenes that seamlessly merges the 3D laser scan data of all viewpoints into one coordinate system without any auxiliary equipment. A literature search and analysis reveals that I. Stamos, in the 2003 article "Automated Feature-Based Range Registration of Urban Scenes of Large Scale", proposed using feature lines to register the 3D laser data of adjacent viewpoints automatically, arguing that line features reduce geometric complexity and suit large-scale registration. The shortcoming of that method is that relying solely on linear features is not universal: registration often fails for 3D scenes such as large cultural relics and ancient buildings that contain few linear features; nor does it solve the registration of overlap regions that lack effective linear features owing to practical factors such as occlusion.
The above methods address the registration of two viewpoints with an overlap region. For dozens or even hundreds of viewpoints, an effective registration strategy must be adopted to merge all the 3D data seamlessly into one coordinate system. The common strategies are sequential registration and simultaneous registration. In sequential registration, after one pair of viewpoints is registered the next pair begins, and it must include one viewpoint of the previous pair, chaining on in this way until all viewpoints have been traversed. Its advantage is that only one pair of viewpoints is registered at a time, so little memory is used; its disadvantage is a large cumulative error, particularly in closed-loop scenes, where a gap appears between the first and last viewpoints. In simultaneous registration all viewpoints are registered at once, so no cumulative error arises and registration accuracy is high, but the computation is heavy and demands powerful computing equipment. Neither sequential nor simultaneous registration suits large-scale scenes.
Summary of the Invention
The purpose of the present invention is to provide a registration method that automatically and seamlessly merges the multi-viewpoint laser scanning data of a large-scale three-dimensional scene into one coordinate system without any auxiliary equipment.
The technical scheme of the present invention is as follows:
The method for automatic registration of multi-viewpoint laser scanning data of large-scale three-dimensional scenes provided by the present invention comprises the following steps:
1) Data acquisition
Large-scale scenes, especially outdoor scenes, contain not only the main building structure but also occluders such as trees, pedestrians and vehicles, as well as scattered points produced where the laser passes through transparent objects; the occluders lie between the 3D laser scanner and the main building structure, while the scattered points lie behind it. During measurement, place the laser rangefinder in front of the scene to be measured and adjust the 3D laser scanner so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground. Scan the scene row by row and column by column to obtain the three-dimensional data of each viewpoint; the scans of adjacent viewpoints maintain an overlap of 10%-20%.
2) Extracting the main building structure
Input the 3D laser scan data of each viewpoint, compute the horizontal distance from each sampling point on every vertical scan line to the laser rangefinder, and detect vertical line segments from these distances using the Hough transform. Measure the length of each vertical line by the number of points it contains, take the horizontal distance corresponding to the longest vertical segment as the reference, and keep as the main building structure the data points within δ1 metres in front of the reference and δ2 metres behind it, where δ1 is 1 metre and δ2 is 0.5 metre.
Because the laser rangefinder samples the scene row by row and column by column, the points of one vertical scan line that belong to the same vertical plane all have the same horizontal distance to the rangefinder, and the main building structure is generally perpendicular to the ground. The invention therefore analyses each vertical scan line in turn and, from the horizontal distances of its sampling points to the laser rangefinder, determines the extent of the main building structure by a statistical-distribution method and extracts it.
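The per-scan-line statistical extraction just described can be sketched as follows. This is a simplified stand-in that takes the most populated horizontal-distance bin as the reference rather than running the full Hough vertical-line detection; the function name, bin width and scanner-at-origin convention are illustrative assumptions:

```python
import numpy as np

def extract_main_structure(column_xyz, delta1=1.0, delta2=0.5, bin_width=0.1):
    """Keep the points of one vertical scan column that belong to the
    dominant vertical plane.  Simplified stand-in for the Hough-based
    vertical-segment detection described in the text."""
    # Horizontal distance of each sample to the scanner (scanner at the
    # origin, Z vertical): the projection onto the ground plane.
    d = np.hypot(column_xyz[:, 0], column_xyz[:, 1])
    # The longest vertical segment shows up as the most populated distance bin.
    bins = np.floor(d / bin_width).astype(int)
    base = (np.bincount(bins).argmax() + 0.5) * bin_width
    # Retain points within delta1 before and delta2 behind the reference distance.
    mask = (d >= base - delta1) & (d <= base + delta2)
    return column_xyz[mask]
```

Occluders in front of the wall and stray points behind it fall outside the retained band and are dropped.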
3) Feature extraction on the retained data points of the main building structure
Three-dimensional features reflect the intrinsic structure of the scene, reduce its geometric complexity, and are effective primitives for multi-viewpoint 3D data registration. The invention first applies weighted local curve fitting separately to the horizontal and vertical scan lines and computes the first derivative at each point of a scan line; it then detects the scene's edge points from the derivative signs of the two neighbouring points above and below each point and from the distances between each point and those neighbours; it finally links the edge points into edge chains by chain-code tracking and fits the points of each chain into a group of end-to-end line segments, called feature segments. The intersections of adjacent feature segments are feature points.
a) First derivative at each point of a scan line
Each sampling point on a scan line is fitted with the local quadratic of equation (1), taking the K neighbouring points above and below it along the scan line as the fitting interval [-K, K], with K between 3 and 5:

d̂(i) = a0 + a1·i + a2·i²    (1)
where: i is the index of the sampling point on the scan line, i = 0, 1, …, n; and d̂(i) is the computed horizontal distance from the i-th point to the laser rangefinder;
a0, a1, a2 are the fitting coefficients of fitting equation (1);
The fitting coefficients a0, a1, a2 are obtained by an adaptive weighted fitting method, with the weighted fitting error function

E = Σ w(i)·[d(i) − d̂(i)]²    (2)

where: d(i) is the actual horizontal distance from the i-th point to the laser rangefinder;
w(i) is the weight of the i-th point on the scan line, representing the point's reliability. An iterative weight-correction method progressively removes the influence of outlying points so that the fit is carried out over reliable points only. Specifically: given the known coefficients a0^m, a1^m, a2^m of the m-th iteration, the coefficients a0^(m+1), a1^(m+1), a2^(m+1) of the (m+1)-th iteration are found by first computing the fitting residual of the m-th iteration and the weights of the m-th iteration;
where: bs is the allowable error threshold;
With initial weights w0(i) = 1, equations (3) and (4) are applied repeatedly to correct the fitting coefficients and weights until the fitted curve satisfies the error requirement and contains the largest number of points; the points that took part in the fit are marked;
Equation (1) is then differentiated once, and the first derivative is evaluated at every marked point on the scan line;
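A compact sketch of the adaptive weighted local fit and derivative computation described above. The hard 0/1 weight update and the parameter defaults are assumptions, since the patent's exact weight formula is not reproduced here:

```python
import numpy as np

def robust_local_fit(d, K=3, bs=0.05, max_iter=10):
    """Adaptive weighted local quadratic fit around each scan-line point,
    returning the first derivative at each interior point.  The 0/1
    weight update is an assumed form of the iterative weight correction."""
    n = len(d)
    deriv = np.full(n, np.nan)
    j = np.arange(-K, K + 1).astype(float)      # fitting interval [-K, K]
    for i in range(K, n - K):
        y = d[i - K:i + K + 1]
        w = np.ones_like(y)                      # initial weights w0(i) = 1
        for _ in range(max_iter):
            # weighted least squares (0/1 weights, so w == sqrt(w))
            A = np.vstack([np.ones_like(j), j, j * j]).T * w[:, None]
            a, *_ = np.linalg.lstsq(A, y * w, rcond=None)   # a0, a1, a2
            r = np.abs(y - (a[0] + a[1] * j + a[2] * j * j))  # residuals
            w_new = (r < bs).astype(float)       # drop unreliable points
            if np.array_equal(w_new, w):
                break
            w = w_new
        deriv[i] = a[1]                          # derivative of the fit at j = 0
    return deriv
```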
b) Detecting edge points
For each marked point on a scan line, examine the signs of the first derivatives of the two marked points above and below it along the scan line; a point whose two neighbours carry opposite signs is a normal discontinuity and is marked 0;
For each marked point on a scan line, compute its distances to the two marked points above and below it along the scan line and take the larger; a point whose maximum distance exceeds the threshold ε is a depth discontinuity and is marked 1;
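The two edge tests of step b) can be sketched as follows, taking the derivative array from the fitting step and an illustrative threshold ε:

```python
import numpy as np

def detect_edge_points(deriv, pts, eps=0.3):
    """Label edge points along one scan line: 0 marks a normal discontinuity
    (the first derivatives of the two neighbours have opposite signs),
    1 marks a depth discontinuity (the larger neighbour distance exceeds eps).
    Points with no derivative (unmarked by the fit) are skipped."""
    labels = {}
    for i in range(1, len(pts) - 1):
        if np.isnan(deriv[i - 1]) or np.isnan(deriv[i + 1]):
            continue
        d_prev = np.linalg.norm(pts[i] - pts[i - 1])
        d_next = np.linalg.norm(pts[i] - pts[i + 1])
        if max(d_prev, d_next) > eps:
            labels[i] = 1                        # depth discontinuity
        elif deriv[i - 1] * deriv[i + 1] < 0:
            labels[i] = 0                        # normal discontinuity
    return labels
```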
c) Extracting feature points and lines
First the chain-code tracking method separates the edge points into distinct edge chains, and the points of each chain are ordered clockwise; then each chain is fitted into a group of end-to-end feature segments by splitting and merging, as follows:
· Initialization
Divide the edge chain into several segments L0, L1, …, Ln of equal length;
· Splitting
Fit each current segment Li to a straight line by least squares. If the minimum residual Ei exceeds the allowable threshold T, take the point farthest from the fitted line as the break point, split the segment's edge chain into two, and keep splitting until the fitting error is satisfied; process the remaining segments in turn;
· Merging
Merge adjacent segments whose fitting residual is below the threshold T into single segments, the feature segments; the endpoints of the feature segments are the feature points;
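The split phase above can be sketched with a point-to-chord distance criterion, a Douglas-Peucker-style stand-in for the least-squares line fit of the text; the merge phase is omitted and the threshold T is illustrative:

```python
import numpy as np

def split_chain(points, T=0.1):
    """Recursive split phase: break an ordered 2D edge chain at the point
    farthest from the current segment's chord whenever that distance exceeds
    the threshold T.  Returns the indices of the breakpoints, which are the
    candidate feature points."""
    def rec(lo, hi, out):
        p0, p1 = points[lo], points[hi]
        cx, cy = p1[0] - p0[0], p1[1] - p0[1]
        L = np.hypot(cx, cy)
        if hi - lo < 2 or L == 0.0:
            return
        interior = points[lo + 1:hi]
        # perpendicular distance of each interior point to the chord p0-p1
        d = np.abs(cx * (interior[:, 1] - p0[1])
                   - cy * (interior[:, 0] - p0[0])) / L
        if d.max() > T:
            k = lo + 1 + int(d.argmax())
            rec(lo, k, out)
            out.append(k)
            rec(k, hi, out)
    out = []
    rec(0, len(points) - 1, out)
    return [0] + sorted(out) + [len(points) - 1]
```

Consecutive breakpoints delimit the fitted feature segments.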
4) Computing virtual features and constructing feature units
Feature lines correspond to the edge information of the scene and feature points to its corners and apexes; together they form the scene's basic structural frame. However, occlusion is unavoidable in real scenes, and when it is severe the overlap region of the two viewpoints lacks effective features and registration fails. The invention therefore introduces the concept of virtual features: feature segments that do not actually intersect are extended until they do, forming virtual feature segments and virtual feature points, which are added to the feature segments and feature points extracted above. Virtual features describe the geometric relations between features;
The invention nests each feature point together with its two adjacent feature segments into a feature unit PU{P | L1, L2 : <n1, n2, n3, p>}. A feature unit carries rich geometric information, namely three direction vectors and one position vector: the direction vectors n1 and n2 of the two feature segments L1 and L2, each pointing from the feature point along its segment; a third vector, the composite of the first two, n3 = n1 × n2 (× denoting the vector cross product), which is the normal of the plane formed by the two adjacent feature segments; and the position vector, which is the feature point p;
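Constructing a feature unit from a feature point and the far endpoints of its two adjacent segments might look like this; the dictionary representation is an illustrative choice:

```python
import numpy as np

def make_feature_unit(p, q1, q2):
    """Build a feature unit PU{P | L1, L2 : <n1, n2, n3, p>} from a feature
    point p and the far endpoints q1, q2 of its two adjacent feature
    segments.  n1 and n2 point from the feature point along each segment;
    n3 = n1 x n2 is the normal of the plane the two segments span."""
    n1 = (q1 - p) / np.linalg.norm(q1 - p)
    n2 = (q2 - p) / np.linalg.norm(q2 - p)
    n3 = np.cross(n1, n2)
    n3 /= np.linalg.norm(n3)
    return {"p": p, "n1": n1, "n2": n2, "n3": n3}
```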
5) Registration
Local registration means the pairwise registration of adjacent viewpoints with overlapping regions. The invention registers from coarse to fine so that the data of the two viewpoints merge completely;
Coarse registration is performed first: the feature units are used to merge the 3D data of the two viewpoints automatically into one coordinate system. One pair of feature units suffices to construct a spatial transform: the rotation matrix is computed from the directions of the corresponding feature lines and their composite normals, and the translation vector from the pair of corresponding feature points. Given a pair of feature units PU{P | L1, L2 : <n1, n2, n3, p>} ∈ S1 and PU′{P′ | L1′, L2′ : <n1′, n2′, n3′, p′>} ∈ S2 belonging to viewpoints S1 and S2 respectively, the rotation matrix R is computed from ni and ni′ (i = 1, 2, 3) by minimizing the error function

e(R) = Σ(i=1..3) ‖ni′ − R·ni‖²

which is solved with the quaternion method;
Let the coordinates of p and p′ in their respective local coordinate systems be (x, y, z) and (x′, y′, z′); the translation vector is then computed from them;
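A sketch of the transform construction from one matched pair of feature units. It substitutes an SVD (Kabsch) solution for the quaternion method named in the text; both minimise the same least-squares error over the three direction vectors:

```python
import numpy as np

def transform_from_feature_units(pu, pu_prime):
    """Estimate the rigid transform taking viewpoint S1 into S2 from one
    matched pair of feature units: a rotation aligning (n1, n2, n3) with
    (n1', n2', n3') and a translation from the feature points.  SVD (Kabsch)
    stands in for the quaternion solution of the same least-squares problem."""
    N = np.column_stack([pu["n1"], pu["n2"], pu["n3"]])
    Np = np.column_stack([pu_prime["n1"], pu_prime["n2"], pu_prime["n3"]])
    U, _, Vt = np.linalg.svd(Np @ N.T)
    # guard against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = pu_prime["p"] - R @ pu["p"]
    return R, t
```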
Traversing all feature units constructs N candidate transform estimates, and the best registration transform is selected by matching degree, computed as the number of matched feature units. The constraints for matching feature units are as follows:
· Position constraint
p′ is the feature point in viewpoint S2 nearest to point p of viewpoint S1, and the distance between p and p′ is below the threshold Δ1;
· Direction constraint
the distance between the lines on which segments Li and Li′ lie, and the angle between their directions, are below Δ2 and Δ3 respectively;
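The matching constraints can be checked roughly as below. Only the position constraint and the angular part of the direction constraint are implemented, the line-to-line distance test (threshold Δ2) being omitted for brevity, and all threshold values are assumptions:

```python
import numpy as np

def units_match(pu, pu_prime, delta1=0.5, delta3_deg=5.0):
    """Check whether a (transformed) feature unit matches one in the other
    view: feature points closer than delta1, and segment directions within
    delta3_deg of each other.  Thresholds are illustrative."""
    if np.linalg.norm(pu["p"] - pu_prime["p"]) >= delta1:
        return False                         # position constraint fails
    for a, b in ((pu["n1"], pu_prime["n1"]), (pu["n2"], pu_prime["n2"])):
        cosang = np.clip(np.dot(a, b), -1.0, 1.0)
        if np.degrees(np.arccos(cosang)) >= delta3_deg:
            return False                     # direction constraint fails
    return True
```

Counting the units for which this check passes gives the matching degree of a candidate transform.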
Fine registration follows: after coarse registration the 3D data of the two overlapping viewpoints are essentially merged in one coordinate system, with slightly poorer registration quality only in certain details, so further fine registration is needed. Specifically: 1) determine the overlap region on the basis of the coarse registration; 2) divide the overlap region into N blocks of u × v points each; 3) compute the average discrepancy of the corresponding points in each block and select the M blocks with the largest discrepancies as adaptation regions, running the ICP algorithm for fine registration only inside them, which greatly reduces computation time; 4) find corresponding points using the laser reflectivity and position information of the laser sampling points combined with the Mahalanobis distance.
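The selection of adaptation blocks in steps 2)-3) of the fine registration might be sketched as follows, over a per-point discrepancy map of the overlap region; the grid layout and parameter names are assumptions:

```python
import numpy as np

def select_adaptation_blocks(diff_grid, u, v, M):
    """Divide a per-point discrepancy map of the overlap region into a
    u x v grid of blocks and return the indices of the M blocks with the
    largest average discrepancy, i.e. the adaptation regions in which the
    ICP refinement is then run."""
    H, W = diff_grid.shape
    bh, bw = H // u, W // v
    means = np.array([[diff_grid[r * bh:(r + 1) * bh,
                                 c * bw:(c + 1) * bw].mean()
                       for c in range(v)] for r in range(u)])
    worst = means.ravel().argsort()[::-1][:M]   # largest averages first
    return [(int(k) // v, int(k) % v) for k in worst]
```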
For a multi-viewpoint 3D scene with more than two viewpoints, the steps further include global registration:
Global registration is the strategy of merging the 3D data of all viewpoints into one coordinate system. First all viewpoints are registered pairwise and the matching degrees are computed; then an undirected weighted graph is generated with the viewpoints as nodes and the matching degrees as connection weights; finally the global registration model graph is established by the minimum-spanning-tree principle. From the global registration model graph a fixed viewpoint Sa is determined and the data of all other viewpoints are transformed towards it. The specific method for building the global registration model is as follows:
Let the transform computed for the i-th pair of overlapping viewpoints be Ti = [Ri, ti] with corresponding local matching degree g(Ti). With the viewpoints as nodes and g(Ti) as node connection weights, an undirected weighted connected graph G = <V, E> containing all viewpoints is generated, where V is the node set, E the edge set, and ei = g(Ti) ∈ E; the minimum spanning tree is then found by the stepwise short-circuit method, establishing the global registration model.
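Building the global registration model can be sketched with Kruskal's algorithm standing in for the "stepwise short-circuit" method named in the text; taking cost = 1/g, so that the best-matching pairs form the spanning tree, is an assumption about how the matching degree enters the minimum-spanning-tree criterion:

```python
def registration_tree(num_views, edges):
    """Build the global registration model: a spanning tree over the
    viewpoint graph.  `edges` holds (i, j, g) triples with g the pairwise
    matching degree; edges are taken in order of increasing cost 1/g
    (Kruskal's algorithm with union-find)."""
    parent = list(range(num_views))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    tree = []
    for i, j, g in sorted(edges, key=lambda e: 1.0 / e[2]):
        ri, rj = find(i), find(j)
        if ri != rj:                        # "short-circuit" only new links
            parent[ri] = rj
            tree.append((i, j))
            if len(tree) == num_views - 1:
                break
    return tree
```

The data of every viewpoint are then transformed to the fixed viewpoint by composing the pairwise transforms along its tree path.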
The technical effects of the invention are as follows:
The invention solves the automatic registration of multi-viewpoint laser scanning data of large-scale 3D scenes without any auxiliary equipment or manual participation. Its advantages are: 1) the feature units describe the relations between different features and reduce geometric complexity, suiting large-scale registration; 2) the virtual features solve the registration of overlap regions of adjacent viewpoints that lack effective features owing to occlusion and similar factors; 3) the global registration model graph built on the minimum-spanning-tree principle gives reliable registration accuracy; 4) the method provided by the invention is universally applicable, suiting both scenes of modern construction and complex large cultural relics and ancient buildings.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the laser scanning of the measured object in the scene according to the invention;
Fig. 2(a) shows the distribution of laser sampling points within a vertical scan plane;
Fig. 2(b) shows the statistics of the laser sampling points of a vertical scan plane by horizontal projection value;
Figs. 3(a) and 3(b) are an example of main building structure extraction;
Figs. 4(a), 4(b), 4(c), 4(d) and 4(e) are an example of two-viewpoint 3D data registration under severe occlusion;
Fig. 5 shows the generation process of the global registration model;
Figs. 6(a) and 6(b) are an example of global registration.
Detailed Description of the Embodiments
The technical scheme of the invention is further described below with reference to the drawings and a specific embodiment:
The required input is laser scan data acquired by a medium- to long-range 3D laser rangefinder. The working distance of such a rangefinder generally exceeds 200 m, with a measurement accuracy along the laser beam of 5 cm at 200 m. The specific implementation steps of the invention are as follows:
Example 1
1) Data acquisition
The laser rangefinder is placed 20-50 m in front of the scene to be measured and adjusted so that its Z axis is perpendicular to the ground and its X and Y axes are parallel to the ground; the scene is scanned row by row and column by column (Fig. 1) to obtain the 3D data of the corresponding viewpoint. In addition, the scans of adjacent viewpoints maintain an overlap of 10%-20% so that the 3D data of all viewpoints can be registered smoothly.
2) Extracting the main building structure
Input the laser scan data of each viewpoint, compute the horizontal distance from each point of every vertical scan plane to the laser rangefinder, and detect vertical line segments from these distances by the Hough transform (Figs. 2(a) and 2(b)). The length of each vertical line is measured by the number of points it contains, and the horizontal distance of the longest vertical segment is taken as the reference dmi; the data within δ1 metres in front of the reference and δ2 metres behind it are retained as the main building structure. Fig. 2(a) shows the distribution of the laser sampling points of one vertical scan plane: the retained band contains the main building structure, while the scenery and scattered points before and behind it are deleted. Fig. 2(b) is a schematic of the statistical distribution of the horizontal distances of the laser sampling points of the scan plane of Fig. 2(a): the points on the main building wall lie on the same vertical line, i.e. their horizontal distances are equal.
In this example δ1 = 1.0 m and δ2 = 0.5 m extract the main building structure. Figs. 3(a) and 3(b) are an extraction example: Fig. 3(a) is the measured panorama of the row-by-row, column-by-column laser scan of the object, in which A is the main building structure (wall), B is the ground, and C is an occluder of the main building structure; Fig. 3(b) is the result of extracting the main building structure by the above method, A being the extracted main building structure (wall).
3) Feature extraction
a) Computing the first derivative at each point of a scan line
Each sampling point on a scan line is fitted with the local quadratic of equation (1), taking the K neighbouring points above and below it along the scan line as the fitting interval [-K, K], with K between 3 and 5. In this example K = 3 (K may of course also be 4 or 5, depending on the density of the scan points; the exact choice is a matter for those skilled in the art);
where i is the index of the sampling points along the scan line, i = 0, 1, …, n, and the value fitted by formula (1) is the computed horizontal distance from the i-th point to the laser range finder;
a0, a1, a2 are the fitting coefficients of fitting equation (1).
The traditional approach computes the fitting coefficients a0, a1, a2 by least squares from the fitting error function E(i) of formula (2),
where d(i) is the actual horizontal distance from the i-th point to the laser range finder.
However, plain least squares cannot reject discontinuities and outliers, so an adaptive weighted curve-fitting method is used to obtain the fitting coefficients a0, a1, a2. Formula (3) is the weighted fitting error function, where w(i), the weight of the i-th point on the scan line, represents the reliability of that point. An iterative weight-correction method progressively eliminates the influence of outlying points, so that the fit is carried out over reliable points only. The specific procedure is as follows:
Suppose the fitting coefficients of the m-th iteration, a0^m, a1^m, a2^m, are known; the coefficients a0^(m+1), a1^(m+1), a2^(m+1) of the (m+1)-th iteration are found by first computing the fitting residual of the m-th iteration and the weights of the m-th iteration,
where bs is the allowable error threshold.
Starting from the initial weights w0(i) = 1, formulas (4) and (5) are applied repeatedly to update the fitting coefficients and weights until the fitted curve satisfies the error requirement and covers the largest possible number of points. The points that took part in the fit are marked.
Then differentiate formula (1) once and evaluate the first derivative at every marked point on the scan line.
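The adaptive weighted fitting of formulas (1)-(5) can be sketched as an iteratively reweighted local least-squares fit. The exact weight update of formula (5) is only given in the patent drawings; the percentile-based trimming below is an assumed stand-in, and the function name is illustrative.

```python
import numpy as np

def robust_quadratic_fit(d, k=3, b_s=0.05, n_iter=20):
    """Adaptive weighted local quadratic fit d(i) ~ a0 + a1*i + a2*i^2.

    d   : the 2k+1 horizontal distances centred on the point being fitted.
    b_s : allowable error threshold (an assumed value).
    Weights w(i) encode point reliability and are recomputed from the
    residuals of the previous fit, so outliers stop influencing the fit.
    """
    i = np.arange(-k, k + 1, dtype=float)          # fitting interval [-k, k]
    d = np.asarray(d, dtype=float)
    A = np.vstack([np.ones_like(i), i, i * i]).T   # design matrix of formula (1)
    w = np.ones_like(i)                            # initial weights w0(i) = 1
    coef = np.zeros(3)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], d * sw, rcond=None)
        r = d - A @ coef                           # residuals of this iteration
        thr = max(b_s, np.percentile(np.abs(r), 75))
        w_new = (np.abs(r) <= thr).astype(float)   # keep only reliable points
        if np.array_equal(w_new, w):               # weights stable: converged
            break
        w = w_new
    return coef, w
```

The returned weights mark the points that took part in the final fit, and the first derivative at the centre point is simply a1 (the derivative of the quadratic at i = 0).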
b) Detect edge points
For any marked point p on the scan line and its marked neighboring points above and below (p1 and p2), check the signs of the first derivatives at p1 and p2; if the signs differ, p is a normal discontinuity and is labeled 0.
Compute the distances from p to p1 and p2; if dis = max(dis(p, p1), dis(p, p2)) > ε (ε being a threshold), then p is a depth discontinuity and is labeled 1.
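Step b) can be condensed into a small classifier; the depth threshold ε and the function name are illustrative assumptions.

```python
import math

def classify_edge_point(p, p1, p2, dp1, dp2, eps=0.2):
    """Classify a marked scan-line point p following step b).

    p, p1, p2 : 3-D coordinates of the point and its two marked neighbours.
    dp1, dp2  : first derivatives of the fitted curve at p1 and p2.
    Returns 0 for a normal discontinuity, 1 for a depth discontinuity,
    None for a regular point.
    """
    if dp1 * dp2 < 0:                              # derivative changes sign
        return 0                                   # normal discontinuity
    dis = max(math.dist(p, p1), math.dist(p, p2))  # distance test
    if dis > eps:
        return 1                                   # depth discontinuity
    return None
```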
c) Extract feature points and lines
First, chain-code tracking separates the edge points into distinct edge chains, and the points on each chain are ordered clockwise. Each edge chain is then fitted into a set of end-to-end feature line segments by split-and-merge, as follows:
· Initialization
Divide the edge chain into several segments of equal length, L0, L1, …, Ln.
· Split
Fit each current segment Li to a straight line by least squares. Taking as pivot the point farthest from the fitted line whose residual Ei exceeds the allowable threshold T, break the segment's edge chain into two at that point, and keep splitting until the fitting error is satisfied; the remaining segments are processed in turn.
· Merge
Merge adjacent segments whose joint fitting residual is below the threshold T into a single feature line segment; the endpoints of the feature line segments are the feature points.
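The split-and-merge fitting of step c) can be sketched for 2-D edge chains as follows; the total-least-squares line fit and the helper names are assumptions (the patent does not fix the line-fitting details beyond least squares and the threshold T).

```python
import numpy as np

def fit_line(pts):
    """Total-least-squares line through 2-D points.
    Returns the centroid, the direction, and per-point residuals."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    resid = np.abs((pts - c) @ vt[1])   # distance to the fitted line
    return c, vt[0], resid

def split(pts, T):
    """Recursively split an ordered chain until each piece fits within T."""
    _, _, resid = fit_line(pts)
    j = int(np.argmax(resid))
    # Stop when the worst point fits, or when a degenerate split would loop.
    if resid[j] <= T or j == 0 or j == len(pts) - 1:
        return [pts]
    return split(pts[: j + 1], T) + split(pts[j:], T)

def split_and_merge(pts, T):
    segs = split(np.asarray(pts, dtype=float), T)
    merged = [segs[0]]
    for s in segs[1:]:                  # merge neighbours that still fit
        joint = np.vstack([merged[-1], s])
        if fit_line(joint)[2].max() <= T:
            merged[-1] = joint
        else:
            merged.append(s)
    # Endpoints of the feature line segments are the feature points.
    corners = [seg[0] for seg in merged] + [merged[-1][-1]]
    return merged, corners
```

On an L-shaped chain this yields two segments sharing the corner, and the corner plus the two free ends as feature points.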
4) Compute virtual features and construct feature units
Feature line segments that do not actually intersect are extended until they do, producing virtual feature line segments and virtual feature points, which are added to the feature line segments and feature points extracted above. Each feature point, together with its two adjacent feature line segments, is nested into a feature unit PU{P | L1, L2: <n1, n2, n3, p>}, where n1 and n2 are the direction vectors of the feature line segments L1 and L2, n3 = n1 × n2 is a third vector formed from the first two (× denotes the vector cross product), and p is the feature point.
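A feature unit is simply a point with a local frame; a minimal constructor might look like this (the dictionary layout is an assumption).

```python
import numpy as np

def make_feature_unit(p, n1, n2):
    """Build PU{P | L1, L2 : <n1, n2, n3, p>}: feature point p with the
    direction vectors n1, n2 of its two segments and n3 = n1 x n2."""
    n1 = np.asarray(n1, dtype=float)
    n2 = np.asarray(n2, dtype=float)
    n3 = np.cross(n1, n2)               # third vector of the frame
    return {"p": np.asarray(p, dtype=float), "n1": n1, "n2": n2, "n3": n3}
```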
5) Registration
a) Coarse registration: given a pair of feature units PU{P | L1, L2: <n1, n2, n3, p>} ∈ S1 and PU′{P′ | L1′, L2′: <n1′, n2′, n3′, p′>} ∈ S2, belonging to the two viewpoints S1 and S2 respectively, the rotation matrix is obtained from ni and ni′ (i = 1, 2, 3) by minimizing the error function below, and is solved with the four-element (quaternion) method.
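A minimal sketch of the four-element (quaternion) rotation solution named above, using Horn's closed form over the three corresponding direction vectors of one feature-unit pair; the function name and array layout are assumptions.

```python
import numpy as np

def rotation_from_directions(ns, ns_prime):
    """Rotation R minimizing sum ||n_i' - R n_i||^2 over unit vectors.

    ns, ns_prime : (k, 3) arrays of corresponding direction vectors
    (k = 3 for one feature-unit pair).  The optimal unit quaternion is
    the eigenvector of a symmetric 4x4 matrix for its largest eigenvalue.
    """
    ns = np.asarray(ns, dtype=float)
    nsp = np.asarray(ns_prime, dtype=float)
    M = ns.T @ nsp                                  # correlation matrix
    delta = np.array([M[1, 2] - M[2, 1],
                      M[2, 0] - M[0, 2],
                      M[0, 1] - M[1, 0]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(M)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = M + M.T - np.trace(M) * np.eye(3)
    _, vecs = np.linalg.eigh(N)                     # eigh sorts ascending
    w, x, y, z = vecs[:, -1]                        # quaternion (w, x, y, z)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```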
Let the coordinates of p and p′ in their respective local coordinate systems be (x, y, z) and (x′, y′, z′); from these the translation vector is computed.
Each pair of corresponding feature units yields one registration-transformation estimate, and the estimate with the greatest matching degree is selected as the best. The constraints on feature-unit matching are as follows:
· Position constraint
p′ is the feature point in viewpoint S2 closest to the point p of viewpoint S1, and the distance between p and p′ is less than a threshold Δ1.
· Direction constraint
The distance and the angle between the lines containing the two segments Li and Li′ are less than Δ2 and Δ3, respectively.

b) Fine registration: using the transformation estimated in the coarse stage, the three-dimensional data of the two viewpoints are merged into one coordinate system and the overlap region is determined. The overlap region of each viewpoint is divided into 10 equal blocks, and the average discrepancy of the corresponding points within each block is computed. The 3 blocks with the largest discrepancy are selected as adaptation regions, and the ICP algorithm is applied within them for fine registration. To make the corresponding-point search more robust, the reflectivity and the three-dimensional coordinates of the laser sampling points are both used, and the difference between two points P ∈ S1 and P′ ∈ S2 is measured by the Mahalanobis distance, i.e.
where d²(p, p′) is the Mahalanobis distance between points p and p′;
Cp′ is the covariance of point P′ and its k×k neighborhood points; in this embodiment k = 3.
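The Mahalanobis comparison used in the fine stage can be sketched as follows; treating each point as a feature vector that stacks reflectance and 3-D coordinates is an assumption about the exact feature layout, and the ridge term is added only to keep the covariance invertible.

```python
import numpy as np

def mahalanobis_sq(p, p_prime, neighbours):
    """Squared Mahalanobis distance d^2(p, p') between two laser samples.

    p, p_prime : feature vectors, e.g. [reflectance, x, y, z].
    neighbours : the k x k samples around p' whose covariance C_p'
    whitens the difference vector.
    """
    p = np.asarray(p, dtype=float)
    pp = np.asarray(p_prime, dtype=float)
    nb = np.asarray(neighbours, dtype=float)
    C = np.cov(nb.T) + 1e-9 * np.eye(len(p))   # C_p' with a tiny ridge
    diff = p - pp
    return float(diff @ np.linalg.solve(C, diff))
```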
Figures 4(a) through 4(e) show the registration results for the two-viewpoint 3D data of a guest house: Figure 4(a) is the laser sampling point cloud acquired from viewpoint S1; Figure 4(b) is the point cloud acquired from viewpoint S2; Figures 4(c) and 4(d) are coarse-registration results obtained with the above method, without and with virtual features respectively; Figure 4(e) is the fine-registration result.
Embodiment 2
In this embodiment the method of the invention automatically registers the laser scanning data of a large-scale 3D scene, a student dining hall, acquired from 8 viewpoints. Steps 1)-5) are the same as in Embodiment 1; the difference is that, because the number of viewpoints exceeds 2, global registration is required.
First, all viewpoints (with overlapping regions) are registered pairwise and a global registration model is built using the minimum-spanning-tree principle; then, according to the model, one viewpoint Sa is selected and held fixed, and the laser scanning data of every other viewpoint are transformed into the coordinate system of Sa.
Specifically, for every pair of viewpoints with an overlap region, the transformation matrix Ti = [Ri, ti] and the local matching degree g(Ti) (measured by the number of registered points) can be computed. Taking the viewpoints as nodes and g(Ti) as the connection weights yields an undirected weighted connected graph G = <V, E> covering all viewpoints, where V is the node set and E the (weighted) edge set, with ei = g(Ti) ∈ E. According to the local matching degrees g(Ti), the minimum spanning tree of the undirected connected weighted graph G = <V, E> can be found by the stepwise short-circuit method, and the global registration model is established.
Figure 5 illustrates the generation of the global registration model; the nodes 1, 2, 3, 4, 5 and 6 represent 6 viewpoints. Step 1: compute the local matching degree of the pairwise registration of all viewpoints. Step 2: short-circuit the two viewpoints with the greatest matching degree. Steps 3-5: short-circuit the viewpoint pairs with the next-greatest matching degrees until all viewpoints are short-circuited. Step 6: fix viewpoint 4, the node with the most branches, as Sa.
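The model generation of Figure 5 can be sketched as a greedy, Kruskal-style spanning-tree construction under the assumption that "stepwise short-circuiting" merges pairs from the greatest matching degree downward, with the viewpoint holding the most tree branches fixed as the reference Sa. Function and variable names are illustrative.

```python
def global_registration_model(n, pair_scores):
    """Spanning-tree global registration model.

    n           : number of viewpoints.
    pair_scores : {(i, j): g} with g the local matching degree g(T_i)
                  (number of registered points) of each pairwise registration.
    Returns the tree edges and the index of the reference viewpoint Sa.
    """
    parent = list(range(n))            # union-find over viewpoint groups
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    tree, degree = [], [0] * n
    for (i, j), g in sorted(pair_scores.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                   # short-circuit the two groups
            parent[ri] = rj
            tree.append((i, j))
            degree[i] += 1
            degree[j] += 1
    s_a = max(range(n), key=lambda v: degree[v])   # most branches -> Sa
    return tree, s_a
```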
Figure 6(a) shows the global registration result of the 8 viewpoints of the student dining hall of this embodiment; Figure 6(b) is an enlarged view of the window area of Figure 6(a).
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2004100311577A CN1312633C (en) | 2004-04-13 | 2004-04-13 | Automatic registration method for large-scale three-dimensional scene multi-view laser scanning data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1684105A CN1684105A (en) | 2005-10-19 |
CN1312633C true CN1312633C (en) | 2007-04-25 |
Family
ID=35263428
Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination (entry into force of request)
- C14 / GR01: Grant of patent or utility model
- C17 / CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2007-04-25; termination date: 2014-04-13)