CN111340942A - Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof - Google Patents
Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof
- Publication number
- CN111340942A CN111340942A CN202010116115.2A CN202010116115A CN111340942A CN 111340942 A CN111340942 A CN 111340942A CN 202010116115 A CN202010116115 A CN 202010116115A CN 111340942 A CN111340942 A CN 111340942A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- point
- edge
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a three-dimensional reconstruction system based on an unmanned aerial vehicle and a method thereof. The system comprises a workbench; a conveying device is arranged in the middle of the upper surface of the workbench, a plurality of positioning carriers are placed on the upper part of the conveying device, and two loading positions are provided in each positioning carrier. On the side of the conveying device, the workbench is provided, from left to right, with a rim feeding and bending mechanism, an end-piece bending and welding mechanism, a nose-bridge bending and welding mechanism, a reinforcement-bar grinding and welding mechanism, a deburring mechanism, a nose-pad welding mechanism, a temple marking and welding mechanism, a rotary riveting mechanism, a temple-tip assembling mechanism, a temple hot-press bending mechanism and a blanking manipulator. The rim is conveyed to each mechanism by the conveying device for the corresponding processing operation, and the parts are then assembled and welded together according to the set process. Fully mechanized operation replaces manual loading and unloading, improves the quality and efficiency of processing and assembly, raises the yield of finished sunglasses and reduces labor costs; other types of spectacle frames can also be assembled, giving the system good market application value.
Description
Technical Field

The invention belongs to the technical field of three-dimensional reconstruction, and in particular relates to a three-dimensional reconstruction system and a three-dimensional reconstruction method based on an unmanned aerial vehicle.
Background Art
One known method of acquiring images for three-dimensional reconstruction controls the brightness of each of at least two spatially separated light sources so that it varies periodically, and uses three cameras at at least three positions to separately capture the images used for reconstruction. Image-based three-dimensional reconstruction is the process in which a computer automatically computes and matches two or more two-dimensional images of an object or scene, derives the two-dimensional geometric information and the depth information of the object or scene, and builds a three-dimensional model. This approach has several shortcomings. First, image-based modeling cannot be used when no truly perceived image of the scene can be obtained, for example when the object or scene does not exist at all and is fictitious, or when the scene is still in the design and planning stage and is changing constantly. Second, because the objects in the scene are reduced to two-dimensional objects in the images, it is difficult for users to interact with these two-dimensional graphic objects and extract the information they need; in addition, obtaining truly perceived images places certain requirements on the cameras and photographic equipment. Moreover, the large number of image files requires considerable storage space. In the photovoltaic industry, pipelines are often damaged without being repaired on a large scale immediately; the scene at the time is usually recorded first so that it can be located when later rectification is carried out. Three-dimensional reconstruction is widely applied in such practical scenarios: a three-dimensional model of the overall environment is rebuilt at regular intervals, the environment changes each time, and the differences can be found during later comparison and maintenance. Traditional three-dimensional reconstruction relies on a person scanning the environment to be reconstructed with a hand-held laser scanner, which is not very reliable; moreover, a lidar alone yields only point cloud information, so the reconstruction provides only the model structure and cannot faithfully restore the real scene. A vision sensor is also needed to obtain information such as shape, texture and color.
With the development of UAV technology, the price of consumer-grade UAVs keeps falling, and lidar units are becoming smaller and more portable, so it has become feasible for small and medium-sized UAVs to carry lidar for geographic surveying and mapping. At present, UAV lidar surveying generally relies on ground targets for positioning in cooperation with a ground base station, but this approach does not adapt well to different landforms: the site must be surveyed manually in advance and targets must be erected around the mapping area, which is cumbersome and inefficient. In addition, the acquired point cloud data must be processed offline at the base station, so real-time performance is poor.
In recent years, with the development of computer vision and the growth of graphics computing power, structure-from-motion techniques based on sequential images have also been used to build three-dimensional models of urban terrain. However, modeling from photographs is not accurate enough and is prone to holes and distortions in the model, and the adjustments after modeling depend on manual editing and optimization. As a result, the model is not precise enough and is easily contaminated by subjective human factors. The three-dimensional map functions of services such as Baidu Maps also rely on manual modeling, and the proportion of automated modeling is low.
Therefore, the prior art is deficient and needs to be improved.
Summary of the Invention
To solve the above problems, the present invention provides a three-dimensional reconstruction method based on a three-dimensional laser scanner and a camera, which fuses laser data with visual data so as to avoid the fictitious scenes that can arise when a single camera is used alone.

To overcome the above deficiencies of the prior art, the present invention further provides a method for three-dimensional reconstruction of urban terrain, with the aim of scanning a target area quickly and at low cost, thereby achieving automated, real-time three-dimensional reconstruction of urban terrain with high modeling accuracy. To solve the above problems, the technical solution provided by the present invention is as follows.

A three-dimensional reconstruction system based on an unmanned aerial vehicle, configured to perform the following steps:
Step S1: calibrate the three-dimensional reconstruction coordinate system; acquire the point cloud data of the first frame of three-dimensional measurement data of the measured object, referred to as the global point cloud data; the coordinate system based on this first frame of measurement data is referred to as the global coordinate system.

Step S2: perform three-dimensional measurement on the surface of a local region of the measured object and carry out three-dimensional reconstruction using the principle of binocular stereo vision to obtain the point cloud data of the local region, referred to as the local point cloud data, wherein the local region overlaps the region corresponding to the global point cloud data.

Step S3: transform the local point cloud data into the global coordinate system, register the local point cloud data with the global point cloud data according to the overlapping region, and update the global point cloud data.

Step S4: keep the measured object still, change the measurement viewpoint to measure the object, and repeat steps S2 to S3 until the measurement of the object is completed.

Step S5: perform global optimization on the global point cloud data updated after the measurement is completed to obtain a point cloud model.

Step S2 specifically comprises the following steps:

Step S21: perform three-dimensional measurement on the surface of the local region to obtain a first speckle image and a second speckle image of the local region.

Step S22: determine, in the second speckle image, the integer-pixel corresponding point of each pixel in the first speckle image.

Step S23: according to the integer-pixel corresponding points and the coordinates of each pixel in the first speckle image, perform a sub-pixel corresponding-point search in the second speckle image to obtain the sub-pixel corresponding points in the second speckle image.

Step S24: using the principle of binocular stereo vision, perform three-dimensional reconstruction with the Kirsch algorithm combined with the sub-pixel correspondences of the second speckle image to obtain the local point cloud data of the surface of the measured object.
The Kirsch algorithm comprises the following steps:

Step 1: smooth the original image with a Gaussian filter with a specified standard deviation σ, and then compute the local gradient g(x, y) and the edge direction α(x, y) at each point.

Step 2: perform the Kirsch computation on the gradient image obtained above. Assuming the image has H×W pixels, the number of edge pixels generally does not exceed 5×H; for an image containing a definite target this is a fairly loose bound. Take an initial threshold T0 and compute the Kirsch operator K(i) for each pixel i; if K(i) > T0, then i is an edge point and the edge-point count N is increased by 1. If the number of edge points exceeds 5×H while i is still smaller than the total number of pixels in the image, the threshold was set too low, so that many pixels that are not edge points have also been selected. The threshold therefore has to be raised: during the computation, the minimum K(i) satisfying K(i) > T0 is recorded as Kmin and taken as the new threshold. The whole procedure is adjusted as follows:

(1) If K(i) > T, then i is an edge point; record the coordinates of edge point i and the minimum value Kmin = min[K(i)], and increase N by 1.

(2) Once N ≥ 5×H, adjust the threshold to the smallest value that still satisfies the minimum edge requirement, i.e. set T = Kmin.

(3) Compare the previously obtained edge points with the new threshold, take the points greater than the new threshold as the new edge points, recompute Kmin under this new threshold, and record the new edge-point count newN.

(4) Assign the new number of edge points to N; subsequent counting starts from N = newN.

(5) Continue to process the remaining edge points as in (1); if N ≥ 5×H, return to (2).

(6) If N < 5×H, let T2 = T and T1 = βT2, where 0 < β < 1 and β is a constant determined experimentally.

Step 3: edge extraction. Threshold the gradient image produced in Step 1 with the two thresholds T1 and T2: ridge pixels with values greater than T2 are called strong edge pixels, and ridge pixels with values between T1 and T2 are called weak edge pixels. A weak edge is included in the output only if it is connected to a strong edge.
The three-dimensional reconstruction method comprises the following steps:

Step 1: data acquisition.

Step 1.1: the on-board GPS of the quadrotor UAV acquires its own positioning information in RMC format in real time and sends it, frame by frame and in order, to the ground base station for storage, wherein any α-th piece of positioning information includes the α-th GPS timestamp RMCα.timestamp, the α-th latitude and longitude RMCα.position, and the α-th heading RMCα.track.

Step 1.2: the on-board lidar of the quadrotor UAV acquires the urban terrain data set D and sends it, frame by frame and in order, to the ground base station for storage, wherein any j-th piece of urban terrain data dj contains the j-th point number dj.PointID, the j-th spatial coordinate point (xj, yj, zj), the j-th adjusted time dj.adjustedtime, the j-th azimuth dj.Azimuth, the j-th distance dj.Distance, the j-th reflection intensity dj.Intensity, the j-th lidar channel dj.Laser_id, and the j-th point timestamp dj.timestamp.

Step 2: data integration.

From the urban terrain data set D and the positioning information set, select every piece of urban terrain data and positioning information that satisfies formula (1), thereby obtaining n data items that form the data set Pfit:

to obtain the n rotated spatial coordinate points:

Step 4: point cloud denoising.

Step 4.1: use a threshold method to identify the invalid points among the n point cloud data PN and set the invalid point cloud data to zero, thereby obtaining the cleaned point cloud data set.

Step 4.2: use a KNN algorithm doubly constrained by distance and neighbor count to denoise and smooth the cleaned point cloud data set, obtaining the denoised point cloud data set.

Step 5: point cloud thinning.

Use a point cloud simplification algorithm based on K-means++ clustering to thin the denoised point cloud data set, obtaining the thinned point cloud data.

Step 6: visualize the thinned point cloud data to obtain the three-dimensional point cloud model of the urban terrain.
Compared with the prior art, the beneficial effects of the above solution are as follows. In the UAV-based multi-view three-dimensional reconstruction method and system of the present invention, the image acquisition device carried by the UAV captures a plurality of two-dimensional images of the target building from a plurality of preset orientations and at a plurality of preset viewing angles, and the image processor generates a three-dimensional model of the target building from the captured two-dimensional images using preset image processing software, so that the three-dimensional model is generated from multi-view images of the target building. The surface data of the target building or cultural relic can be acquired comprehensively; the method is easy to implement, the equipment used is inexpensive, the working cycle is short and costs are reduced, and the resulting three-dimensional reconstruction has high accuracy and good quality.
The timestamp-based data integration method adopted by the present invention can effectively match GPS data with lidar data, which speeds up data integration, improves the efficiency of extracting valid data from massive point cloud data, and enables three-dimensional reconstruction of different types of terrain.

The KNN algorithm doubly constrained by distance and neighbor count adopted by the present invention can automatically remove outliers and noise points while smoothing the point cloud to a certain extent, which speeds up denoising.

The invention therefore has good market application value.
Description of the Drawings
In order to illustrate the embodiments or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the present invention.
Detailed Description of the Embodiments
To facilitate understanding of the present invention, the invention is described in more detail below with reference to the accompanying drawings and specific embodiments. Preferred embodiments of the invention are shown in the drawings. However, the invention can be implemented in many different forms and is not limited to the embodiments described in this specification; rather, these embodiments are provided so that the disclosure of the invention is understood more thoroughly and comprehensively.
It should be noted that when an element is said to be "fixed to" another element, it can be directly on the other element or intervening elements may be present. When an element is said to be "connected to" another element, it can be directly connected to the other element or intervening elements may be present at the same time. The terms "fixed", "integrally formed", "left", "right" and similar expressions used in this specification are for illustrative purposes only, and in the drawings, structurally similar elements are denoted by the same reference numerals.
Unless otherwise defined, all technical and scientific terms used in this specification have the same meaning as commonly understood by those skilled in the technical field to which the invention belongs. The terms used in the description of the invention are intended only to describe specific embodiments and are not intended to limit the invention.
As shown in Fig. 1, a three-dimensional reconstruction system based on an unmanned aerial vehicle is configured to perform the following steps:
Step S1: calibrate the three-dimensional reconstruction coordinate system; acquire the point cloud data of the first frame of three-dimensional measurement data of the measured object, referred to as the global point cloud data; the coordinate system based on this first frame of measurement data is referred to as the global coordinate system.

Step S2: perform three-dimensional measurement on the surface of a local region of the measured object and carry out three-dimensional reconstruction using the principle of binocular stereo vision to obtain the point cloud data of the local region, referred to as the local point cloud data, wherein the local region overlaps the region corresponding to the global point cloud data.

Step S3: transform the local point cloud data into the global coordinate system, register the local point cloud data with the global point cloud data according to the overlapping region, and update the global point cloud data.

Step S4: keep the measured object still, change the measurement viewpoint to measure the object, and repeat steps S2 to S3 until the measurement of the object is completed.

Step S5: perform global optimization on the global point cloud data updated after the measurement is completed to obtain a point cloud model.

Step S2 specifically comprises the following steps:

Step S21: perform three-dimensional measurement on the surface of the local region to obtain a first speckle image and a second speckle image of the local region.

Step S22: determine, in the second speckle image, the integer-pixel corresponding point of each pixel in the first speckle image.

Step S23: according to the integer-pixel corresponding points and the coordinates of each pixel in the first speckle image, perform a sub-pixel corresponding-point search in the second speckle image to obtain the sub-pixel corresponding points in the second speckle image.

Step S24: using the principle of binocular stereo vision, perform three-dimensional reconstruction with the Kirsch algorithm combined with the sub-pixel correspondences of the second speckle image to obtain the local point cloud data of the surface of the measured object.
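By way of illustration only, the following Python sketch shows one way the registration of step S3 could be realized: a rigid transform is estimated from a handful of corresponding points in the overlapping region with the SVD-based Kabsch method, and the transformed local cloud is appended to the global cloud. The correspondence arrays, the concatenation-based merge and all parameter choices are assumptions made for this example, not values prescribed by the method described above.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ src[i] + t ≈ dst[i] (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance of centered correspondences
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def register_local_to_global(local_cloud, global_cloud, corr_local, corr_global):
    """Transform a local point cloud into the global frame and merge it (step S3 sketch)."""
    R, t = rigid_transform(corr_local, corr_global)
    local_in_global = local_cloud @ R.T + t
    return np.vstack([global_cloud, local_in_global]), (R, t)
```

In practice the correspondences would come from the overlapping region found in step S2, and the merged cloud would feed the global optimization of step S5.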
The Kirsch algorithm comprises the following steps:

Step 1: smooth the original image with a Gaussian filter with a specified standard deviation σ, and then compute the local gradient g(x, y) and the edge direction α(x, y) at each point.

Step 2: perform the Kirsch computation on the gradient image obtained above. Assuming the image has H×W pixels, the number of edge pixels generally does not exceed 5×H; for an image containing a definite target this is a fairly loose bound. Take an initial threshold T0 and compute the Kirsch operator K(i) for each pixel i; if K(i) > T0, then i is an edge point and the edge-point count N is increased by 1. If the number of edge points exceeds 5×H while i is still smaller than the total number of pixels in the image, the threshold was set too low, so that many pixels that are not edge points have also been selected. The threshold therefore has to be raised: during the computation, the minimum K(i) satisfying K(i) > T0 is recorded as Kmin and taken as the new threshold. The whole procedure is adjusted as follows:

(1) If K(i) > T, then i is an edge point; record the coordinates of edge point i and the minimum value Kmin = min[K(i)], and increase N by 1.

(2) Once N ≥ 5×H, adjust the threshold to the smallest value that still satisfies the minimum edge requirement, i.e. set T = Kmin.

(3) Compare the previously obtained edge points with the new threshold, take the points greater than the new threshold as the new edge points, recompute Kmin under this new threshold, and record the new edge-point count newN.

(4) Assign the new number of edge points to N; subsequent counting starts from N = newN.

(5) Continue to process the remaining edge points as in (1); if N ≥ 5×H, return to (2).

(6) If N < 5×H, let T2 = T and T1 = βT2, where 0 < β < 1 and β is a constant determined experimentally.

Step 3: edge extraction. Threshold the gradient image produced in Step 1 with the two thresholds T1 and T2: ridge pixels with values greater than T2 are called strong edge pixels, and ridge pixels with values between T1 and T2 are called weak edge pixels. A weak edge is included in the output only if it is connected to a strong edge.
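A minimal sketch of this edge stage is given below, assuming the eight classical Kirsch compass masks and a simplified, single-step version of the threshold adjustment of Step 2; the mask values, the initial threshold T0 and β are illustrative choices, and the strong/weak edge linking of Step 3 is left to the caller.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

# The eight classical Kirsch compass masks (assumed here; the text above only names the operator).
KIRSCH_MASKS = [np.array(m) for m in (
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
)]

def kirsch_edges(image, sigma=1.0, t0=200.0, beta=0.5):
    """Sketch: Gaussian smoothing, maximum Kirsch compass response K(i) per pixel,
    threshold raised until at most 5*H pixels survive, then strong/weak classification."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    k = np.max([convolve(smoothed, m) for m in KIRSCH_MASKS], axis=0)  # K(i) per pixel

    h = image.shape[0]
    t = t0
    if (k > t).sum() > 5 * h:                   # threshold too low: too many edge points
        t = np.sort(k.ravel())[::-1][5 * h]     # raise T in one step instead of iteratively
    t2, t1 = t, beta * t
    strong = k > t2                             # strong edge pixels (> T2)
    weak = (k > t1) & ~strong                   # weak edge pixels (between T1 and T2)
    return strong, weak
```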
The three-dimensional reconstruction method comprises the following steps:

Step 1: data acquisition.

Step 1.1: the on-board GPS of the quadrotor UAV acquires its own positioning information in RMC format in real time and sends it, frame by frame and in order, to the ground base station for storage, wherein any α-th piece of positioning information includes the α-th GPS timestamp RMCα.timestamp, the α-th latitude and longitude RMCα.position, and the α-th heading RMCα.track.

Step 1.2: the on-board lidar of the quadrotor UAV acquires the urban terrain data set D and sends it, frame by frame and in order, to the ground base station for storage, wherein any j-th piece of urban terrain data dj contains the j-th point number dj.PointID, the j-th spatial coordinate point (xj, yj, zj), the j-th adjusted time dj.adjustedtime, the j-th azimuth dj.Azimuth, the j-th distance dj.Distance, the j-th reflection intensity dj.Intensity, the j-th lidar channel dj.Laser_id, and the j-th point timestamp dj.timestamp.
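For illustration, the records acquired in steps 1.1 and 1.2 might be held in containers such as the following; the field names follow the notation above, while the Python types and units are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class RmcFix:
    """One RMC-format GPS fix (step 1.1)."""
    timestamp: float        # GPS timestamp, seconds
    position: tuple         # (latitude, longitude) in degrees
    track: float            # heading over ground, degrees

@dataclass
class LidarPoint:
    """One lidar return d_j (step 1.2); units follow whatever the sensor driver reports."""
    point_id: int
    xyz: tuple              # (x_j, y_j, z_j) in the sensor frame
    adjusted_time: float
    azimuth: float
    distance: float
    intensity: float
    laser_id: int
    timestamp: float
```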
Step 2: data integration.

From the urban terrain data set D and the positioning information set, select every piece of urban terrain data and positioning information that satisfies formula (1), thereby obtaining n data items that form the data set Pfit:

to obtain the n rotated spatial coordinate points:
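Formula (1) and the full text of the rotation step are not reproduced in this record. The sketch below therefore substitutes a nearest-timestamp pairing within a tolerance for the selection of step 2, and a simple yaw rotation of each lidar point by the matched GPS heading for the rotation fragment above; both are illustrative assumptions rather than the formulas of the method, and the RmcFix/LidarPoint containers from the previous sketch are reused.

```python
import math
import bisect

def integrate_by_timestamp(points, fixes, tol=0.05):
    """Pair each lidar point with the GPS fix closest in time (assumed stand-in for formula (1));
    `fixes` is assumed sorted by timestamp."""
    times = [f.timestamp for f in fixes]
    pairs = []
    for p in points:
        i = bisect.bisect_left(times, p.timestamp)
        best = min((f for f in fixes[max(0, i - 1):i + 1]),
                   key=lambda f: abs(f.timestamp - p.timestamp), default=None)
        if best is not None and abs(best.timestamp - p.timestamp) <= tol:
            pairs.append((p, best))
    return pairs

def rotate_by_heading(pair):
    """Rotate one point about the z-axis by the matched heading (illustrative rotation fragment)."""
    p, fix = pair
    yaw = math.radians(fix.track)
    x, y, z = p.xyz
    return (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw),
            z)
```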
Step 4: point cloud denoising.

Step 4.1: use a threshold method to identify the invalid points among the n point cloud data PN and set the invalid point cloud data to zero, thereby obtaining the cleaned point cloud data set.

Step 4.2: use a KNN algorithm doubly constrained by distance and neighbor count to denoise and smooth the cleaned point cloud data set, obtaining the denoised point cloud data set.
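One possible realization of the distance-and-count constrained KNN denoising of step 4.2 is sketched below with a k-d tree, assuming the point cloud is an (N, 3) NumPy array; the values of k, the radius and the minimum neighbor count are assumptions, since they are not fixed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_denoise(points, k=8, radius=0.5, min_neighbors=4):
    """Keep a point only if at least `min_neighbors` of its k nearest neighbors lie within
    `radius` (distance + count double constraint), then nudge survivors toward the mean
    of those near neighbors for a mild smoothing effect."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)      # first neighbor is the point itself
    close = dist[:, 1:] <= radius                # which of the k neighbors are near enough
    keep = close.sum(axis=1) >= min_neighbors    # count constraint

    smoothed = points.copy()
    for i in np.nonzero(keep)[0]:
        near = idx[i, 1:][close[i]]              # indices of neighbors inside the radius
        smoothed[i] = 0.5 * points[i] + 0.5 * points[near].mean(axis=0)
    return smoothed[keep]
```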
Step 5: point cloud thinning.

Use a point cloud simplification algorithm based on K-means++ clustering to thin the denoised point cloud data set, obtaining the thinned point cloud data.
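The K-means++-based thinning of step 5 could be realized, for example, as follows; the target cluster count and the choice of keeping the member nearest each cluster centre (so the output remains a subset of the input) are assumptions made for this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def thin_point_cloud(points, target_size=5000, random_state=0):
    """K-means++ based simplification: cluster the cloud and keep, for each cluster,
    the original point closest to the cluster centre."""
    n_clusters = min(target_size, len(points))
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=3,
                random_state=random_state).fit(points)
    kept = []
    for c in range(n_clusters):
        members = np.nonzero(km.labels_ == c)[0]
        d = np.linalg.norm(points[members] - km.cluster_centers_[c], axis=1)
        kept.append(members[np.argmin(d)])
    return points[np.array(kept)]
```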
Step 6: visualize the thinned point cloud data to obtain the three-dimensional point cloud model of the urban terrain.
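As a simple stand-in for the visualization of step 6, the thinned cloud could be rendered as a 3-D scatter plot; any dedicated point cloud viewer would serve equally well.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (only needed on older Matplotlib)

def show_point_cloud(points):
    """Render an (N, 3) point cloud as a 3-D scatter plot."""
    fig = plt.figure(figsize=(8, 6))
    ax = fig.add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("z")
    plt.show()
```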
Compared with the prior art, the beneficial effects of the above solution are as follows. In the UAV-based multi-view three-dimensional reconstruction method and system of the present invention, the image acquisition device carried by the UAV captures a plurality of two-dimensional images of the target building from a plurality of preset orientations and at a plurality of preset viewing angles, and the image processor generates a three-dimensional model of the target building from the captured two-dimensional images using preset image processing software, so that the three-dimensional model is generated from multi-view images of the target building. The surface data of the target building or cultural relic can be acquired comprehensively; the method is easy to implement, the equipment used is inexpensive, the working cycle is short and costs are reduced, and the resulting three-dimensional reconstruction has high accuracy and good quality.
The timestamp-based data integration method adopted by the present invention can effectively match GPS data with lidar data, which speeds up data integration, improves the efficiency of extracting valid data from massive point cloud data, and enables three-dimensional reconstruction of different types of terrain.

The KNN algorithm doubly constrained by distance and neighbor count adopted by the present invention can automatically remove outliers and noise points while smoothing the point cloud to a certain extent, which speeds up denoising.

The invention therefore has good market application value.
It should be noted that the above technical features can be further combined with one another to form various embodiments not listed above, all of which are regarded as falling within the scope described in this specification; moreover, those of ordinary skill in the art can make improvements or variations based on the above description, and all such improvements and variations shall fall within the protection scope of the appended claims of the present invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010116115.2A CN111340942A (en) | 2020-02-25 | 2020-02-25 | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010116115.2A CN111340942A (en) | 2020-02-25 | 2020-02-25 | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111340942A true CN111340942A (en) | 2020-06-26 |
Family
ID=71183637
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010116115.2A Withdrawn CN111340942A (en) | 2020-02-25 | 2020-02-25 | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340942A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668610A (en) * | 2020-12-08 | 2021-04-16 | 上海裕芮信息技术有限公司 | Building facade recognition model training method, system, equipment and memory |
CN113985383A (en) * | 2021-12-27 | 2022-01-28 | 广东维正科技有限公司 | Method, device and system for surveying and mapping house outline and readable medium |
CN116486012A (en) * | 2023-04-27 | 2023-07-25 | 中国民用航空总局第二研究所 | A method for building a three-dimensional model of an aircraft, a storage medium, and an electronic device |
CN118118911A (en) * | 2024-04-30 | 2024-05-31 | 中国电子科技集团公司第五十四研究所 | Multi-unmanned aerial vehicle collaborative deployment method with safety and communication double constraints |
CN119229036A (en) * | 2024-12-03 | 2024-12-31 | 中科云谷科技有限公司 | Three-dimensional mapping method, device and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109459759A (en) * | 2018-11-13 | 2019-03-12 | 中国科学院合肥物质科学研究院 | City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system |
CN110189400A (en) * | 2019-05-20 | 2019-08-30 | 深圳大学 | Three-dimensional reconstruction method, three-dimensional reconstruction system, mobile terminal and storage device |
- 2020-02-25: CN CN202010116115.2A patent/CN111340942A/en not_active Withdrawn
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109459759A (en) * | 2018-11-13 | 2019-03-12 | 中国科学院合肥物质科学研究院 | City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system |
CN110189400A (en) * | 2019-05-20 | 2019-08-30 | 深圳大学 | Three-dimensional reconstruction method, three-dimensional reconstruction system, mobile terminal and storage device |
Non-Patent Citations (1)
Title |
---|
Yu Weibo et al., "Improved Kirsch Face Edge Detection Method Based on the Canny Algorithm", Microcomputer Information (《微计算机信息》) *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112668610A (en) * | 2020-12-08 | 2021-04-16 | 上海裕芮信息技术有限公司 | Building facade recognition model training method, system, equipment and memory |
CN113985383A (en) * | 2021-12-27 | 2022-01-28 | 广东维正科技有限公司 | Method, device and system for surveying and mapping house outline and readable medium |
CN116486012A (en) * | 2023-04-27 | 2023-07-25 | 中国民用航空总局第二研究所 | A method for building a three-dimensional model of an aircraft, a storage medium, and an electronic device |
CN116486012B (en) * | 2023-04-27 | 2024-01-23 | 中国民用航空总局第二研究所 | Aircraft three-dimensional model construction method, storage medium and electronic equipment |
CN118118911A (en) * | 2024-04-30 | 2024-05-31 | 中国电子科技集团公司第五十四研究所 | Multi-unmanned aerial vehicle collaborative deployment method with safety and communication double constraints |
CN119229036A (en) * | 2024-12-03 | 2024-12-31 | 中科云谷科技有限公司 | Three-dimensional mapping method, device and storage medium |
CN119229036B (en) * | 2024-12-03 | 2025-03-07 | 中科云谷科技有限公司 | Three-dimensional image construction method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340942A (en) | Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof | |
CN113409459B (en) | Method, device and equipment for producing high-precision map and computer storage medium | |
CN106485785B (en) | Scene generation method and system based on indoor three-dimensional modeling and positioning | |
CN104268935A (en) | Feature-based airborne laser point cloud and image data fusion system and method | |
CN112505065A (en) | Method for detecting surface defects of large part by indoor unmanned aerial vehicle | |
CN105096386A (en) | Method for automatically generating geographic maps for large-range complex urban environment | |
CN110033489A (en) | A kind of appraisal procedure, device and the equipment of vehicle location accuracy | |
CN112634370A (en) | Unmanned aerial vehicle dotting method, device, equipment and storage medium | |
CN109459759B (en) | 3D reconstruction method of urban terrain based on quadrotor UAV lidar system | |
CN112967344A (en) | Method, apparatus, storage medium, and program product for camera external reference calibration | |
CN114419259B (en) | A visual positioning method and system based on physical model imaging simulation | |
CN110032211A (en) | Multi-rotor unmanned aerial vehicle automatic obstacle-avoiding method | |
CN115371673A (en) | A binocular camera target location method based on Bundle Adjustment in an unknown environment | |
CN114692720A (en) | Image classification method, device, equipment and storage medium based on aerial view | |
CN117496103A (en) | Technical method for producing multi-mountain terrain area DEM by fusing unmanned aerial vehicle oblique photographing point cloud and terrain map elevation information | |
CN118691776A (en) | A 3D real scene modeling and dynamic updating method based on multi-source data fusion | |
CN116563377A (en) | A Martian Rock Measurement Method Based on Hemispherical Projection Model | |
CN108596947A (en) | A kind of fast-moving target tracking method suitable for RGB-D cameras | |
CN112465849B (en) | Registration method for laser point cloud and sequence image of unmanned aerial vehicle | |
CN117470259A (en) | Primary and secondary type space-ground cooperative multi-sensor fusion three-dimensional map building system | |
CN110021041A (en) | Unmanned scene progressive mesh structural remodeling method based on binocular camera | |
CN117557931B (en) | Planning method for meter optimal inspection point based on three-dimensional scene | |
CN112837366A (en) | A method for target recognition and localization based on binocular camera and convolutional neural network | |
CN112632415A (en) | Web map real-time generation method and image processing server | |
CN112150630A (en) | Using fixed-wing and multi-rotor UAV to solve high-precision modeling method of industrial park |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | WW01 | Invention patent application withdrawn after publication | Application publication date: 20200626 |