CN117115063A - A multi-source data fusion application method - Google Patents


Info

Publication number
CN117115063A
Authority
CN
China
Prior art keywords
point cloud
data
cloud data
model
target
Prior art date
Legal status
Pending
Application number
CN202311271724.5A
Other languages
Chinese (zh)
Inventor
胡俊勇
杨燕
张晓楠
杨秀琼
任玉冰
杜炎坤
刘云鹤
Current Assignee
Shaanxi Tirain Technology Co ltd
Original Assignee
Shaanxi Tirain Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shaanxi Tirain Technology Co., Ltd.
Priority to CN202311271724.5A
Publication of CN117115063A
Legal status: Pending


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06T 11/40: 2D image generation; filling a planar surface by adding surface attributes, e.g. colour or texture
    • G06T 17/05: 3D modelling; geographic models
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Image or video recognition or understanding using neural networks
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20221: Image fusion; image merging


Abstract

The application discloses a multi-source data fusion application method comprising the following steps: step one, processing point cloud data to obtain target point cloud data; step two, processing images to obtain a target image; step three, matching the target point cloud data with the target image to obtain modeling data; step four, model reconstruction: constructing a three-dimensional model and a two-dimensional model of the measured area from the modeling data; step five, model modification: modifying the two-dimensional and three-dimensional models; step six, model display: displaying the two-dimensional spatial data of the two-dimensional model and the three-dimensional spatial data of the three-dimensional model in an integrated manner. The method has a simple structure and a reasonable design: it processes point cloud data by coarse segmentation followed by fine-tuning, fuses historical and newly added images with a learnable weight, then fuses the multi-source point cloud data and multi-source images to obtain modeling data, builds two-dimensional and three-dimensional models separately, and integrates the display of two-dimensional and three-dimensional spatial data.

Description

A multi-source data fusion application method

Technical field

The invention belongs to the technical field of geographic information and specifically relates to a multi-source data fusion application method.

Background art

In modern society, the variety and volume of collected data keep growing, and the data come from different sources, including sensors, social media, mobile devices, and so on. How to fuse these data for analysis and application has become an important technical challenge.

Integrated 2D/3D visualization is an emerging technology that can fuse data from different sources into a unified model for better visualization and analysis.

In geographic information collection and display, it is of great significance to achieve multi-source data fusion, make comprehensive use of geographic and geological data, form an integrated application platform, analyze the geographic environment intuitively, and manage data scenes.

Summary of the invention

The technical problem to be solved by the invention is to address the above deficiencies in the prior art by providing a multi-source data fusion application method. The method has a simple structure and a reasonable design: it fuses multi-source point cloud data and multi-source images to obtain modeling data, builds a two-dimensional model and a three-dimensional model separately, and displays the two-dimensional spatial data and the three-dimensional space in an integrated manner, with the two-dimensional and three-dimensional spatial data located in the same coordinate system.

To solve the above technical problem, the invention adopts the following technical solution: a multi-source data fusion application method, characterized by:

Step 1, obtaining target point cloud data: the first point cloud data, the second point cloud data and the historical point cloud data are input into a coarse segmentation network, which enhances the description of pixel features by learning the relationship between pixels and object-region features, giving a coarse segmentation result; the coarse segmentation result is fused with the historical point cloud data to obtain a fused point cloud; the fused point cloud is input into a fine-tuning network, which outputs an accurate segmentation result, yielding the target point cloud data;

Step 2, obtaining the target image: feature extraction is performed on the first newly added image, the second newly added image and the historical image to obtain a first feature map F1, a second feature map F2 and a third feature map F3; the first feature map F1 and the second feature map F2 are fused to obtain a fused feature map FR; a multi-layer perceptron (MLP) model outputs a learnable weight, and the third feature map F3 and the fused feature map FR are fused based on the learnable weight to obtain the target image;

Step 3, matching the target point cloud data with the target image to obtain modeling data;

Step 4, model reconstruction: constructing a three-dimensional model and a two-dimensional model of the measured area from the modeling data;

Step 5, model modification: modifying the two-dimensional model and the three-dimensional model;

Step 6, model display: displaying the two-dimensional spatial data of the two-dimensional model and the three-dimensional spatial data of the three-dimensional model in an integrated manner.

In the above multi-source data fusion application method, the specific method of Step 1 is:

Step 101, a first airborne device collects first newly added point cloud data of the measured area; the first newly added point cloud data are preprocessed and corrected to obtain the first point cloud data;

Step 102, a first vehicle-mounted device collects second newly added point cloud data of the measured area; the second newly added point cloud data are preprocessed and corrected to obtain the second point cloud data;

Step 103, obtaining historical point cloud data of the measured area;

Step 104, the first point cloud data, the second point cloud data and the historical point cloud data form a point cloud data set; the point cloud data set is divided into a training set, a validation set and a test set, and corresponding labels are added to the point cloud data of the training set;

Step 105, building a segmentation network: the point cloud data of the training set are input into the segmentation network to obtain predicted segmentation results, and the network parameters of the segmentation network are adjusted according to the predicted segmentation results until the training stop condition is met, yielding a trained segmentation network;

Step 106, the first point cloud data and the second point cloud data are separately input into the segmentation network to obtain a coarse segmentation result; the coarse segmentation result is fused with the historical point cloud data to obtain a fused point cloud, and the fused point cloud is sliced;

Step 107, building a fine-tuning network: the slice data are input into the fine-tuning network, which outputs an accurate segmentation result, yielding the target point cloud data.

In the above multi-source data fusion application method, the specific method of Step 2 is:

Step 201, a second airborne device collects imagery of the measured area; aerial triangulation densification is performed on the preprocessed imagery to obtain the first newly added image;

Step 202, a second vehicle-mounted device collects imagery of the measured area; aerial triangulation densification is performed on the preprocessed imagery to obtain the second newly added image;

Step 203, obtaining historical images of the measured area;

Step 204, performing feature extraction on the first newly added image and the second newly added image to obtain the first feature map F1 and the second feature map F2, and performing feature extraction on the historical image to obtain the third feature map F3;

Step 205, fusing the first feature map F1 and the second feature map F2 with a BCA network to obtain the fused feature map FR;

Step 206, building a multi-layer perceptron (MLP) model;

Step 207, performing global max pooling on the fused feature map FR to obtain the first pooled values, and using them to construct the first channel-dimension vector CFR;

Step 208, performing global average pooling on the third feature map F3 to obtain the second pooled values, and using them to construct the second channel-dimension vector CF3;

Step 209, inputting the first channel-dimension vector CFR and the second channel-dimension vector CF3 into the MLP model, which outputs the weight ω1;

Step 210, performing weighted fusion of the fused feature map FR and the third feature map F3 based on the formula F = ω1·FR + (1-ω1)·F3 to obtain the target image.

In the above multi-source data fusion application method, the specific method for matching the target point cloud data with the target image is:

Step 301, determining the matching domain: a CornerNet model reads the corner points of the m-th target image; the corner points form a target box; the ground coordinates Xj of each corner point are obtained by coordinate conversion, and the lines connecting the ground coordinates of the corner points form the matching domain;

Step 302, determining the matching line: one side of the target box is divided into n equal parts, giving n+1 parallel lines; one point is randomly selected on each parallel line, the ground coordinates Xd of each point are obtained by conversion, and the ground coordinates Xd of the points are connected to form the first matching line;

Step 303, in the target point cloud, finding the target point cloud subset corresponding to the matching domain; at least three coordinate points corresponding to the ground coordinates Xd are determined in the target point cloud subset to form the second matching line, and the rotation angle θ between the first matching line and the second matching line is calculated;

Step 304, rotating the target point cloud subset by the angle θ and adding the rotated subset to the feature point set of the m-th target image, giving the complete feature point set of the m-th target image;

Step 305, repeating steps 301-304 until all target point cloud data and target images are matched.

In the above multi-source data fusion application method, modifying the two-dimensional model in Step 5 includes: detecting and filling invalid-value regions of the two-dimensional model; repairing noisy regions of the two-dimensional model; removing floating artifacts from the two-dimensional model; and setting the number of layers, mapping multiple colors to the corresponding layered data, and rendering the two-dimensional model with layered coloring.

In the above multi-source data fusion application method, modifying the three-dimensional model in Step 5 includes: detecting and filling invalid-value regions of the three-dimensional model; repairing noisy regions of the three-dimensional model; removing floating artifacts from the three-dimensional model; setting the number of layers, mapping multiple colors to the corresponding layered data, and rendering the three-dimensional model with layered coloring; and applying automatic light and color balancing to the three-dimensional model.

In the above multi-source data fusion application method, the segmentation network in Step 1 is a V-net; the end of the V-net encoder is followed, in order, by a pooling layer, a 1×1 convolutional layer, and three 3×3 convolutional layers with different dilation rates.

In the above multi-source data fusion application method, the fine-tuning network in step 106 is a U-net, and the U-net uses the weighted sum of a boundary loss function and a binary cross-entropy loss as the overall loss function.

In the above multi-source data fusion application method, the two-dimensional spatial data include vector data, elevation data, image data, and oblique data; the three-dimensional spatial data include model data and oblique data.

In the above multi-source data fusion application method, the preprocessing of point cloud data in Step 1 includes fully automatic point cloud filtering, which includes adaptive filtering, leveling filtering, smoothing filtering, fusion filtering, general filtering, elevation-reduction filtering, and profile filtering.

Compared with the prior art, the invention has the following advantages:

1. The invention has a simple structure and a reasonable design, and is easy to implement, use, and operate.

2. When processing point cloud data, the invention works in two stages. In the first stage, a 3D-convolution segmentation network segments the point cloud data; the segmentation result is fused with the historical point cloud data along the channel dimension and input into the fine-tuning network; a 2D-convolution fine-tuning network then refines the segmentation result, so that the network can fully exploit the features of both three-dimensional and two-dimensional convolutions, improving segmentation accuracy.

3. The invention uses a BCA network to fuse the features of the first newly added image and the second newly added image into a fused feature map, updates the network weight based on the pooled values of the fused feature map and the historical image, and thereby updates the whole multi-layer perceptron (MLP) model, finally obtaining a target image of the same size as the input, with good practical results.

4. The invention displays two-dimensional spatial data and three-dimensional spatial data in an integrated manner.

In summary, the invention has a simple structure and a reasonable design. It processes point cloud data by coarse segmentation followed by fine-tuning, fuses historical and newly added images with learnable weights, fuses multi-source point cloud data and multi-source images to obtain modeling data, builds two-dimensional and three-dimensional models separately, and integrates the display of two-dimensional spatial data and three-dimensional space.

The technical solution of the invention is described in further detail below through the accompanying drawings and embodiments.

Brief description of the drawings

Figure 1 is a flow chart of the method for obtaining target point cloud data according to the invention.

Figure 2 is a flow chart of the method for obtaining the target image according to the invention.

Figure 3 is a flow chart of the method of the invention.

Detailed description of the embodiments

The invention is described in further detail below with reference to the accompanying drawings and embodiments.

It should be noted that, as long as there is no conflict, the embodiments of this application and the features in the embodiments may be combined with each other. The invention is described in detail below with reference to the drawings and in conjunction with the embodiments.

It should be noted that the terms used here are only for describing specific embodiments and are not intended to limit the exemplary embodiments of this application. As used here, the singular forms are also intended to include the plural forms unless the context clearly indicates otherwise. Furthermore, it should be understood that when the terms "comprises" and/or "includes" are used in this specification, they indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.

It should be noted that the terms "first", "second", and so on in the description and claims of this application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the application described here can, for example, be implemented in orders other than those illustrated or described here. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.

For convenience of description, spatially relative terms such as "on", "above", "on the upper surface of", "over", and the like may be used here to describe the spatial relationship between one device or feature and other devices or features as shown in the figures. It should be understood that spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned upside down, a device described as "above" or "over" other devices or structures would then be oriented "below" or "under" the other devices or structures. Thus, the exemplary term "above" can include both the "above" and "below" orientations. The device may also be otherwise oriented (rotated 90 degrees or in other orientations), and the spatially relative descriptions used here are to be interpreted accordingly.

As shown in Figures 1-3, this embodiment comprises a multi-source data fusion application method, characterized by:

Step 1, obtaining target point cloud data: the first point cloud data, the second point cloud data and the historical point cloud data are input into a coarse segmentation network, which enhances the description of pixel features by learning the relationship between pixels and object-region features, giving a coarse segmentation result; the coarse segmentation result is fused with the historical point cloud data to obtain a fused point cloud; the fused point cloud is input into a fine-tuning network, which outputs an accurate segmentation result, yielding the target point cloud data.

In a possible embodiment, the specific method of obtaining the target point cloud data is:

Step 101, a first airborne device collects first newly added point cloud data of the measured area; the first newly added point cloud data are preprocessed and corrected to obtain the first point cloud data;

Step 102, a first vehicle-mounted device collects second newly added point cloud data of the measured area; the second newly added point cloud data are preprocessed and corrected to obtain the second point cloud data.

The preprocessing of the point cloud data includes fully automatic point cloud filtering: adaptive filtering, leveling filtering, smoothing filtering, fusion filtering, general filtering, elevation-reduction filtering, and profile filtering.

Step 103, obtaining historical point cloud data of the measured area;

Step 104, the first point cloud data, the second point cloud data and the historical point cloud data form a point cloud data set; the point cloud data set is divided into a training set, a validation set and a test set, and corresponding labels are added to the point cloud data of the training set.

In actual use, the preprocessing of the first and second newly added point cloud data includes normalization and Gaussian-filter denoising.

Normalization scales the point cloud data to a common scale by subtracting the mean and dividing by the standard deviation; Gaussian-filter denoising applies a Gaussian filter to the point cloud data to remove noise, as sketched below.
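A minimal Python sketch of this preprocessing, assuming scipy is available: z-score normalization followed by Gaussian-weighted neighborhood smoothing. The neighborhood radius and kernel width sigma are illustrative assumptions; the patent only states that Gaussian filtering is used, not its parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize(points: np.ndarray) -> np.ndarray:
    """Z-score normalization: subtract the mean, divide by the standard deviation."""
    return (points - points.mean(axis=0)) / points.std(axis=0)

def gaussian_denoise(points: np.ndarray, radius: float = 0.5, sigma: float = 0.2) -> np.ndarray:
    """Replace each point by the Gaussian-weighted mean of its neighbors
    (radius and sigma are assumed hyperparameters)."""
    tree = cKDTree(points)
    smoothed = np.empty_like(points)
    for i, p in enumerate(points):
        neighbors = points[tree.query_ball_point(p, radius)]
        weights = np.exp(-np.sum((neighbors - p) ** 2, axis=1) / (2 * sigma ** 2))
        smoothed[i] = (weights[:, None] * neighbors).sum(axis=0) / weights.sum()
    return smoothed

cloud = np.random.rand(1000, 3)              # stand-in for collected point cloud data
cloud = gaussian_denoise(normalize(cloud))
```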

The data set is divided into a training set, a validation set and a test set at a ratio of 8:1:1.

It should be noted that the labels attached to point cloud data characterize the category of each coordinate point. For example, the labels may cover the targets to be recognized (target A, target B, target C) plus the background, or just two categories, target and background.

Taking building recognition as an example, each coordinate point in the point cloud data can be labeled as building or background, so that buildings are identified and every coordinate point carries a building or background label.

In a possible embodiment, taking a data set composed of point cloud data from a single acquisition as an example, the data set contains 800 images in total: 640 for model training, 80 for validation, and 80 for model testing.

In a possible embodiment, the point cloud data can be acquired by lidar and transmitted in real time to a remote controller on the ground, which in turn transmits them in real time to terminal devices such as mobile phones, tablet computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, and personal digital assistants (PDA), thereby realizing the collection and real-time transmission of point cloud data; the terminal device then performs the point cloud segmentation. The embodiments of this application place no restriction on how the point cloud data are acquired.

Step 105, building a segmentation network: the point cloud data of the training set are input into the segmentation network to obtain predicted segmentation results, and the network parameters of the segmentation network are adjusted according to the predicted segmentation results until the training stop condition is met, yielding a trained segmentation network;

Step 106, the first point cloud data and the second point cloud data are separately input into the segmentation network to obtain a coarse segmentation result; the coarse segmentation result is fused with the historical point cloud data to obtain a fused point cloud, and the fused point cloud is sliced;

Step 107, building a fine-tuning network: the slice data are input into the fine-tuning network, which outputs an accurate segmentation result, yielding the target point cloud data.

In the first stage, a 3D-convolution segmentation network segments the point cloud data; the segmentation result is fused with the historical point cloud data along the channel dimension and input into the fine-tuning network; a 2D-convolution fine-tuning network then refines the segmentation result. The network can thus fully exploit the features of both three-dimensional and two-dimensional convolutions, improving segmentation accuracy; a sketch of this pipeline is shown below.
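The following PyTorch sketch illustrates the coarse-to-fine arrangement under stated assumptions: CoarseNet3D and FineNet2D are minimal stand-ins for the V-net and U-net described below, the channel-dimension fusion is a plain torch.cat, and all shapes and channel counts are illustrative.

```python
import torch
import torch.nn as nn

class CoarseNet3D(nn.Module):
    """Stand-in for the 3D-convolution coarse segmentation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1))

    def forward(self, x):                    # x: (B, 1, D, H, W) voxelized cloud
        return self.net(x)

class FineNet2D(nn.Module):
    """Stand-in for the 2D-convolution fine-tuning network."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

coarse, fine = CoarseNet3D(), FineNet2D(in_ch=2)
new_cloud = torch.rand(1, 1, 16, 64, 64)     # voxelized new point cloud (assumed shape)
hist_cloud = torch.rand(1, 1, 16, 64, 64)    # voxelized historical point cloud

coarse_out = coarse(new_cloud)
# Fuse the coarse result with the historical data along the channel dimension,
# then refine each depth slice with the 2D network.
fused = torch.cat([coarse_out, hist_cloud], dim=1)          # (B, 2, D, H, W)
slices = fused.permute(2, 0, 1, 3, 4)                       # one 2D slice per depth
refined = torch.stack([fine(s) for s in slices], dim=2)     # (B, 1, D, H, W)
```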

Adding historical point cloud data to the training set fully expresses the local distribution of geometric features, so that the segmentation network adapts well when semantically segmenting the first and second point cloud data, improving segmentation accuracy. Using the segmentation network to extract key points of the point cloud reduces the number of points while keeping the point cloud representative, which improves the efficiency of the matching and fusion in Step 3.

Fusing the coarse segmentation result with the historical point cloud data makes full use of the rich building features widely present in the historical point clouds when the coarse result is sparse, enabling effective localization of buildings.

The fine-tuning network subdivides pixels and further refines the segmentation result of the segmentation network.

In this embodiment, the segmentation network in Step 1 is a V-net; the end of the V-net encoder is followed, in order, by a pooling layer, a 1×1 convolutional layer, and three 3×3 convolutional layers with different dilation rates, as sketched below.
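A minimal PyTorch sketch of such an encoder tail; the channel count and the dilation rates (1, 2, 4) are assumptions for illustration, as the patent does not specify them.

```python
import torch
import torch.nn as nn

class EncoderTail(nn.Module):
    """Pooling, a 1x1 convolution, and three 3x3 convolutions with different
    dilation rates, appended to the end of a V-net-style encoder."""
    def __init__(self, ch: int = 64, dilations=(1, 2, 4)):   # dilations are assumed
        super().__init__()
        self.pool = nn.MaxPool3d(2)
        self.reduce = nn.Conv3d(ch, ch, kernel_size=1)
        self.dilated = nn.ModuleList(
            nn.Conv3d(ch, ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations)

    def forward(self, x):
        x = self.reduce(self.pool(x))
        for conv in self.dilated:            # applied in order, as in the patent
            x = torch.relu(conv(x))
        return x

features = torch.rand(1, 64, 16, 32, 32)     # assumed encoder output shape
out = EncoderTail()(features)
```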

In this embodiment, the fine-tuning network in Step 1 is a U-net, and the U-net uses the weighted sum of a boundary loss function and a binary cross-entropy loss as its overall loss function, as sketched below.
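One possible form of this weighted loss, as a sketch under stated assumptions: the boundary term shown here (predicted probabilities weighted by a precomputed distance map of the ground-truth boundary) and the weight alpha are illustrative choices, since the patent only states that a weighted sum of the two losses is used.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def boundary_loss(logits: torch.Tensor, dist_map: torch.Tensor) -> torch.Tensor:
    """Boundary loss: predicted probabilities weighted by a precomputed signed
    distance map of the ground-truth boundary (assumed formulation)."""
    return (torch.sigmoid(logits) * dist_map).mean()

def total_loss(logits, target, dist_map, alpha: float = 0.7):
    """Weighted sum of binary cross-entropy and boundary loss; alpha is assumed."""
    return alpha * bce(logits, target) + (1 - alpha) * boundary_loss(logits, dist_map)

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
dist = torch.randn(2, 1, 64, 64)             # signed distance map of the ground truth
total_loss(logits, target, dist).backward()
```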

A U-net is a convolutional network designed for biomedical image segmentation; a V-net is a fully convolutional neural network designed for three-dimensional medical image segmentation.

Step 2, processing the images to obtain the target image:

Step 201, a second airborne device collects imagery of the measured area; aerial triangulation densification is performed on the preprocessed imagery to obtain the first newly added image;

Step 202, a second vehicle-mounted device collects imagery of the measured area; aerial triangulation densification is performed on the preprocessed imagery to obtain the second newly added image;

Step 203, obtaining historical images of the measured area;

Step 204, performing feature extraction on the first newly added image and the second newly added image to obtain the first feature map F1 and the second feature map F2, and performing feature extraction on the historical image to obtain the third feature map F3.

The feature extraction module contains a deep neural network; its feature extraction method is well known to those skilled in the art, so its specific implementation is not described in detail here.

Step 205, fusing the first feature map F1 and the second feature map F2 with a BCA network. The BCA network fuses F1 and F2 with convolutional layers; it is supervised with a cross-entropy loss and uses an attention loss to strengthen the synergy between edge detection and semantic segmentation. The weights of the cross-entropy loss and the attention loss are 1 and 0.5, respectively; this fusion and loss weighting are sketched below.

BCA stands for Boundary guided Context Aggregation module, i.e. a boundary-guided aggregation network.
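A minimal sketch of the fusion and the loss weighting under stated assumptions: the concatenate-and-convolve fusion and the attention-loss form (mean squared error against an edge map) are stand-ins, since the patent names the losses and their weights (1 and 0.5) but not their exact formulations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCAFusion(nn.Module):
    """Stand-in for the BCA fusion: concatenate F1 and F2, fuse with a conv layer."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, f1, f2):
        return self.fuse(torch.cat([f1, f2], dim=1))

def bca_loss(seg_logits, seg_target, attn_pred, edge_map):
    """Cross-entropy (weight 1) plus attention loss (weight 0.5), per the patent;
    the MSE-to-edge-map attention term is an assumption."""
    return 1.0 * F.cross_entropy(seg_logits, seg_target) + 0.5 * F.mse_loss(attn_pred, edge_map)

f1 = torch.rand(2, 64, 32, 32)               # first feature map F1
f2 = torch.rand(2, 64, 32, 32)               # second feature map F2
fr = BCAFusion()(f1, f2)                     # fused feature map FR
```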

Step 206, building a multi-layer perceptron (MLP) model.

MLP stands for Multilayer Perceptron, a feed-forward artificial neural network model.

Step 207, performing global max pooling on the fused feature map FR to obtain the first pooled values, and using them to construct the first channel-dimension vector CFR;

Step 208, performing global average pooling on the third feature map F3 to obtain the second pooled values, and using them to construct the second channel-dimension vector CF3;

Step 209, inputting the first channel-dimension vector CFR and the second channel-dimension vector CF3 into the MLP model, which outputs the weight ω1;

Step 210, performing weighted fusion of the fused feature map FR and the third feature map F3 based on the formula F = ω1·FR + (1-ω1)·F3 to obtain the target image.

First, pooling is applied separately to the fused feature map FR and the third feature map F3, which have the same number of channels; the pooled values are used to build channel-dimension vectors, and the first channel-dimension vector CFR and the second channel-dimension vector CF3 are input into the MLP model, which outputs the weight ω1.

The learnable weight ω1 is derived from the pooled values: different image features yield different pooled values and hence a different learnable weight ω1, so the weight is updated through the MLP model and a target image of the same size as the input is finally obtained, with good practical results. This weighting scheme is sketched below.
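A minimal PyTorch sketch of Steps 207-210; the hidden width of the MLP and the sigmoid used to keep ω1 in [0, 1] are illustrative choices not specified in the patent.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Learnable-weight fusion F = w1*FR + (1 - w1)*F3 (Steps 207-210)."""
    def __init__(self, ch: int = 64, hidden: int = 32):      # hidden width is assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * ch, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())              # keeps w1 in [0, 1]

    def forward(self, fr: torch.Tensor, f3: torch.Tensor) -> torch.Tensor:
        c_fr = fr.amax(dim=(2, 3))           # global max pooling -> C_FR, (B, C)
        c_f3 = f3.mean(dim=(2, 3))           # global average pooling -> C_F3, (B, C)
        w1 = self.mlp(torch.cat([c_fr, c_f3], dim=1))        # weight w1, (B, 1)
        w1 = w1[:, :, None, None]                            # broadcast over H, W
        return w1 * fr + (1 - w1) * f3                       # target image F

fr = torch.rand(2, 64, 32, 32)               # fused feature map FR
f3 = torch.rand(2, 64, 32, 32)               # third (historical) feature map F3
target = WeightedFusion()(fr, f3)            # same spatial size as the inputs
```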

Because of the shooting angle, the first newly added image collected by the second airborne device is rich in information in its top (roof) regions, but its side regions are information-poor or even empty due to viewing angle or occlusion. The second newly added image collected by the second vehicle-mounted device is the opposite: its top regions are information-poor or even empty, while its side information is complete and rich. Fusing the first and second newly added images therefore provides more detailed geometric constraint information. Fusing in the historical images as well makes full use of the rich building features widely present in them when the feature information of the newly added images is sparse, avoiding gaps in image information and enabling effective localization of buildings.

Step 3, matching the target point cloud data with the target image to obtain the modeling data, specifically:

Step 301, determining the matching domain: a CornerNet model reads the corner points of the m-th target image; the corner points form a target box; the ground coordinates Xj of each corner point are obtained by coordinate conversion, and the lines connecting the ground coordinates of the corner points form the matching domain.

It should be noted that the CornerNet model predicts two sets of heat maps with a convolutional network, representing the top-left and bottom-right positions of the target, respectively. The top-left and bottom-right positions are the corner points, and together they define the target box; decoding them is sketched below.
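A minimal sketch of decoding such corner heat maps into a target box; taking the argmax of each heat map is a simplification of CornerNet's full decoding (which also uses embeddings and offsets) and is shown only to illustrate the idea.

```python
import numpy as np

def decode_corners(tl_heat: np.ndarray, br_heat: np.ndarray):
    """Take the peak of the top-left and bottom-right heat maps as the corner
    points and return the target box (x1, y1, x2, y2)."""
    y1, x1 = np.unravel_index(np.argmax(tl_heat), tl_heat.shape)
    y2, x2 = np.unravel_index(np.argmax(br_heat), br_heat.shape)
    return int(x1), int(y1), int(x2), int(y2)

tl = np.random.rand(128, 128)                # stand-in top-left corner heat map
br = np.random.rand(128, 128)                # stand-in bottom-right corner heat map
box = decode_corners(tl, br)
```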

Step 302, determining the matching line: one side of the target box is divided into n equal parts, giving n+1 parallel lines; one point is randomly selected on each parallel line, the ground coordinates Xd of each point are obtained by conversion, and the ground coordinates Xd of the points are connected to form the first matching line;

Step 303, in the target point cloud, finding the target point cloud subset corresponding to the matching domain; at least three coordinate points corresponding to the ground coordinates Xd are determined in the target point cloud subset to form the second matching line, and the rotation angle θ between the first matching line and the second matching line is calculated;

Step 304, rotating the target point cloud subset by the angle θ and adding the rotated subset to the feature point set of the m-th target image, giving the complete feature point set of the m-th target image;

Step 305, repeating steps 301-304 until all target point cloud data and target images are matched.

The target point cloud data corresponding to a building are rotated so that they match and fuse with the target image of that building, completing the fusion of aerial and ground imagery. Filling the building frame provided by the target image with the centimeter-level information carried by the target point cloud data improves the accuracy of building localization. The rotation step is sketched below.
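The following sketch illustrates Steps 303-304 under stated assumptions: the direction of each matching line is estimated by a least-squares (PCA) fit to its points, and the point cloud subset is rotated about the vertical axis; the patent does not prescribe how θ is computed.

```python
import numpy as np

def line_direction(points_xy: np.ndarray) -> float:
    """Angle of the principal direction of a 2D polyline via PCA (assumed method)."""
    centered = points_xy - points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dx, dy = vt[0]
    return np.arctan2(dy, dx)

def rotate_about_z(points: np.ndarray, theta: float) -> np.ndarray:
    """Rotate a 3D point cloud subset by theta about the vertical (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

first_line = np.random.rand(8, 2)            # ground coordinates Xd on the first matching line
second_line = np.random.rand(8, 2)           # >= 3 corresponding points from the point cloud subset
theta = line_direction(first_line) - line_direction(second_line)
subset = np.random.rand(500, 3)              # target point cloud subset
aligned = rotate_about_z(subset, theta)      # rotated subset to merge into the feature point set
```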

Step 4, model reconstruction: constructing the three-dimensional model and the two-dimensional model of the measured area from the modeling data. It should be noted that model reconstruction builds the three-dimensional and two-dimensional models of the buildings in the measured area using integrated 2D/3D technology.

Step 5, model modification: modifying the two-dimensional model and the three-dimensional model.

Modifying the two-dimensional model includes: detecting and filling invalid-value regions of the two-dimensional model; repairing noisy regions; removing floating artifacts; and setting the number of layers, mapping multiple colors to the corresponding layered data, and rendering the two-dimensional model with layered coloring.

Modifying the three-dimensional model includes: detecting and filling invalid-value regions of the three-dimensional model; repairing noisy regions; removing floating artifacts; setting the number of layers, mapping multiple colors to the corresponding layered data, and rendering the three-dimensional model with layered coloring; and applying automatic light and color balancing to the three-dimensional model.

Step 6, model display: displaying the two-dimensional spatial data of the two-dimensional model and the three-dimensional spatial data of the three-dimensional model in an integrated manner based on integrated 2D/3D technology.

The two-dimensional spatial data include vector data, elevation data, image data, and oblique data; the three-dimensional spatial data include model data and oblique data.

Integrated 2D/3D technology is a new generation of GIS technology; GIS stands for Geographic Information System. Simply put, integrated 2D/3D technology brings the two-dimensional and three-dimensional spatial data of a GIS together on the same platform. Existing two-dimensional data can be used directly on the three-dimensional platform, and two-dimensional spatial data can be visualized directly in the three-dimensional scene without any conversion. When using this technology, users can operate on building data directly on the two-dimensional GIS map, while the three-dimensional GIS map of the building is generated synchronously in the three-dimensional scene.

For example, when a user creates the location data of a building on the two-dimensional GIS map, the integrated 2D/3D technology establishes the topological relationships of the building in the two-dimensional scene, and by obtaining the elevation data of the building in the two-dimensional scene, the three-dimensional GIS map of the building is generated synchronously.

Contents not described in detail in this specification belong to the prior art known to those skilled in the art.

The above are only embodiments of the invention and do not limit the invention in any way. Any simple modification, change, or equivalent structural variation made to the above embodiments according to the technical essence of the invention still falls within the scope of protection of the technical solution of the invention.

Claims (10)

1.一种多源数据融合应用方法,其特征在于:1. A multi-source data fusion application method, characterized by: 步骤一、获取目标点云数据:将第一点云数据、第二点云数据和历史点云数据输入粗分割网络,根据粗分割网络学习像素与对象区域特征之间的关系来增强像素特征的描述,得到粗分割结果;将粗分割结果和历史点云数据融合,得到融合点云,将融合点云输入微调网络,微调网络输出精准的分割结果,得到目标点云数据;Step 1. Obtain target point cloud data: Input the first point cloud data, second point cloud data and historical point cloud data into the coarse segmentation network, and learn the relationship between pixels and object area features based on the coarse segmentation network to enhance the pixel features. Description, obtain the rough segmentation result; fuse the rough segmentation result with the historical point cloud data to obtain the fused point cloud, input the fused point cloud into the fine-tuning network, and the fine-tuning network outputs accurate segmentation results to obtain the target point cloud data; 步骤二、获取目标影像:对第一新增影像、第二新增影像和历史影像分别进行特征提取,得到第一特征图F1、第二特征图F2和第三特征图F3,对第一特征图F1和第二特征图F2进行融合,得到融合特征图FR;基于多层感知mlp模型输出可学习权重,基于可学习权重融合第三特征图F3和融合特征图FR,得到目标影像;Step 2. Obtain the target image: perform feature extraction on the first new image, the second new image and the historical image respectively to obtain the first feature map F 1 , the second feature map F 2 and the third feature map F 3 . The first feature map F 1 and the second feature map F 2 are fused to obtain the fused feature map F R ; based on the multi-layer perception mlp model, the learnable weight is output, and the third feature map F 3 and the fused feature map F are fused based on the learnable weight R , get the target image; 步骤三、对目标点云数据和目标影像进行匹配,得到建模数据;Step 3: Match the target point cloud data and the target image to obtain modeling data; 步骤四、模型重建:基于建模数据,构建被测区域的三维模型和二维模型;Step 4. Model reconstruction: Based on the modeling data, construct a three-dimensional model and a two-dimensional model of the measured area; 步骤五、模型修饰:修饰二维模型和三维模型;Step 5. Model modification: modify the two-dimensional model and the three-dimensional model; 步骤六、模型显示:将所述二维模型的二维空间数据和所述三维模型的三维空间数据一体化显示。Step 6. Model display: display the two-dimensional spatial data of the two-dimensional model and the three-dimensional spatial data of the three-dimensional model in an integrated manner. 2.按照权利要求1所述的一种多源数据融合应用方法,其特征在于:步骤一的具体方法为:2. A multi-source data fusion application method according to claim 1, characterized in that: the specific method of step one is: 步骤101、第一航空设备采集被测区域的第一新增点云数据,对第一新增点云数据进行预处理,纠正第一新增点云数据,得到第一点云数据;Step 101. The first aviation equipment collects the first new point cloud data of the measured area, preprocesses the first new point cloud data, corrects the first new point cloud data, and obtains the first point cloud data; 步骤102、第一车载设备采集被测区域的第二新增点云数据,对第二新增点云数据进行预处理,纠正第二新增点云数据,得到第二点云数据;Step 102: The first vehicle-mounted device collects the second newly added point cloud data of the measured area, preprocesses the second newly added point cloud data, corrects the second newly added point cloud data, and obtains the second point cloud data; 步骤103、获取被测区域的历史点云数据;Step 103: Obtain historical point cloud data of the measured area; 步骤104、第一点云数据、第二点云数据和历史点云数据构成点云数据集,将点云数据集划分为训练集、验证集和测试集,对训练集的点云数据添加对应的标签;Step 104. The first point cloud data, the second point cloud data and the historical point cloud data constitute a point cloud data set. The point cloud data set is divided into a training set, a verification set and a test set, and corresponding points are added to the point cloud data of the training set. 
Tag of; 步骤105、构建分割网络,将训练集的点云数据输入分割网络得到预测分割结果,根据预测分割结果调整分割网络的网络参数,直至满足训练停止条件,得到训练好的分割网络;Step 105: Construct a segmentation network, input the point cloud data of the training set into the segmentation network to obtain the predicted segmentation results, and adjust the network parameters of the segmentation network according to the predicted segmentation results until the training stop conditions are met, and a trained segmentation network is obtained; 步骤106、将第一点云数据和第二点云数据分别输入分割网络,得到粗分割结果,将粗分割结果和历史点云数据融合,得到融合点云,对融合点云切片;Step 106: Input the first point cloud data and the second point cloud data into the segmentation network respectively to obtain a rough segmentation result, fuse the rough segmentation result with the historical point cloud data to obtain a fused point cloud, and slice the fused point cloud; 步骤107、构建微调网络,将切片数据输入微调网络,微调网络输出精准的分割结果,得到目标点云数据。Step 107: Construct a fine-tuning network, input the slice data into the fine-tuning network, and the fine-tuning network outputs accurate segmentation results to obtain target point cloud data. 3.按照权利要求1所述的一种多源数据融合应用方法,其特征在于:步骤二的具体方法为:3. A multi-source data fusion application method according to claim 1, characterized in that: the specific method of step two is: 步骤201、第二航空设备采集被测区域的第一新增影像,对预处理后的第一新增影像进行空三加密,得到第一新增影像;Step 201: The second aviation equipment collects the first new image of the measured area, and performs aerial three-dimensional encryption on the preprocessed first new image to obtain the first new image; 步骤202、第二车载设备采集被测区域的第二新增影像,对预处理后的第二新增影像进行空三加密,得到第二新增影像;Step 202: The second vehicle-mounted device collects the second new image of the measured area, and performs air triple encryption on the preprocessed second new image to obtain the second new image; 步骤203、获取被测区域的历史影像;Step 203: Obtain historical images of the measured area; 步骤204、对第一新增影像和第二新增影像分别进行特征提取,得到第一特征图F1和第二特征图F2,对历史影像进行特征提取,得到第三特征图F3Step 204: Perform feature extraction on the first newly added image and the second newly added image respectively to obtain the first feature map F 1 and the second feature map F 2 , perform feature extraction on the historical image to obtain the third feature map F 3 ; 步骤205、采用BCA网络对第一特征图F1和第二特征图F2进行融合,得到融合特征图FRStep 205: Use the BCA network to fuse the first feature map F 1 and the second feature map F 2 to obtain the fused feature map FR ; 步骤206、构建多层感知mlp模型;Step 206: Build a multi-layer perception mlp model; 步骤207、对融合特征图FR进行全局最大池化处理,得到第一池化值,用第一池化值构建第一通道维度向量CFRStep 207: Perform global maximum pooling processing on the fused feature map F R to obtain the first pooling value, and use the first pooling value to construct the first channel dimension vector C FR ; 步骤208、对第三特征图F3进行全局平均池化处理,得到第二池化值,用第二池化值构建第二通道维度向量CF3Step 208: Perform global average pooling processing on the third feature map F 3 to obtain the second pooling value, and use the second pooling value to construct the second channel dimension vector C F3 ; 步骤209、将第一通道维度向量CFR和第二通道维度向量CF3输入多层感知mlp模型,mlp模型输出权重ω1Step 209: Input the first channel dimension vector C FR and the second channel dimension vector C F3 into the multi-layer perception mlp model, and the mlp model outputs weight ω 1 ; 步骤2010、基于公式F=ω1·FR+(1-ω1)·F3对融合特征图FR和第三特征图F3进行加权融合,得到目标影像F。Step 2010: Perform weighted fusion of the fusion feature map F R and the third feature map F 3 based on the formula F=ω 1 · FR + (1-ω 1 )·F 3 to obtain the target image F. 4.按照权利要求1所述的一种多源数据融合应用方法,其特征在于:对目标点云数据和目标影像进行匹配的具体方法为:4. 
A multi-source data fusion application method according to claim 1, characterized in that: the specific method for matching target point cloud data and target images is: 步骤301、确定匹配域:采用CornerNet模型读取第m个目标影像的角点,由角点组成目标框,换算得到各个角点的地面坐标Xj,各个角点的地面坐标连线构成匹配域;Step 301. Determine the matching domain: Use the CornerNet model to read the corner points of the m-th target image, form the target frame from the corner points, convert the ground coordinates X j of each corner point, and connect the ground coordinates of each corner point to form the matching domain. ; 步骤302、确定匹配线:将目标框的其中一条边n等分,得到n+1条平行线,每条平行线上随机选取一个点,换算得到各个点的地面坐标Xd,各个点的地面坐标Xd连接起来构成第一匹配线;Step 302. Determine the matching line: Divide one side of the target frame into n equal parts to obtain n+1 parallel lines. Randomly select a point on each parallel line and convert to obtain the ground coordinates X d of each point. The ground coordinates of each point are obtained. The coordinates X d are connected to form the first matching line; 步骤303、在目标点云中,找到与匹配域相对应的目标点云子集,在目标点云子集中至少确定3个与地面坐标Xd相对应的坐标点,构成第二匹配线,计算第一匹配线与第二匹配线之间的旋转角度θ;Step 303. In the target point cloud, find a target point cloud subset corresponding to the matching domain, and determine at least 3 coordinate points corresponding to the ground coordinates X d in the target point cloud subset to form a second matching line. Calculate The rotation angle θ between the first matching line and the second matching line; 步骤304、将目标点云子集对应旋转角度θ,将旋转后的目标点云子集加入第m个目标影像的特征点集合中,获得第m个目标影像的完整特征点集合;Step 304: Correspond to the rotation angle θ of the target point cloud subset, and add the rotated target point cloud subset to the feature point set of the m-th target image to obtain the complete feature point set of the m-th target image; 步骤305、重复步骤301-304,完成所有目标点云数据和目标影像的匹配。Step 305: Repeat steps 301-304 to complete the matching of all target point cloud data and target images. 5.按照权利要求1所述的一种多源数据融合应用方法,其特征在于:步骤五中的修饰二维模型包括:检测二维模型的无效值区域并进行填充;修复二维模型的噪声区域;清除二维模型的悬浮物;设置分层数,将多个颜色分别对应到多个分层数据上,对二维模型进行分层设色的渲染。5. A multi-source data fusion application method according to claim 1, characterized in that: modifying the two-dimensional model in step five includes: detecting the invalid value area of the two-dimensional model and filling it; repairing the noise of the two-dimensional model area; clear the suspended objects of the 2D model; set the number of layers, map multiple colors to multiple layered data, and render the 2D model with layered coloring. 6.按照权利要求1所述的一种多源数据融合应用方法,其特征在于:步骤五中的修饰三维模型包括:检测三维模型的无效值区域并进行填充;修复三维模型的噪声区域;清除三维模型的悬浮物;设置分层数,将多个颜色分别对应到多个分层数据上,对三维模型进行分层设色的渲染;对三维模型进行自动匀光匀色处理。6. A multi-source data fusion application method according to claim 1, characterized in that: modifying the three-dimensional model in step five includes: detecting invalid value areas of the three-dimensional model and filling them; repairing the noise areas of the three-dimensional model; and clearing Suspended objects of the 3D model; set the number of layers, map multiple colors to multiple layered data respectively, perform layered color rendering of the 3D model; perform automatic light and color uniformity processing on the 3D model. 7.按照权利要求2所述的一种多源数据融合应用方法,其特征在于:步骤一中的分割网络采用V-net网络,V-net网络的编码器末尾依次设置池化层、1×1卷积层和3个不同膨胀率的3×3卷积层。7. 
7. A multi-source data fusion application method according to claim 2, characterized in that the segmentation network in step one adopts a V-net, and the end of the V-net encoder is provided, in sequence, with a pooling layer, a 1×1 convolutional layer, and three 3×3 convolutional layers with different dilation rates.
8. A multi-source data fusion application method according to claim 2, characterized in that the fine-tuning network in step one adopts a U-net, and the U-net uses the weighted sum of a boundary loss function and a binary cross-entropy loss as the overall loss function.
9. A multi-source data fusion application method according to claim 1 or 6, characterized in that the two-dimensional spatial data includes vector data, elevation data, image data, and oblique data, and the three-dimensional spatial data includes model data and oblique data.
10. A multi-source data fusion application method according to claim 1 or 6, characterized in that preprocessing the point cloud data in step one includes fully automatic point cloud filtering, which includes adaptive filtering, leveling filtering, smoothing filtering, fusion filtering, general filtering, elevation-reduction filtering, and profile filtering.
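Claim 7's addition at the end of the V-net encoder can be sketched as below. 2D convolutions are used to match the 1×1/3×3 notation, and the dilation rates (1, 2, 4), channel counts, pooling type, and the summation of the dilated branches are all assumptions; the claim only fixes the layer types and kernel sizes.

```python
import torch
import torch.nn as nn

class EncoderTail(nn.Module):
    """Pooling -> 1x1 conv -> three 3x3 convs with different dilation rates,
    appended after the V-net encoder (hypothetical sketch of claim 7)."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.conv1x1 = nn.Conv2d(channels, channels, kernel_size=1)
        # padding = dilation keeps the spatial size of each 3x3 branch
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1x1(self.pool(x))
        # Aggregate multi-scale context by summing the dilated branches
        return sum(branch(x) for branch in self.branches)
```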
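Claim 8's overall loss is the weighted sum of a boundary loss and a binary cross-entropy loss. A minimal sketch, assuming a Kervadec-style boundary term (prediction times a precomputed signed distance map of the ground-truth boundary) and an even weight α = 0.5, neither of which the claim specifies:

```python
import torch
import torch.nn.functional as F

def overall_loss(pred: torch.Tensor, target: torch.Tensor,
                 signed_dist: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of a boundary loss and binary cross-entropy (claim 8).
    `pred` holds foreground probabilities in (0, 1); `signed_dist` is a
    precomputed signed distance map of the ground-truth boundary, negative
    inside the object. Both the boundary formulation and alpha are assumed."""
    bce = F.binary_cross_entropy(pred, target)
    boundary = (pred * signed_dist).mean()  # Kervadec-style boundary term
    return alpha * boundary + (1.0 - alpha) * bce
```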
CN202311271724.5A 2023-09-28 2023-09-28 A multi-source data fusion application method Pending CN117115063A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311271724.5A CN117115063A (en) 2023-09-28 2023-09-28 A multi-source data fusion application method


Publications (1)

Publication Number Publication Date
CN117115063A true CN117115063A (en) 2023-11-24

Family

ID=88811108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311271724.5A Pending CN117115063A (en) 2023-09-28 2023-09-28 A multi-source data fusion application method

Country Status (1)

Country Link
CN (1) CN117115063A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214288A (en) * 2018-08-02 2019-01-15 广州市鑫广飞信息科技有限公司 Inter-frame scene matching method and device for aerial video based on a multi-rotor unmanned aerial vehicle
WO2021232463A1 (en) * 2020-05-19 2021-11-25 北京数字绿土科技有限公司 Multi-source mobile measurement point cloud data air-ground integrated fusion method and storage medium
CN112529044A (en) * 2020-11-20 2021-03-19 西南交通大学 Railway contact net extraction and classification method based on vehicle-mounted LiDAR
CN112801945A (en) * 2021-01-11 2021-05-14 西北大学 Depth Gaussian mixture model skull registration method based on dual attention mechanism feature extraction
CN116453107A (en) * 2023-03-20 2023-07-18 北京迈格威科技有限公司 3D target detection method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HAOXIANG MA et al.: "Boundary Guided Context Aggregation for Semantic Segmentation", arXiv:2110.14587v1 [cs.CV], 27 October 2021 (2021-10-27), pages 1-13 *
WU Fenghua et al.: "Fundamentals of Geographic Information Systems" (《地理信息系统基础》), Wuhan University Press, 31 December 2014, pages 182-199 *
WEN Xuedong et al.: "A building 3D model reconstruction method fusing multi-source features", Geomatics and Information Science of Wuhan University, 31 May 2019 (2019-05-31), pages 1-7 *
FAN Xiquan et al.: "Principles and Design of Unmanned Ground Systems" (《地面无人系统原理与设计》), Southwest Jiaotong University Press, 31 August 2021, pages 60-64 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117889781A (en) * 2024-03-13 2024-04-16 深圳市高松科技有限公司 EDM electrode rapid detection device
CN118097656A (en) * 2024-04-01 2024-05-28 中创智元信息技术有限公司 Spatial data construction method based on live-action three-dimension

Similar Documents

Publication Publication Date Title
US11423610B2 (en) Large-scale environment-modeling with geometric optimization
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN111815776B (en) Fine geometric reconstruction method for three-dimensional building integrating airborne and vehicle-mounted three-dimensional laser point clouds and street view images
Chen et al. Automatic building information model reconstruction in high-density urban areas: Augmenting multi-source data with architectural knowledge
CN111486855B (en) Indoor two-dimensional semantic grid map construction method with object navigation points
KR102126724B1 (en) Method and apparatus for restoring point cloud data
CN108596101B (en) A multi-target detection method for remote sensing images based on convolutional neural network
Chen et al. A methodology for automated segmentation and reconstruction of urban 3-D buildings from ALS point clouds
US10297074B2 (en) Three-dimensional modeling from optical capture
US20190026400A1 (en) Three-dimensional modeling from point cloud data migration
WO2019029099A1 (en) Image gradient combined optimization-based binocular visual sense mileage calculating method
US11430087B2 (en) Using maps comprising covariances in multi-resolution voxels
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN117115063A (en) A multi-source data fusion application method
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN105205808A (en) Multi-vision image dense coupling fusion method and system based on multiple characteristics and multiple constraints
CN102308320A (en) Generating three-dimensional models from images
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
US11288861B2 (en) Maps comprising covariances in multi-resolution voxels
Xiao et al. Building segmentation and modeling from airborne LiDAR data
CN108898669A (en) Data processing method, device, medium and calculating equipment
Takacs et al. 3D mobile augmented reality in urban scenes
CN113902802A (en) Visual positioning method and related device, electronic equipment and storage medium
CN115512247A (en) Regional building damage grade assessment method based on image multi-parameter extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination