WO2023060632A1 - Street view ground object multi-dimensional extraction method and system based on point cloud data - Google Patents

Street view ground object multi-dimensional extraction method and system based on point cloud data

Info

Publication number
WO2023060632A1
WO2023060632A1 · PCT/CN2021/124565 · CN2021124565W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
cloud data
ground
grid
subunit
Prior art date
Application number
PCT/CN2021/124565
Other languages
French (fr)
Chinese (zh)
Inventor
罗再谦
刘颖
向煜
黄志�
周兵
韩�熙
华媛媛
朱勃
李兵
张彦
曹欣
王永刚
王军涛
李楠楠
王翔
Original Assignee
重庆数字城市科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 重庆数字城市科技有限公司
Publication of WO2023060632A1 publication Critical patent/WO2023060632A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens

Definitions

  • the present invention relates to the technical field of point cloud data extraction, in particular to a method and system for multi-dimensional extraction of street view features based on point cloud data.
  • LIDAR (laser radar) technology can directly acquire three-dimensional information about the ground.
  • the laser radar scanning system avoids the necessary steps of orientation and image matching in traditional photogrammetry.
  • the point cloud data acquired by LIDAR contains rich environmental information, including ground information, vegetation information, wire information, building information, etc. To accurately obtain the 3D information of buildings, the LIDAR data must be processed.
  • the artificial and natural environments in which buildings sit are generally complex, containing vegetation such as trees as well as roads, poles, towers, and other artificial features.
  • existing approaches extract ground objects along only a single dimension, so the types of objects they can extract are limited and their accuracy is low.
  • the purpose of the present invention is to propose a multi-dimensional extraction method and system for street scene features based on point cloud data; by using multi-dimensional methods to extract different types of ground objects from non-ground point cloud data, street features can be extracted quickly, effectively, and accurately.
  • the present invention discloses the following technical solutions.
  • the present invention discloses a multi-dimensional extraction method of street view features based on point cloud data, including:
  • Raw point cloud data includes absolute coordinates and elevation information
  • a multi-dimensional method is used to extract various ground objects by category from the non-ground point cloud to obtain different types of ground object recognition.
  • the multi-dimensional method is used to extract various ground objects by category from the non-ground point cloud, which specifically includes:
  • the building point cloud data in the non-ground point cloud is extracted.
  • the extracting the tree class point cloud data in the non-ground point cloud specifically includes:
  • the street tree point cloud is extracted from the non-ground point cloud data according to the street tree outline.
  • the extracting the rod-like object point cloud data in the non-ground point cloud specifically includes:
  • the second clustering is performed on the layered clustering results in the elevation direction to extract the point cloud of the rod-shaped objects.
  • the extracting the building point cloud data in the non-ground point cloud specifically includes:
  • Connectivity analysis is performed on the interest grid to obtain the object area, and based on the regional semantic features, the building facade point cloud is extracted.
  • the present invention also discloses a multi-dimensional extraction system of street view features based on point cloud data, including:
  • a preprocessing module which preprocesses the original point cloud data, the original point cloud data including absolute coordinates and elevation information;
  • An extraction module extracts ground point clouds and non-ground point clouds in the point cloud data by using a cloth algorithm
  • a recognition module uses a multi-dimensional method to extract various ground objects by category from the non-ground point cloud, and obtains different types of ground object recognition.
  • the recognition module includes a tree recognition unit, a rod-shaped object recognition unit, and a building recognition unit;
  • the tree recognition unit uses elevation values and a watershed algorithm to extract tree-class point cloud data from the non-ground point cloud;
  • the rod-shaped object recognition unit extracts rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering;
  • the building recognition unit extracts building-class point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
  • the tree recognition unit includes a division subunit, a segmentation subunit, and a tree extraction subunit;
  • the division subunit divides the non-ground point cloud data into grids and projects it into a grayscale image;
  • the segmentation subunit performs watershed segmentation on the grayscale image to determine street tree outlines;
  • the tree extraction subunit extracts the street tree point cloud from the non-ground point cloud data according to the street tree outlines.
  • the rod-shaped object recognition unit includes a layering subunit, a clustering subunit, and a rod-shaped object extraction subunit;
  • the layering subunit layers the non-ground point cloud according to a preset elevation interval
  • the clustering subunit clusters the non-ground point cloud data of each layer
  • the rod-shaped object extraction subunit performs, according to the characteristics of rod-shaped objects, a second clustering on the layered clustering results in the elevation direction, and extracts the rod-shaped object point cloud.
  • the building recognition unit includes a semantic subunit, a grid subunit, and a building facade extraction subunit;
  • the semantic subunit extracts point clouds of high-rise buildings above a threshold height through single-point semantic features
  • the grid subunit projects the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, divides the plane into grids of a preset size, and selects grids of interest according to grid semantic features;
  • the building facade extraction subunit performs connectivity analysis on the grids of interest to obtain object regions, and extracts the building facade point cloud based on region semantic features.
  • the present invention discloses a method and system for multi-dimensional extraction of street scene features based on point cloud data, which have the following beneficial effects: the collected point cloud data are preprocessed; the ground point cloud and non-ground point cloud are separated using a cloth algorithm; and multi-dimensional, multi-type methods are then applied to the non-ground point cloud to extract the ranges of the various ground objects, so that the contours and categories of ground objects are extracted and recognized efficiently, precisely, and accurately.
  • Fig. 1 is one of the flow charts of the multi-dimensional extraction method of street view features based on point cloud data disclosed by the present invention
  • Fig. 2 is the second flow chart of the multi-dimensional extraction method of street view features based on point cloud data disclosed by the present invention
  • Fig. 3 is a flow chart of extracting tree class point cloud data in the non-ground point cloud disclosed by the present invention
  • Fig. 4 is the elevation difference schematic diagram of different positions of the street tree point cloud model disclosed by the present invention.
  • Fig. 5 is the waveform diagram of the elevation difference distribution of the single street tree point cloud disclosed by the present invention.
  • Fig. 6 is a flow chart of extracting rod-shaped object point cloud data in the non-ground point cloud disclosed by the present invention.
  • Fig. 7 is a flow chart of extracting building class point cloud data in the non-ground point cloud disclosed by the present invention.
  • Fig. 8 is a block diagram of a multi-dimensional extraction system for street view features based on point cloud data disclosed in the present invention.
  • a first embodiment of a multi-dimensional extraction method of street view features based on point cloud data disclosed in the present invention will be described in detail below with reference to FIGS. 1-7 .
  • This embodiment is mainly applied to point cloud data extraction.
  • street features can be extracted quickly, effectively and accurately.
  • this embodiment specifically includes the following steps:
  • the original point cloud data includes absolute coordinates and elevation information.
  • step S100 the vehicle-mounted mobile measurement system is used to collect the original point cloud data.
  • the vehicle-mounted laser scanning system is used to acquire data including laser point cloud data.
  • the laser point cloud data is a set of points with three-dimensional coordinates in real space.
  • the original point cloud data is preprocessed, and the original point cloud data is preprocessed by denoising, filtering and other technologies, so as to improve the accuracy of the point cloud data.
  • the vehicle-mounted laser scanning system is a mobile data acquisition system that combines GPS positioning, INS inertial navigation, CCD video, automatic control, and other advanced technologies. Using vehicle-mounted remote sensing, it rapidly collects, in the field, measurable stereoscopic images of streets and roads with positioning information, and also collects single-point 360° panoramic images with a professional camera. The collected data can be updated simply and quickly, is acquired efficiently, and is rich in information.
  • step S200 the cloth algorithm is used to extract the ground point cloud and non-ground point cloud in the point cloud data, specifically including:
  • the position of a cloth grid point after displacement under gravity is calculated as X(t + Δt) = 2X(t) − X(t − Δt) + (G/m)·Δt², where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid point, set to the constant 1.
  • S260 Classify ground and non-ground point clouds: calculate the distance between each grid point and its corresponding point cloud data points. A point cloud data point is classified as ground if this distance is less than the threshold L, and as non-ground otherwise.
  • step S300 a multi-dimensional method is used to extract various ground objects category by category from the non-ground point cloud, specifically including:
  • step S310 the extraction of trees, that is, the extraction of street trees, specifically includes:
  • the non-ground point cloud data is divided into grids and projected into a grayscale image; watershed segmentation is applied to the grayscale image to determine street tree outlines; and the complete street tree point cloud is then extracted from the non-ground point cloud data according to those outlines.
  • the elevation difference of different positions of the non-ground point cloud is obtained through the elevation model, and the gray image is segmented according to the elevation difference of different positions of the non-ground point cloud to determine the outline of the street tree.
  • the elevation difference distribution of a single street tree is shown in Figure 4.
  • the elevation difference of different positions of the street tree point cloud model is shown.
  • Figure 5 it shows the waveform diagram of the elevation difference distribution of a single street tree point cloud.
  • the horizontal axis is the projected diameter D of a single street tree crown on the XOY plane, and the vertical axis is the elevation difference. It can be seen that the closer a position is to the crown center or trunk, the larger the elevation difference and the closer it lies to the wave peak; the farther from the crown center or trunk, the smaller the elevation difference and the closer it lies to the trough.
  • step S320 extract the point cloud data of rod-shaped objects in the non-ground point cloud, specifically including:
  • the non-ground point cloud is layered according to the preset elevation interval and the point cloud data of each layer is clustered; then, exploiting the fact that rod-shaped objects extend continuously in the elevation direction, the per-layer clustering results are clustered again in the elevation direction to identify complete rod-shaped object point clouds.
  • this application processes the data from high to low when layering the non-ground point cloud at the preset elevation interval: the maximum and minimum elevations of the non-ground point cloud are first determined, the number of layers is obtained by dividing this range in the elevation direction, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules.
  • processing the data from high to low in the elevation direction requires no data filtering, which simplifies the processing flow, involves fewer parameters, and makes the processing more efficient and more automated.
  • step S330 the building class point cloud data in the non-ground point cloud is extracted, specifically including:
  • S333 Perform connectivity analysis on the interest grid to obtain the object area, and extract building facade point clouds based on the semantic features of the area.
  • first, using the single-point semantic feature, i.e., the elevation value of each point, points lower than buildings are removed and a high-rise building point cloud containing only building points above a certain height is extracted; then the remaining point cloud and the high-rise building point cloud are projected onto the XOY plane and divided into grids of a certain size, and grids of interest are selected according to grid semantic features; finally, connectivity analysis of the grids of interest yields the object regions, and the building facade point cloud is extracted accurately on the basis of the region semantic features.
  • the connectivity analysis of the grids of interest specifically checks whether the projected point clouds of neighboring grids are connected.
  • if the projected points in a grid form only a small patch in the middle of the cell, leaving a gap to the grid boundary, that grid is not connected to the surrounding grids; otherwise, it is connected to the surrounding grids.
  • each dimension extracts/recognizes one type of ground object; the ranges of the various ground objects can be extracted category by category and the extracted ranges then fused, or the various ranges can be extracted simultaneously, to complete the ground object recognition.
  • the embodiment of the present invention also provides a first embodiment of a multi-dimensional extraction system of street view features based on point cloud data. Since the principle by which this system solves the problem is similar to that of the aforementioned multi-dimensional extraction method of street view features based on point cloud data, the implementation of the system may refer to the implementation of the method, and repeated details are not described again.
  • this embodiment is mainly applied to point cloud data extraction.
  • by using a multi-dimensional method to extract different types of ground objects from non-ground point cloud data, street features can be extracted quickly, effectively, and accurately.
  • this embodiment mainly includes: a preprocessing module 400 , an extraction module 500 and an identification module 600 .
  • the preprocessing module 400 preprocesses the original point cloud data, and the original point cloud data includes absolute coordinates and elevation information;
  • the extraction module 500 uses the cloth algorithm to extract the ground point cloud and non-ground point cloud in the point cloud data;
  • the recognition module applies multi-dimensional methods to the non-ground point cloud to extract various ground objects category by category, obtaining recognition of different types of ground objects.
  • the vehicle-mounted mobile measurement system is used to collect the original point cloud data
  • the vehicle-mounted laser scanning system is used to acquire data including laser point cloud data.
  • the laser point cloud data is a set of points with three-dimensional coordinates in real space.
  • the original point cloud data can be preprocessed, and the original point cloud data can be preprocessed by denoising, filtering and other technologies, thereby improving the accuracy of the point cloud data.
  • the vehicle-mounted laser scanning system is a mobile data acquisition system that combines GPS positioning, INS inertial navigation, CCD video, automatic control, and other advanced technologies. Using vehicle-mounted remote sensing, it rapidly collects, in the field, measurable stereoscopic images of streets and roads with positioning information, and also collects single-point 360° panoramic images with a professional camera. The collected data can be updated simply and quickly, is acquired efficiently, and is rich in information.
  • the extraction module 500 uses a cloth algorithm to extract the ground point cloud and non-ground point cloud from the point cloud data, specifically as follows: initialize the cloth grid and determine the number of grid nodes from the grid resolution; project the point cloud data and the grid points onto the same horizontal plane, determine the point cloud data point corresponding to each grid point, and record the elevation value of the corresponding point cloud data point; calculate the position to which each grid node moves under gravity and compare the elevation of that position with the elevation of its corresponding point cloud data point, and if the node elevation is less than or equal to the point cloud data elevation, move the node to the position of the corresponding point cloud data point and mark it as an immovable point.
  • the position of a cloth point after displacement under gravity is calculated as X(t + Δt) = 2X(t) − X(t − Δt) + (G/m)·Δt², where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid point, set to the constant 1.
  • the position to which each grid point moves under the influence of its neighboring nodes is then calculated; the above two steps are repeated, and when the maximum elevation change of all nodes is sufficiently small or the maximum number of iterations is exceeded, the simulation terminates. Ground and non-ground point clouds are then classified by calculating the distance between each grid point and its corresponding point cloud data points: a point cloud data point is classified as ground if this distance is less than the threshold L, and as non-ground otherwise.
  • the recognition module 600 includes a tree recognition unit 610, a rod-shaped object recognition unit 620, and a building recognition unit 630. The tree recognition unit 610 uses elevation values and a watershed algorithm to extract tree-class point cloud data from the non-ground point cloud; the rod-shaped object recognition unit 620 extracts rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering; and the building recognition unit 630 extracts building-class point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
  • the tree recognition unit 610 includes a division subunit 611, a segmentation subunit 612, and a tree extraction subunit 613. The division subunit 611 divides the non-ground point cloud data into grids and projects it into a grayscale image; the segmentation subunit 612 performs watershed segmentation on the grayscale image to determine street tree outlines; and the tree extraction subunit 613 extracts the street tree point cloud from the non-ground point cloud data according to the street tree outlines.
  • the non-ground point cloud data is divided into grids and projected into a grayscale image; watershed segmentation is applied to the grayscale image to determine street tree outlines; and the complete street tree point cloud is then extracted from the non-ground point cloud data according to those outlines.
  • the elevation difference of different positions of the non-ground point cloud is obtained through the elevation model, and the gray image is segmented according to the elevation difference of different positions of the non-ground point cloud to determine the outline of the street tree.
  • the elevation difference distribution of a single street tree is shown in Figure 4.
  • the elevation difference of different positions of the street tree point cloud model is shown.
  • Figure 5 it shows the waveform diagram of the elevation difference distribution of a single street tree point cloud.
  • the horizontal axis is the projected diameter D of a single street tree crown on the XOY plane, and the vertical axis is the elevation difference. It can be seen that the closer a position is to the crown center or trunk, the larger the elevation difference and the closer it lies to the wave peak; the farther from the crown center or trunk, the smaller the elevation difference and the closer it lies to the trough.
  • the rod-shaped object recognition unit 620 includes a layering subunit 621, a clustering subunit 622, and a rod-shaped object extraction subunit 623. The layering subunit 621 layers the non-ground point cloud according to the preset elevation interval; the clustering subunit 622 clusters the non-ground point cloud data of each layer; and the rod-shaped object extraction subunit 623 performs a second clustering on the layered clustering results in the elevation direction and extracts the rod-shaped object point cloud.
  • the non-ground point cloud is layered according to the preset elevation interval and the point cloud data of each layer is clustered; then, exploiting the fact that rod-shaped objects extend continuously in the elevation direction, the per-layer clustering results are clustered again in the elevation direction to identify complete rod-shaped object point clouds.
  • this application processes the data from high to low when layering the non-ground point cloud at the preset elevation interval: the maximum and minimum elevations of the non-ground point cloud are first determined, the number of layers is obtained by dividing this range in the elevation direction, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules.
  • processing the data from high to low in the elevation direction requires no data filtering, which simplifies the processing flow, involves fewer parameters, and makes the processing more efficient and more automated.
  • the building recognition unit 630 includes a semantic subunit 631, a grid subunit 632, and a building facade extraction subunit 633. The semantic subunit 631 uses single-point semantic features to extract the high-rise building point cloud; the grid subunit 632 projects the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, divides the plane into grids of a preset size, and selects grids of interest according to grid semantic features; and the building facade extraction subunit 633 performs connectivity analysis on the grids of interest to obtain object regions and extracts the building facade point cloud based on region semantic features.
  • first, using the single-point semantic feature, i.e., the elevation value of each point, points lower than buildings are removed and a high-rise building point cloud containing only building points above a certain height is extracted; then the remaining point cloud and the high-rise building point cloud are projected onto the XOY plane and divided into grids of a certain size, and grids of interest are selected according to grid semantic features; finally, connectivity analysis of the grids of interest yields the object regions, and the building facade point cloud is extracted accurately on the basis of the region semantic features.
  • the connectivity analysis of the grids of interest specifically checks whether the projected point clouds of neighboring grids are connected.
  • if the projected points in a grid form only a small patch in the middle of the cell, leaving a gap to the grid boundary, that grid is not connected to the surrounding grids; otherwise, it is connected to the surrounding grids.
  • each dimension extracts/recognizes one type of ground object; the ranges of the various ground objects can be extracted category by category and the extracted ranges then fused, or the various ranges can be extracted simultaneously, to complete the ground object recognition.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A street view ground object multi-dimensional extraction method and system based on point cloud data. The method comprises: original point cloud data comprising absolute coordinates and elevation information; extracting a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth algorithm; and extracting, from the non-ground point cloud, various ground objects type by type by using a multi-dimensional method, so as to achieve identification of different types of ground objects. Different types of ground objects are extracted from non-ground point cloud data by using a multi-dimensional method, such that street ground objects can be quickly, effectively and accurately extracted and identified.

Description

Method and system for multi-dimensional extraction of street scene features based on point cloud data
Technical field
The present invention relates to the technical field of point cloud data extraction, and in particular to a method and system for multi-dimensional extraction of street view features based on point cloud data.
Background art
Laser radar (LIDAR) technology can directly acquire three-dimensional information about the ground and has advantages that traditional aerial photogrammetry cannot match. A laser scanning system avoids the orientation and image-matching steps required by traditional photogrammetry; compared with traditional photogrammetric methods it is faster, more precise, and lower in cost, and it is real-time, efficient, non-contact, and capable of handling large data volumes. The point cloud data acquired by LIDAR contains rich environmental information, including ground, vegetation, wire, and building information. To accurately obtain the 3D information of buildings, the LIDAR data must be processed.
Because the geometric models of buildings are diverse and complex, the artificial and natural environments in which buildings sit are generally complicated, containing vegetation such as trees as well as roads, poles, towers, and other artificial features. Existing approaches extract ground objects along only a single dimension, so the types of objects they can extract are limited and their accuracy is low.
Summary of the invention
(1) Purpose of the invention
In view of the above problems, the purpose of the present invention is to propose a multi-dimensional extraction method and system for street scene features based on point cloud data. By using multi-dimensional methods to extract different types of ground objects from non-ground point cloud data, street features can be extracted quickly, effectively, and accurately. The present invention discloses the following technical solutions.
(2) Technical solution
As the first aspect of the present invention, the present invention discloses a multi-dimensional extraction method of street view features based on point cloud data, including:
original point cloud data including absolute coordinates and elevation information;
using a cloth algorithm to extract the ground point cloud and non-ground point cloud from the point cloud data;
applying a multi-dimensional method to the non-ground point cloud to extract various ground objects category by category, so as to recognize different types of ground objects.
In a possible implementation manner, applying the multi-dimensional method to the non-ground point cloud to extract various ground objects category by category specifically includes:
using elevation values and a watershed algorithm to extract tree-class point cloud data from the non-ground point cloud;
extracting rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering;
extracting building-class point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
In a possible implementation manner, extracting the tree-class point cloud data from the non-ground point cloud specifically includes:
dividing the non-ground point cloud data into grids and projecting it into a grayscale image;
performing watershed segmentation on the grayscale image to determine street tree outlines;
extracting the street tree point cloud from the non-ground point cloud data according to the street tree outlines.
In a possible implementation manner, extracting the rod-shaped object point cloud data from the non-ground point cloud specifically includes:
layering the non-ground point cloud according to a preset elevation interval;
clustering the non-ground point cloud data of each layer;
performing, according to the characteristics of rod-shaped objects, a second clustering on the layered clustering results in the elevation direction to extract the rod-shaped object point cloud.
In a possible implementation manner, extracting the building-class point cloud data from the non-ground point cloud specifically includes:
extracting the point cloud of high-rise buildings above a threshold height through single-point semantic features;
projecting the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, dividing the plane into grids of a preset size, and selecting grids of interest according to grid semantic features;
performing connectivity analysis on the grids of interest to obtain object regions, and extracting the building facade point cloud based on region semantic features.
As the second aspect of the present invention, the present invention also discloses a multi-dimensional extraction system of street view features based on point cloud data, including:
a preprocessing module, which preprocesses the original point cloud data, the original point cloud data including absolute coordinates and elevation information;
an extraction module, which uses a cloth algorithm to extract the ground point cloud and non-ground point cloud from the point cloud data;
a recognition module, which applies a multi-dimensional method to the non-ground point cloud to extract various ground objects category by category, so as to recognize different types of ground objects.
In a possible implementation manner, the recognition module includes a tree recognition unit, a rod-shaped object recognition unit, and a building recognition unit;
the tree recognition unit uses elevation values and a watershed algorithm to extract tree-class point cloud data from the non-ground point cloud;
the rod-shaped object recognition unit extracts rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering;
the building recognition unit extracts building-class point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
In a possible implementation manner, the tree recognition unit includes a division subunit, a segmentation subunit, and a tree extraction subunit;
the division subunit divides the non-ground point cloud data into grids and projects it into a grayscale image;
the segmentation subunit performs watershed segmentation on the grayscale image to determine street tree outlines;
the tree extraction subunit extracts the street tree point cloud from the non-ground point cloud data according to the street tree outlines.
In a possible implementation manner, the rod-shaped object recognition unit includes a layering subunit, a clustering subunit, and a rod-shaped object extraction subunit;
the layering subunit layers the non-ground point cloud according to a preset elevation interval;
the clustering subunit clusters the non-ground point cloud data of each layer;
the rod-shaped object extraction subunit performs, according to the characteristics of rod-shaped objects, a second clustering on the layered clustering results in the elevation direction and extracts the rod-shaped object point cloud.
In a possible implementation manner, the building recognition unit includes a semantic subunit, a grid subunit, and a building facade extraction subunit;
the semantic subunit extracts the point cloud of high-rise buildings above a threshold height through single-point semantic features;
the grid subunit projects the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, divides the plane into grids of a preset size, and selects grids of interest according to grid semantic features;
the building facade extraction subunit performs connectivity analysis on the grids of interest to obtain object regions, and extracts the building facade point cloud based on region semantic features.
(3) Beneficial effects
The multi-dimensional extraction method and system of street view features based on point cloud data disclosed by the present invention have the following beneficial effects: the collected point cloud data are preprocessed; the ground point cloud and non-ground point cloud are separated using a cloth algorithm; and multi-dimensional, multi-type methods are then applied to the non-ground point cloud to extract the ranges of the various ground objects, so that the contours and categories of ground objects are extracted and recognized efficiently, precisely, and accurately.
Brief description of the drawings
The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain and illustrate the present invention and should not be construed as limiting its protection scope.
Fig. 1 is the first flow chart of the multi-dimensional extraction method of street view features based on point cloud data disclosed by the present invention;
Fig. 2 is the second flow chart of the multi-dimensional extraction method of street view features based on point cloud data disclosed by the present invention;
Fig. 3 is a flow chart of extracting tree-class point cloud data from the non-ground point cloud as disclosed by the present invention;
Fig. 4 is a schematic diagram of the elevation differences at different positions of the street tree point cloud model disclosed by the present invention;
Fig. 5 is a waveform diagram of the elevation-difference distribution of a single street tree point cloud as disclosed by the present invention;
Fig. 6 is a flow chart of extracting rod-shaped object point cloud data from the non-ground point cloud as disclosed by the present invention;
Fig. 7 is a flow chart of extracting building-class point cloud data from the non-ground point cloud as disclosed by the present invention;
Fig. 8 is a block diagram of the multi-dimensional extraction system of street view features based on point cloud data disclosed by the present invention.
Reference numerals: 400, preprocessing module; 500, extraction module; 600, recognition module; 610, tree recognition unit; 611, division subunit; 612, segmentation subunit; 613, tree extraction subunit; 620, rod-shaped object recognition unit; 621, layering subunit; 622, clustering subunit; 623, rod-shaped object extraction subunit; 630, building recognition unit; 631, semantic subunit; 632, grid subunit; 633, building facade extraction subunit.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described in more detail below in conjunction with the drawings in the embodiments of the present invention.
It should be noted that, in the drawings, the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions. The described embodiments are some, rather than all, of the embodiments of the present invention; where there is no conflict, the embodiments in the present application and the features of the embodiments may be combined with each other. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In describing the present invention, it should be understood that orientation or positional terms such as "center", "longitudinal", "transverse", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only to facilitate and simplify the description of the present invention, do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the protection scope of the present invention.
A first embodiment of the multi-dimensional extraction method of street view features based on point cloud data disclosed by the present invention is described in detail below with reference to Figs. 1-7. This embodiment is mainly applied to point cloud data extraction; by using a multi-dimensional method to extract different types of ground objects from non-ground point cloud data, street features can be extracted quickly, effectively, and accurately.
As shown in Figs. 1-2, this embodiment specifically includes the following steps:
S100. The original point cloud data includes absolute coordinates and elevation information.
In step S100, a vehicle-mounted mobile measurement system is used to collect the original point cloud data; specifically, a vehicle-mounted laser scanning system acquires data including laser point cloud data, where the laser point cloud data is a set of points with three-dimensional coordinates in real space.
Further, after the original point cloud data is collected, it is preprocessed using denoising, filtering, and similar techniques, so as to improve the accuracy of the point cloud data.
Further, the vehicle-mounted laser scanning system is a mobile data acquisition system that combines GPS positioning, INS inertial navigation, CCD video, automatic control, and other advanced technologies. Using vehicle-mounted remote sensing, it rapidly collects, in the field, measurable stereoscopic images of streets and roads with positioning information, and also collects single-point 360° panoramic images with a professional camera. The collected data can be updated simply and quickly, is acquired efficiently, and is rich in information.
S200. Use a cloth algorithm to extract the ground point cloud and non-ground point cloud from the point cloud data.
In step S200, using the cloth algorithm to extract the ground point cloud and non-ground point cloud from the point cloud data specifically includes:
S210. Initialize the cloth grid, and determine the number of grid nodes from the grid resolution;
S220. Project the point cloud data and the grid points onto the same horizontal plane, determine the point cloud data point corresponding to each grid point, and record the elevation value of the corresponding point cloud data point;
S230. Calculate the position to which each grid node moves under gravity, and compare the elevation of that position with the elevation of its corresponding point cloud data point. If the node elevation is less than or equal to the point cloud data elevation, move the node to the position of the corresponding point cloud data point and mark it as an immovable point. The position of a cloth point after displacement under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) − X(t − Δt) + (G/m)·Δt²
where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid point, set to the constant 1.
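The update above can be read as a central-difference (Verlet) discretization of Newton's second law for a cloth node on which, during this step, only gravity acts (the neighbor-to-neighbor forces are handled separately in step S240). Under that reading, which is an interpretation rather than part of the disclosure, the derivation is:

```latex
% Verlet (central-difference) reading of the cloth-node update,
% assuming only gravity acts on the node during this step.
m\,\ddot{X}(t) = G, \qquad
\ddot{X}(t) \approx \frac{X(t+\Delta t) - 2X(t) + X(t-\Delta t)}{\Delta t^{2}}
\;\Longrightarrow\;
X(t+\Delta t) = 2X(t) - X(t-\Delta t) + \frac{G}{m}\,\Delta t^{2},
\qquad m = 1.
```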
S240. Calculate the position to which each grid point moves under the influence of its neighboring nodes.
S250. Repeat steps S230 and S240; when the maximum elevation change of all nodes is sufficiently small, or the maximum number of iterations is exceeded, the simulation terminates.
S260. Classify ground and non-ground point clouds: calculate the distance between each grid point and its corresponding point cloud data points. A point cloud data point is classified as ground if this distance is less than the threshold L, and as non-ground otherwise.
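For illustration only, a minimal sketch of this ground/non-ground separation is given below in Python. It keeps the gravity step of S230, the immovable-point check, and the distance threshold L of S260, while reducing the neighbor-interaction step S240 and the iteration control of S250 to a fixed loop; the function name and all parameter values (cell size, time step, iteration count, threshold) are illustrative assumptions rather than values taken from the present disclosure.

```python
import numpy as np

def cloth_ground_filter(points, cell=1.0, dt=0.65, gravity=-9.8,
                        n_iter=200, threshold_l=0.5):
    """Separate ground / non-ground points with a simplified cloth simulation.

    points: (N, 3) array of x, y, z. Returns a boolean mask (True = ground).
    The point cloud is inverted and the cloth is dropped onto it from above,
    the usual trick so that the cloth drapes over the terrain.
    """
    z = -points[:, 2]                                    # inverted elevations
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    nx = int((points[:, 0].max() - x0) // cell) + 1
    ny = int((points[:, 1].max() - y0) // cell) + 1
    ix = np.minimum(((points[:, 0] - x0) / cell).astype(int), nx - 1)
    iy = np.minimum(((points[:, 1] - y0) / cell).astype(int), ny - 1)
    cell_id = ix * ny + iy
    # Highest inverted elevation per cell: the surface the cloth settles onto.
    surface = np.full(nx * ny, -np.inf)
    np.maximum.at(surface, cell_id, z)
    # S230-style gravity step, iterated: start above everything and fall.
    pos = np.full(nx * ny, z.max() + 10.0)
    prev = pos.copy()
    movable = np.ones(nx * ny, dtype=bool)
    for _ in range(n_iter):
        nxt = np.where(movable, 2.0 * pos - prev + gravity * dt * dt, pos)
        prev, pos = pos, nxt
        hit = movable & (pos <= surface)                 # node reached its point
        pos[hit] = surface[hit]                          # clamp and mark immovable
        movable[hit] = False
        # A full implementation would also relax each node toward its
        # neighbours here (step S240) before the next gravity step.
    # S260-style classification: points close to the settled cloth are ground.
    return np.abs(z - pos[cell_id]) < threshold_l
```

A production-quality implementation would model the internal cloth springs rather than using this per-cell shortcut, but the overall flow (drop, freeze, threshold) follows the steps described above.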
S300. Apply a multi-dimensional method to the non-ground point cloud to extract various ground objects category by category, and obtain recognition of different types of ground objects.
In step S300, applying the multi-dimensional method to the non-ground point cloud to extract various ground objects category by category specifically includes:
S310. Use elevation values and a watershed algorithm to extract tree-class point cloud data from the non-ground point cloud.
As shown in Fig. 3, in step S310, tree extraction, i.e., street tree extraction, specifically includes:
S311. Divide the non-ground point cloud data into grids and project it into a grayscale image;
S312. Perform watershed segmentation on the grayscale image to determine street tree outlines;
S313. Extract the street tree point cloud from the non-ground point cloud data according to the street tree outlines.
The non-ground point cloud data is divided into grids and projected into a grayscale image; watershed segmentation is applied to the grayscale image to determine street tree outlines; and the complete street tree point cloud is then extracted from the non-ground point cloud data according to those outlines. The elevation differences at different positions of the non-ground point cloud are obtained from an elevation model, and the grayscale image is watershed-segmented according to these elevation differences to determine the street tree outlines. For example, Fig. 4 shows the elevation differences at different positions of a street tree point cloud model, and Fig. 5 shows the waveform of the elevation-difference distribution of a single street tree point cloud: the horizontal axis is the projected diameter D of a single tree crown on the XOY plane, and the vertical axis is the elevation difference. The closer a position is to the crown center or trunk, the larger the elevation difference and the closer it lies to the wave peak; the farther from the crown center or trunk, the smaller the elevation difference and the closer it lies to the trough.
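For illustration only, a sketch of steps S311-S313 is given below, assuming the grayscale image is the per-cell elevation difference (maximum minus minimum elevation within each grid cell) and using the watershed implementation from scikit-image; the grid size, minimum tree height, and peak spacing are illustrative assumptions rather than values from the present disclosure.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def extract_street_trees(points, cell=0.5, min_tree_height=2.0, min_peak_dist=5):
    """Label each non-ground point: 0 = not a street tree, >0 = tree id (S311-S313)."""
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    # S311: grid the cloud and project it as a grayscale elevation-difference image.
    zmax = np.full((nx, ny), -np.inf)
    zmin = np.full((nx, ny), np.inf)
    np.maximum.at(zmax, (ix, iy), points[:, 2])
    np.minimum.at(zmin, (ix, iy), points[:, 2])
    dz = np.where(np.isfinite(zmax), zmax - zmin, 0.0)   # peaks near trunks / crown centres
    # S312: watershed segmentation of the grayscale image -> street tree outlines.
    mask = dz > min_tree_height                          # keep only tall, tree-like cells
    peaks = peak_local_max(dz, min_distance=min_peak_dist, labels=mask.astype(int))
    markers = np.zeros(dz.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    crowns = watershed(-dz, markers, mask=mask)          # one labelled region per crown
    # S313: pull the street-tree points out of the non-ground cloud via the outlines.
    return crowns[ix, iy]
```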
S320. Extract rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering.
As shown in Fig. 6, in step S320, extracting the rod-shaped object point cloud data from the non-ground point cloud specifically includes:
S321. Layer the non-ground point cloud according to a preset elevation interval;
S322. Cluster the non-ground point cloud data of each layer;
S323. According to the characteristics of rod-shaped objects, perform a second clustering on the layered clustering results in the elevation direction to extract the rod-shaped object point cloud.
The non-ground point cloud is layered according to the preset elevation interval and the point cloud data of each layer is clustered; then, exploiting the fact that rod-shaped objects extend continuously in the elevation direction, the per-layer clustering results are clustered again in the elevation direction to identify complete rod-shaped object point clouds.
Further, the present application processes the data from high to low when layering the non-ground point cloud at the preset elevation interval: the maximum and minimum elevations of the non-ground point cloud are first determined, the number of layers is obtained by dividing this range in the elevation direction, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Processing the data from high to low in the elevation direction requires no data filtering, which simplifies the processing flow, involves fewer parameters, and makes the processing more efficient and more automated.
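For illustration only, a sketch of steps S321-S323 is given below, assuming DBSCAN (scikit-learn) is used for the per-layer clustering and that the second, elevation-direction clustering groups per-layer clusters whose centres stack vertically over enough layers; the layer height, clustering radii, footprint limit, and layer-count threshold are illustrative assumptions rather than values from the present disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_points(points, layer_h=0.5, eps=0.3, min_pts=5,
                        max_radius=0.4, min_layers=6):
    """Return indices of points belonging to pole-like objects (S321-S323)."""
    z = points[:, 2]
    # S321: slice the non-ground cloud into horizontal layers, working top-down.
    layer = np.floor((z.max() - z) / layer_h).astype(int)
    slices = []                                  # (cx, cy, layer index, member indices)
    for li in np.unique(layer):
        idx = np.flatnonzero(layer == li)
        if idx.size < min_pts:
            continue
        # S322: cluster each layer in the horizontal plane.
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(points[idx, :2])
        for c in set(labels) - {-1}:
            member = idx[labels == c]
            xy = points[member, :2]
            centre = xy.mean(axis=0)
            # Keep only small-footprint clusters, the per-layer signature of a pole.
            if np.linalg.norm(xy - centre, axis=1).max() <= max_radius:
                slices.append((centre[0], centre[1], li, member))
    if not slices:
        return np.empty(0, dtype=int)
    # S323: second clustering in the elevation direction - per-layer clusters whose
    # centres stack vertically over enough layers form one rod-shaped object.
    centres = np.array([(s[0], s[1]) for s in slices])
    stacks = DBSCAN(eps=max_radius, min_samples=1).fit_predict(centres)
    keep = []
    for s_id in set(stacks):
        group = [slices[i] for i in np.flatnonzero(stacks == s_id)]
        if len({g[2] for g in group}) >= min_layers:     # continuous vertical extent
            keep.extend(np.concatenate([g[3] for g in group]))
    return np.asarray(keep, dtype=int)
```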
S330. Extract building-class point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
As shown in Fig. 7, in step S330, extracting the building-class point cloud data from the non-ground point cloud specifically includes:
S331. Extract the point cloud of high-rise buildings above a threshold height through single-point semantic features;
S332. Project the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, divide the plane into grids of a preset size, and select grids of interest according to grid semantic features;
S333. Perform connectivity analysis on the grids of interest to obtain object regions, and extract the building facade point cloud based on region semantic features.
First, using the single-point semantic feature, i.e., the elevation value of each point, points lower than buildings are removed and a high-rise building point cloud containing only building points above a certain height is extracted. Then the remaining point cloud and the high-rise building point cloud are projected onto the XOY plane and divided into grids of a certain size, and grids of interest are selected according to grid semantic features. Finally, connectivity analysis of the grids of interest yields the object regions, and the building facade point cloud is extracted accurately on the basis of the region semantic features.
Further, the connectivity analysis of the grids of interest specifically checks whether the projected point clouds of neighboring grids are connected: if the projected points in a grid form only a small patch in the middle of the cell, leaving a gap to the grid boundary, that grid is not connected to the surrounding grids; otherwise, it is connected to the surrounding grids.
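For illustration only, a sketch of steps S331-S333 is given below, assuming a grid cell counts as a "grid of interest" when it contains high-rise points or spans a large elevation range, and using connected-component labelling from scipy.ndimage for the connectivity analysis; the height threshold, cell size, span threshold, and region-size threshold are illustrative assumptions rather than values from the present disclosure.

```python
import numpy as np
from scipy import ndimage as ndi

def extract_building_facades(points, high_thresh=20.0, cell=1.0,
                             min_span=5.0, min_region_cells=10):
    """Return a boolean mask over the non-ground points marking facade points (S331-S333)."""
    z = points[:, 2]
    high = z >= high_thresh            # S331: single-point semantics (elevation only)
    # S332: project onto the XOY plane and rasterise into cells of a preset size.
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    zmax = np.full((nx, ny), -np.inf)
    zmin = np.full((nx, ny), np.inf)
    np.maximum.at(zmax, (ix, iy), z)
    np.minimum.at(zmin, (ix, iy), z)
    has_high = np.zeros((nx, ny), dtype=bool)
    np.logical_or.at(has_high, (ix, iy), high)
    # Grid semantics: a cell is of interest if it holds high-rise points or spans
    # a large elevation range (a facade projects onto a narrow strip of tall cells).
    interest = has_high | (np.isfinite(zmax) & ((zmax - zmin) >= min_span))
    # S333: connectivity analysis of the interest grid -> object regions.
    regions, n_regions = ndi.label(interest)
    sizes = ndi.sum(interest, regions, index=np.arange(1, n_regions + 1))
    big = np.flatnonzero(sizes >= min_region_cells) + 1   # region semantics kept minimal
    facade_cells = np.isin(regions, big)
    return facade_cells[ix, iy]
```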
Further, when a multi-dimensional (multi-category) method is used to extract ground object point cloud data, each dimension extracts/recognizes one type of ground object. The ranges of the various ground objects can be extracted category by category and the extracted ranges then fused, or the various ranges can be extracted simultaneously, to complete the ground object recognition.
下面参考图4-图5和图8详细描述,基于同一发明构思,本发明实施例还提供了一种基于点云数据的街景地物多维度提取系统的第一实施例。由于该系统所解决问题的原理与前述一种基于点云数据的街景地物多维度提取方法相似,因此该系统的实施可以参见前述方法 的实施,重复之处不在赘述。The following describes in detail with reference to FIG. 4-FIG. 5 and FIG. 8. Based on the same inventive concept, the embodiment of the present invention also provides a first embodiment of a multi-dimensional extraction system of street view features based on point cloud data. Since the principle of the problem solved by this system is similar to the aforementioned multi-dimensional extraction method of street view features based on point cloud data, the implementation of this system can refer to the implementation of the aforementioned method, and the repetition will not be repeated.
本实施例主要应用于云数据提取,通过通采用多维度方法对非地面点云数据进行提取不同种类的地物,可快速、有效、精准的提取街道地物。This embodiment is mainly applied to cloud data extraction. By using a multi-dimensional method to extract different types of ground features from non-ground point cloud data, street features can be extracted quickly, effectively and accurately.
如图8所示,本实施例主要包括:预处理模块400、提取模块500和识别模块600。As shown in FIG. 8 , this embodiment mainly includes: a preprocessing module 400 , an extraction module 500 and an identification module 600 .
其中,预处理模块400对原始点云数据进行预处理,原始点云数据包括绝对坐标和高程信息;提取模块500利用布料算法提取点云数据中的地面点云和非地面点云;识别模模块对非地面点云利用多维度方法逐类提取各种地物,获得不同种类的地物识别。Wherein, the preprocessing module 400 preprocesses the original point cloud data, and the original point cloud data includes absolute coordinates and elevation information; the extraction module 500 uses the cloth algorithm to extract the ground point cloud and non-ground point cloud in the point cloud data; the identification module For non-ground point clouds, use multi-dimensional methods to extract various ground objects one by one, and obtain different types of ground object recognition.
Further, in the present application a vehicle-mounted mobile measurement system is used to collect the original point cloud data. Specifically, a vehicle-mounted laser scanning system is used to acquire the data, which includes laser point cloud data, i.e. a set of points with real-space three-dimensional coordinates.
Further, after the original point cloud data is collected, it may be preprocessed by techniques such as denoising and filtering, thereby improving the accuracy of the point cloud data.
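One common way such denoising might be realised (not necessarily the filter used in this disclosure) is statistical outlier removal based on mean nearest-neighbour distances, sketched below; the neighbourhood size k and the standard-deviation ratio are illustrative defaults.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """points: (N, 3) array; returns the denoised subset."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)          # mean distance to the k nearest neighbours
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < threshold]
```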
Further, the vehicle-mounted laser scanning system is a vehicle-mounted mobile data acquisition system that integrates advanced technologies such as GPS positioning, INS inertial navigation, CCD video and automatic control. In a vehicle-borne remote sensing manner, it rapidly collects, in the field, measurable stereoscopic images of streets and roads carrying positioning information, as well as 360° panoramic images acquired as single-point panoramas with a professional camera. The collected data has the advantages of simple and fast updating, high efficiency and rich information content.
In a possible implementation, the extraction module 500 uses a cloth simulation algorithm to extract the ground point cloud and the non-ground point cloud from the point cloud data, specifically as follows: initialise the cloth grid and determine the number of grid nodes from the grid resolution; project the point cloud data and the grid nodes onto the same horizontal plane, determine the point cloud data point corresponding to each grid node, and record the elevation value of that corresponding point; compute the position to which each grid node moves under gravity and compare its elevation with the elevation of its corresponding point cloud data point. If the node elevation is less than or equal to the point cloud elevation, the node is moved to the position of the corresponding point cloud data point and marked as an immovable point. The position of a cloth node after displacement under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) - X(t - Δt) + (G/A)Δt²
where X is the position of the cloth grid node at time t, Δt is the time step, G is the acceleration and is a constant value, and A is the mass of the cloth grid node, set to the constant 1. The position to which each grid node moves under the influence of its neighbouring nodes is then computed. The two steps, moving nodes under gravity and moving them under the influence of neighbouring nodes, are repeated, and the simulation terminates when the maximum elevation change of all nodes is sufficiently small or the maximum number of iterations is exceeded. Ground and non-ground point clouds are then classified by computing the distance between each grid node and its corresponding point cloud data point: a point cloud data point whose distance is smaller than a threshold L is classified as ground, otherwise it is classified as non-ground.
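The Python sketch below illustrates, under simplifying assumptions, the cloth-simulation filtering loop described above: nodes fall under gravity according to the displacement formula, become immovable when they meet the surface formed by the lowest point elevations, and points are finally classified by their distance to the settled cloth against the threshold L. The neighbour-interaction step here is only a crude relaxation stand-in for the spring-like internal forces of a full cloth simulation, and all parameter values are assumed for illustration rather than taken from this disclosure.

```python
import numpy as np

def csf_ground_filter(points, cell=1.0, dt=0.65, g=-9.8, L=0.5, max_iter=500, tol=1e-3):
    """points: (N, 3) array; returns (ground_points, non_ground_points)."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)   # grid cell of each point
    shape = ij.max(axis=0) + 1

    # collision surface: lowest point elevation found under each cloth grid node
    surface = np.full(shape, z.max())
    np.minimum.at(surface, (ij[:, 0], ij[:, 1]), z)

    cloth = np.full(shape, z.max() + 1.0)   # cloth starts above the scene
    prev = cloth.copy()
    movable = np.ones(shape, dtype=bool)

    for _ in range(max_iter):
        # gravity step: X(t+Δt) = 2X(t) - X(t-Δt) + (G/A)Δt², with mass A = 1
        new = np.where(movable, 2 * cloth - prev + g * dt ** 2, cloth)
        prev, cloth = cloth, new
        hit = cloth <= surface
        cloth[hit] = surface[hit]           # nodes reaching the surface become immovable
        movable &= ~hit
        # crude stand-in for the neighbour (internal-force) step: relax towards 4-neighbour mean
        mean4 = (np.roll(cloth, 1, 0) + np.roll(cloth, -1, 0) +
                 np.roll(cloth, 1, 1) + np.roll(cloth, -1, 1)) / 4.0
        cloth = np.where(movable, 0.5 * (cloth + mean4), cloth)
        if np.abs(cloth - prev).max() < tol:  # maximum elevation change small enough
            break

    # classification: points within distance L of the settled cloth are ground points
    dist = np.abs(z - cloth[ij[:, 0], ij[:, 1]])
    return points[dist < L], points[dist >= L]
```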
In a possible implementation, the recognition module 600 includes a tree recognition unit 610, a pole-like object recognition unit 620 and a building recognition unit 630. The tree recognition unit 610 uses elevation values and a watershed algorithm to extract the tree point cloud data from the non-ground point cloud; the pole-like object recognition unit 620 uses elevation values and layered clustering to extract the pole-like object point cloud data from the non-ground point cloud; the building recognition unit 630 extracts the building point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
In a possible implementation, the tree recognition unit 610 includes a division subunit 611, a segmentation subunit 612 and a tree extraction subunit 613. The division subunit 611 divides the non-ground point cloud data into grids and projects it as a grayscale image; the segmentation subunit 612 performs watershed segmentation on the grayscale image to determine street-tree outlines; the tree extraction subunit 613 extracts the street-tree point clouds from the non-ground point cloud data according to the street-tree outlines.
The non-ground point cloud data is divided into grids and projected as a grayscale image, watershed segmentation is applied to the grayscale image to determine the street-tree outlines, and the complete street-tree point clouds are then extracted from the non-ground point cloud data according to those outlines. The elevation differences at different positions of the non-ground point cloud are obtained from an elevation model, and the watershed segmentation of the grayscale image is driven by these elevation differences. For example, the elevation-difference distribution of a single street tree is shown in FIG. 4, which shows the elevation differences at different positions of the street-tree point cloud model. FIG. 5 shows the corresponding waveform of the elevation-difference distribution of a single street-tree point cloud, where the horizontal axis is the projected crown diameter D of the tree on the XOY plane and the vertical axis is the elevation difference. It can be seen that the closer a position is to the crown centre or the trunk, the larger the elevation difference and the closer it lies to a peak; the farther it is from the crown centre or the trunk, the smaller the elevation difference and the closer it lies to a trough.
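A rough sketch of that tree-extraction flow is given below, assuming the grayscale image is built from per-cell elevation differences and that scikit-image's peak detection and watershed are acceptable stand-ins for the segmentation step; the cell size, peak threshold and crown mask threshold are illustrative values.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def extract_street_trees(points, cell=0.5, min_dh=0.5, crown_dh=2.0):
    """points: (N, 3) non-ground cloud; returns one point array per detected tree."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    ij = np.floor((xy - origin) / cell).astype(int)
    shape = ij.max(axis=0) + 1

    z_max = np.full(shape, -np.inf)
    z_min = np.full(shape, np.inf)
    np.maximum.at(z_max, (ij[:, 0], ij[:, 1]), z)
    np.minimum.at(z_min, (ij[:, 0], ij[:, 1]), z)
    dh = np.where(np.isfinite(z_min), z_max - z_min, 0.0)   # elevation-difference image

    # crown centres appear as peaks of the elevation difference (cf. FIG. 5)
    peaks = peak_local_max(dh, min_distance=4, threshold_abs=crown_dh)
    markers = np.zeros(shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # watershed on the inverted image delineates one crown outline per marker
    labels = watershed(-dh, markers, mask=dh > min_dh)
    tree_id = labels[ij[:, 0], ij[:, 1]]
    return [points[tree_id == t] for t in range(1, len(peaks) + 1)]
```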
In a possible implementation, the pole-like object recognition unit 620 includes a layering subunit 621, a clustering subunit 622 and a pole-like ground-object extraction subunit 623. The layering subunit 621 divides the non-ground point cloud into layers at preset elevation intervals; the clustering subunit 622 clusters the non-ground point cloud data of each layer; the pole-like ground-object extraction subunit 623 performs, according to the characteristics of pole-like ground objects, a second clustering of the layered clustering results in the elevation direction and extracts the pole-like ground-object point clouds.
The non-ground point cloud is divided into layers at preset elevation intervals and the point cloud data of each layer is clustered; then, according to the characteristic that pole-like ground objects extend continuously in the elevation direction, the layered clustering results are clustered again in the elevation direction to identify the complete pole-like ground-object point clouds.
Further, the present application processes the data from high to low when layering the non-ground point cloud at preset elevation intervals: the maximum and minimum values of the non-ground point cloud in the elevation direction are determined first, along with the number of layers obtained by dividing along the elevation direction, then the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Adopting a high-to-low processing flow in the elevation direction requires no data filtering, which simplifies the processing flow, involves few parameters, and makes the data processing efficient and more highly automated.
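A simplified sketch of this high-to-low layered clustering follows, using DBSCAN for the per-layer clustering and a centre-distance chaining rule for the second, elevation-direction clustering; the layer height, clustering parameters and minimum number of stacked layers are illustrative assumptions, not values from this disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_like(points, layer_h=1.0, eps=0.3, min_samples=5,
                      link_r=0.5, min_layers=3):
    """points: (N, 3) non-ground cloud; returns one point array per pole-like object."""
    z = points[:, 2]
    layers = np.floor((z.max() - z) / layer_h).astype(int)   # 0 = topmost layer

    # per-layer XY clustering: (layer index, centre, member indices)
    clusters = []
    for lyr in range(layers.max() + 1):                       # processed from high to low
        idx = np.where(layers == lyr)[0]
        if len(idx) < min_samples:
            continue
        lab = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[idx, :2])
        for c in set(lab) - {-1}:
            member = idx[lab == c]
            clusters.append((lyr, points[member, :2].mean(axis=0), member))

    # second clustering along the elevation direction: chain vertically aligned clusters
    poles, used = [], set()
    for i, (lyr, ctr, member) in enumerate(clusters):
        if i in used:
            continue
        chain = [i]
        for j, (lyr2, ctr2, _) in enumerate(clusters):
            if j != i and j not in used and np.linalg.norm(ctr - ctr2) < link_r:
                chain.append(j)
        if len({clusters[k][0] for k in chain}) >= min_layers:   # spans enough layers
            used.update(chain)
            poles.append(np.concatenate([clusters[k][2] for k in chain]))
    return [points[m] for m in poles]
```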
In a possible implementation, the building recognition unit 630 includes a semantic subunit 631, a grid subunit 632 and a building-facade extraction subunit 633. The semantic subunit 631 uses single-point semantic features to extract the high-rise building point cloud above a threshold height; the grid subunit 632 projects the point cloud below the threshold together with the high-rise building point cloud onto the XOY plane, divides the plane into grids of a preset size, and selects grids of interest according to grid-level semantic features; the building-facade extraction subunit 633 performs connectivity analysis on the grids of interest to obtain object regions and extracts the building facade point cloud based on region-level semantic features.
First, single-point semantic features, namely the elevation values of the points, are used to remove points lower than buildings and, at the same time, to extract the high-rise building point cloud above a certain height that contains only buildings. The remaining point cloud and the high-rise building point cloud are then projected onto the XOY plane and divided into grids of a certain size, and grids of interest are selected according to grid-level semantic features. Finally, connectivity analysis is performed on the grids of interest to obtain object regions, and accurate extraction of the building facade point cloud is achieved based on region-level semantic features.
Further, the connectivity analysis of the grids of interest specifically checks whether the projected point clouds of adjacent grids are connected. If, for example, the projected points inside a grid all lie in a small patch in the middle, with a gap to the grid boundary, that grid is not connected to any of its neighbouring grids; otherwise, the grid is connected to its neighbouring grids.
Further, when the multi-dimensional (multi-category) method is used to extract ground-object point cloud data, each dimension extracts or recognises one category of ground object. The ranges of the various ground objects may be extracted category by category and then fused, or they may be extracted simultaneously, to complete ground-object recognition.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (10)

  1. A method for multi-dimensional extraction of street view ground objects based on point cloud data, characterized in that it comprises:
    original point cloud data comprising absolute coordinates and elevation information;
    extracting a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth simulation algorithm; and
    extracting various ground objects category by category from the non-ground point cloud by using a multi-dimensional method, to obtain recognition of different types of ground objects.
  2. The method for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 1, characterized in that the extracting various ground objects category by category from the non-ground point cloud by using a multi-dimensional method specifically comprises:
    extracting tree point cloud data from the non-ground point cloud by using elevation values and a watershed algorithm;
    extracting pole-like object point cloud data from the non-ground point cloud by using elevation values and layered clustering; and
    extracting building point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
  3. The method for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 2, characterized in that the extracting tree point cloud data from the non-ground point cloud specifically comprises:
    dividing the non-ground point cloud data into grids and projecting it as a grayscale image;
    performing watershed segmentation on the grayscale image to determine street-tree outlines; and
    extracting street-tree point clouds from the non-ground point cloud data according to the street-tree outlines.
  4. The method for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 2, characterized in that the extracting pole-like object point cloud data from the non-ground point cloud specifically comprises:
    dividing the non-ground point cloud into layers at preset elevation intervals;
    clustering the non-ground point cloud data of each layer; and
    performing, according to the characteristics of pole-like ground objects, a second clustering of the layered clustering results in the elevation direction, and extracting pole-like ground-object point clouds.
  5. The method for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 2, characterized in that the extracting building point cloud data from the non-ground point cloud specifically comprises:
    extracting, by means of single-point semantic features, a high-rise building point cloud above a threshold height;
    projecting the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, dividing the plane into grids of a preset size, and selecting grids of interest according to grid-level semantic features; and
    performing connectivity analysis on the grids of interest to obtain object regions, and extracting a building facade point cloud based on region-level semantic features.
  6. A system for multi-dimensional extraction of street view ground objects based on point cloud data, characterized in that it comprises:
    a preprocessing module, which preprocesses original point cloud data, the original point cloud data comprising absolute coordinates and elevation information;
    an extraction module, which extracts a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth simulation algorithm; and
    a recognition module, which extracts various ground objects category by category from the non-ground point cloud by using a multi-dimensional method, to obtain recognition of different types of ground objects.
  7. The system for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 6, characterized in that the recognition module comprises a tree recognition unit, a pole-like object recognition unit and a building recognition unit;
    the tree recognition unit extracts tree point cloud data from the non-ground point cloud by using elevation values and a watershed algorithm;
    the pole-like object recognition unit extracts pole-like object point cloud data from the non-ground point cloud by using elevation values and layered clustering; and
    the building recognition unit extracts building point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
  8. The system for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 7, characterized in that the tree recognition unit comprises a division subunit, a segmentation subunit and a tree extraction subunit;
    the division subunit divides the non-ground point cloud data into grids and projects it as a grayscale image;
    the segmentation subunit performs watershed segmentation on the grayscale image to determine street-tree outlines; and
    the tree extraction subunit extracts street-tree point clouds from the non-ground point cloud data according to the street-tree outlines.
  9. The system for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 7, characterized in that the pole-like object recognition unit comprises a layering subunit, a clustering subunit and a pole-like ground-object extraction subunit;
    the layering subunit divides the non-ground point cloud into layers at preset elevation intervals;
    the clustering subunit clusters the non-ground point cloud data of each layer; and
    the pole-like ground-object extraction subunit performs, according to the characteristics of pole-like ground objects, a second clustering of the layered clustering results in the elevation direction, and extracts pole-like ground-object point clouds.
  10. The system for multi-dimensional extraction of street view ground objects based on point cloud data according to claim 7, characterized in that the building recognition unit comprises a semantic subunit, a grid subunit and a building-facade extraction subunit;
    the semantic subunit extracts, by means of single-point semantic features, a high-rise building point cloud above a threshold height;
    the grid subunit projects the point cloud below the threshold and the high-rise building point cloud onto the XOY plane, divides the plane into grids of a preset size, and selects grids of interest according to grid-level semantic features; and
    the building-facade extraction subunit performs connectivity analysis on the grids of interest to obtain object regions, and extracts a building facade point cloud based on region-level semantic features.
PCT/CN2021/124565 2021-10-14 2021-10-19 Street view ground object multi-dimensional extraction method and system based on point cloud data WO2023060632A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111200089.2A CN113963259A (en) 2021-10-14 2021-10-14 Street view ground object multi-dimensional extraction method and system based on point cloud data
CN202111200089.2 2021-10-14

Publications (1)

Publication Number Publication Date
WO2023060632A1 true WO2023060632A1 (en) 2023-04-20

Family

ID=79464048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/124565 WO2023060632A1 (en) 2021-10-14 2021-10-19 Street view ground object multi-dimensional extraction method and system based on point cloud data

Country Status (2)

Country Link
CN (1) CN113963259A (en)
WO (1) WO2023060632A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091321A (en) * 2014-04-14 2014-10-08 北京师范大学 Multi-level-point-set characteristic extraction method applicable to ground laser radar point cloud classification
CN112241440A (en) * 2019-07-17 2021-01-19 临沂大学 Three-dimensional green quantity estimation and management method based on LiDAR point cloud data
CN110992341A (en) * 2019-12-04 2020-04-10 沈阳建筑大学 Segmentation-based airborne LiDAR point cloud building extraction method
CN112381041A (en) * 2020-11-27 2021-02-19 广东电网有限责任公司肇庆供电局 Tree identification method and device for power transmission line and terminal equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117494549A (en) * 2023-10-12 2024-02-02 青岛市勘察测绘研究院 Information simulation display method and system of three-dimensional geographic information system
CN117494549B (en) * 2023-10-12 2024-05-28 青岛市勘察测绘研究院 Information simulation display method and system of three-dimensional geographic information system
CN117456121A (en) * 2023-10-30 2024-01-26 中佳勘察设计有限公司 Topographic map acquisition and drawing method and device without camera
CN117274536A (en) * 2023-11-22 2023-12-22 北京飞渡科技股份有限公司 Live-action three-dimensional model reconstruction method and device
CN117274536B (en) * 2023-11-22 2024-02-20 北京飞渡科技股份有限公司 Live-action three-dimensional model reconstruction method and device

Also Published As

Publication number Publication date
CN113963259A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
Xia et al. Geometric primitives in LiDAR point clouds: A review
CN106022381B (en) Automatic extraction method of street lamp pole based on vehicle-mounted laser scanning point cloud
CN111598823B (en) Multisource mobile measurement point cloud data space-ground integration method and storage medium
Yang et al. Hierarchical extraction of urban objects from mobile laser scanning data
CN111815776A (en) Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
CN108171131B (en) Improved MeanShift-based method for extracting Lidar point cloud data road marking line
CN109631855A (en) High-precision vehicle positioning method based on ORB-SLAM
CN110175576A (en) A kind of driving vehicle visible detection method of combination laser point cloud data
CN109270544A (en) Mobile robot self-localization system based on shaft identification
Guan et al. A novel framework to automatically fuse multiplatform LiDAR data in forest environments based on tree locations
WO2023060632A1 (en) Street view ground object multi-dimensional extraction method and system based on point cloud data
CN108564650B (en) Lane tree target identification method based on vehicle-mounted 2D LiDAR point cloud data
CN107679458B (en) Method for extracting road marking lines in road color laser point cloud based on K-Means
CN110047036B (en) Polar grid-based ground laser scanning data building facade extraction method
CN106446785A (en) Passable road detection method based on binocular vision
CN114119863A (en) Method for automatically extracting street tree target and forest attribute thereof based on vehicle-mounted laser radar data
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
Liu et al. Deep-learning and depth-map based approach for detection and 3-D localization of small traffic signs
CN111487643B (en) Building detection method based on laser radar point cloud and near-infrared image
CN106709432B (en) Human head detection counting method based on binocular stereo vision
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN115063555A (en) Method for extracting vehicle-mounted LiDAR point cloud street tree growing in Gaussian distribution area
Liu et al. Image-translation-based road marking extraction from mobile laser point clouds
Yao et al. Automated detection of 3D individual trees along urban road corridors by mobile laser scanning systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21960361

Country of ref document: EP

Kind code of ref document: A1