CN113963259A - Street view ground object multi-dimensional extraction method and system based on point cloud data - Google Patents

Street view ground object multi-dimensional extraction method and system based on point cloud data

Info

Publication number
CN113963259A
Authority
CN
China
Prior art keywords
point cloud
ground
cloud data
extracting
subunit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111200089.2A
Other languages
Chinese (zh)
Inventor
罗再谦
刘颖
向煜
黄志�
周兵
韩�熙
华媛媛
朱勃
李兵
张彦
曹欣
王永刚
王军涛
李楠楠
王翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING CYBERCITY SCI-TECH CO LTD
Original Assignee
CHONGQING CYBERCITY SCI-TECH CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING CYBERCITY SCI-TECH CO LTD filed Critical CHONGQING CYBERCITY SCI-TECH CO LTD
Priority to CN202111200089.2A priority Critical patent/CN113963259A/en
Priority to PCT/CN2021/124565 priority patent/WO2023060632A1/en
Publication of CN113963259A publication Critical patent/CN113963259A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/08 Projecting images onto non-planar surfaces, e.g. geodetic screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a street view ground object multi-dimensional extraction method and system based on point cloud data. The method mainly comprises the following steps: the original point cloud data comprise absolute coordinates and elevation information; ground point clouds and non-ground point clouds are extracted from the point cloud data using a cloth simulation algorithm; and various ground features are extracted class by class from the non-ground point cloud using a multi-dimensional method to obtain different types of ground feature identification. Because the non-ground point cloud data are processed with a multi-dimensional method, street ground features can be extracted and identified quickly, effectively and accurately.

Description

Street view ground object multi-dimensional extraction method and system based on point cloud data
Technical Field
The invention relates to the technical field of point cloud data extraction, in particular to a street view ground object multi-dimensional extraction method and system based on point cloud data.
Background
In recent years, light detection and ranging (LIDAR) has become a technology capable of directly acquiring three-dimensional ground information, with advantages that traditional aerial photogrammetry cannot match. A laser radar scanning system avoids the orientation, image matching and other steps required in traditional photogrammetry, and compared with traditional photogrammetric methods it offers high speed, high precision, low cost, real-time operation, high efficiency, non-contact measurement and large data volume. The point cloud data acquired by LIDAR contain abundant environmental information, including ground, vegetation, power line and building information. To accurately obtain the three-dimensional information of buildings, the LIDAR data must be processed.
Building geometry is diverse and complex, and the artificial and natural environments around buildings are generally complicated, containing vegetation such as trees as well as other artificial ground features such as roads and towers. Existing approaches extract ground features along a single dimension, so the types of extracted features are limited and the accuracy is not high.
Disclosure of Invention
(I) Objects of the invention
In view of the above problems, the present invention aims to provide a street view ground object multi-dimensional extraction method and system based on point cloud data, which extract different types of ground features from non-ground point cloud data using a multi-dimensional method and can extract street ground features quickly, efficiently and accurately.
(II) Technical solution
As a first aspect of the invention, the invention discloses a point cloud data-based streetscape ground object multi-dimensional extraction method, which comprises the following steps:
the original point cloud data comprises absolute coordinates and elevation information;
extracting ground point clouds and non-ground point clouds in the point cloud data by using a cloth simulation algorithm;
and extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain different types of ground feature identification.
In a possible implementation manner, the extracting of various ground features class by class from the non-ground point cloud by using a multi-dimensional method specifically comprises:
extracting tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
extracting rod-shaped object point cloud data in the non-ground point cloud through elevation values and hierarchical clustering;
and extracting the building point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
In a possible embodiment, the extracting the tree point cloud data in the non-ground point cloud specifically includes:
carrying out grid division on the non-ground point cloud data, and projecting the non-ground point cloud data into a gray image;
carrying out watershed segmentation on the gray level image to determine a street tree contour;
and extracting the street tree point cloud from the non-ground point cloud data according to the street tree outline.
In a possible embodiment, the extracting the rod-like point cloud data in the non-ground point cloud specifically includes:
layering the non-ground point clouds according to a preset elevation interval;
clustering the non-ground point cloud data of each layer;
and according to the characteristics of the rod-shaped ground objects, performing secondary clustering on the layered clustering results in the elevation direction, and extracting the rod-shaped ground object point cloud.
In a possible embodiment, the extracting the building point cloud data in the non-ground point cloud specifically includes:
extracting high-rise building point cloud above a threshold height through single-point semantic features;
projecting the point cloud lower than the threshold value and the point cloud of the high-rise building to an XOY plane, dividing a grid according to a preset size, and selecting an interest grid according to the semantic features of the grid;
and performing connectivity analysis on the interest grids to obtain object regions, and extracting the point cloud of the building facade based on the region semantic features.
As a second aspect of the present invention, the present invention further discloses a point cloud data-based streetscape ground object multidimensional extraction system, including:
the system comprises a preprocessing module, a data processing module and a data processing module, wherein the preprocessing module is used for preprocessing original point cloud data, and the original point cloud data comprises absolute coordinates and elevation information;
an extraction module that extracts a ground point cloud and a non-ground point cloud in the point cloud data using a cloth simulation algorithm;
and the identification module extracts various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain different types of ground feature identification.
In one possible embodiment, the identification module comprises a tree identification unit, a rod-shaped object identification unit and a building identification unit;
the tree identification unit extracts tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
the rod-shaped object identifying unit extracts rod-shaped object point cloud data in the non-ground point cloud through elevation value and hierarchical clustering;
and the building type identification unit extracts the building type point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
In a possible implementation, the tree identification unit comprises a dividing subunit, a segmentation subunit and a tree extraction subunit;
the dividing subunit divides the non-ground point cloud data into grids and projects the grids into gray images;
the segmentation subunit performs watershed segmentation on the gray level image to determine a street tree contour;
and the tree extraction subunit extracts the street tree point cloud from the non-ground point cloud data according to the street tree outline.
In one possible embodiment, the rod-shaped object identification unit comprises a layering subunit, a clustering subunit and a rod-shaped ground object extraction subunit;
the layering subunit is used for layering the non-ground point cloud according to a preset elevation interval;
the clustering subunit is used for clustering the non-ground point cloud data of each layer;
and the rod-shaped ground object extracting subunit performs secondary clustering on the layered clustering result in the elevation direction according to the characteristics of the rod-shaped ground objects to extract rod-shaped ground object point clouds.
In one possible embodiment, the building identification unit comprises a semantic subunit, a grid subunit and a building facade extraction subunit;
the semantic subunit extracts high-rise building point cloud above a threshold height through single-point semantic features;
the grid subunit projects the point cloud lower than the threshold value and the point cloud of the high-rise building to an XOY plane, divides a grid according to a preset size, and selects an interest grid according to the semantic features of the grid;
and the building facade extraction subunit performs connectivity analysis on the interest grids to obtain object areas, and extracts building facade point clouds based on the area semantic features.
(III) Advantageous effects
The invention discloses a street view ground object multi-dimensional extraction method and system based on point cloud data, which have the following beneficial effects: the collected point cloud data are preprocessed, ground point clouds and non-ground point clouds are extracted with a cloth simulation algorithm, and the ranges of various ground features are extracted from the non-ground point cloud with a multi-dimensional (multi-type) method, so that the contour and category of each ground feature are extracted and identified efficiently and accurately.
Drawings
The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining and illustrating the present invention and should not be construed as limiting the scope of the present invention.
FIG. 1 is a flow chart of a multi-dimensional extraction method for street view and ground objects based on point cloud data disclosed in the present invention;
FIG. 2 is a second flowchart of the street view feature multi-dimensional extraction method based on point cloud data disclosed in the present invention;
FIG. 3 is a flow chart of the present disclosure for extracting tree point cloud data from a non-ground point cloud;
FIG. 4 is a view of the elevation difference at different positions of the street tree point cloud model disclosed herein;
FIG. 5 is an elevation difference distribution waveform for a single street tree point cloud as disclosed herein;
FIG. 6 is a flow chart of the disclosed method for extracting rod-like point cloud data from a non-ground point cloud;
FIG. 7 is a flow chart of the present disclosure for extracting building type point cloud data from non-ground point clouds;
FIG. 8 is a block diagram of a point cloud data-based street view feature multi-dimensional extraction system disclosed in the present invention.
Reference numerals: 400. a preprocessing module; 500. an extraction module; 600. an identification module; 610. a tree identification unit; 611. a dividing subunit; 612. a segmentation subunit; 613. a tree extraction subunit; 620. a rod-shaped object identification unit; 621. a layering subunit; 622. a clustering subunit; 623. a rod-shaped ground object extraction subunit; 630. a building identification unit; 631. a semantic subunit; 632. a grid subunit; 633. a building facade extraction subunit.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention.
It should be noted that: in the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described are some embodiments of the present invention, not all embodiments, and features in embodiments and embodiments in the present application may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.
A first embodiment of a point cloud data-based streetscape ground object multi-dimensional extraction method disclosed by the invention is described in detail below with reference to fig. 1 to 7. The embodiment is mainly applied to point cloud data extraction, different types of ground objects are extracted from non-ground point cloud data by adopting a multi-dimensional method, and street ground objects can be extracted quickly, effectively and accurately.
As shown in fig. 1-2, this embodiment specifically includes the following steps:
s100, the original point cloud data comprise absolute coordinates and elevation information.
In step S100, a vehicle-mounted mobile measurement system is used to collect the original point cloud data; the data obtained by the vehicle-mounted laser scanning system include laser point cloud data, which is a set of points with real-space three-dimensional coordinates.
Furthermore, after the original point cloud data are collected, they are preprocessed by denoising, filtering and other techniques to improve the precision of the point cloud data.
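For illustration only, the following minimal Python sketch shows one common way to carry out such denoising: a statistical outlier removal that drops points whose mean distance to their nearest neighbours is unusually large. The function name, the parameters k and std_ratio, and the use of NumPy and SciPy are assumptions made for this example; the patent does not prescribe a specific denoising algorithm or library.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    far above the global average (a simple statistical denoising step).

    points: (N, 3) array of x, y, z coordinates.
    Returns the filtered (M, 3) array with M <= N.
    """
    tree = cKDTree(points)
    # Query k + 1 neighbours because the closest neighbour of a point is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]
```

A denser scan generally tolerates a larger k, while std_ratio controls how aggressively outliers are removed.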
Furthermore, the vehicle-mounted laser scanning system is a vehicle-mounted mobile data acquisition system that integrates advanced technologies such as GPS positioning, INS inertial navigation, CCD video and automatic control. It supports two acquisition modes: rapidly capturing measurable stereo images of streets and roads with positioning information in the field, and capturing 360-degree single-point panoramic images with a professional camera in a vehicle-mounted remote-sensing mode. The acquired data are easy and fast to update, efficient and rich in information.
S200, extracting ground point clouds and non-ground point clouds in the point cloud data by using a cloth simulation algorithm.
In step S200, the cloth simulation algorithm is used to extract the ground point cloud and the non-ground point cloud from the point cloud data, which specifically includes:
S210, initializing the cloth grid and determining the number of grid nodes according to the grid resolution;
S220, projecting the point cloud data and the grid points onto the same horizontal plane, determining the point cloud data point corresponding to each grid point, and recording the elevation value of each corresponding point cloud data point;
S230, calculating the position each grid node moves to under the action of gravity and comparing the elevation of this position with the elevation of the corresponding point cloud data point. If the node elevation is less than or equal to the point cloud elevation, the node is placed at the position of the corresponding point cloud data point and marked as an unmovable point. The displaced position of a cloth grid point under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) - X(t - Δt) + (G/A)·Δt²
where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and A is the mass of the cloth grid point, set to a constant of 1.
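For readers unfamiliar with this update rule, it is the standard Verlet integration step; the brief derivation below is not part of the patent text, only a reminder of why the formula holds.

```latex
X(t+\Delta t) = X(t) + \dot X(t)\,\Delta t + \tfrac{1}{2}\ddot X(t)\,\Delta t^{2} + O(\Delta t^{3})
X(t-\Delta t) = X(t) - \dot X(t)\,\Delta t + \tfrac{1}{2}\ddot X(t)\,\Delta t^{2} + O(\Delta t^{3})
% Adding the two expansions cancels the velocity term:
X(t+\Delta t) = 2X(t) - X(t-\Delta t) + \ddot X(t)\,\Delta t^{2}
% With gravity as the only force, \ddot X(t) = G/A, which gives the update rule above.
```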
S240, calculating the position to which each grid point moves under the influence of its neighbouring nodes.
S250, repeating steps S230 and S240, and terminating the simulation when the maximum elevation change of all nodes is small enough or the maximum number of iterations is exceeded;
S260, classifying the ground point cloud and the non-ground point cloud: the distance between each grid point and the corresponding point cloud data point is calculated, and a point cloud data point is classified as ground point cloud if this distance is less than a threshold L, and as non-ground point cloud otherwise.
S300, extracting various ground features class by class from the non-ground point cloud using a multi-dimensional method to obtain different types of ground feature identification.
In step S300, the class-by-class extraction of various ground features from the non-ground point cloud using the multi-dimensional method specifically includes:
s310, extracting tree point cloud data in non-ground point cloud by adopting an elevation value and watershed algorithm.
As shown in fig. 3, in step S310, the tree extraction, that is, the street tree extraction specifically includes:
s311, carrying out grid division on the non-ground point cloud data, and projecting the non-ground point cloud data into a gray image;
s312, performing watershed segmentation on the gray level image to determine a street tree contour;
s313, extracting the street tree point cloud from the non-ground point cloud data according to the street tree outline.
The non-ground point cloud data are divided into grid cells and projected into a grayscale image, watershed segmentation is performed on the grayscale image to determine the street tree contour, and the complete street tree point cloud is then extracted from the non-ground point cloud data according to the street tree contour. The elevation differences at different positions of the non-ground point cloud are obtained from the elevation model, and the grayscale image is segmented according to these elevation differences to determine the street tree contour. For example, fig. 4 shows the elevation difference at different positions of the street tree point cloud model, and fig. 5 shows the elevation difference distribution waveform of a single street tree point cloud, where the horizontal axis is the projection diameter D of the crown of a single street tree on the XOY plane and the vertical axis is the elevation difference. It can be seen that the closer a position is to the centre of the crown or the trunk, the larger the elevation difference (closer to the peak), and the farther it is from the centre of the crown or the trunk, the smaller the elevation difference (closer to the valley).
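As an illustration of steps S311 to S313, the sketch below rasterises the non-ground points into a maximum-elevation grayscale image, places markers at local elevation maxima (candidate tree tops), applies watershed segmentation to delineate crown contours, and maps the resulting labels back to the points. The cell size, minimum crown height, peak spacing, and the use of SciPy and scikit-image are assumptions made for this example; the patent does not tie the method to a particular implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def extract_tree_points(nonground, cell=0.5, min_height=2.0, min_distance=5):
    """Watershed sketch for street-tree extraction (hypothetical parameters).

    nonground: (N, 3) array of non-ground points (x, y, height above ground).
    Returns a per-point crown label (0 = not assigned to any tree crown).
    """
    xmin, ymin = nonground[:, :2].min(axis=0)
    ix = ((nonground[:, 0] - xmin) / cell).astype(int)
    iy = ((nonground[:, 1] - ymin) / cell).astype(int)

    # Grayscale image: maximum elevation per cell (a crude canopy height model).
    chm = np.zeros((iy.max() + 1, ix.max() + 1))
    np.maximum.at(chm, (iy, ix), nonground[:, 2])
    chm = ndimage.gaussian_filter(chm, sigma=1)

    # Markers at local elevation maxima, i.e. candidate tree tops.
    peaks = peak_local_max(chm, min_distance=min_distance,
                           threshold_abs=min_height)
    markers = np.zeros_like(chm, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Watershed on the inverted elevation image delineates crown contours.
    labels = watershed(-chm, markers, mask=chm > min_height)

    # Carry the 2-D crown label back to every 3-D point.
    return labels[iy, ix]
```

In practice the non-ground heights should first be normalised to height above ground, for example by subtracting a ground model derived from the ground point cloud of step S200.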
And S320, extracting the rod-shaped object point cloud data in the non-ground point cloud through the elevation value and hierarchical clustering.
As shown in fig. 6, in step S320, extracting the rod-like point cloud data in the non-ground point cloud specifically includes:
s321, layering non-ground point clouds according to preset elevation intervals;
s322, clustering the non-ground point cloud data of each layer;
and S323, performing secondary clustering on the layered clustering result in the elevation direction according to the characteristics of the rod-shaped ground object, and extracting a rod-shaped ground object point cloud.
Layering the non-ground point clouds according to preset elevation intervals, clustering point cloud data of each layer, and then clustering the layered clustering results again in the elevation direction according to the characteristic that the rod-shaped ground objects continuously extend in the elevation direction so as to identify complete rod-shaped ground object point clouds.
Further, a top-down data processing method is adopted to layer the non-ground point cloud according to the preset elevation interval: the maximum and minimum values of the non-ground point cloud in the elevation direction are determined, the number of layers in the elevation direction is derived, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Because the data are processed from high to low in the elevation direction, no additional data filtering is needed, the processing flow is simplified, few parameters are involved, and the processing is efficient and highly automated.
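A minimal sketch of steps S321 to S323 is given below: the non-ground points are sliced into elevation layers, each slice is clustered in the XY plane, compact clusters (candidate pole cross-sections) are kept, and sections whose centres stack vertically over enough layers are merged into a pole. The layer height, clustering radius, compactness radius and minimum layer count, as well as the use of scikit-learn's DBSCAN, are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_points(nonground, layer_height=0.5, eps=0.3,
                        min_samples=5, max_radius=0.4, min_layers=6):
    """Layered-clustering sketch for pole-like features (hypothetical values).

    nonground: (N, 3) points. Returns a boolean mask of pole candidates.
    """
    z = nonground[:, 2]
    layer_idx = ((z - z.min()) / layer_height).astype(int)
    mask = np.zeros(len(nonground), dtype=bool)
    centers = []   # (layer, centre_xy, point indices) for every compact section

    # First pass: cluster each elevation slice independently in the XY plane.
    for layer in np.unique(layer_idx):
        sel = np.where(layer_idx == layer)[0]
        if len(sel) < min_samples:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
            nonground[sel, :2])
        for lab in set(labels) - {-1}:
            idx = sel[labels == lab]
            xy = nonground[idx, :2]
            c = xy.mean(axis=0)
            # Keep only compact clusters, as expected for a pole cross-section.
            if np.linalg.norm(xy - c, axis=1).max() <= max_radius:
                centers.append((layer, c, idx))

    # Second pass: group compact sections whose centres stack vertically.
    centers.sort(key=lambda t: t[0])
    used = [False] * len(centers)
    for i, (layer_i, c_i, _) in enumerate(centers):
        if used[i]:
            continue
        stack = [i]
        for j in range(i + 1, len(centers)):
            layer_j, c_j, _ = centers[j]
            if not used[j] and np.linalg.norm(c_j - c_i) <= max_radius:
                stack.append(j)
        # A pole must extend over enough elevation layers to be accepted.
        if len({centers[k][0] for k in stack}) >= min_layers:
            for k in stack:
                used[k] = True
                mask[centers[k][2]] = True
    return mask
```

The secondary clustering here uses only the horizontal distance between section centres; a stricter check for continuity of the layers in the elevation direction can be added if needed.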
S330, extracting the building point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
As shown in fig. 7, in step S330, extracting the building point cloud data in the non-ground point cloud specifically includes:
s331, extracting high-rise building point cloud above a threshold height through single-point semantic features;
s332, projecting the point cloud lower than the threshold value and the high-rise building point cloud to an XOY plane, dividing a grid according to a preset size, and selecting an interest grid according to the semantic features of the grid;
s333, performing connectivity analysis on the interest grids to obtain object regions, and extracting the point cloud of the building facade based on the region semantic features.
First, points lower than buildings are eliminated using the single-point semantic feature, namely the elevation value of each point, and at the same time a high-rise building point cloud containing only buildings above a certain height is extracted. The remaining point cloud and the high-rise building point cloud are then projected onto the XOY plane, the plane is divided into grid cells of a given size, and interest grids are selected according to the grid semantic features. Finally, connectivity analysis is performed on the interest grids to obtain object regions, and the building facade point cloud is accurately extracted based on the region semantic features.
Further, the connectivity analysis of the interest grids specifically determines whether projected point clouds exist between adjacent grid cells: if the projected points in a cell form a small block in the middle of the cell with a gap to the cell boundary, the cell is not connected to its surrounding cells; otherwise, it is connected to them.
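The sketch below illustrates steps S331 to S333 under simplifying assumptions: points above a height threshold seed high-rise cells, all points are projected onto an XOY grid, interest cells are selected from simple grid semantics (point density or the presence of a high-rise seed), connected interest cells form object regions, and regions are kept according to simple region semantics (size and the presence of a seed). The thresholds, the choice of grid and region semantics, and the use of scipy.ndimage.label are illustrative assumptions; the patent does not commit to these particular criteria.

```python
import numpy as np
from scipy import ndimage

def extract_building_points(nonground, high_thresh=10.0, cell=1.0,
                            min_pts_per_cell=20, min_region_cells=30):
    """Grid/connectivity sketch for building-facade extraction
    (hypothetical thresholds).

    nonground: (N, 3) points with heights above ground.
    Returns a boolean mask of building-facade candidates.
    """
    z = nonground[:, 2]
    # Single-point semantics: points above the threshold seed high-rise cells.
    high = z >= high_thresh

    xmin, ymin = nonground[:, :2].min(axis=0)
    ix = ((nonground[:, 0] - xmin) / cell).astype(int)
    iy = ((nonground[:, 1] - ymin) / cell).astype(int)
    shape = (iy.max() + 1, ix.max() + 1)

    # Grid semantics: per-cell point count and presence of high-rise points.
    count = np.zeros(shape)
    np.add.at(count, (iy, ix), 1)
    has_high = np.zeros(shape, dtype=bool)
    has_high[iy[high], ix[high]] = True

    # Interest cells: dense cells, or cells already containing high points.
    interest = (count >= min_pts_per_cell) | has_high

    # Connectivity analysis groups interest cells into object regions.
    regions, n = ndimage.label(interest)

    # Region semantics: keep regions that are large enough and contain a seed.
    keep = np.zeros(n + 1, dtype=bool)
    for r in range(1, n + 1):
        cells = regions == r
        if cells.sum() >= min_region_cells and has_high[cells].any():
            keep[r] = True

    return keep[regions[iy, ix]]
```

Because keep[0] stays False, points that fall in background cells are never labelled as building facade.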
Further, when ground feature point cloud data are extracted with the multi-dimensional (multi-category) method, one category of ground features is extracted and identified per dimension. The various ground feature ranges can be extracted one by one and the extracted ranges then fused, or they can be extracted simultaneously, to complete the ground feature identification.
A first embodiment of the point cloud data-based street view ground object multi-dimensional extraction system provided by the embodiments of the present invention is described in detail below with reference to fig. 4 to 5 and fig. 8; it is based on the same inventive concept. The principle by which the system solves the problem is similar to that of the point cloud data-based street view ground object multi-dimensional extraction method, so the implementation of the system may refer to the implementation of the method, and repeated parts are not described again.
This embodiment is mainly applied to point cloud data extraction; different types of ground features are extracted from the non-ground point cloud data using a multi-dimensional method, so that street ground features can be extracted quickly, effectively and accurately.
As shown in fig. 8, the present embodiment mainly includes: a preprocessing module 400, an extraction module 500, and an identification module 600.
The preprocessing module 400 preprocesses the original point cloud data, wherein the original point cloud data comprise absolute coordinates and elevation information; the extraction module 500 extracts ground point clouds and non-ground point clouds from the point cloud data using a cloth simulation algorithm; and the identification module 600 extracts various ground features class by class from the non-ground point cloud using a multi-dimensional method to obtain different types of ground feature identification.
Further, in this application, a vehicle-mounted mobile measurement system is adopted to collect the original point cloud data; the data obtained by the vehicle-mounted laser scanning system specifically include laser point cloud data, which is a set of points with real-space three-dimensional coordinates.
Furthermore, after the original point cloud data are collected, they can be preprocessed by denoising, filtering and other techniques to improve the precision of the point cloud data.
Furthermore, the vehicle-mounted laser scanning system is a vehicle-mounted mobile data acquisition system that integrates advanced technologies such as GPS positioning, INS inertial navigation, CCD video and automatic control. It supports two acquisition modes: rapidly capturing measurable stereo images of streets and roads with positioning information in the field, and capturing 360-degree single-point panoramic images with a professional camera in a vehicle-mounted remote-sensing mode. The acquired data are easy and fast to update, efficient and rich in information.
In a possible implementation, the extraction module 500 extracts the ground point cloud and the non-ground point cloud from the point cloud data using the cloth simulation algorithm, specifically: the cloth grid is initialized and the number of grid nodes is determined from the grid resolution; the point cloud data and the grid points are projected onto the same horizontal plane, the point cloud data point corresponding to each grid point is determined, and the elevation value of each corresponding point cloud data point is recorded; the position each grid node moves to under the action of gravity is calculated and its elevation is compared with the elevation of the corresponding point cloud data point. If the node elevation is less than or equal to the point cloud elevation, the node is placed at the position of the corresponding point cloud data point and marked as an unmovable point. The displaced position of a cloth grid point under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) - X(t - Δt) + (G/A)·Δt²
where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and A is the mass of the cloth grid point, set to a constant of 1. The position to which each grid point moves under the influence of its neighbouring nodes is then calculated. The gravity step and the neighbour-influence step are repeated, and the simulation terminates when the maximum elevation change of all nodes is small enough or the maximum number of iterations is exceeded. The ground point cloud and the non-ground point cloud are then classified: the distance between each grid point and the corresponding point cloud data point is calculated, and a point cloud data point is classified as ground point cloud if this distance is less than a threshold L, and as non-ground point cloud otherwise.
In one possible implementation, the identification module 600 includes a tree identification unit 610, a rod-shaped object identification unit 620 and a building identification unit 630, wherein the tree identification unit 610 extracts tree point cloud data from the non-ground point cloud using elevation values and a watershed algorithm; the rod-shaped object identification unit 620 extracts rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering; and the building identification unit 630 extracts building point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
In one possible embodiment, the tree identification unit 610 includes a dividing subunit 611, a segmentation subunit 612 and a tree extraction subunit 613; the dividing subunit 611 performs grid division on the non-ground point cloud data and projects it into a grayscale image; the segmentation subunit 612 performs watershed segmentation on the grayscale image to determine the street tree contour; and the tree extraction subunit 613 extracts the street tree point cloud from the non-ground point cloud data according to the street tree contour.
The non-ground point cloud data are divided into grid cells and projected into a grayscale image, watershed segmentation is performed on the grayscale image to determine the street tree contour, and the complete street tree point cloud is then extracted from the non-ground point cloud data according to the street tree contour. The elevation differences at different positions of the non-ground point cloud are obtained from the elevation model, and the grayscale image is segmented according to these elevation differences to determine the street tree contour. For example, fig. 4 shows the elevation difference at different positions of the street tree point cloud model, and fig. 5 shows the elevation difference distribution waveform of a single street tree point cloud, where the horizontal axis is the projection diameter D of the crown of a single street tree on the XOY plane and the vertical axis is the elevation difference. It can be seen that the closer a position is to the centre of the crown or the trunk, the larger the elevation difference (closer to the peak), and the farther it is from the centre of the crown or the trunk, the smaller the elevation difference (closer to the valley).
In one possible implementation, the identifying rod type object unit 620 includes a layering subunit 621, a clustering subunit 622, and an extracting rod type ground object subunit 623, wherein the layering subunit 621 performs layering on non-ground point clouds according to a preset elevation interval; the clustering subunit 622 performs clustering on the non-ground point cloud data of each layer; the rod-shaped ground object extracting subunit 623 performs secondary clustering on the layered clustering result in the elevation direction according to the characteristics of the rod-shaped ground objects, and extracts rod-shaped ground object point clouds.
Layering the non-ground point clouds according to preset elevation intervals, clustering point cloud data of each layer, and then clustering the layered clustering results again in the elevation direction according to the characteristic that the rod-shaped ground objects continuously extend in the elevation direction so as to identify complete rod-shaped ground object point clouds.
Further, a top-down data processing method is adopted to layer the non-ground point cloud according to the preset elevation interval: the maximum and minimum values of the non-ground point cloud in the elevation direction are determined, the number of layers in the elevation direction is derived, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Because the data are processed from high to low in the elevation direction, no additional data filtering is needed, the processing flow is simplified, few parameters are involved, and the processing is efficient and highly automated.
In one possible implementation, the building identification unit 630 includes a semantic subunit 631, a grid subunit 632 and a building facade extraction subunit 633, wherein the semantic subunit 631 extracts the high-rise building point cloud above a threshold height through single-point semantic features; the grid subunit 632 projects the point cloud lower than the threshold and the high-rise building point cloud onto the XOY plane, divides the plane into grid cells of a preset size, and selects interest grids according to the grid semantic features; and the building facade extraction subunit 633 performs connectivity analysis on the interest grids to obtain object regions and extracts the building facade point cloud based on the region semantic features.
First, points lower than buildings are eliminated using the single-point semantic feature, namely the elevation value of each point, and at the same time a high-rise building point cloud containing only buildings above a certain height is extracted. The remaining point cloud and the high-rise building point cloud are then projected onto the XOY plane, the plane is divided into grid cells of a given size, and interest grids are selected according to the grid semantic features. Finally, connectivity analysis is performed on the interest grids to obtain object regions, and the building facade point cloud is accurately extracted based on the region semantic features.
Further, the connectivity analysis of the interest grids specifically determines whether projected point clouds exist between adjacent grid cells: if the projected points in a cell form a small block in the middle of the cell with a gap to the cell boundary, the cell is not connected to its surrounding cells; otherwise, it is connected to them.
Further, when ground feature point cloud data are extracted with the multi-dimensional (multi-category) method, one category of ground features is extracted and identified per dimension. The various ground feature ranges can be extracted one by one and the extracted ranges then fused, or they can be extracted simultaneously, to complete the ground feature identification.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A street view ground object multi-dimensional extraction method based on point cloud data is characterized by comprising the following steps:
the original point cloud data comprises absolute coordinates and elevation information;
extracting ground point clouds and non-ground point clouds in the point cloud data by using a cloth simulation algorithm;
and extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain different types of ground feature identification.
2. The point cloud data-based streetscape ground object multi-dimensional extraction method according to claim 1, wherein the extracting of various ground features class by class from the non-ground point cloud by using a multi-dimensional method specifically comprises:
extracting tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
extracting rod-shaped object point cloud data in the non-ground point cloud through elevation values and hierarchical clustering;
and extracting the building point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
3. The point cloud data-based streetscape ground object multi-dimensional extraction method according to claim 2, wherein the extracting of the tree point cloud data in the non-ground point cloud specifically comprises:
carrying out grid division on the non-ground point cloud data, and projecting the non-ground point cloud data into a gray image;
carrying out watershed segmentation on the gray level image to determine a street tree contour;
and extracting the street tree point cloud from the non-ground point cloud data according to the street tree outline.
4. The point cloud data-based streetscape ground object multi-dimensional extraction method according to claim 2, wherein the extracting of the rod-like object point cloud data in the non-ground point cloud specifically comprises:
layering the non-ground point clouds according to a preset elevation interval;
clustering the non-ground point cloud data of each layer;
and according to the characteristics of the rod-shaped ground objects, performing secondary clustering on the layered clustering results in the elevation direction, and extracting the rod-shaped ground object point cloud.
5. The point cloud data-based streetscape ground object multi-dimensional extraction method according to claim 2, wherein the extracting of the building-type point cloud data in the non-ground point cloud specifically comprises:
extracting high-rise building point cloud above a threshold height through single-point semantic features;
projecting the point cloud lower than the threshold value and the point cloud of the high-rise building to an XOY plane, dividing a grid according to a preset size, and selecting an interest grid according to the semantic features of the grid;
and performing connectivity analysis on the interest grids to obtain object regions, and extracting the point cloud of the building facade based on the region semantic features.
6. A street view ground object multi-dimensional extraction system based on point cloud data is characterized by comprising:
the system comprises a preprocessing module, a data processing module and a data processing module, wherein the preprocessing module is used for preprocessing original point cloud data, and the original point cloud data comprises absolute coordinates and elevation information;
an extraction module that extracts a ground point cloud and a non-ground point cloud in the point cloud data using a cloth simulation algorithm;
and the identification module extracts various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain different types of ground feature identification.
7. The point cloud data-based streetscape ground object multi-dimensional extraction system of claim 6, wherein the identification module comprises a tree identification unit, a rod-shaped object identification unit and a building identification unit;
the tree identification unit extracts tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
the rod-shaped object identifying unit extracts rod-shaped object point cloud data in the non-ground point cloud through elevation value and hierarchical clustering;
and the building type identification unit extracts the building type point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
8. The point cloud data-based streetscape ground object multi-dimensional extraction system of claim 7, wherein the tree identification unit comprises a dividing subunit, a segmentation subunit and a tree extraction subunit;
the dividing subunit divides the non-ground point cloud data into grids and projects the grids into gray images;
the segmentation subunit performs watershed segmentation on the gray level image to determine a street tree contour;
and the tree extraction subunit extracts the street tree point cloud from the non-ground point cloud data according to the street tree outline.
9. The point cloud data-based streetscape ground object multi-dimensional extraction system of claim 7, wherein the rod-shaped object identification unit comprises a layering subunit, a clustering subunit and a rod-shaped ground object extraction subunit;
the layering subunit is used for layering the non-ground point cloud according to a preset elevation interval;
the clustering subunit is used for clustering the non-ground point cloud data of each layer;
and the rod-shaped ground object extracting subunit performs secondary clustering on the layered clustering result in the elevation direction according to the characteristics of the rod-shaped ground objects to extract rod-shaped ground object point clouds.
10. The point cloud data-based streetscape ground object multi-dimensional extraction system of claim 7, wherein the building identification unit comprises a semantic subunit, a grid subunit and a building facade extraction subunit;
the semantic subunit extracts high-rise building point cloud above a threshold height through single-point semantic features;
the grid subunit projects the point cloud lower than the threshold value and the point cloud of the high-rise building to an XOY plane, divides a grid according to a preset size, and selects an interest grid according to the semantic features of the grid;
and the building facade extraction subunit performs connectivity analysis on the interest grids to obtain object areas, and extracts building facade point clouds based on the area semantic features.
CN202111200089.2A 2021-10-14 2021-10-14 Street view ground object multi-dimensional extraction method and system based on point cloud data Pending CN113963259A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111200089.2A CN113963259A (en) 2021-10-14 2021-10-14 Street view ground object multi-dimensional extraction method and system based on point cloud data
PCT/CN2021/124565 WO2023060632A1 (en) 2021-10-14 2021-10-19 Street view ground object multi-dimensional extraction method and system based on point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111200089.2A CN113963259A (en) 2021-10-14 2021-10-14 Street view ground object multi-dimensional extraction method and system based on point cloud data

Publications (1)

Publication Number Publication Date
CN113963259A true CN113963259A (en) 2022-01-21

Family

ID=79464048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111200089.2A Pending CN113963259A (en) 2021-10-14 2021-10-14 Street view ground object multi-dimensional extraction method and system based on point cloud data

Country Status (2)

Country Link
CN (1) CN113963259A (en)
WO (1) WO2023060632A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519712A (en) * 2022-02-23 2022-05-20 广州极飞科技股份有限公司 Point cloud data processing method and device, terminal equipment and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117494549B (en) * 2023-10-12 2024-05-28 青岛市勘察测绘研究院 Information simulation display method and system of three-dimensional geographic information system
CN117456121B (en) * 2023-10-30 2024-07-12 中佳勘察设计有限公司 Topographic map acquisition and drawing method and device without camera
CN117274536B (en) * 2023-11-22 2024-02-20 北京飞渡科技股份有限公司 Live-action three-dimensional model reconstruction method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091321B (en) * 2014-04-14 2016-10-19 北京师范大学 It is applicable to the extracting method of the multi-level point set feature of ground laser radar point cloud classifications
CN112241440B (en) * 2019-07-17 2024-04-26 临沂大学 Three-dimensional green quantity estimation and management method based on LiDAR point cloud data
CN110992341A (en) * 2019-12-04 2020-04-10 沈阳建筑大学 Segmentation-based airborne LiDAR point cloud building extraction method
CN112381041A (en) * 2020-11-27 2021-02-19 广东电网有限责任公司肇庆供电局 Tree identification method and device for power transmission line and terminal equipment

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519712A (en) * 2022-02-23 2022-05-20 广州极飞科技股份有限公司 Point cloud data processing method and device, terminal equipment and storage medium

Also Published As

Publication number Publication date
WO2023060632A1 (en) 2023-04-20

Similar Documents

Publication Publication Date Title
CN111598823B (en) Multisource mobile measurement point cloud data space-ground integration method and storage medium
Xia et al. Geometric primitives in LiDAR point clouds: A review
CN113034689B (en) Laser point cloud-based terrain three-dimensional model, terrain map construction method and system, and storage medium
CN113963259A (en) Street view ground object multi-dimensional extraction method and system based on point cloud data
Yu et al. Semiautomated extraction of street light poles from mobile LiDAR point-clouds
Sohn et al. Using a binary space partitioning tree for reconstructing polyhedral building models from airborne lidar data
CN100533486C (en) Digital city full-automatic generating method
CN107527038A (en) A kind of three-dimensional atural object automatically extracts and scene reconstruction method
CN108564650B (en) Lane tree target identification method based on vehicle-mounted 2D LiDAR point cloud data
CN109270544A (en) Mobile robot self-localization system based on shaft identification
CN115063555B (en) Vehicle-mounted LiDAR point cloud street tree extraction method for Gaussian distribution area growth
CN113781431B (en) Green view rate calculation method based on urban point cloud data
CN114764871B (en) Urban building attribute extraction method based on airborne laser point cloud
CN111487643B (en) Building detection method based on laser radar point cloud and near-infrared image
CN114332134B (en) Building facade extraction method and device based on dense point cloud
CN115294293B (en) Method for automatically compiling high-precision map road reference line based on low-altitude aerial photography result
CN111754618A (en) Object-oriented live-action three-dimensional model multilevel interpretation method and system
CN114119863A (en) Method for automatically extracting street tree target and forest attribute thereof based on vehicle-mounted laser radar data
CN114758086B (en) Method and device for constructing urban road information model
Yao et al. Automated detection of 3D individual trees along urban road corridors by mobile laser scanning systems
Lalonde et al. Automatic three-dimensional point cloud processing for forest inventory
Tang et al. Assessing the visibility of urban greenery using MLS LiDAR data
CN118447420A (en) Unmanned aerial vehicle vision positioning system under damage environment and construction method thereof
Wu et al. A stepwise minimum spanning tree matching method for registering vehicle-borne and backpack LiDAR point clouds
Sohn et al. A data-driven method for modeling 3D building objects using a binary space partitioning tree

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination