Disclosure of Invention
(I) Objects of the invention
In view of the above problems, the present invention aims to provide a method and a system for multi-dimensional extraction of streetscape ground features based on point cloud data, which extract different types of ground features from non-ground point cloud data by a multi-dimensional method, so that street ground features can be extracted quickly, efficiently and accurately.
(II) Technical scheme
As a first aspect of the invention, the invention discloses a point cloud data-based streetscape ground object multi-dimensional extraction method, which comprises the following steps:
preprocessing original point cloud data, wherein the original point cloud data comprises absolute coordinates and elevation information;
extracting a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth simulation algorithm;
and extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain identifications of different types of ground features.
In a possible implementation manner, the extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method specifically comprises:
extracting tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
extracting rod-shaped object point cloud data in the non-ground point cloud through elevation values and hierarchical clustering;
and extracting the building point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
In a possible embodiment, the extracting the tree point cloud data in the non-ground point cloud specifically includes:
carrying out grid division on the non-ground point cloud data, and projecting the non-ground point cloud data into a gray image;
carrying out watershed segmentation on the gray level image to determine a street tree contour;
and extracting the street tree point cloud from the non-ground point cloud data according to the street tree outline.
In a possible embodiment, the extracting the rod-like point cloud data in the non-ground point cloud specifically includes:
layering the non-ground point clouds according to a preset elevation interval;
clustering the non-ground point cloud data of each layer;
and according to the characteristics of the rod-shaped ground objects, performing secondary clustering on the layered clustering results in the elevation direction, and extracting the rod-shaped ground object point cloud.
In a possible embodiment, the extracting the building point cloud data in the non-ground point cloud specifically includes:
extracting high-rise building point cloud above a threshold height through single-point semantic features;
projecting the point cloud below the threshold height together with the high-rise building point cloud onto an XOY plane, dividing it into grids of a preset size, and selecting interest grids according to grid semantic features;
and performing connectivity analysis on the interest grids to obtain object regions, and extracting the point cloud of the building facade based on the region semantic features.
As a second aspect of the present invention, the present invention further discloses a point cloud data-based streetscape ground object multidimensional extraction system, including:
the system comprises a preprocessing module, a data processing module and a data processing module, wherein the preprocessing module is used for preprocessing original point cloud data, and the original point cloud data comprises absolute coordinates and elevation information;
an extraction module that extracts a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth simulation algorithm;
and the identification module extracts various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain different types of ground feature identification.
In one possible embodiment, the identification module comprises a tree identification unit, a rod-shaped object identification unit and a building identification unit;
the tree identification unit extracts tree point cloud data in the non-ground point cloud by adopting an elevation value and watershed algorithm;
the rod-shaped object identifying unit extracts rod-shaped object point cloud data in the non-ground point cloud through elevation value and hierarchical clustering;
and the building type identification unit extracts the building type point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
In a possible implementation, the tree identification unit comprises a dividing subunit, a segmentation subunit and a tree extraction subunit;
the dividing subunit performs grid division on the non-ground point cloud data and projects it into a gray image;
the segmentation subunit performs watershed segmentation on the gray level image to determine a street tree contour;
and the tree extraction subunit extracts the street tree point cloud from the non-ground point cloud data according to the street tree outline.
In one possible embodiment, the rod-shaped object identification unit comprises a layering subunit, a clustering subunit and a rod-shaped ground object extraction subunit;
the layering subunit is used for layering the non-ground point cloud according to a preset elevation interval;
the clustering subunit is used for clustering the non-ground point cloud data of each layer;
and the rod-shaped ground object extracting subunit performs secondary clustering on the layered clustering result in the elevation direction according to the characteristics of the rod-shaped ground objects to extract rod-shaped ground object point clouds.
In one possible embodiment, the building identification unit comprises a semantic subunit, a grid subunit and a building facade extraction subunit;
the semantic subunit extracts high-rise building point cloud above a threshold height through single-point semantic features;
the grid subunit projects the point cloud below the threshold height together with the high-rise building point cloud onto an XOY plane, divides it into grids of a preset size, and selects interest grids according to grid semantic features;
and the building facade extraction subunit performs connectivity analysis on the interest grids to obtain object areas, and extracts building facade point clouds based on the area semantic features.
(III) Advantageous effects
The invention discloses a method and a system for multi-dimensional extraction of streetscape ground features based on point cloud data, which have the following beneficial effects: the collected point cloud data is preprocessed, the ground point cloud and the non-ground point cloud are separated by a cloth simulation algorithm, and the ranges of various ground features are extracted from the non-ground point cloud by a multi-dimensional (multi-category) method, so that the contours and categories of ground features are extracted and identified efficiently and with high accuracy.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be described in more detail below with reference to the accompanying drawings in the embodiments of the present invention.
It should be noted that: in the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The embodiments described are some embodiments of the present invention, not all embodiments, and features in embodiments and embodiments in the present application may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience in describing the present invention and for simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be taken as limiting the scope of the present invention.
A first embodiment of a point cloud data-based streetscape ground object multi-dimensional extraction method disclosed by the invention is described in detail below with reference to fig. 1 to 7. The embodiment is mainly applied to point cloud data extraction, different types of ground objects are extracted from non-ground point cloud data by adopting a multi-dimensional method, and street ground objects can be extracted quickly, effectively and accurately.
As shown in fig. 1-2, this embodiment specifically includes the following steps:
s100, the original point cloud data comprise absolute coordinates and elevation information.
In step S100, a vehicle-mounted mobile measurement system is used to collect the original point cloud data; the data obtained by the vehicle-mounted laser scanning system includes laser point cloud data, which is a point set with real three-dimensional spatial coordinates.
Furthermore, after the original point cloud data is collected, it is preprocessed by techniques such as denoising and filtering, so that the precision of the point cloud data is improved.
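For illustration only, the following minimal sketch shows one possible denoising step of this kind; it assumes the Open3D library and illustrative parameter values (neighbor count, standard-deviation ratio), which are not prescribed by this embodiment.

```python
import numpy as np
import open3d as o3d  # assumed library; any comparable denoising/filtering tool may be used

def denoise(raw_points: np.ndarray) -> np.ndarray:
    """Denoise a raw (N, 3) point cloud by statistical outlier removal (illustrative sketch)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(raw_points[:, :3])
    # Drop points whose mean distance to their 20 nearest neighbours is more than
    # 2 standard deviations above the global average (illustrative parameters).
    filtered, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return np.asarray(filtered.points)
```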
Furthermore, the vehicle-mounted laser scanning system is a vehicle-mounted mobile data acquisition system integrating technologies such as GPS positioning, INS inertial navigation, CCD video and automatic control. It supports two acquisition modes: rapidly acquiring measurable stereo images of streets and roads with positioning information in the field, and acquiring 360-degree single-point panoramic images with a professional camera in a vehicle-mounted remote sensing mode. The acquired data has the advantages of simple and rapid updating, high efficiency, rich information content and the like.
S200, extracting the ground point cloud and the non-ground point cloud from the point cloud data by using a cloth simulation algorithm.
In step S200, the cloth simulation algorithm is used to extract the ground point cloud and the non-ground point cloud from the point cloud data, which specifically includes:
s210, initializing a material distribution grid, and determining the number of grid nodes according to the grid resolution;
s220, projecting the point cloud data and the grid points to the same horizontal plane, determining the point cloud data corresponding to each grid point, and marking the elevation values of the corresponding point cloud data points;
and S230, calculating the position to which each grid node moves under the action of gravity, and comparing the elevation of this position with the elevation of the corresponding point cloud data point; if the node elevation is less than or equal to the point cloud data elevation, the node position is replaced with the position of the corresponding point cloud data point and the node is marked as an unmovable point. The position of the cloth grid point after displacement under the action of gravity is calculated by the following formula:
X(t + Δt) = 2X(t) − X(t − Δt) + (G/a)·Δt²
where X(t) is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant value), and a is the mass of the cloth grid point, set to the constant 1.
S240, calculating the position to which each grid node moves under the influence of its adjacent nodes.
S250, repeating steps S230 and S240, and terminating the simulation process when the maximum elevation change of all nodes is small enough or the maximum number of iterations is exceeded;
and S260, classifying the ground point cloud and the non-ground point cloud: the distance between each grid point and its corresponding point cloud data point is calculated; if this distance is less than a threshold L, the point cloud data point is classified as a ground point, otherwise it is classified as a non-ground point.
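For illustration, a minimal numerical sketch of steps S210 to S260 follows. It keeps one reference elevation per grid cell, reduces the adjacent-node forces of step S240 to a single smoothing pass, and uses assumed values for the grid resolution, time step and classification threshold L; it is a simplified sketch under these assumptions, not the prescribed implementation.

```python
import numpy as np

def csf_ground_filter(points, cell=0.5, dt=0.65, g=-9.8,
                      max_iter=500, height_tol=1e-3, class_threshold=0.5):
    """Simplified cloth-simulation ground filtering (steps S210-S260).

    points: (N, 3) array of x, y, z. Returns a boolean mask of ground points.
    Parameter names and default values are illustrative assumptions.
    """
    inv_z = -points[:, 2]                      # the cloud is inverted; the cloth falls onto it
    xy_min = points[:, :2].min(axis=0)

    # S210: initialise the cloth grid; the node count follows the grid resolution.
    nx, ny = (np.ceil((points[:, :2].max(axis=0) - xy_min) / cell).astype(int) + 1)

    # S220: project points and grid nodes onto the same horizontal plane and record,
    # for each node, the elevation of its corresponding point cloud point.
    ix = np.minimum(((points[:, 0] - xy_min[0]) / cell).astype(int), nx - 1)
    iy = np.minimum(((points[:, 1] - xy_min[1]) / cell).astype(int), ny - 1)
    ref_z = np.full((nx, ny), -np.inf)
    np.maximum.at(ref_z, (ix, iy), inv_z)
    ref_z[np.isinf(ref_z)] = inv_z.min()       # empty cells fall through to the lowest level

    cloth_z = np.full((nx, ny), inv_z.max() + 1.0)   # the cloth starts above the inverted cloud
    prev_z = cloth_z.copy()
    movable = np.ones((nx, ny), dtype=bool)

    for _ in range(max_iter):
        before = cloth_z.copy()
        # S230: displacement under gravity, X(t+dt) = 2X(t) - X(t-dt) + (G/a)*dt^2 with a = 1;
        # nodes that reach their point's elevation are clamped and marked unmovable.
        new_z = np.where(movable, 2 * cloth_z - prev_z + g * dt ** 2, cloth_z)
        landed = new_z <= ref_z
        new_z = np.where(landed, ref_z, new_z)
        movable &= ~landed
        prev_z, cloth_z = cloth_z, new_z
        # S240: influence of adjacent nodes, reduced here to one smoothing pass.
        padded = np.pad(cloth_z, 1, mode='edge')
        neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        cloth_z = np.where(movable, (cloth_z + neigh_mean) / 2.0, cloth_z)
        # S250: stop when the largest elevation change of all nodes is small enough.
        if np.abs(cloth_z - before).max() < height_tol:
            break

    # S260: a point is ground if its distance to the settled cloth is below the threshold L.
    dist = np.abs(inv_z - cloth_z[ix, iy])
    return dist < class_threshold
```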
S300, extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain identifications of different types of ground features.
In step S300, extracting various ground features class by class from the non-ground point cloud by using a multi-dimensional method specifically includes:
s310, extracting tree point cloud data in non-ground point cloud by adopting an elevation value and watershed algorithm.
As shown in fig. 3, in step S310, the tree extraction, that is, the street tree extraction specifically includes:
S311, carrying out grid division on the non-ground point cloud data, and projecting the non-ground point cloud data into a gray image;
S312, performing watershed segmentation on the gray image to determine a street tree contour;
S313, extracting the street tree point cloud from the non-ground point cloud data according to the street tree contour.
The non-ground point cloud data is divided into grids and projected into a gray image; watershed segmentation is performed on the gray image to determine the street tree contours, and the complete street tree point cloud is extracted from the non-ground point cloud data according to these contours. The elevation difference at different positions of the non-ground point cloud is obtained from the elevation model, and the gray image is segmented according to this elevation difference to determine the street tree contour. For example, fig. 4 shows the elevation difference distribution of a single street tree, and fig. 5 shows the elevation difference distribution waveform at different positions of the point cloud model of a single street tree; the horizontal axis is the projected diameter D of the crown of the single street tree on the XOY plane, and the vertical axis is the elevation difference. It can be seen that the closer a position is to the crown center or trunk, the larger the elevation difference, i.e. the closer to a peak; the farther from the crown center or trunk, the smaller the elevation difference, i.e. the closer to a valley.
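For illustration, the sketch below rasterizes the non-ground cloud into an elevation-difference image, takes local maxima (crown centers, cf. figs. 4 and 5) as watershed markers and maps the crown regions back to the points, following steps S311 to S313. The cell size, the peak spacing and the use of scikit-image routines are assumptions for illustration.

```python
import numpy as np
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def extract_tree_crowns(points, cell=0.25, min_peak_distance=10):
    """Watershed-based street-tree extraction sketch (steps S311-S313).

    points: (N, 3) non-ground points. Cell size and peak distance are illustrative
    assumptions. Returns one crown label per point (0 = not assigned to a crown).
    """
    xy_min = points[:, :2].min(axis=0)
    nx, ny = (np.ceil((points[:, :2].max(axis=0) - xy_min) / cell).astype(int) + 1)
    ix = np.minimum(((points[:, 0] - xy_min[0]) / cell).astype(int), nx - 1)
    iy = np.minimum(((points[:, 1] - xy_min[1]) / cell).astype(int), ny - 1)

    # S311: project the non-ground cloud into a grey image; here the grey value is the
    # elevation difference (max z minus min z) within each grid cell.
    z_max = np.full((nx, ny), -np.inf)
    z_min = np.full((nx, ny), np.inf)
    np.maximum.at(z_max, (ix, iy), points[:, 2])
    np.minimum.at(z_min, (ix, iy), points[:, 2])
    occupied = np.isfinite(z_max)
    height_diff = np.where(occupied, z_max - z_min, 0.0)

    # S312: watershed segmentation; crown centres (trunks) appear as peaks of the
    # elevation-difference image, as in the waveform of figs. 4-5.
    peaks = peak_local_max(height_diff, min_distance=min_peak_distance,
                           exclude_border=False)
    markers = np.zeros_like(height_diff, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    crown_labels = watershed(-height_diff, markers, mask=occupied)

    # S313: map each point back to its crown label via its grid cell.
    return crown_labels[ix, iy]
```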
And S320, extracting the rod-shaped object point cloud data in the non-ground point cloud through the elevation value and hierarchical clustering.
As shown in fig. 6, in step S320, extracting the rod-like point cloud data in the non-ground point cloud specifically includes:
S321, layering the non-ground point cloud according to preset elevation intervals;
S322, clustering the non-ground point cloud data of each layer;
S323, performing secondary clustering on the layered clustering results in the elevation direction according to the characteristics of rod-shaped ground objects, and extracting the rod-shaped ground object point cloud.
The non-ground point cloud is layered according to preset elevation intervals and the point cloud data of each layer is clustered; the layered clustering results are then clustered again in the elevation direction, according to the characteristic that rod-shaped ground objects extend continuously in the elevation direction, so as to identify complete rod-shaped ground object point clouds.
Further, a high-to-low data processing method is adopted to layer the non-ground point cloud according to the preset elevation intervals: the maximum and minimum values of the non-ground point cloud in the elevation direction are determined, the number of layers in the elevation direction is derived from them, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Because a high-to-low processing flow is used in the elevation direction, no additional data filtering is needed, the processing flow is simplified, few parameters are involved, and the processing is efficient and highly automated.
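For illustration, the following sketch layers the non-ground cloud by elevation, clusters each layer in the XY plane and then stacks the per-layer clusters in the elevation direction, following steps S321 to S323. The use of DBSCAN for the per-layer clustering and all threshold values are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_pole_like_objects(points, layer_height=0.5, eps=0.3, min_samples=10,
                              xy_merge_dist=0.4, min_layers=6, max_footprint=0.6):
    """Layered-clustering sketch for rod-shaped (pole-like) ground objects (steps S321-S323).

    points: (N, 3) non-ground points. All thresholds are illustrative assumptions.
    Returns a boolean mask of points belonging to rod-shaped objects.
    """
    z0 = points[:, 2].min()
    n_layers = int(np.ceil((points[:, 2].max() - z0) / layer_height)) + 1

    # S321-S322: slice the cloud by elevation and cluster each slice in the XY plane.
    segments = []          # (layer index, xy centroid, horizontal radius, point indices)
    for k in range(n_layers):
        in_layer = np.where((points[:, 2] >= z0 + k * layer_height) &
                            (points[:, 2] < z0 + (k + 1) * layer_height))[0]
        if len(in_layer) < min_samples:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[in_layer, :2])
        for lab in set(labels) - {-1}:
            idx = in_layer[labels == lab]
            centroid = points[idx, :2].mean(axis=0)
            radius = np.linalg.norm(points[idx, :2] - centroid, axis=1).max()
            segments.append((k, centroid, radius, idx))

    # S323: secondary clustering in the elevation direction -- stack slice clusters whose
    # XY centroids line up; a rod-shaped object is a narrow stack spanning many layers.
    mask = np.zeros(len(points), dtype=bool)
    used = [False] * len(segments)
    for i, (_, c0, _, _) in enumerate(segments):
        if used[i]:
            continue
        stack = [j for j, (_, c, _, _) in enumerate(segments)
                 if not used[j] and np.linalg.norm(c - c0) < xy_merge_dist]
        layer_span = {segments[j][0] for j in stack}
        footprint = max(segments[j][2] for j in stack)
        if len(layer_span) >= min_layers and footprint < max_footprint:
            for j in stack:
                used[j] = True
                mask[segments[j][3]] = True
    return mask
```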
S330, extracting the building point cloud data in the non-ground point cloud based on the elevation value and the multilevel semantics.
As shown in fig. 7, in step S330, extracting the building point cloud data in the non-ground point cloud specifically includes:
S331, extracting the high-rise building point cloud above a threshold height through single-point semantic features;
S332, projecting the point cloud below the threshold height together with the high-rise building point cloud onto an XOY plane, dividing it into grids of a preset size, and selecting interest grids according to grid semantic features;
S333, performing connectivity analysis on the interest grids to obtain object regions, and extracting the building facade point cloud based on region semantic features.
First, points lower than buildings are eliminated through the single-point semantic feature, namely the elevation value of each point, and at the same time the high-rise building point cloud, containing only buildings above a certain height, is extracted. Then, the remaining point cloud and the high-rise building point cloud are projected onto the XOY plane, the plane is divided into grids of a certain size, and interest grids are selected according to grid semantic features. Finally, connectivity analysis is performed on the interest grids to obtain object regions, and the building facade point cloud is accurately extracted based on region semantic features.
Further, the connectivity analysis of the interest grids specifically determines whether the projected point clouds connect adjacent grids: if the projected points in a grid form only a small block in the middle, separated from the grid boundary by a gap, the grid is not connected with its surrounding grids; otherwise, the grid is connected with the surrounding grids.
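For illustration, the sketch below follows steps S331 to S333: a single-point height check for high-rise points, grid semantics (point count and vertical extent per cell) for selecting interest grids, and a connected-component analysis of the interest grids. All thresholds and the choice of per-cell semantic features are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

def extract_building_facades(points, height_threshold=10.0, cell=1.0,
                             min_points_per_cell=30, min_height_range=3.0,
                             min_region_cells=20):
    """Multi-level-semantics building extraction sketch (steps S331-S333).

    points: (N, 3) non-ground points. All thresholds are illustrative assumptions.
    Returns a boolean mask of building facade points.
    """
    # S331: single-point semantics -- points above the threshold height are taken
    # as high-rise building points.
    z_rel = points[:, 2] - points[:, 2].min()
    high_rise = z_rel > height_threshold

    # S332: project the cloud below the threshold together with the high-rise cloud
    # onto the XOY plane, divide it into cells of a preset size and keep "interest"
    # cells whose grid semantics (point count, vertical extent) look facade-like.
    xy_min = points[:, :2].min(axis=0)
    nx, ny = (np.ceil((points[:, :2].max(axis=0) - xy_min) / cell).astype(int) + 1)
    ix = np.minimum(((points[:, 0] - xy_min[0]) / cell).astype(int), nx - 1)
    iy = np.minimum(((points[:, 1] - xy_min[1]) / cell).astype(int), ny - 1)

    count = np.zeros((nx, ny))
    z_max = np.full((nx, ny), -np.inf)
    z_min = np.full((nx, ny), np.inf)
    np.add.at(count, (ix, iy), 1)
    np.maximum.at(z_max, (ix, iy), points[:, 2])
    np.minimum.at(z_min, (ix, iy), points[:, 2])

    has_high_rise = np.zeros((nx, ny), dtype=bool)
    has_high_rise[ix[high_rise], iy[high_rise]] = True
    interest = (count >= min_points_per_cell) & ((z_max - z_min) >= min_height_range)
    interest |= has_high_rise

    # S333: connectivity analysis of the interest cells; connected regions of sufficient
    # size are treated as building object regions, and the points falling in them form
    # the building facade point cloud.
    regions, n_regions = ndi.label(interest)
    sizes = ndi.sum(interest, regions, index=np.arange(1, n_regions + 1))
    keep = [r + 1 for r, s in enumerate(sizes) if s >= min_region_cells]
    return np.isin(regions[ix, iy], keep)
```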
Further, when the ground feature point cloud data is extracted by the multi-dimensional (multi-category) method, one category of ground features is extracted and identified in each dimension; the ranges of the various ground features may be extracted one by one and then fused, or they may be extracted simultaneously, to complete the ground feature identification.
Based on the same inventive concept, a first embodiment of the point cloud data-based streetscape ground feature multi-dimensional extraction system provided by the embodiment of the present invention is described in detail below with reference to figs. 4 to 5 and fig. 8. Since the principle by which the system solves the problem is similar to that of the point cloud data-based streetscape ground feature multi-dimensional extraction method, the implementation of the system may refer to the implementation of the method, and repeated parts are not described again.
The embodiment is mainly applied to point cloud data extraction; different types of ground features are extracted from non-ground point cloud data by a multi-dimensional method, so that street ground features can be extracted quickly, effectively and accurately.
As shown in fig. 8, the present embodiment mainly includes: a preprocessing module 400, an extraction module 500, and an identification module 600.
The preprocessing module 400 preprocesses original point cloud data, wherein the original point cloud data comprises absolute coordinates and elevation information; the extraction module 500 extracts a ground point cloud and a non-ground point cloud from the point cloud data by using a cloth simulation algorithm; and the identification module 600 extracts various ground features class by class from the non-ground point cloud by using a multi-dimensional method to obtain identifications of different types of ground features.
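For illustration of how the three modules cooperate, the sketch below chains the hypothetical helper functions from the method embodiment above (denoise, csf_ground_filter, extract_tree_crowns, extract_pole_like_objects, extract_building_facades); the class name and the returned dictionary layout are assumptions for illustration, not the modules' concrete implementation.

```python
import numpy as np

class StreetFeatureExtractionSystem:
    """Illustrative composition of the preprocessing module 400, extraction module 500
    and identification module 600. The helper functions called below are the sketches
    given for the method embodiment above (assumed names, not a prescribed API)."""

    def run(self, raw_points: np.ndarray) -> dict:
        # Preprocessing module 400: denoise / filter the raw point cloud.
        points = denoise(raw_points)
        # Extraction module 500: cloth simulation ground filtering.
        ground_mask = csf_ground_filter(points)
        non_ground = points[~ground_mask]
        # Identification module 600: extract each ground feature class in its own dimension.
        return {
            "ground": points[ground_mask],
            "tree_labels": extract_tree_crowns(non_ground),
            "pole_mask": extract_pole_like_objects(non_ground),
            "building_mask": extract_building_facades(non_ground),
        }
```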
Further, in the present application, a vehicle-mounted mobile measurement system is adopted to collect the original point cloud data; the data obtained by the vehicle-mounted laser scanning system specifically includes laser point cloud data, which is a point set with real three-dimensional spatial coordinates.
Furthermore, after the original point cloud data is collected, it can be preprocessed by techniques such as denoising and filtering, so that the precision of the point cloud data is improved.
Furthermore, the vehicle-mounted laser scanning system is a vehicle-mounted mobile data acquisition system integrating technologies such as GPS positioning, INS inertial navigation, CCD video and automatic control. It supports two acquisition modes: rapidly acquiring measurable stereo images of streets and roads with positioning information in the field, and acquiring 360-degree single-point panoramic images with a professional camera in a vehicle-mounted remote sensing mode. The acquired data has the advantages of simple and rapid updating, high efficiency, rich information content and the like.
In a possible implementation, the extraction module 500 extracts the ground point cloud and the non-ground point cloud from the point cloud data by using the cloth simulation algorithm, specifically: initializing the cloth grid, and determining the number of grid nodes according to the grid resolution; projecting the point cloud data and the grid points to the same horizontal plane, determining the point cloud data corresponding to each grid point, and marking the elevation values of the corresponding point cloud data points; calculating the position to which each grid node moves under the action of gravity, and comparing the elevation of this position with the elevation of the corresponding point cloud data point; if the node elevation is less than or equal to the point cloud data elevation, the node position is replaced with the position of the corresponding point cloud data point and the node is marked as an unmovable point, where the position of the cloth grid point after displacement under the action of gravity is calculated by the following formula:
X(t + Δt) = 2X(t) − X(t − Δt) + (G/a)·Δt²
wherein X(t) is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant value), and a is the mass of the cloth grid point, set to the constant 1; calculating the position to which each grid node moves under the influence of its adjacent nodes; repeating the gravity step and the adjacent-node step, and terminating the simulation process when the maximum elevation change of all nodes is small enough or the maximum number of iterations is exceeded; and classifying the ground point cloud and the non-ground point cloud: the distance between each grid point and its corresponding point cloud data point is calculated; if this distance is less than a threshold L, the point cloud data point is classified as a ground point, otherwise it is classified as a non-ground point.
In one possible implementation, the identification module 600 includes a tree identification unit 610, a rod-shaped object identification unit 620 and a building identification unit 630, wherein the tree identification unit 610 extracts tree point cloud data from the non-ground point cloud by using elevation values and a watershed algorithm; the rod-shaped object identification unit 620 extracts rod-shaped object point cloud data from the non-ground point cloud through elevation values and hierarchical clustering; and the building identification unit 630 extracts building point cloud data from the non-ground point cloud based on elevation values and multi-level semantics.
In one possible embodiment, the tree identification unit 610 includes a dividing subunit 611, a segmentation subunit 612 and a tree extraction subunit 613; the dividing subunit 611 performs grid division on the non-ground point cloud data and projects it into a gray image; the segmentation subunit 612 performs watershed segmentation on the gray image to determine a street tree contour; and the tree extraction subunit 613 extracts the street tree point cloud from the non-ground point cloud data according to the street tree contour.
The non-ground point cloud data is divided into grids and projected into a gray image; watershed segmentation is performed on the gray image to determine the street tree contours, and the complete street tree point cloud is extracted from the non-ground point cloud data according to these contours. The elevation difference at different positions of the non-ground point cloud is obtained from the elevation model, and the gray image is segmented according to this elevation difference to determine the street tree contour. For example, fig. 4 shows the elevation difference distribution of a single street tree, and fig. 5 shows the elevation difference distribution waveform at different positions of the point cloud model of a single street tree; the horizontal axis is the projected diameter D of the crown of the single street tree on the XOY plane, and the vertical axis is the elevation difference. It can be seen that the closer a position is to the crown center or trunk, the larger the elevation difference, i.e. the closer to a peak; the farther from the crown center or trunk, the smaller the elevation difference, i.e. the closer to a valley.
In one possible implementation, the rod-shaped object identification unit 620 includes a layering subunit 621, a clustering subunit 622 and a rod-shaped ground object extraction subunit 623, wherein the layering subunit 621 layers the non-ground point cloud according to preset elevation intervals; the clustering subunit 622 clusters the non-ground point cloud data of each layer; and the rod-shaped ground object extraction subunit 623 performs secondary clustering on the layered clustering results in the elevation direction according to the characteristics of rod-shaped ground objects, and extracts the rod-shaped ground object point cloud.
The non-ground point cloud is layered according to preset elevation intervals and the point cloud data of each layer is clustered; the layered clustering results are then clustered again in the elevation direction, according to the characteristic that rod-shaped ground objects extend continuously in the elevation direction, so as to identify complete rod-shaped ground object point clouds.
Further, a high-to-low data processing method is adopted to layer the non-ground point cloud according to the preset elevation intervals: the maximum and minimum values of the non-ground point cloud in the elevation direction are determined, the number of layers in the elevation direction is derived from them, the elevation threshold of each layer and the spacing between layers are set, and the non-ground point cloud is clustered according to these rules. Because a high-to-low processing flow is used in the elevation direction, no additional data filtering is needed, the processing flow is simplified, few parameters are involved, and the processing is efficient and highly automated.
In one possible implementation, the building identification unit 630 includes a semantic subunit 631, a grid subunit 632 and a building facade extraction subunit 633, wherein the semantic subunit 631 extracts the high-rise building point cloud above a threshold height through single-point semantic features; the grid subunit 632 projects the point cloud below the threshold height together with the high-rise building point cloud onto the XOY plane, divides it into grids of a preset size, and selects interest grids according to grid semantic features; and the building facade extraction subunit 633 performs connectivity analysis on the interest grids to obtain object regions, and extracts the building facade point cloud based on region semantic features.
First, points lower than buildings are eliminated through the single-point semantic feature, namely the elevation value of each point, and at the same time the high-rise building point cloud, containing only buildings above a certain height, is extracted. Then, the remaining point cloud and the high-rise building point cloud are projected onto the XOY plane, the plane is divided into grids of a certain size, and interest grids are selected according to grid semantic features. Finally, connectivity analysis is performed on the interest grids to obtain object regions, and the building facade point cloud is accurately extracted based on region semantic features.
Further, the connectivity analysis of the interest grids specifically determines whether the projected point clouds connect adjacent grids: if the projected points in a grid form only a small block in the middle, separated from the grid boundary by a gap, the grid is not connected with its surrounding grids; otherwise, the grid is connected with the surrounding grids.
Further, when the ground feature point cloud data is extracted by the multi-dimensional (multi-category) method, one category of ground features is extracted and identified in each dimension; the ranges of the various ground features may be extracted one by one and then fused, or they may be extracted simultaneously, to complete the ground feature identification.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.