CN110992341A - Segmentation-based airborne LiDAR point cloud building extraction method - Google Patents

Segmentation-based airborne LiDAR point cloud building extraction method


Publication number
CN110992341A
CN110992341A (application CN201911226845.1A)
Authority
CN
China
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN201911226845.1A
Other languages
Chinese (zh)
Inventor
刘茂华
邵悦
王岩
杜茜诗慧
张丹华
由迎春
Current Assignee
Shenyang Jianzhu University
Original Assignee
Shenyang Jianzhu University
Application filed by Shenyang Jianzhu University
Priority to CN201911226845.1A
Publication of CN110992341A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10044 - Radar image

Abstract

A segmentation-based method for extracting buildings from airborne LiDAR point clouds comprises the following steps: (1) loading airborne LiDAR point cloud data; (2) identifying noise points in the airborne LiDAR point cloud and eliminating them; (3) performing cloth simulation filtering to separate ground points from non-ground points; (4) performing region-growing segmentation on the filtered non-ground point cloud; (5) calculating the direction cosines of the local normal vectors of each segmented cluster, generating histograms, and separating the building point cloud from the non-building point cloud through the generated histograms, thereby accurately extracting the building point cloud. The invention provides a simple and efficient histogram method for distinguishing buildings from non-buildings. Based on the difference in normal vector characteristics between building roofs and vegetation surfaces, a PCL-based region-growing algorithm is used to segment the non-ground points in three dimensions, and the histogram method is combined to distinguish buildings from non-buildings, so that building point cloud data are accurately extracted.

Description

Segmentation-based airborne LiDAR point cloud building extraction method
Technical Field
The invention belongs to the technical field of extraction of airborne laser LiDAR point cloud data information, and particularly relates to an extraction method of an airborne LiDAR point cloud building based on segmentation.
Background
When airborne laser radar equipment operates, the laser scanning process is indiscriminate: laser pulses may strike the ground, or they may strike man-made objects or vegetation such as buildings, bridges, power lines, lighthouses, and vehicles. The acquired airborne laser radar point cloud therefore contains both ground points and object points. Point cloud classification is the basis for subsequent applications. At present, building extraction from airborne laser radar point clouds is one of the key steps of point cloud classification, and is a hotspot and difficulty of research.
The existing building extraction methods can be roughly divided into two types. One type directly classifies the LiDAR data according to features and finally extracts the building point cloud: Rottensteiner and Briese extracted buildings by combining a height-difference threshold, point cloud depth, and image texture features; Zhang et al. first separated non-ground points with a filtering algorithm, overlaid them on the corresponding color-infrared (CIR) images, calculated the NDVI value of each non-ground point, removed most vegetation points according to differences in NDVI, and finally extracted buildings with a Euclidean clustering algorithm based on multi-echo and area features; Cheng et al. achieved automatic extraction of building point clouds with a mathematical morphology algorithm based on reverse iteration; Cao Chong et al. extracted the building point cloud with a region-growing algorithm using a gradient threshold and area features. The other type comprises object-oriented classification methods, whose idea is segmentation followed by classification: the data points are first segmented into multiple objects, and the segmented objects are then classified according to features, so that the building point cloud is extracted. A common region-growing segmentation algorithm divides the point cloud into multiple homogeneous regions, but when vegetation is in close proximity to a building roof, some of the resulting homogeneous regions will contain other object points.
Niemeyer et al. used a machine-learning-based conditional random field (CRF) model, which provides a powerful probabilistic framework for classification, and realized building point cloud extraction with a random forest classifier; Richter et al. partitioned the point cloud with a smoothness-constrained region-growing segmentation algorithm and then extracted the building point cloud with a multi-pass iterative algorithm combining a height-difference threshold and area features; Awrangjeb and Fraser combined features such as area, height difference, spatial position, and point cloud coplanarity to realize building point cloud extraction; Zhang et al. segmented the point cloud with a region-growing algorithm combining topological, geometric, echo, and radiometric features, and extracted the building point cloud with connected-component analysis and a support vector machine (SVM); Li Liang extracted building point clouds layer by layer: the point cloud is first segmented with a region-growing algorithm, the initial building point cloud is then clustered with Euclidean clustering based on connected-component analysis, and buildings and vegetation are finally further distinguished by combining features such as area and height difference.
However, building extraction methods based on these two ideas are complex in process and need to combine multiple feature parameters; this research therefore proposes a simple, efficient, and accurate building extraction method. Building areas and vegetation areas are the main components of the non-ground points, and the normal vector directions of a building roof are basically consistent while the normal vectors of a vegetation surface vary greatly. According to this difference in normal vector characteristics between building roofs and vegetation surfaces, a PCL-based region-growing algorithm is used to segment the non-ground points in three dimensions, and a novel histogram method is combined to distinguish buildings from non-buildings, so that building point cloud data are accurately extracted.
Disclosure of Invention
Aiming at the technical problems that the existing building extraction method is complex in process and needs to combine multiple parameters, the invention provides a segmentation-based airborne LiDAR point cloud building extraction method.
The purpose of the invention is realized by the following technical scheme:
a segmentation-based airborne LiDAR point cloud building extraction method is characterized by comprising the following steps: the method comprises the following steps:
Step 1: loading airborne laser LiDAR point cloud data;
Step 2: identifying noise points in the airborne laser LiDAR point cloud and eliminating them;
Step 3: performing cloth simulation filtering to separate ground points from non-ground points;
Step 4: performing region-growing segmentation on the filtered non-ground point cloud;
Step 5: calculating the direction cosines of the local normal vectors of each segmented cluster to generate histograms, and separating the building point cloud from the non-building point cloud through the generated histograms, thereby accurately extracting the building point cloud.
The second step specifically comprises:
(21) Initialization preprocessing: mark the category of each point in the airborne laser radar point cloud, labeling unclassified points as 1;
(22) visually analyze the elevation distribution of the airborne laser radar point cloud to check for gross errors; if the point cloud data contain high or low gross errors, proceed directly to step (23); if the point cloud data contain no high or low gross errors, proceed to step three.
(23) Separate the high gross errors and the low gross errors.
The third step comprises the following steps:
(31) Convert the geometric coordinates of the point cloud so as to invert the original laser point cloud;
(32) initialize the cloth grid and determine the number of grid nodes according to the grid resolution;
(33) project the laser points and the grid nodes onto the same horizontal plane, find the laser point corresponding to each grid node, and record that laser point's elevation value;
(34) calculate the position of each grid node after it moves under gravity, and compare its elevation with that of the corresponding laser point; if the node's elevation is less than or equal to the laser point's elevation, replace the node's position with that of the corresponding laser point and mark the node as an unmovable point. The position of a cloth node after displacement under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) - X(t - Δt) + (G/m)·Δt²
where X is the position of the cloth grid node at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid node, set to the constant 1.
(35) Calculate the position of each grid node as it moves under the influence of its neighboring nodes. In addition, to constrain the movement of cloth nodes over blank areas of the inverted surface, which usually correspond to buildings or micro-terrain such as pits, the position of each cloth node must be corrected after it has been moved by the forces between neighboring nodes; the height difference between neighboring cloth nodes must therefore be calculated. If two neighboring nodes are both movable and have different elevations, they move the same distance in opposite vertical directions; if one of them is unmovable, only the movable one moves; if both are at the same elevation, neither moves. The corrected displacement of each cloth node is calculated by the following formula:
d = (1/2) · b · ((p_i - p_0) · n) · n
where d is the displacement vector of the node; p_0 is the current position of the node to be moved; p_i is the position of a neighboring node of p_0; n is the unit normal vector in the vertical direction, n = (0, 0, 1)^T; and b is a parameter determining whether the node moves: b = 1 if the node is movable, otherwise b = 0.
(36) Repeat steps (34) and (35), terminating the simulation when the maximum elevation change of all nodes is sufficiently small or the maximum number of iterations is exceeded;
(37) classify ground and non-ground points by calculating the distance between each laser point and the corresponding grid node: if the distance is less than a threshold h, the laser point is classified as a ground point; otherwise it is classified as a non-ground point.
The fourth step comprises the following steps:
(41) Select the point with the minimum curvature value as the initial seed point, add it to the seed sequence, and mark it as the current region. For a point p on the point cloud surface Q, let the covariance matrix formed by p and its neighborhood points be C_{3×3}. The curvature k_p of point p is then estimated by the following formulas:
C_{3×3} = (1/k) Σ_{i=1}^{k} (P_i - P̄)(P_i - P̄)^T, where P̄ = (1/k) Σ_{i=1}^{k} P_i

k_p = k_0 / (k_0 + k_1 + k_2)
where P_1, P_2, P_3 … P_k are the k nearest neighbors of p, and k_0, k_1, k_2 are the eigenvalues of the covariance matrix C_{3×3}, with k_0 the minimum eigenvalue and k_p ∈ [0, 1/3];
(42) search for the k nearest neighbors of the current seed point and compute the normal direction of each neighbor and of the seed point; if the angle between a neighbor's normal and the current seed point's normal is less than a threshold, add the neighbor to the current region;
(43) compute the curvature values of the k nearest neighbors of the current seed point; if a neighbor's curvature is smaller than a threshold, add it to the seed point sequence; then remove the current seed point, and iterate these steps until the seed sequence is empty;
(44) iterate until all points are marked as belonging to different regions; once every point in the point set has been processed, region growing ends, after which the number of points in each homogeneous region is counted and checked against the minimum-number threshold.
The fifth step extracts buildings from the region-grown, segmented point cloud by the histogram method:
(51) Calculate the direction cosines of the local normal vectors of each cluster with respect to the X, Y, and Z directions;
(52) generate histograms from the distribution of the cosine values;
(53) extract the corresponding point clouds according to the characteristics of the histogram distributions, thereby distinguishing buildings from vegetation.
The invention has the beneficial effects that:
1. The invention provides a simple and efficient histogram method for distinguishing buildings from non-buildings. Based on the difference in normal vector characteristics between building roofs and vegetation surfaces, a PCL-based region-growing algorithm is used to segment the non-ground points in three dimensions, and the histogram method is combined to distinguish buildings from non-buildings, so that building point cloud data are accurately extracted.
2. The method creatively and organically combines the steps of noise point elimination, cloth simulation filtering, region-growing segmentation, and histogram-based building point cloud extraction into a complete technical workflow for segmentation-based airborne LiDAR point cloud building extraction, providing an effective approach for airborne LiDAR building extraction.
3. Compared with existing airborne laser radar building point cloud extraction methods, the method has a simpler algorithm, higher extraction accuracy, and higher filtering accuracy.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2(a) is a high gross error point;
FIG. 2(b) is a low gross error point;
FIG. 3(a) is raw data before filtering for dataset 1;
FIG. 3(b) shows the non-ground points of dataset 1 after filtering;
FIG. 4(a) is raw data before filtering for data set 2;
FIG. 4(b) is a filtered non-ground point of dataset 2;
FIG. 5(a) is the X-direction tree normal histogram for dataset 1;
FIG. 5(b) is the Y-direction tree normal histogram for dataset 1;
FIG. 5(c) is the Z-direction tree normal histogram for dataset 1;
FIG. 5(d) is the X-direction building normal histogram for dataset 1;
FIG. 5(e) is the Y-direction building normal histogram for dataset 1;
FIG. 5(f) is the Z-direction building normal histogram for dataset 1;
FIG. 6(a) is the X-direction tree normal histogram for dataset 2;
FIG. 6(b) is the Y-direction tree normal histogram for dataset 2;
FIG. 6(c) is the Z-direction tree normal histogram for dataset 2;
FIG. 6(d) is the X-direction building normal histogram for dataset 2;
FIG. 6(e) is the Y-direction building normal histogram for dataset 2;
FIG. 6(f) is the Z-direction building normal histogram for dataset 2;
FIG. 7(a) shows the building extraction results of the present method for dataset 1;
FIG. 7(b) shows the building extraction results of the present method for dataset 2;
FIG. 8(a) shows the TerraSolid-based building extraction results for dataset 1;
FIG. 8(b) shows the TerraSolid-based building extraction results for dataset 2.
Detailed Description
To clearly explain the technical features of the present invention, the following detailed description is provided with reference to the accompanying drawings. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components, processing techniques, and procedures are omitted so as not to unnecessarily obscure the invention.
Example (b): aiming at the requirement of point cloud extraction of an airborne laser radar building, the invention provides a segmentation-based airborne laser radar building point cloud extraction method, which distinguishes buildings and vegetation by combining a novel histogram method and solves the problem that the existing building extraction needs to combine various characteristic parameters.
As shown in the flowchart of fig. 1, the present invention provides a segmentation-based airborne LiDAR point cloud building extraction method for the needs of point cloud extraction of airborne LiDAR buildings, which comprises the following steps:
Step 1: loading airborne laser LiDAR point cloud data;
Step 2: identifying noise points in the airborne laser LiDAR point cloud and eliminating them;
Step 3: performing cloth simulation filtering to separate ground points from non-ground points;
Step 4: performing region-growing segmentation on the filtered non-ground point cloud;
Step 5: calculating the direction cosines of the local normal vectors of each segmented cluster to generate histograms, and effectively separating the building point cloud from the non-building point cloud according to the histograms, thereby accurately extracting the building point cloud.
Further, the second step specifically includes the following steps:
(21) Preprocessing
Mark the category number of every point in the airborne laser radar point cloud to be processed as "1", where "1" denotes an unclassified point;
(22) visually analyze the elevation distribution of the airborne laser radar point cloud; if the point cloud data contain high or low gross errors, proceed to step (23) to separate them; if there are no high or low gross errors, skip the rest of step two and proceed directly to step three;
Gross errors are one of the key factors affecting the filtering of airborne laser radar point cloud data, because most filtering algorithms select local lowest points as initial ground points. If a local lowest point is a gross error point, the ground points adjacent to it are easily misjudged as object points, which seriously degrades the filtering result within a certain area. Gross error rejection is therefore one of the prerequisites for airborne lidar data filtering. Because some gross errors in airborne laser radar point cloud data are obviously higher or lower than the other lidar points (as shown in fig. 2), they can be eliminated by analyzing the elevation;
(23) Separate the high and low gross errors:
Low gross error separation isolates points that are lower than their neighbors. The basic principle of the algorithm is to compare the elevation of a point (the center point) with the elevation of every point within a given distance; if the center point is obviously lower than the others, it is separated into its own class. Alternatively, when the error point density is high and several error points lie close together, the dense group of error points is separated from the surrounding points as a whole.
A high gross error is, as the name implies, a point whose elevation is significantly higher than the mean elevation of all surrounding points in the data set. To separate high gross errors, a point to be judged is first set as the center (target) point, a buffer with a three-dimensional search radius is constructed, and the points contained in this three-dimensional buffer are taken as the target point's neighbors. The elevation of the target point is compared with the mean elevation of its neighbors and the difference is computed; if the difference reaches a specified multiple of the elevation standard deviation, the point is classified as a high gross error.
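The neighborhood-based elevation test described above can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the function name, the search radius, and the k-sigma multiple are all assumptions, and a 2-D horizontal search stands in for the three-dimensional buffer.

```python
# Sketch: flag elevation gross errors by comparing each point's elevation
# with the mean and standard deviation of its neighbors' elevations.
# radius and k_sigma are illustrative parameters, not from the patent.
import numpy as np
from scipy.spatial import cKDTree

def flag_elevation_gross_errors(points, radius=5.0, k_sigma=3.0):
    """points: (n, 3) array of x, y, z. Returns a boolean mask of
    points whose elevation deviates from the neighborhood mean by
    more than k_sigma standard deviations (high or low gross errors)."""
    xy, z = points[:, :2], points[:, 2]
    tree = cKDTree(xy)
    flags = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(xy, r=radius)):
        nbrs = [j for j in nbrs if j != i]      # exclude the center point
        if len(nbrs) < 3:
            continue
        mu, sigma = z[nbrs].mean(), z[nbrs].std()
        if sigma > 0 and abs(z[i] - mu) > k_sigma * sigma:
            flags[i] = True
    return flags

# Tiny synthetic demo: near-flat ground plus one low gross error point.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, (50, 2)), rng.normal(0, 0.05, 50)])
pts = np.vstack([pts, [5.0, 5.0, -20.0]])        # low gross error
flags = flag_elevation_gross_errors(pts)
```

Note that a dense cluster of nearby error points (the second case in the text) would need the extra grouping step the patent describes; this sketch handles only isolated outliers.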
Further, the third step specifically includes the following steps:
(31) Convert the geometric coordinates of the point cloud so as to invert the original laser point cloud;
(32) initialize the cloth grid and determine the number of grid nodes according to the grid resolution;
(33) project the laser points and the grid nodes onto the same horizontal plane, find the laser point corresponding to each grid node, and record that laser point's elevation value;
(34) calculate the position of each grid node after it moves under gravity, and compare its elevation with that of the corresponding laser point; if the node's elevation is less than or equal to the laser point's elevation, replace the node's position with that of the corresponding laser point and mark the node as an unmovable point. The position of a cloth node after displacement under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) - X(t - Δt) + (G/m)·Δt²
where X is the position of the cloth grid node at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid node, set to the constant 1.
(35) Calculate the position of each grid node as it moves under the influence of its neighboring nodes. In addition, to constrain the movement of cloth nodes over blank areas of the inverted surface, which usually correspond to buildings or micro-terrain such as pits, the position of each cloth node must be corrected after it has been moved by the forces between neighboring nodes; the height difference between neighboring cloth nodes must therefore be calculated. If two neighboring nodes are both movable and have different elevations, they move the same distance in opposite vertical directions; if one of them is unmovable, only the movable one moves; if both are at the same elevation, neither moves. The corrected displacement of each cloth node is calculated by the following formula:
d = (1/2) · b · ((p_i - p_0) · n) · n
where d is the displacement vector of the node; p_0 is the current position of the node to be moved; p_i is the position of a neighboring node of p_0; n is the unit normal vector in the vertical direction, n = (0, 0, 1)^T; and b is a parameter determining whether the node moves: b = 1 if the node is movable, otherwise b = 0.
(36) Repeat steps (34) and (35), terminating the simulation when the maximum elevation change of all nodes is sufficiently small or the maximum number of iterations is exceeded;
(37) classify ground and non-ground points by calculating the distance between each laser point and the corresponding grid node: if the distance is less than a threshold h, the laser point is classified as a ground point; otherwise it is classified as a non-ground point.
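The two update formulas of steps (34) and (35) can be sketched for one simulation round as follows. This is a minimal sketch, assuming a 1-D strip of cloth nodes and working only with elevations (movement is vertical); the gravity constant, time step, and clamped elevation are illustrative values, not from the patent.

```python
# Sketch of one cloth-simulation round: gravity step, clamping against a
# laser point, then the internal-force correction between neighbors.
import numpy as np

def gravity_step(z_curr, z_prev, G=-0.65, m=1.0, dt=1.0):
    # X(t + dt) = 2 X(t) - X(t - dt) + (G / m) dt^2, applied to elevations
    return 2.0 * z_curr - z_prev + (G / m) * dt ** 2

def internal_force_step(z, movable):
    # d = 1/2 * b * ((p_i - p_0) . n) n with n = (0, 0, 1)^T: each movable
    # node (b = 1) moves by half its elevation difference to the neighbor;
    # an unmovable node (b = 0) stays put, so only the movable one moves.
    z = z.copy()
    for i in range(len(z) - 1):
        diff = z[i + 1] - z[i]
        if movable[i]:
            z[i] += 0.5 * diff
        if movable[i + 1]:
            z[i + 1] -= 0.5 * diff
    return z

# 3-node strip: all nodes fall, the middle node reaches its laser point and
# is clamped there as an unmovable point, then internal forces pull the
# outer nodes back toward it.
z_prev = np.zeros(3)
z = gravity_step(z_prev, z_prev)          # every node falls to -0.65
movable = np.array([True, False, True])
z[1] = -0.2                               # clamped to the laser point elevation
z = internal_force_step(z, movable)
```

In the full algorithm these two steps repeat, per step (36), until the maximum elevation change is small enough or the iteration limit is reached.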
Further, the fourth step specifically includes the following steps:
(41) Select the point with the minimum curvature value as the initial seed point, add it to the seed sequence, and mark it as the current region. For a point p on the point cloud surface Q, let the covariance matrix formed by p and its neighborhood points be C_{3×3}. The curvature k_p of point p is then estimated by the following formulas:
C_{3×3} = (1/k) Σ_{i=1}^{k} (P_i - P̄)(P_i - P̄)^T, where P̄ = (1/k) Σ_{i=1}^{k} P_i

k_p = k_0 / (k_0 + k_1 + k_2)
where P_1, P_2, P_3 … P_k are the k nearest neighbors of p, and k_0, k_1, k_2 are the eigenvalues of the covariance matrix C_{3×3}, with k_0 the minimum eigenvalue and k_p ∈ [0, 1/3];
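The curvature estimate above (often called surface variation) can be sketched directly from the two formulas. A sketch under stated assumptions: the function name and test neighborhoods are illustrative, and the input is taken to be the point together with its neighbors.

```python
# Sketch: k_p = k0 / (k0 + k1 + k2), the smallest-eigenvalue fraction of
# the neighborhood covariance matrix, so k_p lies in [0, 1/3].
import numpy as np

def surface_variation(neighbors):
    """neighbors: (k, 3) array containing point p and its neighborhood."""
    centered = neighbors - neighbors.mean(axis=0)
    C = centered.T @ centered / len(neighbors)   # covariance matrix C_{3x3}
    k0, k1, k2 = np.sort(np.linalg.eigvalsh(C))  # ascending eigenvalues
    return k0 / (k0 + k1 + k2)

# A planar neighborhood has zero surface variation (k0 = 0) ...
plane = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                  [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
kp_plane = surface_variation(plane)

# ... while an isotropic scatter approaches the upper bound of 1/3.
rng = np.random.default_rng(2)
kp_scatter = surface_variation(rng.normal(size=(500, 3)))
```

This is why low-curvature points make good initial seeds in step (41): they lie on locally planar patches such as roof facets.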
(42) search for the k nearest neighbors of the current seed point and compute the normal direction of each neighbor and of the seed point; if the angle between a neighbor's normal and the current seed point's normal is less than a threshold, add the neighbor to the current region;
(43) compute the curvature values of the k nearest neighbors of the current seed point; if a neighbor's curvature is smaller than a threshold, add it to the seed point sequence; then remove the current seed point, and iterate these steps until the seed sequence is empty;
(44) iterate until all points are marked as belonging to different regions; once every point in the point set has been processed, region growing ends, after which the number of points in each homogeneous region is counted and checked against the minimum-number threshold.
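The loop of steps (41) to (44) can be sketched as follows. This is a simplified stand-in for the PCL region-growing segmentation the patent relies on, not that library's implementation: normals and curvatures are assumed precomputed, the minimum-cluster-size check of step (44) is omitted, and all thresholds are illustrative.

```python
# Sketch: region growing over a point cloud given per-point normals and
# curvatures; angle_deg and curv_thresh are illustrative thresholds.
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, curvatures, k=8,
                   angle_deg=10.0, curv_thresh=0.05):
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1)
    region = 0
    for start in np.argsort(curvatures):   # (41) lowest-curvature seed first
        if labels[start] != -1:
            continue
        labels[start] = region
        seeds = [start]
        while seeds:
            s = seeds.pop()
            _, nbrs = tree.query(points[s], k=k)
            for j in np.atleast_1d(nbrs):
                if labels[j] != -1:
                    continue
                # (42) neighbor joins if its normal is close to the seed's
                if abs(normals[s] @ normals[j]) >= cos_thresh:
                    labels[j] = region
                    # (43) smooth neighbors become new seeds
                    if curvatures[j] < curv_thresh:
                        seeds.append(j)
        region += 1
    return labels

# Two synthetic planar patches: a horizontal "roof" and a distant vertical
# "wall"; each should grow into its own region.
g = np.mgrid[0:5, 0:5].reshape(2, -1).T.astype(float)
roof = np.c_[g, np.zeros(len(g))]                  # z = 0 plane
wall = np.c_[np.full(len(g), 20.0), g]             # x = 20 plane
pts = np.vstack([roof, wall])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (len(roof), 1)),
                 np.tile([1.0, 0.0, 0.0], (len(wall), 1))])
labels = region_growing(pts, nrm, np.zeros(len(pts)))
```

The two patches end up with different labels because their normals differ by far more than the angle threshold.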
Further, the fifth step specifically includes the following steps:
(51) Calculate the direction cosines of the local normal vectors of each cluster with respect to the X, Y, and Z directions;
(52) generate histograms from the distribution of the cosine values;
(53) extract the corresponding point clouds according to the characteristics of the histogram distributions, thereby distinguishing buildings from vegetation.
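Steps (51) to (53) can be sketched as follows. The concentration measure and its threshold are assumptions introduced for illustration: the patent distinguishes clusters by the visual shape of the histograms (concentrated and few-peaked for roofs, dispersed for trees), and a single peak-fraction test is one simple way to mechanize that.

```python
# Sketch: per-cluster direction-cosine histograms of point normals against
# the X, Y, Z axes, plus an illustrative peak-concentration test.
import numpy as np

def direction_cosine_histograms(normals, bins=20):
    """normals: (n, 3) unit normals of one segmented cluster. Returns one
    histogram per axis; for unit normals the cosine with an axis is just
    the corresponding component."""
    return [np.histogram(normals[:, axis], bins=bins, range=(-1, 1))[0]
            for axis in range(3)]

def looks_like_building(normals, bins=20, peak_fraction=0.6):
    # Roof normals are nearly parallel, so most Z-direction cosines fall
    # in one bin; tree normals scatter across many bins.
    hist_z = direction_cosine_histograms(normals, bins)[2]
    return hist_z.max() / hist_z.sum() >= peak_fraction

# Synthetic clusters: a flat roof with consistent normals versus a tree
# crown with isotropically scattered normals.
rng = np.random.default_rng(1)
roof_normals = np.tile([0.0, 0.0, 1.0], (200, 1))
tree_normals = rng.normal(size=(200, 3))
tree_normals /= np.linalg.norm(tree_normals, axis=1, keepdims=True)
```

Here the roof cluster's Z histogram collapses into a single bin near cosine 1, while the tree cluster's spreads over the whole range, matching the distribution pattern the patent exploits.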
To verify the performance of the building point cloud extraction method, measured airborne laser radar data provided by a certain organization were used as experimental data, and the feasibility of extracting buildings by the histogram method was verified experimentally.
Two data sets from the southern and western areas of DE island were selected. The terrain of data set 1 is high on the left and low on the right; the area mainly contains buildings, tall trees, man-made facilities, medium and low vegetation, bridges, and rivers, with the buildings dispersed and the surrounding vegetation sparse. The survey area contains 2663447 laser foot points, the point cloud density is 31 points per square meter, the minimum elevation is +100.80 m, and the maximum elevation is +308.09 m; the raw point cloud of data set 1, colored by elevation, is shown in fig. 3(a). The terrain of data set 2 is low on the left and high on the right and contains few object types, namely buildings and vegetation, with the buildings distributed more densely and the surrounding vegetation more luxuriant. The survey area contains 2064906 laser foot points in total, the point cloud density is 20 points per square meter, the minimum elevation is +144.22 m, and the maximum elevation is +355.19 m; the raw point cloud of data set 2, colored by elevation, is shown in fig. 4(a). Filtering experiments were performed on the point cloud data of data set 1 and data set 2 with the cloth simulation filtering algorithm to obtain their non-ground points; the non-ground points of data set 1 are shown in fig. 3(b) and those of data set 2 in fig. 4(b). The airborne LiDAR point counts of data set 1 and data set 2 before and after filtering are given in table 1.
Table 1 is a statistic of laser foot points before and after filtering.
The non-ground points of data set 1 and data set 2 obtained from the filtering experiment were organized with a K-D tree, a region-growing algorithm was then implemented on the open-source Point Cloud Library (PCL), and the points were divided into different clusters by the region-growing segmentation algorithm. The region-growing algorithm requires minimum and maximum cluster sizes to be set for each data set: 50 and 7200 for data set 1, and 40 and 5000 for data set 2. For each resulting cluster, the normal vector of each point was computed, the direction cosines of the normal vectors with the X, Y, and Z directions were then calculated to generate histograms, and building point groups were separated from non-building point groups according to the variation characteristics of the histograms. The study lists the normal vector cosine values of some sample points; the histograms generated for data set 1 are shown in fig. 5, and those for data set 2 in fig. 6.
As the histograms of figs. 5 and 6 show, for both data set 1 and data set 2 the histogram distribution of the tree-surface normals is more dispersed, with more peaks; the normal cosine values at the maximum frequencies tend to 0 in the X and Y directions and to 1 in the Z direction. The histograms of the building-surface normals show fewer peaks and are more concentrated, so buildings and non-buildings can be distinguished by the distribution pattern of the histograms.
Building point clouds were extracted by the histogram method: 65 buildings from data set 1 (fig. 7(a)) and 105 from data set 2 (fig. 7(b)). In the marked areas, non-buildings were misclassified as building point clouds or incomplete point clouds were extracted. In data set 1, two large vehicles were misclassified as buildings; their tops resemble building surfaces and have similar direction cosine values. Data set 2 contains more erroneous extractions: in 3 places the extracted building point clouds are incomplete because the building tops are covered by tall vegetation, and in 5 other places tall vegetation adjacent to buildings was misclassified as buildings. Analysis shows that the surfaces of this vegetation are smooth and its outer contours are mostly regular, so its direction cosines resemble those of buildings.
Building extraction with TerraSolid software requires setting the minimum building size and an elevation threshold above the ground points; these parameters were set to 20 m and 2.2 m for data set 1 and to 40 m and 2.2 m for data set 2. As shown in fig. 8(a) and fig. 8(b), 74 buildings were extracted from data set 1 and 107 from data set 2. The marked areas are all vegetation points misclassified as buildings, so the building extraction results obtained with TerraSolid on data set 1 and data set 2 are not ideal.
To verify the extraction accuracy of the method herein, two evaluation variables are defined: the type I error and the type II error. The type I error is the probability of misclassifying building points as non-building points; the type II error is the probability of misclassifying non-building points as building points. Let N be the total number of points in the experimental area, N1 the number of actual building points in the experimental area, N2 the total number of points in the extraction result, and N3 the number of building points correctly extracted by the experiment. The type I and type II errors are calculated as follows:
Type I error = (N1 − N3) / N1
Type II error = (N2 − N3) / (N − N1)
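A small worked sketch of the two error measures, assuming the standard definitions Type I = (N1 − N3)/N1 and Type II = (N2 − N3)/(N − N1); the original formulas are given only as images, so this reconstruction is an assumption:

```python
def type_i_error(n1, n3):
    """Fraction of actual building points the method failed to extract."""
    return (n1 - n3) / n1

def type_ii_error(n, n1, n2, n3):
    """Fraction of actual non-building points wrongly kept as building."""
    return (n2 - n3) / (n - n1)

# Hypothetical counts: 1000 points in the area, 400 of them true building
# points; the method returns 380 points of which 360 are correct.
e1 = type_i_error(400, 360)               # (400 - 360) / 400  = 0.10
e2 = type_ii_error(1000, 400, 380, 360)   # (380 - 360) / 600  ~ 0.033
```

Lower values of both errors indicate better extraction, which is how table 2 compares the proposed method with TerraSolid.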
Quantitative analyses of the method herein and of TerraSolid-based extraction were carried out separately; the calculation results are shown in table 2:
Table 2. Error analysis of the building point cloud extraction methods.
[Table 2 appears as an image in the original document and is not reproduced here.]
As can be seen from table 2, for both data set 1 and data set 2 the type I and type II errors obtained with TerraSolid software are greater than those of the method herein. For data set 2 the two methods differ little in type I and type II errors, while for data set 1 the extraction errors of the method herein are noticeably smaller. Overall, the method herein extracts buildings with higher accuracy.

Claims (5)

1. A segmentation-based airborne LiDAR point cloud building extraction method, characterized by comprising the following steps:
step one, loading airborne LiDAR point cloud data;
step two, identifying noise points in the airborne LiDAR point cloud and eliminating them;
step three, performing cloth simulation filtering to separate ground points from non-ground points;
step four, performing region growing segmentation on the filtered non-ground point cloud;
step five, calculating the local normal vector of each segmented cluster and the direction cosines of the normal vectors, generating histograms, and separating building point clouds from non-building point clouds through the generated histograms to achieve accurate building point cloud extraction.
2. The segmentation-based airborne LiDAR point cloud building extraction method of claim 1, wherein: the second step specifically comprises:
(21) initialization preprocessing: marking the category of the airborne laser radar point cloud, with unclassified points marked as 1;
(22) visually analyzing the elevation distribution of the airborne laser radar point cloud to check whether gross errors exist; if the point cloud data contains high or low gross errors, proceeding directly to step (23); if it contains neither, proceeding to step three;
(23) separating the high and low gross errors.
3. The segmentation-based airborne LiDAR point cloud building extraction method of claim 1, wherein step three comprises the steps of:
(31) transforming the geometric coordinates of the point cloud to invert the original laser point cloud;
(32) initializing the cloth grid and determining the number of grid nodes according to the grid resolution;
(33) projecting the laser points and the grid points onto the same horizontal plane, finding the laser point corresponding to each grid point, and recording the elevation value of each corresponding laser point;
(34) calculating the position of each grid node after it moves under gravity and comparing its elevation with that of the corresponding laser point. If the node elevation is less than or equal to the laser point elevation, the node position is replaced with the position of the corresponding laser point and the node is marked as an unmovable point. The displaced position of a cloth point under gravity is calculated by the following formula:
X(t + Δt) = 2X(t) − X(t − Δt) + (G/m)·Δt²
where X is the position of the cloth grid point at time t, Δt is the time step, G is the gravitational acceleration (a constant), and m is the mass of the cloth grid point, set to the constant 1.
(35) calculating the position of each grid point as it moves under the influence of its neighboring nodes. In order to constrain the movement of cloth points in the blank areas of the inverted surface, which usually correspond to micro-terrain such as buildings or pits, the positions of the cloth points must be corrected after they move under the forces between adjacent nodes; this requires computing the height difference between adjacent cloth points. If two adjacent nodes are both movable and have different elevations, they move toward each other in the vertical direction by the same distance; if one of them is an unmovable point, only the other moves; if both are at the same elevation, neither moves. The corrected displacement of each cloth point is calculated by the following formula:
d = (1/2) · b · ((p_i − p_0) · n) n
where d is the displacement vector of the node; p_0 is the current position of the node to be moved; p_i is the position of a neighboring node of p_0; n is the unit normal vector in the vertical direction, n = (0, 0, 1)^T; and b is a parameter determining whether the node moves: b is set to 1 when the node is a movable point and to 0 otherwise.
(36) repeating steps (34) and (35), terminating the simulation when the maximum elevation change of all nodes is sufficiently small or the maximum number of iterations is exceeded;
(37) classifying ground points and non-ground points by calculating the distance between each grid point and the corresponding laser point: if the distance is less than a threshold h, the laser point is classified as a ground point, otherwise as a non-ground point.
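The cloth simulation of steps (31)-(37) can be sketched in one dimension as follows. This is a simplified illustration, not the patent's implementation: the real algorithm uses a 2-D cloth grid, and the time step, rigidness and threshold values here are assumed for demonstration:

```python
import numpy as np

def csf_filter_1d(ground_z, dt=0.65, g=-9.8, n_iter=50, rigidness=3, h=0.5):
    """Minimal 1-D sketch of cloth-simulation filtering.
    ground_z: elevations of laser points along a profile, one cloth node
    per point (a simplification of the 2-D grid). Returns a boolean mask,
    True where a point is classified as ground."""
    inv = -np.asarray(ground_z, dtype=float)     # (31) invert the cloud
    z = np.full(inv.shape, inv.max() + 1.0)      # (32) cloth starts above it
    prev = z.copy()
    movable = np.ones(len(z), dtype=bool)
    for _ in range(n_iter):                      # (36) iterate
        # (34) Verlet integration under gravity for movable nodes
        nxt = np.where(movable, 2.0 * z - prev + g * dt * dt, z)
        prev, z = z, nxt
        hit = movable & (z <= inv)               # cloth cannot pass the surface
        z[hit] = inv[hit]
        movable[hit] = False
        # (35) internal forces: repeatedly pull each movable node halfway
        # toward the mean of its neighbours ("rigidness" of the cloth)
        for _ in range(rigidness):
            nb = (np.roll(z, 1) + np.roll(z, -1)) / 2.0
            z = np.where(movable, z + 0.5 * (nb - z), z)
    # (37) ground if the settled cloth lies within h of the inverted point
    return np.abs(z - inv) < h

profile = np.zeros(21)
profile[10] = 10.0                # one building-height point in flat terrain
mask = csf_filter_1d(profile)     # flat points -> ground; the spike -> not
```

The internal-force step keeps the cloth taut over the inverted building (which appears as a pit after flipping), so the cloth never settles onto it and the building point is classified as non-ground.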
4. The segmentation-based airborne LiDAR point cloud building extraction method of claim 1, wherein the fourth step comprises the steps of:
(41) selecting the point with the minimum curvature value as the initial seed point, adding it to the seed sequence and marking it as the current region;
(42) Searching k neighbor points of the current seed point, calculating the normal direction of each neighbor point and the normal direction of the seed point, and adding the neighbor points to the current area if the included angle between the normal of the neighbor points and the normal of the current seed point is less than a threshold value;
(43) calculating the curvature value of k adjacent points of the current seed point, if the curvature value is smaller than a threshold value, adding the adjacent points into the seed point sequence, removing the current seed point, and iteratively executing the steps until the seed sequence is empty;
(44) iterating until all points are marked as belonging to different regions; when all points in the point set have been processed, region growing ends; the number of points in each homogeneous region is then counted and checked against the minimum-number threshold.
5. The segmentation-based airborne LiDAR point cloud building extraction method of claim 1, wherein the step five comprises the steps of:
(51) calculating the local normal vector of each cluster and the direction cosine values of the normal vectors with the X, Y and Z directions;
(52) generating histograms from the distribution of the cosine values;
(53) extracting the corresponding point clouds according to the histogram distributions, thereby distinguishing buildings from vegetation.
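A toy illustration of the separation in steps (51)-(53): a planar roof concentrates its Z-direction cosines into one dominant histogram bin, while vegetation spreads them over many bins. The single-peak heuristic and the 0.5 peak-fraction threshold below are assumptions for illustration, not values from the patent:

```python
import numpy as np

def looks_like_building(normals, bins=20, peak_frac=0.5):
    """True if one histogram bin of the Z-direction cosines holds most
    of the mass, i.e. the cluster's normals share one dominant tilt."""
    cos_z = np.abs(np.asarray(normals, dtype=float)[:, 2])   # (51)
    hist, _ = np.histogram(cos_z, bins=bins, range=(0.0, 1.0))  # (52)
    return hist.max() / hist.sum() >= peak_frac              # (53)

rng = np.random.default_rng(1)
# Roof cluster: normals near +Z with slight noise.
roof = np.tile([0.0, 0.0, 1.0], (300, 1)) + 0.02 * rng.standard_normal((300, 3))
roof /= np.linalg.norm(roof, axis=1, keepdims=True)
# "Vegetation" cluster: normals in random directions on the sphere.
veg = rng.standard_normal((300, 3))
veg /= np.linalg.norm(veg, axis=1, keepdims=True)
```

For sloped roofs the dominant bin simply shifts away from 1, so the single-peak criterion still applies; only scattered canopies produce the flat, multi-peak histograms the method rejects.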
CN201911226845.1A 2019-12-04 2019-12-04 Segmentation-based airborne LiDAR point cloud building extraction method Pending CN110992341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911226845.1A CN110992341A (en) 2019-12-04 2019-12-04 Segmentation-based airborne LiDAR point cloud building extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911226845.1A CN110992341A (en) 2019-12-04 2019-12-04 Segmentation-based airborne LiDAR point cloud building extraction method

Publications (1)

Publication Number Publication Date
CN110992341A true CN110992341A (en) 2020-04-10

Family

ID=70090006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911226845.1A Pending CN110992341A (en) 2019-12-04 2019-12-04 Segmentation-based airborne LiDAR point cloud building extraction method

Country Status (1)

Country Link
CN (1) CN110992341A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859772A (en) * 2020-07-07 2020-10-30 河南工程学院 Power line extraction method and system based on cloth simulation algorithm
CN112348950A (en) * 2020-11-04 2021-02-09 大连理工大学 Topological map node generation method based on laser point cloud distribution characteristics
CN112381029A (en) * 2020-11-24 2021-02-19 沈阳建筑大学 Airborne LiDAR data building extraction method based on Euclidean distance
CN112380893A (en) * 2020-09-15 2021-02-19 广东电网有限责任公司 Power transmission line corridor automatic identification method of airborne laser point cloud data
CN113409332A (en) * 2021-06-11 2021-09-17 电子科技大学 Building plane segmentation method based on three-dimensional point cloud
CN113658190A (en) * 2021-06-29 2021-11-16 桂林理工大学 Tensor voting surface feature flight band adjustment method
CN113724400A (en) * 2021-07-26 2021-11-30 泉州装备制造研究所 Oblique photography-oriented multi-attribute fusion building point cloud extraction method
CN113759947A (en) * 2021-09-10 2021-12-07 中航空管系统装备有限公司 Airborne flight obstacle avoidance auxiliary method, device and system based on laser radar
CN114463338A (en) * 2022-01-07 2022-05-10 武汉大学 Automatic building laser foot point extraction method based on graph cutting and post-processing
CN114494301A (en) * 2022-02-14 2022-05-13 北京智弘通达科技有限公司 Railway scene point cloud segmentation method based on airborne radar point cloud
WO2022099528A1 (en) * 2020-11-12 2022-05-19 深圳元戎启行科技有限公司 Method and apparatus for calculating normal vector of point cloud, computer device, and storage medium
WO2022141116A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN114764871A (en) * 2022-06-15 2022-07-19 青岛市勘察测绘研究院 Urban building attribute extraction method based on airborne laser point cloud
WO2023060632A1 (en) * 2021-10-14 2023-04-20 重庆数字城市科技有限公司 Street view ground object multi-dimensional extraction method and system based on point cloud data
CN116740060A (en) * 2023-08-11 2023-09-12 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488770A (en) * 2015-12-11 2016-04-13 中国测绘科学研究院 Object-oriented airborne laser radar point cloud filtering method
CN109446983A (en) * 2018-10-26 2019-03-08 福州大学 A kind of coniferous forest felling accumulation evaluation method based on two phase unmanned plane images
CN110047036A (en) * 2019-04-22 2019-07-23 重庆交通大学 Territorial laser scanning data building facade extracting method based on polar coordinates grid
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488770A (en) * 2015-12-11 2016-04-13 中国测绘科学研究院 Object-oriented airborne laser radar point cloud filtering method
CN109446983A (en) * 2018-10-26 2019-03-08 福州大学 A kind of coniferous forest felling accumulation evaluation method based on two phase unmanned plane images
CN110047036A (en) * 2019-04-22 2019-07-23 重庆交通大学 Territorial laser scanning data building facade extracting method based on polar coordinates grid
CN110400322A (en) * 2019-07-30 2019-11-01 江南大学 Fruit point cloud segmentation method based on color and three-dimensional geometric information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MOHAMMAD AWRANGJEB et al.: "Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs" *
DENG Fei et al.: "Building point cloud extraction from post-earthquake airborne LiDAR fused with aerial imagery" *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111859772A (en) * 2020-07-07 2020-10-30 河南工程学院 Power line extraction method and system based on cloth simulation algorithm
CN111859772B (en) * 2020-07-07 2023-11-17 河南工程学院 Power line extraction method and system based on cloth simulation algorithm
CN112380893A (en) * 2020-09-15 2021-02-19 广东电网有限责任公司 Power transmission line corridor automatic identification method of airborne laser point cloud data
CN112348950A (en) * 2020-11-04 2021-02-09 大连理工大学 Topological map node generation method based on laser point cloud distribution characteristics
WO2022099528A1 (en) * 2020-11-12 2022-05-19 深圳元戎启行科技有限公司 Method and apparatus for calculating normal vector of point cloud, computer device, and storage medium
CN112381029A (en) * 2020-11-24 2021-02-19 沈阳建筑大学 Airborne LiDAR data building extraction method based on Euclidean distance
CN112381029B (en) * 2020-11-24 2023-11-14 沈阳建筑大学 Method for extracting airborne LiDAR data building based on Euclidean distance
WO2022141116A1 (en) * 2020-12-29 2022-07-07 深圳市大疆创新科技有限公司 Three-dimensional point cloud segmentation method and apparatus, and movable platform
CN113409332A (en) * 2021-06-11 2021-09-17 电子科技大学 Building plane segmentation method based on three-dimensional point cloud
CN113658190A (en) * 2021-06-29 2021-11-16 桂林理工大学 Tensor voting surface feature flight band adjustment method
CN113658190B (en) * 2021-06-29 2022-06-14 桂林理工大学 Tensor voting surface feature flight band adjustment method
CN113724400A (en) * 2021-07-26 2021-11-30 泉州装备制造研究所 Oblique photography-oriented multi-attribute fusion building point cloud extraction method
CN113759947B (en) * 2021-09-10 2023-08-08 中航空管系统装备有限公司 Airborne flight obstacle avoidance assisting method, device and system based on laser radar
CN113759947A (en) * 2021-09-10 2021-12-07 中航空管系统装备有限公司 Airborne flight obstacle avoidance auxiliary method, device and system based on laser radar
WO2023060632A1 (en) * 2021-10-14 2023-04-20 重庆数字城市科技有限公司 Street view ground object multi-dimensional extraction method and system based on point cloud data
CN114463338A (en) * 2022-01-07 2022-05-10 武汉大学 Automatic building laser foot point extraction method based on graph cutting and post-processing
CN114463338B (en) * 2022-01-07 2024-05-03 武汉大学 Automatic building laser foot point extraction method based on graph cutting and post-processing
CN114494301A (en) * 2022-02-14 2022-05-13 北京智弘通达科技有限公司 Railway scene point cloud segmentation method based on airborne radar point cloud
CN114764871A (en) * 2022-06-15 2022-07-19 青岛市勘察测绘研究院 Urban building attribute extraction method based on airborne laser point cloud
CN116740060A (en) * 2023-08-11 2023-09-12 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction
CN116740060B (en) * 2023-08-11 2023-10-20 安徽大学绿色产业创新研究院 Method for detecting size of prefabricated part based on point cloud geometric feature extraction

Similar Documents

Publication Publication Date Title
CN110992341A (en) Segmentation-based airborne LiDAR point cloud building extraction method
CN104091321B (en) It is applicable to the extracting method of the multi-level point set feature of ground laser radar point cloud classifications
CN110781827B (en) Road edge detection system and method based on laser radar and fan-shaped space division
CN108510467B (en) SAR image target identification method based on depth deformable convolution neural network
CN106199557B (en) A kind of airborne laser radar data vegetation extracting method
CN109146889B (en) Farmland boundary extraction method based on high-resolution remote sensing image
CN101877128B (en) Method for segmenting different objects in three-dimensional scene
CN110349260B (en) Automatic pavement marking extraction method and device
CN109034065B (en) Indoor scene object extraction method based on point cloud
CN114200477A (en) Laser three-dimensional imaging radar ground target point cloud data processing method
CN112347894B (en) Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
CN108154158B (en) Building image segmentation method for augmented reality application
CN111898627B (en) SVM cloud microparticle optimization classification recognition method based on PCA
CN113484875B (en) Laser radar point cloud target hierarchical identification method based on mixed Gaussian ordering
Guo et al. Classification of airborne laser scanning data using JointBoost
Zhao et al. Ground surface recognition at voxel scale from mobile laser scanning data in urban environment
CN108629297A (en) A kind of remote sensing images cloud detection method of optic based on spatial domain natural scene statistics
CN110348478B (en) Method for extracting trees in outdoor point cloud scene based on shape classification and combination
Zheng et al. Pole-like object extraction from mobile lidar data
Naeini et al. Improving the dynamic clustering of hyperspectral data based on the integration of swarm optimization and decision analysis
CN111860359B (en) Point cloud classification method based on improved random forest algorithm
CN112200083A (en) Airborne multi-spectral LiDAR data segmentation method based on multivariate Gaussian mixture model
CN103530875A (en) End member extraction data preprocessing method
CN116805413A (en) Automatic calculation method for oil tea seedling stage phenotype characteristics based on three-dimensional point cloud
CN115170950A (en) Outdoor scene building extraction method based on multi-feature constraint

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination