CN110348478B - Method for extracting trees in outdoor point cloud scene based on shape classification and combination

Method for extracting trees in outdoor point cloud scene based on shape classification and combination

Info

Publication number
CN110348478B
CN110348478B (application CN201910481805.5A)
Authority
CN
China
Prior art keywords
points, point, data, scattering, characteristic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910481805.5A
Other languages
Chinese (zh)
Other versions
CN110348478A (en)
Inventor
宁小娟 (Ning Xiaojuan)
田戈 (Tian Ge)
王映辉 (Wang Yinghui)
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201910481805.5A
Publication of CN110348478A
Application granted granted Critical
Publication of CN110348478B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation

Abstract

The invention discloses a method for extracting trees in an outdoor point cloud scene based on shape classification and combination. The method uses an optimal feature set and a classification algorithm to improve the accuracy of outdoor scene classification and obtain classified data, and then completes the extraction of single trees from the outdoor scene point cloud data through positioning, filtering, matching and similar steps. The method solves the problem that trees with overlapping crowns cannot be distinguished in an outdoor point cloud scene.

Description

Method for extracting trees in outdoor point cloud scene based on shape classification and combination
Technical Field
The invention belongs to the interdisciplinary technical field combining computer graphics and virtual reality, and particularly relates to a method for extracting trees in an outdoor point cloud scene based on shape classification and combination.
Background
With the development of virtual reality, artificial intelligence and computer vision technology, the analysis of various three-dimensional outdoor scenes has become a research topic of important significance and application value. Advances in 3D acquisition technology provide researchers with accurate data; for example, high-density and high-precision urban scene point cloud data can be obtained with a terrestrial laser scanner (TLS) or a mobile laser scanner (MLS). Analysis of urban scene data has become a major research topic in the fields of computer graphics, computer vision and photogrammetry, and most research efforts have focused on the labelling and extraction of specific objects (buildings, roads, cars or trees) in point cloud scene data.
As one of the important elements in urban scenes, trees have in recent years been widely studied in urban planning and construction, 3D tree modelling, urban tree detection and related fields, where a single tree is extracted and its attributes (e.g., tree height, trunk diameter, crown diameter) are obtained. Trees are important components of urban ecosystems and landscapes, and the analysis of street trees plays an important role in improving environmental quality, maintaining the aesthetics of urban landscapes and providing social services for residents. Automatically and effectively extracting tree information is therefore significant for green city management and the construction of intelligent cities. However, the density of leaves, the variety of trees and the data loss caused by occlusion all increase the difficulty of extracting tree elements from urban scenes, so the extraction and segmentation of trees in urban scenes remains very challenging. The existing tree extraction methods mainly comprise the following:
(I) Tree extraction method based on machine learning
The single tree extraction method based on machine learning is to process outdoor point cloud scene data through a classical classification algorithm or a clustering algorithm in machine learning. The classification algorithm aims to classify and partition outdoor point cloud scene data, extract a target object in a classification result and achieve the effect of narrowing the extraction range. The clustering algorithm aims to cluster point cloud data with similar characteristics in a scene together, so that trees in outdoor point cloud scene data can be extracted individually.
The single tree extraction method based on machine learning can extract a single tree from an outdoor point cloud scene well. However, there are some problems: if the classification method used is too simple, many non-target points appear in the classification result; if the clustering method is too simple, over-clustering or under-clustering occurs during the clustering process.
(II) Region-growing-based tree extraction method
Region-growing-based methods are widely used for scene segmentation, including building segmentation, tree segmentation, vehicle segmentation, etc. The method clusters points with similar characteristics in the point cloud data by region growing, in one of two modes: top-down region growing or bottom-up region growing. In general, seed points are first selected; according to features such as the eigenvectors and curvatures of the seed points and their neighbourhood points, points with similar features are merged, and this is repeated until no point cloud data can be merged, so that data points with similar features are clustered together into one cluster. Seed points are then selected again from the remaining point cloud data and region growing is carried out, the whole process looping until all the point cloud data have been assigned to some cluster.
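The loop described above (seed, grow, restart on the remainder) can be sketched as follows. This is a minimal distance-only sketch, not the patent's method: real region growing also compares normals and curvature between seed and neighbours, which is omitted here to keep the example short.

```python
import math

def region_grow(points, radius=1.0):
    """Minimal region-growing sketch: pick an unused seed point, repeatedly
    merge every point within `radius` of the growing cluster, and start a new
    cluster when nothing more can be merged. Plain Euclidean distance stands
    in for the similarity criterion (eigenvectors/curvature in real methods)."""
    remaining = list(points)
    clusters = []
    while remaining:
        cluster = [remaining.pop(0)]          # seed point
        grown = True
        while grown:                          # grow until nothing can be merged
            grown = False
            for p in remaining[:]:
                if any(math.dist(p, q) <= radius for q in cluster):
                    cluster.append(p)
                    remaining.remove(p)
                    grown = True
        clusters.append(cluster)              # then restart on the remainder
    return clusters
```

With two well-separated groups of points, the sketch returns one cluster per group.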
The single tree extraction method based on region growing can cluster point cloud data with similar features together to form a tree, but seed points must be selected, smooth boundaries are difficult to distinguish, the criterion function is hard to determine, and the result is strongly affected by the threshold. In an outdoor point cloud scene in particular, the amount of point cloud data is large and the scene is complex, so the method has shortcomings.
(III) Single tree extraction method based on voxelization
The voxel-based single tree extraction method aims to segment a scene according to the color, intensity, data space and the like of each point cloud data after voxelization of outdoor scene point cloud data.
Although voxelization gives good experimental results, if trees in the outdoor point cloud scene are too close to other objects, they cannot be separated from those objects, so point cloud data of non-target objects remain in the final experimental result.
(IV) Tree extraction method based on model fitting
Model-fitting-based methods aim to extract objects with specific shapes, including planes, cylinders and spheres. The data belonging to the target object are preserved among the object data of specific shapes, and a complete tree is then presented through merging.
The method based on model fitting can extract a single tree under certain specific conditions, but it performs poorly on complex geometric shapes; outdoor point cloud scene data in particular are huge and the scenes complex, so the experimental results are not ideal.
Disclosure of Invention
The invention aims to provide a method for extracting trees in an outdoor point cloud scene based on shape classification and combination, which solves the problem that a single tree cannot be extracted from outdoor scene point cloud data when tree crowns overlap.
The invention adopts the technical scheme that a method for extracting trees in an outdoor point cloud scene based on shape classification and combination specifically comprises the following steps:
step 1) obtaining an optimal feature set of an outdoor scene based on the features of the outdoor scene point cloud data;
step 2) classifying the outdoor scene point cloud data according to the optimal feature set, wherein the classification specifically comprises linear points, plane points, cylindrical points and scattering points;
and step 3) since the outdoor scene point cloud data corresponding to tree trunks and tree crowns are cylindrical points and scattering points respectively, extracting the classified cylindrical points and scattering points, and completing the extraction of single trees in the outdoor scene point cloud data by positioning, filtering, matching and optimizing the cylindrical points and scattering points.
The present invention is also characterized in that,
the step 1) is implemented according to the following steps:
Step 1.1) manually select four parts of data points, namely linear data points, plane data points, cylindrical data points and scattering data points, from the outdoor scene point cloud data, and mark the four parts in sequence as T1, T2, T3 and T4; calculate all characteristic information values of each data point in each selected part, where all characteristic information comprises the dimension feature, normal vector, principal direction, eigenvalues, sum of eigenvalues, characteristic entropy, total variance, point anisotropy, point local surface variation, local point density and point height;
Step 1.2) compute the average value of each kind of characteristic information over T1, arrange the averages from large to small, and select the characteristic information corresponding to the three largest averages, denoted V1(T1), V2(T1) and V3(T1); treat T2, T3 and T4 in the same way to obtain V1(T2), V2(T2), V3(T2), V1(T3), V2(T3), V3(T3), V1(T4), V2(T4) and V3(T4);
Step 1.3) if the characteristic information selected for T1, T2, T3 and T4 is all different, the characteristic information selected for T1, T2, T3 and T4 forms a set, giving the optimal feature set; if the characteristic information selected for T1, T2, T3 and T4 contains the same characteristic information, delete the shared characteristic information, select for the affected parts the characteristic information corresponding to the fourth-largest average, and judge again whether the re-selected characteristic information of T1, T2, T3 and T4 contains the same characteristic information, proceeding by analogy. If each round of re-selected characteristic information of T1, T2, T3 and T4 still contains shared characteristic information, the characteristic information retained by T1, T2, T3 and T4 forms a set, giving the optimal feature set; if the re-selected characteristic information of T1, T2, T3 and T4 contains no shared characteristic information, the re-selected characteristic information of T1, T2, T3 and T4 forms a set, giving the optimal feature set.
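The selection procedure of steps 1.2) and 1.3) can be sketched as follows. The input format (a mapping from part name to per-feature mean values) and the replace-in-every-affected-part rule are reading choices of this sketch, not text taken from the patent:

```python
def select_optimal_features(class_feature_means, n=3):
    """Sketch of steps 1.2)-1.3): for each of the four manually labelled parts
    (linear/planar/cylindrical/scattering), rank features by the mean of their
    values and keep the top n; a feature picked by more than one part is
    dropped from all affected parts and replaced by each part's next-ranked
    feature, repeating until the selections share nothing. Returns the union
    of the selections (the "optimal feature set").
    `class_feature_means` maps part name -> {feature name: mean value}."""
    ranked = {c: sorted(m, key=m.get, reverse=True) for c, m in class_feature_means.items()}
    picks = {c: list(r[:n]) for c, r in ranked.items()}
    nxt = {c: n for c in ranked}             # index of each part's next candidate
    changed = True
    while changed:
        changed = False
        counts = {}
        for feats in picks.values():
            for f in set(feats):
                counts[f] = counts.get(f, 0) + 1
        dup = {f for f, c in counts.items() if c > 1}
        for c in picks:                      # swap out duplicated features
            for i, f in enumerate(picks[c]):
                if f in dup and nxt[c] < len(ranked[c]):
                    picks[c][i] = ranked[c][nxt[c]]
                    nxt[c] += 1
                    changed = True
    return sorted({f for feats in picks.values() for f in feats})
```

For two parts whose top-3 rankings share one feature, the shared feature is replaced by each part's fourth-ranked feature.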
The step 1.1) of obtaining all the characteristic information values of the data points is implemented according to the following steps:
Step 1.1.1) assume the outdoor scene point cloud data is P = {p0, p1, p2, ..., pi, ..., pn}, where i and n are natural numbers and the data point pi belongs to any one of the parts T1, T2, T3, T4; let the k nearest neighbours of the data point pi have spatial coordinates qj(xj, yj, zj), j = 1, 2, ..., k, k ≠ 0; then the covariance matrix of point pi is as follows:

W = (1/k) Σ_{j=1}^{k} (qj - q̄)(qj - q̄)^T (1)

In formula (1), q̄ is the centre of the k neighbours of pi, with

q̄ = (1/k) Σ_{j=1}^{k} qj (2)

W is a positive semi-definite symmetric matrix; its eigenvalues are all non-negative, and the eigenvectors corresponding to different eigenvalues are orthogonal, forming a set of orthonormal bases of the space they lie in, with

W el = λl el, l = 1, 2, 3 (3)

In formula (3), the eigenvalues of W satisfy λ1 ≥ λ2 ≥ λ3 ≥ 0;
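The neighbourhood covariance of formulas (1)-(3) can be sketched as follows; the closed-form trigonometric eigenvalue routine for symmetric 3x3 matrices is an implementation choice of this sketch, not something the patent specifies:

```python
import math

def covariance_matrix(neighbors):
    """Covariance matrix W of a point's k nearest neighbours, formulas (1)-(2):
    W = (1/k) * sum_j (q_j - q_bar)(q_j - q_bar)^T, q_bar the neighbourhood centre."""
    k = len(neighbors)
    cx = sum(q[0] for q in neighbors) / k
    cy = sum(q[1] for q in neighbors) / k
    cz = sum(q[2] for q in neighbors) / k
    W = [[0.0] * 3 for _ in range(3)]
    for q in neighbors:
        d = (q[0] - cx, q[1] - cy, q[2] - cz)
        for a in range(3):
            for b in range(3):
                W[a][b] += d[a] * d[b] / k
    return W

def eigenvalues_sym3(A):
    """Eigenvalues of a symmetric 3x3 matrix, sorted lambda1 >= lambda2 >= lambda3
    (closed-form trigonometric method; W is positive semi-definite, so all >= 0)."""
    p1 = A[0][1] ** 2 + A[0][2] ** 2 + A[1][2] ** 2
    if p1 == 0.0:                                  # matrix is already diagonal
        return sorted((A[0][0], A[1][1], A[2][2]), reverse=True)
    q = (A[0][0] + A[1][1] + A[2][2]) / 3.0
    p2 = (A[0][0] - q) ** 2 + (A[1][1] - q) ** 2 + (A[2][2] - q) ** 2 + 2.0 * p1
    p = math.sqrt(p2 / 6.0)
    B = [[(A[i][j] - (q if i == j else 0.0)) / p for j in range(3)] for i in range(3)]
    detB = (B[0][0] * (B[1][1] * B[2][2] - B[1][2] * B[2][1])
            - B[0][1] * (B[1][0] * B[2][2] - B[1][2] * B[2][0])
            + B[0][2] * (B[1][0] * B[2][1] - B[1][1] * B[2][0]))
    r = max(-1.0, min(1.0, detB / 2.0))
    phi = math.acos(r) / 3.0
    l1 = q + 2.0 * p * math.cos(phi)
    l3 = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return [l1, 3.0 * q - l1 - l3, l3]
```

For a neighbourhood lying on a line, only λ1 is non-zero, which is exactly the linear case discussed below formula (5).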
Step 1.1.2) let the neighbourhood size ki of pi increase from kmin to kmax with step Δk; the best neighbourhood value of each point is determined by calculating the Shannon entropy function value of the following formula (4): solve the Shannon entropy function value corresponding to each neighbourhood ki and select the minimum among them; the neighbourhood ki corresponding to the minimum value is the best neighbourhood value of pi.

Entropy = -pro1D ln(pro1D) - pro2D ln(pro2D) - pro3D ln(pro3D) (4)

In formula (4), Entropy denotes the Shannon entropy function, pro1D the probability that the point is a linear data point, pro2D that it is a plane data point, and pro3D that it is a columnar or scattering data point;

compare the Shannon entropy value of each neighbourhood ki and select the neighbourhood with the minimum Shannon entropy as the best neighbourhood ki'; then let ki' = ki + Δk, and compute through the principal component analysis algorithm the eigenvectors e1, e2, e3 and eigenvalues λ1, λ2, λ3 of the covariance matrix W of pi;
Step 1.1.3) from the eigenvalues, the total variance, anisotropy, characteristic entropy, sum of eigenvalues, local surface variation, height feature and local point density of the data point pi can be calculated; the normal vector of the data point pi is determined by the eigenvector corresponding to λ3, and the principal direction by the eigenvector corresponding to λ1, thereby obtaining all characteristic information values of the data point.
The probabilities in formula (4) are obtained from the eigenvalues through formula (5):

pro1D = (δ1 - δ2)/δ1, pro2D = (δ2 - δ3)/δ1, pro3D = δ3/δ1 (5)

In formula (5), 1D indicates that the dimension feature is linear, 2D that it is planar, and 3D that it is scattering; δ1, δ2, δ3 represent the fitting residuals in three orthogonal directions, with

δl = √λl, l = 1, 2, 3.

When δ1 is far greater than both δ2 and δ3, the fitting area has a large fitting residual in only one direction, and the point is a linear point; similarly, when δ1 and δ2 are both far greater than δ3, the point is a planar point, and in this case the eigenvector corresponding to λ3 is the normal vector of the point; when δ1 ≈ δ2 ≈ δ3, the point is a scattering point or columnar point.
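The dimension probabilities and the entropy criterion of formula (4) can be sketched together. The (δ1 - δ2)/δ1-style formulation with δl = √λl is the common dimensionality construction assumed here, since the patent's formula (5) is reproduced only as an image:

```python
import math

def dimensionality(l1, l2, l3, eps=1e-12):
    """Linear/planar/scattering probabilities from eigenvalues l1 >= l2 >= l3 >= 0,
    following the common formulation pro1D = (d1-d2)/d1, pro2D = (d2-d3)/d1,
    pro3D = d3/d1 with d_l = sqrt(lambda_l) (assumed reconstruction of eq. (5))."""
    d1, d2, d3 = (math.sqrt(max(v, 0.0)) for v in (l1, l2, l3))
    d1 = max(d1, eps)                      # guard against a degenerate neighbourhood
    return (d1 - d2) / d1, (d2 - d3) / d1, d3 / d1

def shannon_entropy(p1, p2, p3, eps=1e-12):
    """Eq. (4): Entropy = -sum pro_d * ln(pro_d); minimised over the candidate
    neighbourhood sizes k_i to pick each point's best neighbourhood."""
    return -sum(p * math.log(max(p, eps)) for p in (p1, p2, p3))
```

A purely linear neighbourhood (λ2 = λ3 = 0) gives pro1D = 1 and zero entropy; a mixed neighbourhood gives positive entropy, which is why minimising formula (4) selects the neighbourhood size at which the point looks most unambiguously 1D, 2D or 3D.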
The step 2) is implemented according to the following steps:
the method comprises the steps of obtaining a Gaussian kernel function and parameters thereof through a support vector machine, constructing a decision tree through a random forest method, inputting an optimal feature set into the decision tree to generate a plurality of training sets, calculating the similarity degree of each point cloud data and each training set through the Gaussian kernel function, and classifying outdoor point cloud scene data through comparing the similarity degrees, wherein the classification specifically comprises linear points, plane points, cylindrical points and scattering points.
The step 3) is implemented according to the following steps:
step 3.1) extracting the classified cylindrical points and scattering points, and clustering the cylindrical points and the scattering points through a spectral clustering algorithm to obtain point sets of the cylindrical points and the scattering points of different types, namely different trunks and tree crowns;
Step 3.2) set screening point-number thresholds for the cylindrical points and the scattering points respectively, and remove the point sets whose numbers of points are smaller than the corresponding threshold, i.e., remove point sets that are far from the outdoor scene or do not belong to a real trunk, crown or the like, obtaining the matchable cylindrical point sets and scattering point sets;
Step 3.3) solve the similarity between each matchable cylindrical point set and each scattering point set, and match the corresponding scattering point sets according to the similarity;
Step 3.4) if the same scattering point set corresponds to a plurality of cylindrical point sets, solve the centroid mi (i = 1, 2, ..., n) of each corresponding cylindrical point set, randomly select a data point from the scattering point set, calculate the distance from the data point to each centroid mi (i = 1, 2, ..., n), assign the data point by comparison to the cylindrical point set corresponding to the centroid with the shortest distance, and merge; continue in this way until all data points in the scattering point set have been assigned, and merge the other scattering point sets and cylindrical point sets that correspond one to one. Each merged point set is the point set corresponding to a single tree, and the extraction of single trees is finished. If no scattering point set corresponds to a plurality of cylindrical point sets, directly merge the scattering point sets and cylindrical point sets that correspond one to one; each merged point set is the point set corresponding to a single tree, and the extraction of single trees is finished.
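The nearest-centroid assignment of step 3.4), for a crown shared by several trunks, can be sketched as:

```python
import math

def merge_shared_crown(crown_points, trunk_sets):
    """Sketch of step 3.4): when one crown (scattering) point set matches several
    trunk (cylindrical) point sets, each crown point is assigned to the trunk
    whose centroid m_i is nearest, then merged with that trunk's points."""
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[d] for p in pts) / n for d in range(3))
    cents = [centroid(t) for t in trunk_sets]
    merged = [list(t) for t in trunk_sets]
    for p in crown_points:
        dists = [math.dist(p, c) for c in cents]         # distance to each m_i
        merged[dists.index(min(dists))].append(p)        # shortest distance wins
    return merged
```

Two trunks ten units apart each receive the crown points hanging above them, so each merged set corresponds to one single tree.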
The step 3.1) is implemented according to the following steps:
Step 3.1.1) denote the extracted and classified cylindrical points and scattering points as the clusters Clucy and Clusc respectively; for each of the clusters Clucy and Clusc construct an undirected graph G(V, E), where V is the set of all data points (v1, v2, ..., vn) in the cluster and the set E represents the edges between data points in the cluster, the weight of an edge being the distance value between its two points;
Step 3.1.2) define for the clusters Clucy and Clusc the symmetric similarity matrix W = {wij}, i, j = 1, ..., n, representing the weight of each edge (i, j) ∈ E, i.e., the weight wij between point vi and point vj; because G is an undirected graph, wij = wji;
Step 3.1.3) adopts a full connection mode to solve the following measurement matrix D for the extracted and classified cylindrical points and scattering points,
Figure BDA0002084088260000081
the diagonal line in the metric matrix D of equation (6) represents V for any one point i Its measurement d i Is defined as the sum of the weights of all edges connected to it, and
Figure BDA0002084088260000082
step 3.1.4) laplace matrix L = D-W, normalizing the laplace matrix to obtain the following formula,
L=D -1/2 LD 1/2 =I-D -1/2 WD 1/2 (8)
According to the Ncut segmentation criterion, solve for t eigenvectors of L, normalise them to form a feature matrix R, and regard each row of R as a sample; cluster the feature matrix R using the K-means algorithm: first randomly select the point cloud data corresponding to one sample and label the point cloud data corresponding to samples with similar characteristics as the same class, removing the labelled point cloud data from the extracted and classified cylindrical points and scattering points; if no sample has characteristics similar to the chosen one, randomly select a sample again; repeat until every point cloud data point in the extracted and classified cylindrical points and scattering points is labelled, finally obtaining the different classes of cylindrical point sets and scattering point sets.
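The graph construction of steps 3.1.1)-3.1.4) can be sketched as below. The patent states only that an edge's weight is a function of the distance between its two points; the Gaussian similarity of that distance used here is a common spectral-clustering choice assumed for the sketch (the eigenvector/K-means stage is omitted):

```python
import math

def build_graph(points, sigma=1.0):
    """Build the fully connected similarity matrix W (w_ij = w_ji, zero
    diagonal), the degree/metric values d_i = sum_j w_ij of eq. (7), and the
    unnormalised Laplacian L = D - W for one cluster of points."""
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w = math.exp(-math.dist(points[i], points[j]) ** 2 / (2.0 * sigma ** 2))
            W[i][j] = W[j][i] = w          # undirected graph: w_ij = w_ji
    d = [sum(row) for row in W]            # eq. (7)
    L = [[(d[i] if i == j else 0.0) - W[i][j] for j in range(n)] for i in range(n)]
    return W, d, L
```

A quick sanity check on any input: W is symmetric and every row of L sums to zero, the defining property of a graph Laplacian.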
The step 3.3) is implemented according to the following steps:
Step 3.3.1) record the matchable cylindrical point sets and scattering point sets respectively, and suppose Gi(xi, yi, zi) and Qj(xj, yj, zj) are the centroids of the i-th matchable cylindrical point set and the j-th matchable scattering point set; Cij is the distance in three-dimensional space between the centroids Gi and Qj, then

Cij = Dist(Gi, Qj)3D (9)

Cij is normalised to a value in the range 0 to 1;
Step 3.3.2) let Tij represent the distance between the centroids Gi and Qj after projection into two dimensions, then

Tij = Dist(Gi, Qj)2D (10)

Tij is normalised to a value in the range 0 to 1;
Step 3.3.3) let Aij represent the probability that the trunk becomes a candidate trunk matched with the crown, Aij being computed by formula (11);
Step 3.3.4) calculate the similarity value Pro(i, j) between each matchable cylindrical point set and each matchable scattering point set:

Pro(i, j) = 1 - (w1 Min(Cij) + w2 Min(Tij) + w3 Aij) (12)

In formula (12), w1, w2, w3 are proportionality coefficients, with w1 = w2 = 0.2 and w3 = 0.6. If Pro(i, j) is greater than 0.9, match the corresponding two point sets; if several Pro(i, j) values are greater than 0.9 for the same matchable cylindrical point set, match the scattering point set with the largest similarity value Pro(i, j).
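The scoring of formulas (9), (10) and (12) can be sketched as follows. Min-max normalisation of the distances is an assumption of the sketch (the patent says only "normalised to 0 to 1"), and Aij is taken as a given input because the patent's formula (11) is reproduced only as an image:

```python
def normalise(vals):
    """Scale a list of distances into [0, 1] (assumed min-max normalisation)."""
    lo, hi = min(vals), max(vals)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in vals]

def match_score(c3d, c2d, a_ij, w1=0.2, w2=0.2, w3=0.6):
    """Eq. (12): Pro(i, j) = 1 - (w1*C_ij + w2*T_ij + w3*A_ij), where C_ij and
    T_ij are the normalised 3D and projected 2D trunk/crown centroid distances
    and A_ij is the candidate-trunk probability of eq. (11), here an input.
    A trunk/crown pair is matched when Pro(i, j) > 0.9."""
    return 1.0 - (w1 * c3d + w2 * c2d + w3 * a_ij)
```

A trunk and crown with coincident centroids and a certain candidate probability of zero penalty score 1.0, while maximally distant, improbable pairs score 0.0.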
The invention has the beneficial effects that:
the method for extracting the trees in the outdoor point cloud scene based on the shape classification and combination has strong operability, solves the problem that the tree crowns are overlapped in the outdoor scene point cloud data extraction, so that the single tree cannot be extracted, optimally combines the tree trunk and the tree crown according to the topological relation between the tree trunk and the tree crown in a two-dimensional space and a three-dimensional space and the one-to-one correspondence relation between the tree trunk and the tree crown, can automatically complete the tree extraction in the outdoor scene point cloud data, and improves the accuracy of the single tree extraction.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention relates to a method for extracting trees in an outdoor point cloud scene based on shape classification and combination, which comprises the following steps:
step 1) obtaining an optimal feature set of an outdoor scene based on the features of the outdoor scene point cloud data;
step 2) classifying the outdoor scene point cloud data according to the optimal feature set, wherein the classification specifically comprises linear points, plane points, cylindrical points and scattering points;
and step 3) since the outdoor scene point cloud data corresponding to tree trunks and tree crowns are cylindrical points and scattering points respectively, extracting the classified cylindrical points and scattering points, and completing the extraction of single trees in the outdoor scene point cloud data by positioning, filtering, matching and optimizing the cylindrical points and scattering points.
The step 1) is implemented according to the following steps:
Step 1.1) manually select four parts of data points, namely linear data points, plane data points, cylindrical data points and scattering data points, from the outdoor scene point cloud data, and mark the four parts in sequence as T1, T2, T3 and T4; calculate all characteristic information values of each data point in each selected part, where all characteristic information comprises the dimension feature, normal vector, principal direction, eigenvalues, sum of eigenvalues, characteristic entropy, total variance, point anisotropy, point local surface variation, local point density and point height; the calculation formulas of the total variance, point anisotropy, characteristic entropy, sum of eigenvalues, point local surface variation, point height and local point density are shown in Table 1;
TABLE 1 Feature calculation formulas
Step 1.2) compute the average value of each kind of characteristic information over T1, arrange the averages from large to small, and select the characteristic information corresponding to the three largest averages, denoted V1(T1), V2(T1) and V3(T1); treat T2, T3 and T4 in the same way to obtain V1(T2), V2(T2), V3(T2), V1(T3), V2(T3), V3(T3), V1(T4), V2(T4) and V3(T4);
Step 1.3) if the characteristic information selected for T1, T2, T3 and T4 is all different, the characteristic information selected for T1, T2, T3 and T4 forms a set, giving the optimal feature set; if the characteristic information selected for T1, T2, T3 and T4 contains the same characteristic information, delete the shared characteristic information, select for the affected parts the characteristic information corresponding to the fourth-largest average, and judge again whether the re-selected characteristic information of T1, T2, T3 and T4 contains the same characteristic information, proceeding by analogy. If each round of re-selected characteristic information of T1, T2, T3 and T4 still contains shared characteristic information, the characteristic information retained by T1, T2, T3 and T4 forms a set, giving the optimal feature set; if the re-selected characteristic information of T1, T2, T3 and T4 contains no shared characteristic information, the re-selected characteristic information of T1, T2, T3 and T4 forms a set, giving the optimal feature set.
The step 1.1) of obtaining all the characteristic information values of the data points is implemented according to the following steps:
Step 1.1.1) assume the outdoor scene point cloud data is P = {p0, p1, p2, ..., pi, ..., pn}, where i and n are natural numbers and the data point pi belongs to any one of the parts T1, T2, T3, T4; let the k nearest neighbours of the data point pi have spatial coordinates qj(xj, yj, zj), j = 1, 2, ..., k, k ≠ 0; then the covariance matrix of point pi is as follows:

W = (1/k) Σ_{j=1}^{k} (qj - q̄)(qj - q̄)^T (1)

In formula (1), q̄ is the centre of the k neighbours of pi, with

q̄ = (1/k) Σ_{j=1}^{k} qj (2)

W is a positive semi-definite symmetric matrix; its eigenvalues are all non-negative, and the eigenvectors corresponding to different eigenvalues are orthogonal, forming a set of orthonormal bases of the space they lie in, with

W el = λl el, l = 1, 2, 3 (3)

In formula (3), the eigenvalues of W satisfy λ1 ≥ λ2 ≥ λ3 ≥ 0;
Step 1.1.2) let p i Neighborhood k of i From small to large k min To k max And let k be i Is delta k, the best neighborhood value of each point is determined by calculating the Shannon entropy function value according to the following formula (4), and each neighborhood k is solved i Corresponding shannon entropy function value, selecting the minimum value from the shannon entropy function values, and the neighborhood k corresponding to the minimum value i Is p is i The best neighborhood value of.
Entropy=-pro 1D ln(pro 1D )-pro 2D ln(pro 2D )-pro 3D ln(pro 3D ) (4)
In the formula (4), encopy represents Shannon Entropy function, pro 1D Representing linear data points, pro 2D Representing plane data points, pro 3D Representing a columnar data point or a scattered data point;
compare each neighborhood k i Corresponding to the fragrance entropy value, selecting the neighborhood with the minimum fragrance entropy value as the best neighborhood k i ', then order k i '=k i + Δ k, p is calculated by Principal Component Analysis (PCA) i Of the covariance matrix W 1 、e 2 、e 3 And a characteristic value lambda 1 、λ 3 、λ 3 ,;
Step 1.1.3) from the eigenvalues, the total variance, anisotropy, characteristic entropy, sum of eigenvalues, local surface variation, height feature and local point density of the data point pi can be calculated; the normal vector of the data point pi is determined by the eigenvector corresponding to λ3, and the principal direction by the eigenvector corresponding to λ1, thereby obtaining all characteristic information values of the data point.
In formula (4),
pro_1D = (δ_1 − δ_2)/δ_1, pro_2D = (δ_2 − δ_3)/δ_1, pro_3D = δ_3/δ_1   (5)
In formula (5), 1D indicates that the dimensional feature is linear, 2D that it is planar, and 3D that it is scattering; δ_1, δ_2, δ_3 denote the fitting residuals in three orthogonal directions, with
δ_i = √λ_i (i = 1, 2, 3), so that δ_1 ≥ δ_2 ≥ δ_3 ≥ 0.
When δ_1 is far greater than both δ_2 and δ_3, the fitting area has a large fitting residual in only one direction and the point is a linear point; likewise, when δ_1 and δ_2 are both far greater than δ_3, the point is a planar point, and in this case the eigenvector corresponding to λ_3 is the normal vector of the point; when δ_1 ≈ δ_2 ≈ δ_3, the point is a scattered point or a columnar point.
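Steps 1.1.1)–1.1.2) — covariance eigenanalysis of a k-neighborhood, dimensionality probabilities from the residuals δ_i = √λ_i per formula (5), and entropy-minimising neighborhood selection per formula (4) — can be sketched as follows. This is a minimal illustration: the k_min, k_max, Δk defaults and function names are assumptions, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_probs(eigvals):
    """pro_1D, pro_2D, pro_3D from eigenvalues sorted lam1 >= lam2 >= lam3."""
    d1, d2, d3 = np.sqrt(np.maximum(eigvals, 0.0))   # delta_i = sqrt(lambda_i)
    if d1 == 0:
        return np.array([1 / 3, 1 / 3, 1 / 3])
    return np.array([(d1 - d2) / d1, (d2 - d3) / d1, d3 / d1])

def shannon_entropy(probs, eps=1e-12):
    """Formula (4): Entropy = -sum(pro * ln(pro))."""
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def best_neighborhood(points, i, k_min=10, k_max=50, dk=5):
    """Scan k from k_min to k_max in steps dk; return the (k, eigenvalues,
    eigenvectors) minimising the Shannon entropy at point i."""
    tree = cKDTree(points)
    best = None
    for k in range(k_min, k_max + 1, dk):
        _, idx = tree.query(points[i], k=k)
        cov = np.cov(points[idx].T)              # 3x3 covariance matrix W
        vals, vecs = np.linalg.eigh(cov)         # ascending order
        vals, vecs = vals[::-1], vecs[:, ::-1]   # descending: lam1 >= lam2 >= lam3
        H = shannon_entropy(dimensionality_probs(vals))
        if best is None or H < best[0]:
            best = (H, k, vals, vecs)
    _, k, vals, vecs = best
    return k, vals, vecs  # normal = vecs[:, 2], principal direction = vecs[:, 0]
```

Note the probabilities sum to 1 by construction, so the entropy is well defined for any neighborhood size.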
The step 2) is implemented according to the following steps:
obtaining a Gaussian kernel function and parameters thereof through a Support Vector Machine (SVM), constructing a decision tree through a random forest method (RF), inputting an optimal feature set into the decision tree to generate a plurality of training sets, calculating the similarity degree of each point cloud data and each training set through the Gaussian kernel function, and classifying outdoor point cloud scene data through comparing the similarity degrees, wherein the classification specifically comprises linear points, plane points, cylindrical points and scattering points.
The SVM is used to select the kernel function, the kernel parameter g and the optimal penalty parameter c. The tree extraction method for the outdoor point cloud scene adopts a Gaussian kernel function (RBF), whose expression is as follows,
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
the Gaussian kernel function (RBF) maps the initial space into an infinite-dimensional feature space; in the above formula, σ > 0 denotes the bandwidth of the RBF, and x_i, x_j denote the selected sample data points.
To select the optimal kernel parameter g and the optimal penalty parameter c, the tree extraction method for the outdoor point cloud scene finds the c and g with the highest accuracy through cross validation. Among pairs with the same accuracy, the pair (c, g) with the smallest c is selected as the optimal penalty parameter and kernel parameter.
The random forest classification method trains samples on multiple training sets generated by decision trees and predicts over the whole data. A random forest decision tree is constructed with the Iterative Dichotomiser 3 (ID3) algorithm, and the optimal feature set is input into the decision tree to generate multiple labeled training sets of data points. The ID3 algorithm measures attribute selection by information gain, splitting on the attribute whose information gain after splitting is maximal.
The similarity degree between each data point and each training set of data points is calculated through the Gaussian kernel function; by comparing these similarity degrees each point is classified, and the outdoor point cloud scene data are divided into linear points, plane points, cylindrical points and scattering points.
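The classification stage of step 2) — an RBF-kernel SVM with cross-validated penalty c and kernel parameter g, plus a forest of decision trees — might look like the scikit-learn sketch below. Two hedges: scikit-learn's trees use CART rather than the ID3 algorithm named above, and the parameter grid and function name are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

def fit_shape_classifiers(X, y):
    """X: per-point optimal feature set; y: labels for the four classes
    (linear / planar / cylindrical / scattering)."""
    # RBF-kernel SVM; cross validation picks (C, gamma), which play the
    # roles of the penalty parameter c and kernel parameter g in the text.
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                        cv=3)
    grid.fit(X, y)
    # Forest of decision trees trained on bootstrap training sets.
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    return grid.best_estimator_, rf
```

GridSearchCV's tie-breaking does not automatically prefer the smallest C among equally accurate pairs, as the text requires; enforcing that rule would need a manual pass over `grid.cv_results_`.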
The step 3) is implemented according to the following steps:
step 3.1) extracting the classified cylindrical points and scattering points, and clustering the cylindrical points and the scattering points through a spectral clustering algorithm to obtain point sets of the cylindrical points and the scattering points of different types, namely different trunks and tree crowns;
step 3.2) set screening point-number thresholds for the cylindrical points and the scattering points respectively, and remove point sets whose number of points is smaller than the corresponding threshold, i.e., point sets far from the outdoor scene or not belonging to a real trunk, crown, etc., obtaining matchable sets of cylindrical points and scattering points;
step 3.3) matching the corresponding scattering point sets according to the similarity by solving the similarity between the matchable cylindrical point sets and each scattering point set;
step 3.4) if the same scattering point set corresponds to multiple cylindrical point sets, solve the centroid m_i (i = 1, 2, …, n) of each corresponding cylindrical point set, randomly select a data point from the scattering point set, calculate its distance to each centroid m_i, and by comparison assign the data point to the cylindrical point set whose centroid is nearest; repeat in this way until all data points in the scattering point set are assigned, then merge the remaining one-to-one scattering and cylindrical point sets; each merged point set is the point set corresponding to a single tree, completing the single-tree extraction. If no scattering point set corresponds to multiple cylindrical point sets, directly merge the one-to-one scattering and cylindrical point sets; each merged point set is the point set corresponding to a single tree, completing the single-tree extraction.
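The centroid-based splitting and merging of step 3.4) can be sketched as follows (a hypothetical helper, assuming each point set is a NumPy array of shape (m, 3)):

```python
import numpy as np

def split_crown_between_trunks(crown_pts, trunk_sets):
    """Assign each scattering (crown) point to the trunk set whose
    centroid m_i is nearest, then merge trunk and assigned crown points."""
    centroids = np.array([t.mean(axis=0) for t in trunk_sets])  # m_1 .. m_n
    trees = [list(t) for t in trunk_sets]
    for p in crown_pts:
        # distance from the crown point to every trunk centroid
        i = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
        trees[i].append(p)          # merge into the nearest trunk's tree
    return [np.array(t) for t in trees]
```

When a scattering set matches exactly one cylindrical set, this reduces to a plain concatenation of the two arrays, as the text describes.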
The step 3.1) is implemented according to the following steps:
step 3.1.1) denote the extracted and classified cylindrical points and scattering points as clusters Clu_cy and Clu_sc; for clusters Clu_cy and Clu_sc respectively, construct an undirected graph G(V, E), where V is the set of all data points (v_1, v_2, …, v_n) in each cluster, the set E represents the edges between data points in each cluster, and the weight of an edge is the distance value between its two points;
step 3.1.2) define for clusters Clu_cy and Clu_sc a symmetric similarity matrix W = {w_ij}, i, j = 1, …, n, whose entries represent the weight of each edge (i, j) ∈ E, i.e., w_ij is the weight between point v_i and point v_j; because G is an undirected graph, w_ij = w_ji.
Step 3.1.3) Using a fully connected construction, solve for the extracted and classified cylindrical points and scattering points the following degree matrix D,
D = diag(d_1, d_2, …, d_n)   (6)
the diagonal of the degree matrix D in formula (6) indicates that, for any point v_i, its degree d_i is defined as the sum of the weights of all edges connected to it, and
d_i = Σ_(j=1..n) w_ij   (7)
step 3.1.4) the Laplacian matrix is L = D − W; normalizing the Laplacian matrix gives the following formula,
L_sym = D^(−1/2) L D^(−1/2) = I − D^(−1/2) W D^(−1/2)   (8)
According to the Ncut segmentation criterion, solve t eigenvectors of L and normalize them to form a feature matrix R. Regard each row of R as a sample and cluster R with the k-means algorithm: first randomly select the point cloud datum corresponding to one sample, label the point cloud data corresponding to samples with similar features as the same class, and remove the labeled point cloud data from the extracted and classified cylindrical points and scattering points; if no sample with similar features exists, randomly select another sample; continue until every point cloud datum among the extracted and classified cylindrical points and scattering points is labeled, finally obtaining the different classes of cylindrical point sets and scattering point sets.
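Steps 3.1.1)–3.1.4) amount to standard normalized-cut spectral clustering and can be sketched as below. One hedge: the patent weights edges by inter-point distance, while this illustration assumes a Gaussian affinity of the distances; σ, the cluster count, and the function name are assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def spectral_cluster(points, n_clusters, sigma=1.0):
    """Ncut-style spectral clustering sketch of steps 3.1.1-3.1.4."""
    # Fully connected similarity matrix W (Gaussian affinity assumed).
    W = np.exp(-cdist(points, points) ** 2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)                                  # degrees d_i, formula (7)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Normalized Laplacian, formula (8): I - D^(-1/2) W D^(-1/2).
    L_sym = np.eye(len(points)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)
    R = vecs[:, :n_clusters]                           # t smallest eigenvectors
    R = R / np.linalg.norm(R, axis=1, keepdims=True)   # row-normalise
    # Each row of R is a sample; cluster with k-means.
    return KMeans(n_clusters, n_init=10, random_state=0).fit_predict(R)
```

The labels returned index the clusters (e.g., individual trunks or crowns); mapping them back to the original points is a direct index lookup.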
The step 3.3) is implemented according to the following steps:
step 3.3.1) denote the matchable cylindrical point sets and scattering point sets as G_cy^i (i = 1, 2, …, n) and Q_sc^j (j = 1, 2, …, m) respectively; suppose G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) are the centroids of G_cy^i and Q_sc^j respectively, and C_ij is the distance between the centroids G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) in three-dimensional space; then
C_ij = Dist(G_i, Q_j)_3D   (9)
the value of C_ij is normalized to the range 0 to 1;
step 3.3.2) let T_ij represent the distance between the centroids G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) of G_cy^i and Q_sc^j after projection into 2D; then
T_ij = Dist(G_i, Q_j)_2D   (10)
the value of T_ij is normalized to the range 0 to 1;
step 3.3.3) let A_ij represent the probability that the trunk becomes a candidate trunk matched with the crown; A_ij is given by formula (11) [the expression appears only as an image in the original];
Step 3.3.4) Calculate the similarity value Pro(i, j) between each G_cy^i and each Q_sc^j,
Pro(i, j) = 1 − (w_1·Min(C_ij) + w_2·Min(T_ij) + w_3·A_ij)   (12)
In formula (12), w_1, w_2, w_3 denote proportionality coefficients, with w_1 = w_2 = 0.2 and w_3 = 0.6. If Pro(i, j) is greater than 0.9, the corresponding two point sets are matched; if multiple Pro(i, j) values are greater than 0.9 for the same matchable cylindrical point set, the scattering point set with the largest similarity value Pro(i, j) is matched.
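Steps 3.3.1)–3.3.4) can be sketched as below. Two hedges: the expression for A_ij (formula (11)) survives only as an image in the original, so a zero placeholder is used for it, and Min(·) in formula (12) is not defined, so plain max-normalisation of C_ij and T_ij is assumed; the function name is illustrative.

```python
import numpy as np

def match_crowns_to_trunks(trunk_sets, crown_sets, w=(0.2, 0.2, 0.6)):
    """Centroid-based trunk/crown matching sketch of steps 3.3.1-3.3.4."""
    G = np.array([t.mean(axis=0) for t in trunk_sets])   # trunk centroids G_i
    Q = np.array([c.mean(axis=0) for c in crown_sets])   # crown centroids Q_j
    # C_ij: 3-D centroid distance (formula 9); T_ij: 2-D projected (formula 10).
    C = np.linalg.norm(G[:, None, :] - Q[None, :, :], axis=2)
    T = np.linalg.norm(G[:, None, :2] - Q[None, :, :2], axis=2)
    C = C / C.max() if C.max() > 0 else C                # normalise to [0, 1]
    T = T / T.max() if T.max() > 0 else T
    A = np.zeros_like(C)          # placeholder for A_ij (formula 11 unknown)
    Pro = 1 - (w[0] * C + w[1] * T + w[2] * A)           # formula (12)
    return Pro.argmax(axis=1), Pro   # best crown index per trunk, full matrix
```

The > 0.9 acceptance threshold from the text would be a final filter on the returned `Pro` matrix before committing a match.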
The invention relates to a method for extracting trees from an outdoor point cloud scene based on shape classification and combination, which uses an optimal feature set and a classification algorithm to improve the accuracy of outdoor scene classification, and completes the extraction of single trees from the outdoor scene point cloud data through steps of positioning, filtering and matching.

Claims (5)

1. A method for extracting trees in an outdoor point cloud scene based on shape classification and combination is characterized by comprising the following steps:
step 1) obtaining an optimal feature set of an outdoor scene based on the features of the outdoor scene point cloud data;
the step 1) is implemented according to the following steps:
step 1.1) manually selecting four parts — linear data points, plane data points, cylindrical data points and scattering data points — from the outdoor scene point cloud data, marking the four parts of data points in turn as T_1, T_2, T_3 and T_4, and calculating all feature information values of each data point in each selected part, where all feature information comprises the dimensional feature, normal vector, principal direction, eigenvalues, sum of eigenvalues, feature entropy, total variance, anisotropy of the point, local surface variation of the point, local point density, and height of the point;
step 1.2) determining the average values of the feature information values of T_1, arranging them from large to small, and selecting the feature information corresponding to the first three average values after arrangement, denoted V_1(T_1), V_2(T_1), V_3(T_1); obtaining by the same method for T_2, T_3 and T_4 the values V_1(T_2), V_2(T_2), V_3(T_2), V_1(T_3), V_2(T_3), V_3(T_3), V_1(T_4), V_2(T_4), V_3(T_4);
Step 1.3) if the feature information selected for T_1, T_2, T_3 and T_4 is all different, forming a set from the selected feature information to obtain the optimal feature set; if the same feature information appears among the selections for T_1, T_2, T_3 and T_4, deleting the duplicated feature information, selecting for the parts concerned the feature information corresponding to the fourth-ranked average value after arrangement, and judging again whether the reselected feature information for T_1, T_2, T_3 and T_4 contains the same feature information, repeating in the same way; if the same feature information remains after each reselection, forming a set from the feature information retained for T_1, T_2, T_3 and T_4 to obtain the optimal feature set; if the reselected feature information contains no duplicates, forming a set from the reselected feature information for T_1, T_2, T_3 and T_4 to obtain the optimal feature set;
step 2) classifying the outdoor scene point cloud data according to the optimal feature set, wherein the classification specifically comprises linear points, plane points, cylindrical points and scattering points;
the step 2) is specifically that a Gaussian kernel function and parameters thereof are obtained through a support vector machine, a decision tree is constructed through a random forest method, the optimal feature set is input into the decision tree to generate a plurality of training sets, the similarity degree of each point cloud data and each training set is calculated through the Gaussian kernel function, outdoor point cloud scene data are classified through comparing the similarity degrees, and the classification specifically comprises linear points, plane points, cylindrical points and scattering points;
step 3) since the outdoor scene point cloud data corresponding to tree trunks and tree crowns are cylindrical points and scattering points respectively, extracting the classified cylindrical points and scattering points, and completing the extraction of single trees from the outdoor scene point cloud data by positioning, filtering, matching and optimizing the cylindrical points and scattering points;
the step 3) is implemented according to the following steps:
step 3.1) extracting the classified cylindrical points and scattering points, and clustering the cylindrical points and the scattering points through a spectral clustering algorithm to obtain point sets of the cylindrical points and the scattering points of different types, namely different trunks and tree crowns;
step 3.2) setting screening point-number thresholds for the cylindrical points and the scattering points respectively, and removing point sets whose number of points is smaller than the corresponding threshold, i.e., point sets far from the outdoor scene or not belonging to a real trunk, crown, etc., obtaining matchable sets of cylindrical points and scattering points;
step 3.3) matching the corresponding scattering point sets according to the similarity by solving the similarity between the matchable cylindrical point sets and each scattering point set;
step 3.4) if the same scattering point set corresponds to multiple cylindrical point sets, solving the centroid m_i (i = 1, 2, …, n) of each corresponding cylindrical point set, randomly selecting a data point from the scattering point set, calculating its distance to each centroid m_i, and by comparison assigning the data point to the cylindrical point set whose centroid is nearest; repeating in this way until all data points in the scattering point set are assigned, then merging the remaining one-to-one scattering and cylindrical point sets, each merged point set being the point set corresponding to a single tree, completing the single-tree extraction; if no scattering point set corresponds to multiple cylindrical point sets, directly merging the one-to-one scattering and cylindrical point sets, each merged point set being the point set corresponding to a single tree, completing the single-tree extraction.
2. The method for extracting trees in outdoor point cloud scene based on shape classification and combination as claimed in claim 1, wherein the step 1.1) of obtaining all the characteristic information values of the data points is implemented according to the following steps:
step 1.1.1) assume the outdoor scene point cloud data P = {p_0, p_1, p_2, …, p_i, …, p_n}, n a natural number; the data point p_i belongs to any one of T_1, T_2, T_3, T_4, and the k neighboring points of the data point p_i have spatial coordinates q_j(x_j, y_j, z_j), j = 1, 2, …, k, k ≠ 0; then the covariance matrix of the point p_i is as follows:
W = (1/k) Σ_(j=1..k) (q_j − q̄)(q_j − q̄)^T   (1)
In formula (1), q̄ is the center of the k neighboring points of p_i, and
q̄ = (1/k) Σ_(j=1..k) q_j   (2)
W is a positive semi-definite symmetric matrix; its eigenvalues are all non-negative, and the eigenvectors corresponding to different eigenvalues are orthogonal, forming a set of unit orthogonal bases of the space in which they lie, with
W·e_l = λ_l·e_l, l = 1, 2, 3   (3)
In formula (3), the eigenvalues of W satisfy λ_1 ≥ λ_2 ≥ λ_3 ≥ 0;
Step 1.1.2) Let the neighborhood k_i of p_i increase from small to large, from k_min to k_max, with step size Δk. The best neighborhood value of each point is determined by computing the Shannon entropy according to the following formula (4): solve the Shannon entropy for each neighborhood k_i, select the minimum among these values, and the neighborhood k_i corresponding to that minimum is the best neighborhood value of p_i.
Entropy = -pro_1D·ln(pro_1D) - pro_2D·ln(pro_2D) - pro_3D·ln(pro_3D)   (4)
In formula (4), Entropy denotes the Shannon entropy function; pro_1D denotes the probability that the point is a linear data point, pro_2D that it is a planar data point, and pro_3D that it is a columnar or scattered data point;
compare the Shannon entropy value corresponding to each neighborhood k_i, letting k_i = k_i + Δk at each step, and select the neighborhood with the minimum Shannon entropy as the best neighborhood k_i'; then calculate the eigenvectors e_1, e_2, e_3 and eigenvalues λ_1, λ_2, λ_3 of the covariance matrix W of p_i by the principal component analysis algorithm;
Step 1.1.3) From the eigenvalues, the total variance, anisotropy, feature entropy, sum of eigenvalues, local surface variation, height feature and local surface density of the data point p_i can be calculated; the normal vector of the data point p_i can be determined by the eigenvector corresponding to λ_3, and the principal direction by the eigenvector corresponding to λ_1, thereby obtaining all feature information values of the data point.
3. The method for extracting trees from an outdoor point cloud scene based on shape classification and combination as claimed in claim 2, wherein in formula (4),
pro_1D = (δ_1 − δ_2)/δ_1, pro_2D = (δ_2 − δ_3)/δ_1, pro_3D = δ_3/δ_1   (5)
In formula (5), 1D indicates that the dimensional feature is linear, 2D that it is planar, and 3D that it is scattering; δ_1, δ_2, δ_3 denote the fitting residuals in three orthogonal directions, with
δ_i = √λ_i (i = 1, 2, 3), so that δ_1 ≥ δ_2 ≥ δ_3 ≥ 0.
When δ_1 is far greater than both δ_2 and δ_3, the fitting area has a large fitting residual in only one direction and the point is a linear point; likewise, when δ_1 and δ_2 are both far greater than δ_3, the point is a planar point, and in this case the eigenvector corresponding to λ_3 is the normal vector of the point; when δ_1 ≈ δ_2 ≈ δ_3, the point is a scattered point or a columnar point.
4. The method for extracting trees from outdoor point cloud scene based on shape classification and combination as claimed in claim 1, wherein the step 3.1) is implemented according to the following steps:
step 3.1.1) denote the extracted and classified cylindrical points and scattering points as clusters Clu_cy and Clu_sc; for clusters Clu_cy and Clu_sc respectively, construct an undirected graph G(V, E), where V is the set of all data points (v_1, v_2, …, v_n) in each cluster, the set E represents the edges between data points in each cluster, and the weight of an edge is the distance value between its two points;
step 3.1.2) define for clusters Clu_cy and Clu_sc a symmetric similarity matrix W = {w_ij}, i, j = 1, …, n, whose entries represent the weight of each edge (i, j) ∈ E, i.e., w_ij is the weight between point v_i and point v_j; because G is an undirected graph, w_ij = w_ji.
Step 3.1.3) Using a fully connected construction, solve for the extracted and classified cylindrical points and scattering points the following degree matrix D,
D = diag(d_1, d_2, …, d_n)   (6)
the diagonal of the degree matrix D in formula (6) indicates that, for any point v_i, its degree d_i is defined as the sum of the weights of all edges connected to it, and
d_i = Σ_(j=1..n) w_ij   (7)
step 3.1.4) the Laplacian matrix is L = D − W; normalizing the Laplacian matrix gives the following formula,
L_sym = D^(−1/2) L D^(−1/2) = I − D^(−1/2) W D^(−1/2)   (8)
According to the Ncut segmentation criterion, solve t eigenvectors of L and normalize them to form a feature matrix R. Regard each row of R as a sample and cluster R with the k-means algorithm: first randomly select the point cloud datum corresponding to one sample, label the point cloud data corresponding to samples with similar features as the same class, and remove the labeled point cloud data from the extracted and classified cylindrical points and scattering points; if no sample with similar features exists, randomly select another sample; continue until every point cloud datum among the extracted and classified cylindrical points and scattering points is labeled, finally obtaining the different classes of cylindrical point sets and scattering point sets.
5. The method for extracting trees from outdoor point cloud scene based on shape classification and combination as claimed in claim 1, wherein the step 3.3) is implemented according to the following steps:
step 3.3.1) denote the matchable cylindrical point sets and scattering point sets as G_cy^i (i = 1, 2, …, n) and Q_sc^j (j = 1, 2, …, m) respectively; suppose G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) are the centroids of G_cy^i and Q_sc^j respectively, and C_ij is the distance between the centroids G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) in three-dimensional space; then
C_ij = Dist(G_i, Q_j)_3D   (9)
the value of C_ij is normalized to the range 0 to 1;
step 3.3.2) let T_ij represent the distance between the centroids G_i(x_i, y_i, z_i) and Q_j(x_j, y_j, z_j) of G_cy^i and Q_sc^j after projection into 2D; then
T_ij = Dist(G_i, Q_j)_2D   (10)
the value of T_ij is normalized to the range 0 to 1;
step 3.3.3) let A_ij represent the probability that the trunk becomes a candidate trunk matched with the crown; A_ij is given by formula (11) [the expression appears only as an image in the original];
Step 3.3.4) Calculate the similarity value Pro(i, j) between each G_cy^i and each Q_sc^j,
Pro(i, j) = 1 − (w_1·Min(C_ij) + w_2·Min(T_ij) + w_3·A_ij)   (12)
In formula (12), w_1, w_2, w_3 denote proportionality coefficients, with w_1 = w_2 = 0.2 and w_3 = 0.6. If Pro(i, j) is greater than 0.9, the corresponding two point sets are matched; if multiple Pro(i, j) values are greater than 0.9 for the same matchable cylindrical point set, the scattering point set with the largest similarity value Pro(i, j) is matched.
CN201910481805.5A 2019-06-04 2019-06-04 Method for extracting trees in outdoor point cloud scene based on shape classification and combination Active CN110348478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910481805.5A CN110348478B (en) 2019-06-04 2019-06-04 Method for extracting trees in outdoor point cloud scene based on shape classification and combination


Publications (2)

Publication Number Publication Date
CN110348478A CN110348478A (en) 2019-10-18
CN110348478B true CN110348478B (en) 2022-10-11

Family

ID=68181495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910481805.5A Active CN110348478B (en) 2019-06-04 2019-06-04 Method for extracting trees in outdoor point cloud scene based on shape classification and combination

Country Status (1)

Country Link
CN (1) CN110348478B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986223B (en) * 2020-07-15 2024-02-06 西安理工大学 Method for extracting trees in outdoor point cloud scene based on energy function
CN112347894B (en) * 2020-11-02 2022-05-20 东华理工大学 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877128A (en) * 2009-12-23 2010-11-03 中国科学院自动化研究所 Method for segmenting different objects in three-dimensional scene
CN104463856A (en) * 2014-11-25 2015-03-25 大连理工大学 Outdoor scene three-dimensional point cloud data ground extraction method based on normal vector ball
WO2015149302A1 (en) * 2014-04-02 2015-10-08 中国科学院自动化研究所 Method for rebuilding tree model on the basis of point cloud and data driving

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101877128A (en) * 2009-12-23 2010-11-03 中国科学院自动化研究所 Method for segmenting different objects in three-dimensional scene
WO2015149302A1 (en) * 2014-04-02 2015-10-08 中国科学院自动化研究所 Method for rebuilding tree model on the basis of point cloud and data driving
CN104463856A (en) * 2014-11-25 2015-03-25 大连理工大学 Outdoor scene three-dimensional point cloud data ground extraction method based on normal vector ball

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on single-tree extraction methods based on LiDAR point clouds; Lin Yi et al.; Computer Measurement & Control; 2017-06-25 (No. 06); full text *
Semi-automatic 3D point cloud classification with probabilistic mixtures of local shape features; Li Hongjun et al.; Journal of Zhejiang University (Science Edition); 2017-01-15 (No. 01); full text *

Also Published As

Publication number Publication date
CN110348478A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
Guo et al. Classification of airborne laser scanning data using JointBoost
Yang et al. An individual tree segmentation method based on watershed algorithm and three-dimensional spatial distribution analysis from airborne LiDAR point clouds
Zhong et al. Segmentation of individual trees from TLS and MLS data
Kim et al. 3D classification of power-line scene from airborne laser scanning data using random forests
CN110992341A (en) Segmentation-based airborne LiDAR point cloud building extraction method
CN108510516A (en) A kind of the three-dimensional line segment extracting method and system of dispersion point cloud
JP6621445B2 (en) Feature extraction device, object detection device, method, and program
CN104091321A (en) Multi-level-point-set characteristic extraction method applicable to ground laser radar point cloud classification
CN112070769A (en) Layered point cloud segmentation method based on DBSCAN
CN112347894B (en) Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
CN115205690B (en) Method and device for extracting street tree in monomer mode based on MLS point cloud data
JP2012088796A (en) Image area division device, image area division method, and image area division program
CN110348478B (en) Method for extracting trees in outdoor point cloud scene based on shape classification and combination
CN106874421A (en) Image search method based on self adaptation rectangular window
Li et al. A branch-trunk-constrained hierarchical clustering method for street trees individual extraction from mobile laser scanning point clouds
Liu et al. A novel rock-mass point cloud registration method based on feature line extraction and feature point matching
CN111860359B (en) Point cloud classification method based on improved random forest algorithm
CN113988198A (en) Multi-scale city function classification method based on landmark constraint
Xu et al. Instance segmentation of trees in urban areas from MLS point clouds using supervoxel contexts and graph-based optimization
CN112070787B (en) Aviation three-dimensional point cloud plane segmentation method based on opponent reasoning theory
CN113724400A (en) Oblique photography-oriented multi-attribute fusion building point cloud extraction method
CN111414958B (en) Multi-feature image classification method and system for visual word bag pyramid
CN110458111B (en) LightGBM-based rapid extraction method for vehicle-mounted laser point cloud power line
Sun et al. A study on the classification of vegetation point cloud based on random forest in the straw checkerboard barriers area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant