CN116597199A - Point cloud tree classification method and system based on airborne LiDAR - Google Patents

Point cloud tree classification method and system based on airborne LiDAR

Info

Publication number
CN116597199A
CN116597199A (application CN202310398248.7A)
Authority
CN
China
Prior art keywords
point
constructing
classification
point cloud
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310398248.7A
Other languages
Chinese (zh)
Inventor
王健
张振羽
赵游龙
王凯睿
李志远
齐智宇
王政辉
刘艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Science and Technology
Original Assignee
Shandong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Science and Technology filed Critical Shandong University of Science and Technology
Priority to CN202310398248.7A priority Critical patent/CN116597199A/en
Publication of CN116597199A publication Critical patent/CN116597199A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0499Feedforward networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Remote Sensing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud tree species classification method and system based on airborne LiDAR, wherein the method comprises the following steps: performing ground filtering on the original point cloud to be processed to obtain the ground points of a target area; removing the ground points to obtain a non-ground point cloud, and performing elevation normalization on the non-ground point cloud to obtain a forest point cloud; segmenting the forest point cloud to obtain single-tree point cloud data; constructing a training data set and a test data set based on the single-tree point cloud data; slicing the single-tree point cloud data, and obtaining feature vectors based on the sliced single-tree point cloud data; and constructing a classification neural network, training the classification neural network with the training data set, and classifying the tree species of the test data set with the trained network to obtain a classification result. The method can mine deep feature differences from a small number of samples, and shows clear advantages in resisting intra-species sample feature noise, increasing inter-species sample feature variance and the like.

Description

Point cloud tree classification method and system based on airborne LiDAR
Technical Field
The invention belongs to the technical field of airborne LiDAR point cloud classification, and particularly relates to a point cloud tree classification method and system based on airborne LiDAR.
Background
Forest resources are an important natural resource of the earth and form the terrestrial ecosystem with the largest area and biomass, playing important roles in conserving water sources, preventing wind, fixing sand and regulating climate. The identification and classification of tree species are of great significance for forestry resource statistics, the study of ecosystem diversity and even research on the laws of biological evolution, so accurate identification and statistics of tree species in forest areas are of great importance. With the rapid development of digital information technology in recent years, digital forestry based on remote sensing has greatly facilitated the sustainable management of forest resources. Aerial photography, as a non-contact remote sensing technique, can rapidly acquire the horizontal structure of forests over a large range. Early tree species identification was mostly completed by expert visual interpretation of aerial images together with on-site manual surveys, but the high time and financial cost prevented it from becoming universally applicable. Hyperspectral imaging can detect many substances that cannot be detected in optical images acquired over a broad band. In high-resolution hyperspectral imagery, the radiance in different wavelength regions reflects the biotic and abiotic properties of a tree group well. This advantage has allowed the potential of hyperspectral images to be fully exploited in dividing forest types, drawing forest stand maps and the like. However, as the problems arising in experiments were studied in depth, the limitations of hyperspectral imaging itself gradually emerged.
Different tree species of the same family and genus may have very similar spectral characteristics; alternatively, when the canopy density of a stand is too high, excessive occlusion and differences in crown shape can cause large differences in the spectral characteristics of intra-species samples. These problems can significantly reduce classification accuracy.
In studies of high-resolution imagery, researchers have noted the potential of the characterization parameters of canopy contours for tree species classification. However, imagery can only depict the horizontal distribution of a stand and cannot provide vertical profile information. LiDAR, as an advanced active remote sensing technology, has a better ability to describe canopy structure. In particular, airborne LiDAR has long shown great potential in forestry-related mapping. Through the efforts of many scholars, more and more characterization parameters have been proposed to aid tree species classification, mainly falling into two major categories: geometric features and radiometric features. Geometric features typically include canopy height information, density information and structural scale parameters, while radiometric features mainly include single/multi-channel reflection intensity information and echo type features. With the development of deep learning, tree species classification methods that extract deep features with neural networks have become increasingly common. In particular, the PointNet family, as neural networks that directly process point clouds, can provide significant accuracy gains in tree species classification.
Existing algorithms for tree species classification based on three-dimensional laser point clouds mainly comprise deep learning methods that input the point cloud directly into a neural network for classification, and feature engineering methods that extract feature vectors and then classify with traditional machine learning. From the perspective of the classifier, the former is limited by the requirement of consistent input dimensions: each single tree must be downsampled, which may lose single-tree features, and the sampling number is difficult to determine for data sets with large differences in single-tree morphology and point cloud density. Moreover, the features of a point cloud are implicit, and extracting features directly from it often imposes a heavy computational load on the network. The latter is limited by classifier performance and cannot extract deep features. From the perspective of the sample, both approaches establish feature extraction of the single-tree sample at the whole-tree level, while factors such as terrain relief, varying laser scanning angles and flight altitude changes during airborne LiDAR scanning increase the feature noise of intra-species samples at different vertical height levels. In addition, in field applications, the structure and physical properties of the canopy within the same tree species may be highly heterogeneous due to numerous environmental factors such as terrain relief and seasonal variation. When classification features are extracted from the laser point cloud with the whole single tree taken directly as the feature extraction unit, the class-imbalance problem (Class-imbalance Problem) easily occurs.
Disclosure of Invention
The invention aims to solve the problem that, in practical applications, factors such as scanning conditions and geographical environment cause large feature noise in the same sample at different vertical height layers, which limits classification accuracy. A deeply supervised network for tree species classification, DSTCN (Deeply Supervised Tree Classification Network), is provided. DSTCN receives histogram feature descriptors of single-tree slices as input vectors and combines the height and intensity information of the slices to give different attention to the features of different single-tree slices, so that slice features with different information gains are used more reasonably. This effectively overcomes the accuracy limitation caused by feature noise at different vertical height layers, achieving the goal of improving overall classification accuracy.
In order to achieve the above object, the present invention provides the following solutions: the point cloud tree classification method based on the airborne LiDAR comprises the following steps:
s1, carrying out ground filtering on an original point cloud to be processed to obtain ground points of a target area; removing the ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
s2, segmenting the forest point cloud to obtain single-tree point cloud data; constructing a training data set and a test data set based on the single-tree point cloud data;
s3, slicing the single-tree point cloud data, and obtaining feature vectors based on the sliced single-tree point cloud data;
s4, constructing a classification neural network, training the classification neural network by adopting the training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
Preferably, the method for obtaining the ground points of the target area in S1 comprises:
rasterizing the target area;
selecting the lowest point of each grid cell as a seed point, and constructing an initial triangulated irregular network based on the seed points;
calculating the included angle and the distance formed between each original point and the vertices of its nearest triangle;
setting an iteration angle threshold and an iteration distance threshold, taking the original points whose included angle and distance satisfy the thresholds as ground points, and updating the triangulated network;
repeating the above operations to obtain all the ground points in the target area.
Preferably, S3 comprises:
slicing the single-tree point cloud data;
constructing a spherical neighborhood space for each point, and constructing a covariance matrix from the neighborhood point set to obtain eigenvalues;
calculating, based on the eigenvalues, the degree values to which the local structure conforms to point-like, linear and spherical distributions, together with the neighborhood point count and the mean reflection intensity;
constructing a histogram feature descriptor based on the point-like, linear and spherical structure degree values, the neighborhood point count and the mean reflection intensity;
and obtaining the feature vector based on the histogram feature descriptors.
Preferably, the S4 includes:
constructing the classified neural network;
extracting the average reflection intensity and the average height of each slice, and constructing an intensity sequence and a height sequence;
and inputting the intensity sequence, the height sequence and the feature vector into the classification neural network to obtain a classification result.
Preferably, the classification neural network includes:
a sequence weighting module, a weighted encoding structure and a depth feed forward structure;
the sequence weighting module comprises: a DFF, a Softmax weight normalization function and a Concat feature aggregation unit;
the weighted encoding structure includes: a multi-head attention module, a DFF, and a LayerNorm layer normalization unit;
the depth feed forward structure includes: a Linear fully connected layer, a Mish activation function and a BatchNorm batch normalization unit.
The invention also provides a point cloud tree species classification system based on airborne LiDAR, which comprises: a filtering unit, a segmentation unit, a slicing unit and a classification unit;
the filtering unit is used for carrying out ground filtering on the original point cloud to be processed to obtain ground points of the target area; removing the ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
the segmentation unit is used for segmenting the forest point cloud to obtain single-tree point cloud data, and for constructing a training data set and a test data set based on the single-tree point cloud data;
the slicing unit is used for slicing the single-tree point cloud data and obtaining feature vectors based on the sliced single-tree point cloud data;
the classification unit is used for constructing a classification neural network, training the classification neural network by adopting the training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
Preferably, the method by which the filtering unit obtains the ground points comprises:
rasterizing the target area;
selecting the lowest point of each grid cell as a seed point, and constructing an initial triangulated irregular network based on the seed points;
calculating the included angle and the distance formed between each original point and the vertices of its nearest triangle;
setting an iteration angle threshold and an iteration distance threshold, taking the original points whose included angle and distance satisfy the thresholds as ground points, and updating the triangulated network;
repeating the above operations to obtain all the ground points in the target area.
Preferably, the method by which the slicing unit obtains the feature vector comprises:
slicing the single-tree point cloud data;
constructing a spherical neighborhood space for each point, and constructing a covariance matrix from the neighborhood point set to obtain eigenvalues;
calculating, based on the eigenvalues, the degree values to which the local structure conforms to point-like, linear and spherical distributions, together with the neighborhood point count and the mean reflection intensity;
constructing a histogram feature descriptor based on the point-like, linear and spherical structure degree values, the neighborhood point count and the mean reflection intensity;
and obtaining the feature vector based on the histogram feature descriptors.
Preferably, the method for obtaining the classification result by the classification unit comprises the following steps:
constructing the classified neural network;
extracting the average reflection intensity and the average height of each slice, and constructing an intensity sequence and a height sequence;
and inputting the intensity sequence, the height sequence and the feature vector into the classification neural network to obtain a classification result.
Preferably, the classification neural network includes:
a sequence weighting module, a weighted encoding structure and a depth feed forward structure;
the sequence weighting module comprises: a DFF, a Softmax weight normalization function and a Concat feature aggregation unit;
the weighted encoding structure includes: a multi-head attention module, a DFF, and a LayerNorm layer normalization unit;
the depth feed forward structure includes: linear full-connection layer, mish activation function and BatchNorm batch normalization unit
Compared with the prior art, the invention has the beneficial effects that:
in the invention, the DSTCN combines the height and intensity information of the slice, so that different attention is given to the characteristics of different single-wood slices, the slice characteristics of different information gains are more reasonably utilized, and the accuracy limitation caused by characteristic noise of different vertical height layers can be effectively solved. Compared with two common methods based on a deep network Point Net++ and a traditional machine learning SVM respectively, in the method, in the multiple experimental results, the Macro Average F-score (MAF) is 4-14 percent higher than the Point Net++ and 5-10 percent higher than the SVM; the Kappa coefficient is 6-18 percent higher than PointNet++, and 7-13 percent higher than SVM; the classification accuracy standard deviation (Precision Interclass Standard Deviation, PISD), recall standard deviation (Recall Interclass Standard Deviation, RISD) and F-score standard deviation (F-score Interclass Standard Deviation, FISD) among tree species have unique minimum values. As training samples are continuously reduced, compared with a comparison method, the method provided by the invention is always stable and keeps higher precision. Through analysis of qualitative experimental results, the method provided by the invention has the capability of excavating depth characteristic differences from a small number of training samples, and has higher superiority in the aspects of resisting intra-seed sample characteristic noise, increasing inter-seed sample characteristic variance and the like.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings that are needed in the embodiments are briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method according to an embodiment of the invention;
FIG. 2 is a schematic drawing of single-wood sample feature extraction according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of obtaining single-point characteristics from a single-wood sample according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a DSTCN architecture of a deep supervision network according to an embodiment of the present invention;
FIG. 4 (a) is a DSTCN architecture; FIG. 4 (b) shows SWM structure; fig. 4 (c) is a Transformer Encoder structure; fig. 4 (d) is a DFF structure;
FIG. 5 is a statistical chart of the parameter information of the experimental tree species according to an embodiment of the present invention;
FIG. 5 (a) is the tree height; FIG. 5 (b) is the crown width; FIG. 5 (c) is the average density (the density value is the number of points per cubic meter); FIG. 5 (d) is the average reflection intensity;
fig. 6 is a schematic diagram of a system structure according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1
Referring to fig. 1, a flow chart of a method for classifying the airborne LiDAR point cloud tree species according to the present embodiment is shown, which includes the following steps:
s1, carrying out ground filtering on an original point cloud to be processed to obtain ground points of a target area; removing ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
in the embodiment, the ground is filtered by adopting a progressive irregular triangular network encryption ground filtering algorithm, and in particular,
first, the target area is rasterized with a resolution of 2 m;
a morphological opening operation is used to select the lowest original point in each grid cell as a seed point, and an initial triangulated network of the whole target area is constructed based on the seed points;
then, the included angle alpha and the distance d formed between each original point and the vertices of its nearest triangle are calculated;
the iteration angle threshold is set to 10 degrees and the iteration distance threshold to 1.2 m; if alpha and d satisfy the thresholds, the point is taken as a ground point and the triangulated network is updated;
repeating the above operation until all the ground points in the target area are obtained.
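The angle-and-distance test at the heart of the progressive TIN densification steps above can be sketched as follows. The 10-degree and 1.2 m thresholds mirror the embodiment; the function name and the simplified single-triangle test are illustrative, not from the patent:

```python
import numpy as np

def point_to_triangle_test(p, tri, angle_thresh_deg=10.0, dist_thresh=1.2):
    """Return True if point p qualifies as a ground point w.r.t. triangle tri.

    tri: (3, 3) array of triangle vertex coordinates.
    Mirrors the embodiment's criteria: the distance d from p to the triangle's
    plane and the largest angle alpha between the plane and the lines joining
    p to each vertex must both fall below the thresholds.
    """
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)          # unit normal of the triangle plane
    d = abs(np.dot(p - a, n))          # perpendicular distance to the plane
    angles = []
    for v in tri:                      # angle between plane and vertex-to-point line
        w = p - v
        sin_alpha = abs(np.dot(w, n)) / np.linalg.norm(w)
        angles.append(np.degrees(np.arcsin(min(1.0, sin_alpha))))
    return d <= dist_thresh and max(angles) <= angle_thresh_deg
```

In the full algorithm this test would run against the nearest triangle of the current TIN, and accepted points are inserted into the triangulation before the next iteration.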
All the obtained ground points are removed to obtain the non-ground point cloud, and elevation normalization is applied to the non-ground point cloud to obtain the forest point cloud.
In this embodiment, a DEM is produced by kriging interpolation, and elevation normalization is performed by differencing the non-ground point cloud against it. In particular,
first, the n(n−1)/2 pairwise distance values between the n ground points are computed, sorted from small to large, and divided into groups; the distance value of each group is Distance_i, and the ground points within a group are denoted x_j.
The average distance of each group is substituted into the semivariance formula to obtain the experimental semivariance Semivariance_i for that group:
Semivariance_i = (1 / (2·N(Distance_i))) · Σ_{j=1}^{m} [ Z(x_j) − Z(x_j + Distance_i) ]²
where Z(x_j) is the elevation (regionalized variable) value at ground point x_j, and N(Distance_i) = m is the number of ground-point pairs in the group at that distance value.
A distance-semivariance scatter plot is drawn, a variogram curve is fitted, and the nugget value (Nugget), sill value (Sill) and range value (Range) are solved to construct the spherical fitting function:
γ(h) = Nugget + Sill · (3h / (2·Range) − h³ / (2·Range³)) for 0 < h ≤ Range, and γ(h) = Nugget + Sill for h > Range.
A DEM with 1 m resolution is then generated by interpolation with the spherical fitting function. The non-ground point cloud is rasterized at 1 m resolution and differenced with the DEM to obtain the forest point cloud.
S2, segmenting the forest point cloud to obtain single-tree point cloud data; constructing a training data set and a test data set based on the single-tree point cloud data;
In this embodiment, professional software is used to segment the forest point cloud into single-tree point clouds; the samples are then curated, and a training data set and a test data set are constructed.
S3, slicing the single-tree point cloud data, and obtaining feature vectors based on the sliced single-tree point cloud data;
The segmented single-tree point cloud data are sliced uniformly in the vertical direction, and the same feature engineering is applied to each slice to extract the feature vector. In particular,
first, the single-tree point cloud data are sliced with an equal step size; in this embodiment, into 20 layers;
A spherical neighborhood space is constructed for each point, and a covariance matrix is built from the neighborhood point set to obtain the eigenvalues λ1, λ2 and λ3.
From the eigenvalues, the degree values DA1, DA2 and DA3 to which the local structure conforms to point-like, linear and spherical distributions are calculated, together with the neighborhood point count and the mean reflection intensity I_mean, as shown in fig. 3;
wherein DA1, DA2 and DA3 are calculated from the eigenvalues λ1, λ2 and λ3.
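As a hedged sketch of this eigenvalue step: the patent gives the exact DA1/DA2/DA3 formulas only in its figures, so the widely used covariance dimensionality features (linearity, planarity, sphericity) are shown here as stand-ins with the same intent; the function name is illustrative:

```python
import numpy as np

def local_shape_features(neighborhood):
    """Covariance eigen-features of a spherical neighborhood (k x 3 array).

    Assumption: DA1/DA2/DA3 are replaced by the common dimensionality
    features linearity, planarity and sphericity computed from the sorted
    eigenvalues of the neighborhood covariance matrix.
    """
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov(pts.T)                            # 3x3 covariance of the point set
    lam = np.linalg.eigvalsh(cov)[::-1]            # lambda1 >= lambda2 >= lambda3
    l1, l2, l3 = np.maximum(lam, 1e-12)            # guard against zero eigenvalues
    linearity = (l1 - l2) / l1                     # degree of linear structure
    planarity = (l2 - l3) / l1                     # degree of planar structure
    sphericity = l3 / l1                           # degree of spherical structure
    return linearity, planarity, sphericity
```

For a point on a branch the linearity term dominates, while a point inside dense foliage yields a high sphericity, which is the kind of contrast the slice histograms capture.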
for each slice, constructing a histogram feature descriptor based on the five features of the local obeying point-like, linear and spherical structures, namely the neighborhood point-like and the reflection intensity mean value;
in this embodiment, the number of statistical intervals in the histogram is 128, and the upper and lower statistical limits are the upper and lower limits of the feature in the whole single wood. There are 5 x 20 histogram feature descriptors per single wood sample.
And splicing all the histogram descriptors on a one-dimensional level to obtain the feature vector of the single wood. As shown in fig. 2.
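The slicing-and-histogram construction described above (20 equal-height layers, 5 per-point features, 128 bins, limits taken over the whole tree) can be sketched as follows; the function name and array layout are illustrative:

```python
import numpy as np

def single_tree_feature_vector(points, feats, n_slices=20, n_bins=128):
    """Slice a height-normalized single-tree cloud into equal-height layers
    and build per-slice histogram descriptors for each per-point feature.

    points: (N, 3) xyz coordinates; feats: (N, F) per-point features
    (here F = 5: three structure degree values, neighbor count, intensity).
    Histogram limits are taken over the whole tree, as in the embodiment.
    """
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max() + 1e-9, n_slices + 1)
    layer = np.digitize(z, edges) - 1              # slice index for each point
    descriptors = []
    for s in range(n_slices):
        in_slice = feats[layer == s]
        for f in range(feats.shape[1]):
            lo, hi = feats[:, f].min(), feats[:, f].max()
            hist, _ = np.histogram(in_slice[:, f], bins=n_bins,
                                   range=(lo, hi + 1e-9))
            descriptors.append(hist)
    return np.concatenate(descriptors)             # length n_slices * F * n_bins
```

Each point contributes to exactly one slice, so the counts in the concatenated vector sum to N times the number of features.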
S4, constructing a classification neural network, training the classification neural network by adopting a training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
As shown in fig. 5, the parameter information of the experimental tree species in this embodiment: specifically, fig. 5 (a) is the tree height; fig. 5 (b) is the crown width; fig. 5 (c) is the average density (the density value is the number of points per cubic meter); and fig. 5 (d) is the average reflection intensity.
Firstly, constructing a classified neural network; fig. 4 is a schematic diagram of the structure of the classification neural network according to the present embodiment.
In this embodiment, the network architecture mainly consists of a Weighted Encoder (WE), formed by connecting a Sequence Weighted Module (SWM) and a Transformer Encoder (TE) through three multi-layer perceptrons (Multilayer Perceptron, MLP), and a Deep Feed Forward structure (DFF), as shown in fig. 4 (a). The DFF comprises a Linear fully connected layer, a Mish activation function and a BatchNorm batch normalization unit, as shown in fig. 4 (d). The SWM contains a DFF, a Softmax weight normalization function and a Concat feature aggregation unit, as shown in fig. 4 (b). The TE contains a Multi-Head Attention module (MHA), a DFF and a LayerNorm layer normalization unit, as shown in fig. 4 (c). Two WEs, cascaded in sequence, add intensity attention and height attention, and within each WE the combination of SWM and TE distributes multi-scale attention over the features. A trunk classifier is formed by a DFF, Softmax and a cross-entropy loss function (Cross-entropy Loss), and the backbone network is completed by adding, after the first WE, an auxiliary branch composed of a DFF, BatchNorm and a classifier to provide intermediate supervision (Intermediate Supervision, IS).
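As one illustration of the DFF building block (Linear → Mish → BatchNorm), here is a minimal NumPy sketch with inference-style batch statistics; the function names and parameters are illustrative, not values from the patent:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

def dff_block(x, w, b, eps=1e-5):
    """One DFF stage as described in the text: Linear layer -> Mish ->
    batch normalization (here computed directly over the batch, without
    learned scale/shift, for simplicity)."""
    h = x @ w + b                                  # Linear fully connected layer
    h = mish(h)                                    # Mish activation
    mu, var = h.mean(axis=0), h.var(axis=0)        # per-feature batch statistics
    return (h - mu) / np.sqrt(var + eps)           # normalized activations
```

A real implementation would keep running statistics and learnable affine parameters, as BatchNorm layers in deep learning frameworks do.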
Then, the average reflection intensity and the average height of each slice are extracted to construct an intensity sequence (Intensity) and a height sequence (Height);
and inputting the intensity sequence, the height sequence and the feature vector into a classification neural network to obtain a classification result.
In the SWM, the sequence constructed from the slice information is analyzed through the DFF, and slice weights are generated by Softmax normalization, so that attention is distributed over the different slices at the slice level. Attention is then distributed at the feature level through the Transformer Encoder. Through the above steps, the input vector is weight-encoded with the Height sequence and the Intensity sequence in turn and passed through a classifier composed of a DFF, Softmax and a cross-entropy loss: the DFF further extracts features, Softmax outputs the class probabilities, and the class loss is calculated from the probabilities and the labels by the cross-entropy loss. Meanwhile, the classifier structure is also added at an intermediate hidden layer as relay supervision, so that the gradient flow is optimized.
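The slice-level weighting described above can be sketched with a toy module; the two-layer perceptron standing in for the DFF, the ReLU in place of Mish, and all array sizes and names here are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def swm_slice_weights(slice_seq, w1, w2):
    """Toy sequence weighting module: a small perceptron (standing in
    for the DFF) scores each slice, Softmax turns the scores into
    slice-level attention weights, and the weighted sequence is
    aggregated with the raw one (the Concat unit)."""
    h = np.maximum(slice_seq @ w1, 0.0)      # DFF stand-in (ReLU, not Mish)
    scores = (h @ w2).ravel()                # one score per slice
    weights = softmax(scores)                # slice weights sum to 1
    weighted = slice_seq * weights[:, None]  # re-weight each slice
    return np.concatenate([slice_seq, weighted], axis=1)

rng = np.random.default_rng(0)
seq = rng.normal(size=(20, 4))               # 20 slices, 4 features each
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
out = swm_slice_weights(seq, w1, w2)
print(out.shape)  # (20, 8)
```

The Softmax guarantees the slice weights form a distribution, so the module only redistributes attention across slices rather than rescaling the whole sequence.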
Example two
As shown in fig. 6, the present embodiment provides a point cloud tree species classification system based on airborne LiDAR, comprising: a filtering unit, a segmentation unit, a slicing unit and a classification unit;
the filtering unit is used for carrying out ground filtering on the original point cloud to be processed to obtain ground points of the target area; removing ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
in this embodiment, the filtering unit performs ground filtering using a progressive triangulated irregular network (TIN) densification filtering algorithm; specifically,
first, the target area is rasterized with a resolution of 2 m;
a morphological opening operation is adopted, the lowest original point in each grid cell is selected as a seed point, and an initial triangulated irregular network of the whole target area is constructed based on the seed points;
then, the included angle α and the distance d formed between each original point and the nearest triangle of the network are calculated;
the iteration angle threshold is set to 10° and the iteration distance threshold to 1.2 m; if α and d satisfy the thresholds, the point is taken as a ground point and the triangulated network is updated;
repeating the above operation until all the ground points in the target area are obtained.
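The rasterized seed-point selection underlying the steps above can be sketched as follows; the 2 m cell size matches the embodiment, but the function name is illustrative and the morphological opening that the embodiment applies beforehand is omitted:

```python
import numpy as np

def lowest_point_seeds(points, cell=2.0):
    """Pick the lowest point of every grid cell as a ground seed point
    (the 2 m rasterization step; the morphological opening that the
    embodiment applies beforehand is omitted in this sketch)."""
    cells = np.floor(points[:, :2] / cell).astype(int)
    seeds = {}
    for key, p in zip(map(tuple, cells), points):
        if key not in seeds or p[2] < seeds[key][2]:
            seeds[key] = p
    return np.array(list(seeds.values()))

pts = np.array([[0.5, 0.5, 3.0], [1.0, 1.5, 1.0],
                [2.5, 0.5, 2.0], [3.9, 1.9, 0.5]])
print(lowest_point_seeds(pts))  # one lowest point per 2 m cell
```

The seed points returned here would then be triangulated to form the initial network, after which the iterative angle/distance test densifies it.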
And eliminating all the obtained ground points to obtain a non-ground point cloud, and carrying out elevation normalization processing on the non-ground point cloud to obtain a forest point cloud.
In this embodiment, the filtering unit generates a DEM by kriging interpolation and performs elevation normalization by differencing the non-ground point cloud against the DEM. Specifically,
first, the n(n-1)/2 pairwise distances between the n ground points x_j are calculated, sorted from small to large and divided into groups; the mean distance of the i-th group is denoted Distance_i.
The mean distance of each group is substituted into the semivariance formula to calculate the experimental semivariance Semivariance_i corresponding to that group:
Semivariance_i = 1 / (2·N(Distance_i)) · Σ_j [Z(x_j) - Z(x_j + Distance_i)]²
wherein Z(x_j) is the variable value at the ground point x_j, and N(Distance_i) = m is the number of ground point pairs in the group at the distance Distance_i.
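The experimental semivariogram step can be sketched as follows; the uniform binning scheme, the bin count and all names are illustrative assumptions rather than the embodiment's exact grouping:

```python
import numpy as np

def experimental_semivariogram(points, n_bins=10):
    """Compute the n(n-1)/2 pairwise planimetric distances between
    ground points, group them into distance classes, and return the
    mean distance and the experimental semivariance of z per class."""
    xy, z = points[:, :2], points[:, 2]
    iu = np.triu_indices(len(points), k=1)        # the n(n-1)/2 pairs
    d = np.linalg.norm(xy[iu[0]] - xy[iu[1]], axis=1)
    dz2 = (z[iu[0]] - z[iu[1]]) ** 2
    edges = np.linspace(0.0, d.max() + 1e-9, n_bins + 1)
    idx = np.digitize(d, edges) - 1
    h = np.array([d[idx == k].mean() if (idx == k).any() else np.nan
                  for k in range(n_bins)])
    gamma = np.array([dz2[idx == k].mean() / 2.0 if (idx == k).any() else np.nan
                      for k in range(n_bins)])
    return h, gamma

rng = np.random.default_rng(3)
ground = rng.uniform(0, 50, size=(100, 3))  # synthetic ground points
h, gamma = experimental_semivariogram(ground)
```

The (h, gamma) pairs are the scatter points to which the spherical variogram model is subsequently fitted.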
A distance-semivariance scatter diagram is drawn, the variogram curve is fitted, and the nugget value (Nugget), the partial sill (Sill) and the range (Range) are solved to construct the spherical fitting function (the standard spherical model):
γ(h) = Nugget + Sill·(3h/(2·Range) - h³/(2·Range³)) for 0 < h ≤ Range, and γ(h) = Nugget + Sill for h > Range.
The DEM with 1 m resolution is then generated by interpolation based on the spherical fitting function. The non-ground point cloud is rasterized at 1 m resolution and differenced with the DEM to obtain the forest point cloud.
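The final differencing step can be sketched as follows; here a plain dictionary of cell elevations stands in for the kriging-interpolated DEM surface, and all names are illustrative:

```python
import numpy as np

def normalize_heights(points, dem, cell=1.0):
    """Subtract the DEM elevation of each point's 1 m grid cell from
    its z coordinate, giving height above ground. `dem` maps
    (col, row) -> ground elevation and stands in for the
    kriging-interpolated surface."""
    out = points.copy()
    for i, (x, y, z) in enumerate(points):
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        out[i, 2] = z - dem[key]
    return out

pts = np.array([[0.2, 0.3, 12.0], [1.6, 0.4, 15.5]])
dem = {(0, 0): 10.0, (1, 0): 11.0}
print(normalize_heights(pts, dem))  # z columns become 2.0 and 4.5
```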
The segmentation unit is used for segmenting the forest point cloud to obtain single-tree point cloud data, and for constructing a training data set and a test data set based on the single-tree point cloud data;
in this embodiment, the segmentation unit segments the forest point cloud using professional software to obtain single-tree point clouds, and performs sample optimization to construct the training data set and the test data set.
The slicing unit is used for slicing the single-tree point cloud data and obtaining feature vectors based on the sliced single-tree point cloud data;
the slicing unit slices the segmented single-tree point cloud data uniformly along the vertical direction, and applies the same feature engineering to each slice to extract the feature vector. Specifically,
firstly, the single-tree point cloud data are sliced with equal step length; in this embodiment, into 20 layers;
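The equal-step vertical slicing can be sketched as follows; the 20-layer configuration follows the embodiment, while the function name and the uniform-height binning are illustrative assumptions:

```python
import numpy as np

def slice_tree(points, n_slices=20):
    """Split a single-tree point cloud into equal-height vertical
    layers and return the per-slice point subsets."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    idx = np.clip(np.searchsorted(edges, z, side='right') - 1,
                  0, n_slices - 1)
    return [points[idx == k] for k in range(n_slices)]

rng = np.random.default_rng(4)
tree = rng.uniform(0, 15, size=(200, 3))  # synthetic single-tree points
layers = slice_tree(tree)
print(len(layers), sum(len(s) for s in layers))  # 20 200
```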
a spherical neighborhood space is constructed for each point, and a covariance matrix is constructed from the neighborhood point set to obtain the eigenvalues λ1, λ2 and λ3;
The degree values DA1, DA2 and DA3, which measure the local conformity to point-like, linear and spherical structures, the neighborhood point number and the reflection intensity mean I_mean are then calculated based on the eigenvalues, as shown in fig. 3;
wherein, with λ1 ≥ λ2 ≥ λ3, DA1, DA2 and DA3 are calculated as:
DA1 = (λ1 - λ2)/λ1, DA2 = (λ2 - λ3)/λ1, DA3 = λ3/λ1.
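A sketch of the neighborhood covariance eigenvalue computation; note that the DA formulas used here are the common eigenvalue-based saliency measures from the literature and are an assumption, since this text does not reproduce the patent's exact expressions:

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Eigenvalues of the neighborhood covariance matrix sorted so that
    lam1 >= lam2 >= lam3, plus the usual eigenvalue saliency measures.
    The DA formulas are assumptions (standard in the literature)."""
    cov = np.cov(neighborhood.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    da1 = (lam[0] - lam[1]) / lam[0]   # linear saliency
    da2 = (lam[1] - lam[2]) / lam[0]   # planar saliency
    da3 = lam[2] / lam[0]              # spherical saliency
    return lam, (da1, da2, da3)

# A nearly linear neighborhood: the linear measure should dominate.
rng = np.random.default_rng(1)
line = np.c_[np.linspace(0, 1, 50),
             rng.normal(0, 0.01, 50),
             rng.normal(0, 0.01, 50)]
lam, (da1, da2, da3) = dimensionality_features(line)
```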
for each slice, a histogram feature descriptor is constructed for each of the five features: the point-like, linear and spherical structure degree values, the neighborhood point number and the reflection intensity mean;
in this embodiment, the number of statistical intervals in the histogram is 128, and the statistical upper and lower limits are the upper and lower limits of the feature over the whole single tree. Each single-tree sample thus has 5 × 20 histogram feature descriptors.
All the histogram descriptors are concatenated along one dimension to obtain the feature vector of the single tree.
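The per-slice histogram construction and one-dimensional concatenation can be sketched as follows; the 128-bin, 5-feature, 20-slice configuration follows the embodiment, while the function name and data layout are illustrative:

```python
import numpy as np

def tree_feature_vector(slices, bins=128):
    """slices: list of (n_i, n_features) arrays, one per vertical
    slice. One `bins`-bin histogram is built per feature per slice;
    the statistical limits are the feature's min/max over the whole
    tree, and all histograms are concatenated into one 1-D vector."""
    all_pts = np.vstack(slices)
    descriptors = []
    for f in range(all_pts.shape[1]):
        lo, hi = all_pts[:, f].min(), all_pts[:, f].max()
        for s in slices:
            counts, _ = np.histogram(s[:, f], bins=bins,
                                     range=(lo, hi + 1e-9))
            descriptors.append(counts)
    return np.concatenate(descriptors)

rng = np.random.default_rng(2)
slices = [rng.normal(size=(30, 5)) for _ in range(20)]
vec = tree_feature_vector(slices)
print(vec.shape)  # (12800,) = 5 features * 20 slices * 128 bins
```

Fixing the histogram limits over the whole tree, rather than per slice, keeps corresponding bins comparable across slices.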
The classification unit is used for constructing a classification neural network, training the classification neural network by adopting a training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
The method for obtaining the classification result by the classification unit comprises the following steps:
firstly, a classification neural network is constructed; fig. 4 is a schematic diagram of the structure of the classification neural network according to the present embodiment.
In this embodiment, the network architecture mainly consists of a Weighted Encoder (WE), formed by connecting a sequence weighting module (Sequence Weighted Module, SWM) and a Transformer Encoder (TE) through three multi-layer perceptrons (Multilayer Perceptron, MLP), and a deep feed-forward structure (Deep Feed Forward, DFF), as shown in fig. 4 (a). The DFF comprises a Linear fully-connected layer, a Mish activation function and a BatchNorm batch normalization unit, as shown in fig. 4 (d). The SWM comprises a DFF, a Softmax weight normalization function and a Concat feature aggregation unit, as shown in fig. 4 (b). The TE comprises a multi-head attention module (Multi-head Attention, MHA), a DFF and a LayerNorm layer normalization unit, as shown in fig. 4 (c). Two WEs, connected in cascade, apply intensity attention and height attention in turn, and the combination of SWM and TE inside each WE performs multi-scale attention distribution over the features. A trunk classifier is formed by a DFF, Softmax and a cross-entropy loss function (Cross-entropy Loss), and the backbone network is completed by adding, after the first WE, an auxiliary branch (Intermediate Supervision, IS) composed of a DFF, BatchNorm and a classifier.
Then, the average reflection intensity and the average height of each slice are extracted to construct an intensity sequence (Intensity) and a height sequence (Height);
and inputting the intensity sequence, the height sequence and the feature vector into a classification neural network to obtain a classification result.
In the SWM, the sequence constructed from the slice information is analyzed through the DFF, and slice weights are generated by Softmax normalization, so that attention is distributed over the different slices at the slice level. Attention is then distributed at the feature level through the Transformer Encoder. Through the above steps, the input vector is weight-encoded with the Height sequence and the Intensity sequence in turn and passed through a classifier composed of a DFF, Softmax and a cross-entropy loss: the DFF further extracts features, Softmax outputs the class probabilities, and the class loss is calculated from the probabilities and the labels by the cross-entropy loss. Meanwhile, the classifier structure is also added at an intermediate hidden layer as relay supervision, so that the gradient flow is optimized.
The above embodiments merely describe preferred modes of the present invention and do not limit its scope; various modifications and improvements made by those skilled in the art without departing from the spirit of the present invention shall fall within the protection scope defined by the appended claims.

Claims (10)

1. A point cloud tree species classification method based on airborne LiDAR, characterized by comprising the following steps:
s1, carrying out ground filtering on an original point cloud to be processed to obtain ground points of a target area; removing the ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
S2, segmenting the forest point cloud to obtain single-tree point cloud data; constructing a training data set and a test data set based on the single-tree point cloud data;
S3, slicing the single-tree point cloud data, and obtaining feature vectors based on the sliced single-tree point cloud data;
s4, constructing a classification neural network, training the classification neural network by adopting the training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
2. The point cloud tree species classification method based on airborne LiDAR according to claim 1, wherein the method of obtaining the ground points of the target area in S1 comprises:
rasterizing the target region;
selecting the lowest point of each grid as a seed point, and constructing an initial triangular network based on the seed points;
calculating an included angle and a distance formed between each original point cloud and the vertex of the initial triangle mesh;
setting an iteration angle threshold and an iteration distance threshold, taking each original point whose included angle and distance satisfy the thresholds as a ground point, and updating the triangulated network;
repeating the operation to obtain all the ground points in the target area.
3. The point cloud tree species classification method based on airborne LiDAR of claim 1, wherein S3 comprises:
slicing the single-tree point cloud data;
constructing a spherical neighborhood space for each point, and constructing a covariance matrix from the neighborhood point set to obtain eigenvalues;
calculating, based on the eigenvalues, the degree values of the local conformity to point-like, linear and spherical structures, the neighborhood point number and the reflection intensity mean;
constructing a histogram feature descriptor based on the point-like, linear and spherical structure degree values, the neighborhood point number and the reflection intensity mean;
and obtaining the feature vector based on the histogram feature descriptors.
4. The point cloud tree species classification method based on airborne LiDAR of claim 3, wherein S4 comprises:
constructing the classification neural network;
extracting the average reflection intensity and the average height of each slice, and constructing an intensity sequence and a height sequence;
and inputting the intensity sequence, the height sequence and the feature vector into the classification neural network to obtain a classification result.
5. The point cloud tree species classification method based on airborne LiDAR according to claim 4, wherein the classification neural network comprises:
a sequence weighting module, a weighted encoding structure and a depth feed forward structure;
the sequence weighting module comprises: a DFF, a Softmax weight normalization function and a Concat feature aggregation unit;
the weighted encoding structure includes: a multi-head attention module, a DFF, and a LayerNorm layer normalization unit;
the depth feed forward structure includes: linear full-connection layer, mish activation function and BatchNorm batch normalization unit.
6. A point cloud tree species classification system based on airborne LiDAR, characterized by comprising: a filtering unit, a segmentation unit, a slicing unit and a classification unit;
the filtering unit is used for carrying out ground filtering on the original point cloud to be processed to obtain ground points of the target area; removing the ground points to obtain non-ground point clouds, and carrying out elevation normalization processing on the non-ground point clouds to obtain forest point clouds;
the segmentation unit is used for segmenting the forest point cloud to obtain single-tree point cloud data; constructing a training data set and a test data set based on the single-tree point cloud data;
the slicing unit is used for slicing the single-tree point cloud data and obtaining feature vectors based on the sliced single-tree point cloud data;
the classification unit is used for constructing a classification neural network, training the classification neural network by adopting the training data set, and classifying tree species of the test data set based on the trained classification neural network to obtain a classification result.
7. The point cloud tree species classification system based on airborne LiDAR of claim 6, wherein the method for the filtering unit to obtain the ground points comprises:
rasterizing the target region;
selecting the lowest point of each grid as a seed point, and constructing an initial triangular network based on the seed points;
calculating an included angle and a distance formed between each original point cloud and the vertex of the initial triangle mesh;
setting an iteration angle threshold and an iteration distance threshold, taking each original point whose included angle and distance satisfy the thresholds as a ground point, and updating the triangulated network;
repeating the operation to obtain all the ground points in the target area.
8. The point cloud tree species classification system based on airborne LiDAR of claim 6, wherein the method for the slicing unit to obtain the feature vector comprises:
slicing the single-tree point cloud data;
constructing a spherical neighborhood space for each point, and constructing a covariance matrix from the neighborhood point set to obtain eigenvalues;
calculating, based on the eigenvalues, the degree values of the local conformity to point-like, linear and spherical structures, the neighborhood point number and the reflection intensity mean;
constructing a histogram feature descriptor based on the point-like, linear and spherical structure degree values, the neighborhood point number and the reflection intensity mean;
and obtaining the feature vector based on the histogram feature descriptors.
9. The point cloud tree species classification system based on airborne LiDAR according to claim 8, wherein the method for the classification unit to obtain the classification result comprises:
constructing the classification neural network;
extracting the average reflection intensity and the average height of each slice, and constructing an intensity sequence and a height sequence;
and inputting the intensity sequence, the height sequence and the feature vector into the classification neural network to obtain a classification result.
10. The point cloud tree species classification system based on airborne LiDAR of claim 9, wherein the classification neural network comprises:
a sequence weighting module, a weighted encoding structure and a depth feed forward structure;
the sequence weighting module comprises: a DFF, a Softmax weight normalization function and a Concat feature aggregation unit;
the weighted encoding structure includes: a multi-head attention module, a DFF, and a LayerNorm layer normalization unit;
the depth feed forward structure includes: linear full-connection layer, mish activation function and BatchNorm batch normalization unit.
CN202310398248.7A 2023-04-14 2023-04-14 Point cloud tree classification method and system based on airborne LiDAR Pending CN116597199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310398248.7A CN116597199A (en) 2023-04-14 2023-04-14 Point cloud tree classification method and system based on airborne LiDAR

Publications (1)

Publication Number Publication Date
CN116597199A true CN116597199A (en) 2023-08-15

Family

ID=87594540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310398248.7A Pending CN116597199A (en) 2023-04-14 2023-04-14 Point cloud tree classification method and system based on airborne LiDAR

Country Status (1)

Country Link
CN (1) CN116597199A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898688A (en) * 2020-08-04 2020-11-06 沈阳建筑大学 Airborne LiDAR data tree species classification method based on three-dimensional deep learning
WO2021232467A1 (en) * 2020-05-19 2021-11-25 北京数字绿土科技有限公司 Point cloud single-tree segmentation method and apparatus, device and computer-readable medium
CN114998604A (en) * 2022-05-09 2022-09-02 中国地质大学(武汉) Point cloud feature extraction method based on local point cloud position relation
CN115100232A (en) * 2022-06-30 2022-09-23 江苏集萃未来城市应用技术研究所有限公司 Single-tree segmentation method based on fusion of LiDAR point cloud data
CN115761382A (en) * 2022-12-19 2023-03-07 武汉大学 ALS point cloud classification method based on random forest
US20230104674A1 (en) * 2021-10-06 2023-04-06 Matterport, Inc. Machine learning techniques for ground classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Jianchang: "Research on Classification of Typical Tree Species from Laser Point Clouds Based on Deep Learning", China Master's Theses Full-text Database (electronic journal), 28 February 2023 (2023-02-28), pages 1 - 40 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116893428A (en) * 2023-09-11 2023-10-17 山东省地质测绘院 Forest resource investigation and monitoring method and system based on laser point cloud
CN116893428B (en) * 2023-09-11 2023-12-08 山东省地质测绘院 Forest resource investigation and monitoring method and system based on laser point cloud
CN117994527A (en) * 2024-04-03 2024-05-07 中国空气动力研究与发展中心低速空气动力研究所 Point cloud segmentation method and system based on region growth

Similar Documents

Publication Publication Date Title
US11783569B2 (en) Method for classifying hyperspectral images on basis of adaptive multi-scale feature extraction model
Liu et al. Tree species classification of LiDAR data based on 3D deep learning
Wu et al. Individual tree crown delineation using localized contour tree method and airborne LiDAR data in coniferous forests
CN106199557B (en) A kind of airborne laser radar data vegetation extracting method
CN116597199A (en) Point cloud tree classification method and system based on airborne LiDAR
Chevallier et al. TIGR‐like atmospheric‐profile databases for accurate radiative‐flux computation
CN113591766B (en) Multi-source remote sensing tree species identification method for unmanned aerial vehicle
CN111340723B (en) Terrain-adaptive airborne LiDAR point cloud regularization thin plate spline interpolation filtering method
Bourgoin et al. UAV-based canopy textures assess changes in forest structure from long-term degradation
CN110309780A (en) High resolution image houseclearing based on BFD-IGA-SVM model quickly supervises identification
Shen et al. Biomimetic vision for zoom object detection based on improved vertical grid number YOLO algorithm
CN115880487A (en) Forest laser point cloud branch and leaf separation method based on deep learning method
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
Xu et al. Feature-based constraint deep CNN method for mapping rainfall-induced landslides in remote regions with mountainous terrain: An application to Brazil
Hui et al. Wood and leaf separation from terrestrial LiDAR point clouds based on mode points evolution
Zhou et al. Tree crown detection in high resolution optical and LiDAR images of tropical forest
CN113935366A (en) Automatic classification method for point cloud single wood segmentation
CN111738278A (en) Underwater multi-source acoustic image feature extraction method and system
Sutha Object based classification of high resolution remote sensing image using HRSVM-CNN classifier
Chang et al. An object-oriented analysis for characterizing the rainfall-induced shallow landslide
CN117830381A (en) Lake area change space analysis model, construction and area change prediction method and device
Xu et al. Remote Sensing Mapping of Cage and Floating-raft Aquaculture in China's Offshore Waters Using Machine Learning Methods and Google Earth Engine
Srivastava et al. Feature-Based Image Retrieval (FBIR) system for satellite image quality assessment using big data analytical technique
Yang et al. Improved tropical deep convective cloud detection using MODIS observations with an active sensor trained machine learning algorithm
Wan et al. Plot-level wood-leaf separation of trees using terrestrial LiDAR data based on a seg‐mentwise geometric feature classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination