CN112347894A - Single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation - Google Patents


Info

Publication number
CN112347894A
CN112347894A (application CN202011203898.4A); granted publication CN112347894B
Authority
CN
China
Prior art keywords
trunk
point
point cloud
points
gaussian mixture
Prior art date
Legal status
Granted
Application number
CN202011203898.4A
Other languages
Chinese (zh)
Other versions
CN112347894B (English)
Inventor
惠振阳
李大军
刘波
王乐洋
聂运菊
余美
Current Assignee
East China Institute of Technology
Original Assignee
East China Institute of Technology
Priority date
Filing date
Publication date
Application filed by East China Institute of Technology filed Critical East China Institute of Technology
Priority to CN202011203898.4A priority Critical patent/CN112347894B/en
Publication of CN112347894A publication Critical patent/CN112347894A/en
Application granted granted Critical
Publication of CN112347894B publication Critical patent/CN112347894B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation, which comprises the following steps: S1, performing trunk detection based on transductive transfer learning to obtain trunk points; S2, performing nearest-neighbor clustering on the basis of the trunk point cloud to obtain an initial segmentation result; S3, determining the number of mixture components of each part of the initial segmentation by adopting principal component transformation and kernel density estimation, and realizing Gaussian mixture model separation based on the number of mixture components to obtain a crown separation result; S4, performing optimized merging of over-segmented vegetation based on the point-density center of gravity; and S5, acquiring the trunk point clouds corresponding to the crowns from top to bottom based on the vertical continuity principle, and completing the single-tree extraction. The invention can detect more trees while maintaining high tree extraction precision.

Description

Single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
Technical Field
The invention relates to the technical field of single-tree extraction, in particular to a single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation.
Background
LiDAR (Light Detection and Ranging) is an active remote sensing measurement technology that has developed rapidly in recent years. Compared with traditional passive optical remote sensing, LiDAR offers fast data acquisition, high point positioning precision, independence from external illumination conditions, and the ability to acquire data around the clock in all weather. In addition, the laser pulses emitted by a LiDAR system can partially penetrate the vegetation canopy to reach the ground, so the three-dimensional structure of the canopy and the under-forest terrain can be measured directly, which gives LiDAR an advantage over traditional optical sensors in probing the structure and function of ecosystems. LiDAR has therefore become an important measurement tool for vegetation resource investigation and monitoring.
The individual tree is the basic structural unit of a forest, and its spatial structure and the corresponding vegetation parameters are key factors in forest resource investigation and ecological environment modeling. Single-tree segmentation is the identification and extraction of individual vegetation from the LiDAR point cloud; it is the precondition and basis for estimating vegetation parameters such as spatial position, tree height, diameter at breast height, and crown width. Accurate vegetation parameter estimation provides quantitative data support for the sustainable management and precise cultivation of forest resources. Traditional surveys measure individual trees manually with tape measures, calipers, height gauges and the like, which not only consumes considerable labor but is also very time-consuming. LiDAR technology can measure the three-dimensional spatial structure of trees by acquiring the backscattered signal of the laser pulses, improving measurement efficiency. However, existing LiDAR single-tree segmentation still suffers from low precision, and inaccurate individual vegetation extraction seriously affects the subsequent vegetation parameter estimation.
Disclosure of Invention
Therefore, the invention aims to provide a single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation, so as to solve the problem of low precision in the prior art.
A single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation comprises the following steps:
S1, performing trunk detection based on transductive transfer learning to obtain trunk points;
S2, performing nearest-neighbor clustering on the basis of the trunk point cloud to obtain an initial segmentation result;
S3, determining the number of mixture components of each part of the initial segmentation by adopting principal component transformation and kernel density estimation, and realizing Gaussian mixture model separation based on the number of mixture components to obtain a crown separation result;
S4, performing optimized merging of over-segmented vegetation based on the point-density center of gravity;
and S5, acquiring the trunk point clouds corresponding to the crowns from top to bottom based on the vertical continuity principle, and completing the single-tree extraction.
The single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation provided by the invention combines bottom-up and top-down processing. In the bottom-up stage, transfer learning first classifies the trunk points in the point cloud; the extracted trunk points serve as the cluster centers of the initial segmentation, and the crown of each individual tree is accurately extracted through principal component analysis transformation, kernel density estimation, and Gaussian mixture model separation. In the top-down stage, the extracted crown points serve as the basis for trunk extraction, and the trunk points are extracted according to the vertical continuity principle. Experimental results show that the average correctness rate of the method reaches 87.68% over six scenes in both single-station and multi-station modes, far exceeding two prior-art methods. The method is also superior to the two other methods in completeness rate and mean accuracy. Therefore, the invention can detect more trees while maintaining high tree extraction precision.
Drawings
FIG. 1 is a schematic flow diagram of the single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation according to an embodiment of the invention;
FIG. 2 is a diagram of trunk point cloud extraction and optimization;
FIG. 3 is a comparison graph of the tree trunk point cloud and the vertical continuity of the misjudged points;
FIG. 4 is a schematic diagram of trunk burr point rejection;
FIG. 5 is a schematic view of an under-segmented crown point cloud;
FIG. 6 is a schematic diagram of a point cloud projection transformation based on principal component analysis;
FIG. 7 shows histograms of the point density distribution after the principal-component projection;
FIG. 8 is a graph of kernel density estimates for different bandwidths, the bandwidth in (a) being 0.4 and the bandwidth in (b) being the value calculated according to equation (10);
FIG. 9 is a schematic diagram of a Gaussian mixture model separation implementation vegetation optimization segmentation;
FIG. 10 is a schematic diagram of a classification cluster center of gravity calculation;
FIG. 11 is a schematic diagram of crown-based top-down trunk probe extraction;
FIG. 12 is a schematic view of different types of vegetation areas;
FIG. 13 is a schematic diagram of two independent tree point clouds with tag information;
FIG. 14 is a schematic diagram of a trunk extraction result in a multi-station mode and a single-station mode for six different scenarios;
FIG. 15 is a schematic diagram of a portion of an extracted independent tree;
FIG. 16 is a graph comparing the planar positions of the extracted trees with the reference trees;
FIG. 17 is a comparison graph of the completeness rate of three methods (A is the first classical method, B is the second classical method, C is the present invention);
FIG. 18 is a comparison graph of the correctness rate of three methods (A is the first classical method, B is the second classical method, C is the present invention);
FIG. 19 is a comparison graph of the mean accuracy of three methods (A is the first classical method, B is the second classical method, C is the present invention).
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In view of the problem of low precision of the existing LiDAR technology for single-tree segmentation, the invention combines the transfer learning and the Gaussian mixture model separation to obtain the accurate single-plant vegetation extraction result. Before trunk detection is carried out, point cloud filtering is carried out by adopting an improved morphological filtering method, and the influence of ground point cloud on trunk detection is removed. After the ground point cloud is removed, trunk point cloud detection is achieved by adopting a transfer learning method, and then a trunk central point is obtained according to a vertical continuity principle. And realizing initial point cloud segmentation by adopting a nearest clustering method based on the obtained trunk central point position. In general, the initial segmentation result tends to have an under-segmentation phenomenon. In order to realize the correct segmentation of the crown point cloud, firstly, projection transformation is carried out on the crown point cloud based on a PCA (principal component analysis) criterion. And determining the number of mixed components in each initial segmentation object through kernel density function estimation. And then, separating through a Gaussian mixture model according to the calculated number of the mixture components to realize crown point cloud extraction. In order to avoid the over-segmentation phenomenon of the crown point cloud, the invention provides a point density gravity center method to optimize the crown segmentation result. And finally, acquiring the trunk point cloud corresponding to each crown by a top-down method based on the extracted crown point cloud, thereby realizing complete single-plant vegetation extraction.
Referring to fig. 1, the method for extracting single-plant vegetation based on transfer learning and gaussian mixture model separation according to an embodiment of the present invention includes steps S1-S5.
And S1, performing trunk detection based on transductive transfer learning to obtain trunk points.
Transfer learning is a machine learning approach that has developed very rapidly in recent years. Compared with traditional supervised learning, transfer learning can solve problems in different but related fields by reusing an established learning model. There are many everyday examples of transfer: a person who has learned to ride a bicycle, for instance, finds it relatively easy to learn to ride an electric bicycle. According to whether the source and target domains carry sample labels, transfer learning can be classified into 3 types: inductive transfer learning, transductive transfer learning, and unsupervised transfer learning. The invention mainly adopts transductive transfer learning, in which only the source domain carries sample labels, to perform trunk detection on point cloud data that lacks sample labels: point cloud data with existing trunk and leaf label information is used as the source domain, a training model is built from the samples in the source domain, and the trained model is transferred to the target domain lacking sample label information to separate the trunk and leaf point clouds there. The advantage of using transfer learning for trunk detection is that existing point cloud label information is fully exploited, and labeling training samples in the target-domain point cloud, usually the most time-consuming and labor-intensive part, is avoided.
Although the vegetation types in the source-domain and target-domain point clouds may differ, in their natural state branches and leaves exhibit clearly different geometric characteristics: branches are more linear, while leaves are more planar or scattered. To avoid negative transfer, the invention builds the training model mainly on computed geometric feature vectors. Branch and leaf separation is achieved by computing the covariance tensor of the local region around each point and from it deriving 5 features: linearity, planarity, scattering, surface curvature, and eigen-entropy. Because the Random Forest (RF) is simple, easy to implement, and computationally cheap, the invention uses RF to construct the training model for transfer learning. The 5 features are calculated as follows:
Taking the current point p_x as the center, all points within distance r of p_x are retrieved to form the neighboring point set S_x = {p1, p2, …, pk}. This point set is used to construct the neighborhood covariance tensor C_x, computed as shown in formula (1):

C_x = (1/k) Σ_{i=1}^{k} (p_i − p̄)(p_i − p̄)^T,  where p̄ = (1/k) Σ_{i=1}^{k} p_i   (1)
From the covariance tensor C_x, three eigenvalues λ1 > λ2 > λ3 > 0 and the corresponding eigenvectors e1, e2, e3 can be calculated. The eigenvalues are normalized so that λ1 + λ2 + λ3 = 1. From the three eigenvalues, the following five features can be constructed, as shown in Table 1.
TABLE 1 Feature calculation formulas

linearity: V1 = (λ1 − λ2)/λ1
planarity: V2 = (λ2 − λ3)/λ1
scattering: V3 = λ3/λ1
surface curvature: V4 = λ3
eigen-entropy: V5 = −Σ_{i=1}^{3} λi·ln(λi)
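As an illustration of the feature computation and the transductive random-forest step (a minimal sketch, not the patented implementation), the following Python code computes the Table 1 features and transfers an RF model from a labeled source domain to an unlabeled target domain. The neighborhood radius, the stand-in arrays, and the label convention (1 = trunk, 0 = leaf) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def geometric_features(points, radius=0.3):
    """Per-point Table 1 features: linearity, planarity, scattering,
    surface curvature, eigen-entropy (eigenvalues normalized to sum 1)."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 5))
    for i, p in enumerate(points):
        nbr = points[tree.query_ball_point(p, radius)]
        if len(nbr) < 3:                      # too few neighbors: leave zeros
            continue
        cov = np.cov(nbr.T)                   # 3x3 covariance tensor C_x
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
        lam = np.clip(lam, 1e-12, None)
        lam = lam / lam.sum()                 # normalize: l1 + l2 + l3 = 1
        l1, l2, l3 = lam
        feats[i] = [(l1 - l2) / l1,           # V1 linearity
                    (l2 - l3) / l1,           # V2 planarity
                    l3 / l1,                  # V3 scattering
                    l3,                       # V4 surface curvature
                    -np.sum(lam * np.log(lam))]  # V5 eigen-entropy
    return feats

# Transductive transfer: fit on the labeled source-domain trees, then
# predict trunk/leaf labels on the unlabeled target-domain plot.
rng = np.random.default_rng(0)                # stand-in data for illustration
source_pts = rng.random((300, 3)) * 5
source_labels = rng.integers(0, 2, 300)       # assumed: 1 = trunk, 0 = leaf
target_pts = rng.random((200, 3)) * 5

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(geometric_features(source_pts), source_labels)
trunk_mask = rf.predict(geometric_features(target_pts)) == 1
```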
And S2, performing nearest neighbor clustering on the basis of the trunk point cloud to obtain an initial segmentation result.
Wherein, the step S2 specifically includes steps S21 to S23:
S21, voxelizing the point cloud and removing the scattered misjudged points according to the characteristic that the trunk point cloud has strong continuity in the vertical direction.
When the trunk is detected using transfer learning, some branch and leaf points are still misjudged as trunk points, as shown in fig. 2(a). Compared with the trunk point cloud, these misjudged branch and leaf points are scattered, appear mostly as isolated points, and lack continuity in the vertical direction. Based on these two characteristics, the misjudged points are removed in two steps to obtain an accurate trunk point cloud. First, the point cloud is voxelized, and most of the scattered misjudged points are removed according to the strong vertical continuity of the trunk point cloud. As shown in fig. 3, (a) is the voxelization result of a trunk point cloud and (b) that of misjudged points. The trunk point cloud shows strong continuity in the vertical direction, manifested as many non-empty voxels stacked vertically, whereas the misjudged points show poor vertical continuity, with few non-empty voxels in a vertical column. By setting a threshold, misjudged points with poor vertical continuity can be eliminated. Second, the remaining misjudged points are removed according to their generally scattered distribution: the points that survive the continuity check are clustered by proximity, and clusters with few points are discarded. Applying these two steps to the point cloud in fig. 2(a) yields the more accurate trunk point cloud shown in fig. 2(b).
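A minimal sketch of this two-step cleanup, under assumed voxel size and thresholds (the text does not fix these values here): columns of occupied voxels with too short a vertical run are dropped first, and DBSCAN then stands in for the proximity clustering that discards small residual clusters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def vertical_continuity_filter(pts, voxel=0.1, min_levels=10):
    """Keep points whose (x, y) voxel column spans many z-levels."""
    idx = np.floor((pts - pts.min(axis=0)) / voxel).astype(int)
    levels = {}
    for ix, iy, iz in idx:
        levels.setdefault((ix, iy), set()).add(iz)
    keep = np.array([len(levels[(ix, iy)]) >= min_levels
                     for ix, iy, _ in idx])
    return pts[keep]

def drop_small_clusters(pts, eps=0.3, min_size=30):
    """Discard clusters with few points (scattered misjudged points)."""
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(pts)
    nonnoise = labels[labels >= 0]
    counts = np.bincount(nonnoise) if nonnoise.size else np.array([], int)
    keep = np.array([lb >= 0 and counts[lb] >= min_size for lb in labels])
    return pts[keep]
```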
As can be seen from fig. 2(b), most misjudged points are effectively filtered out by the trunk point cloud optimization, but some 'burr' points remain on the trunk. These burr points are mainly formed by branches adjacent to the trunk, and they must be removed to obtain an accurate trunk center. Each trunk point cloud in fig. 2(b) is first projected horizontally, and the projected points are then divided into a horizontal grid, as shown in fig. 4(a); the invention sets the grid width to 0.05 m. As can be seen from fig. 4(a), after horizontal projection the trunk body points are densely concentrated while the burr points are not, so the burr points in each trunk point cloud can be removed through a point density constraint. The point density constraint threshold is set to the mean number of points per grid cell after horizontal projection, as shown in formula (2):
th = mean( num(IM(m, n)) ), m ∈ [1, M], n ∈ [1, N]
m = floor(trunk_i.x / 0.05), n = floor(trunk_i.y / 0.05)   (2)

In the formula, th is the point density constraint threshold, IM is the two-dimensional grid formed by horizontally projecting the trunk point cloud, and num(·) is the number of points in each two-dimensional grid cell. M and N are the maximum grid indices in the transverse and longitudinal directions, and mean(·) takes the mean value. trunk_i.x and trunk_i.y are the x and y coordinates of each point p_i in the trunk point cloud, (m, n) are the grid coordinates of p_i in the two-dimensional grid, and floor(·) rounds down. The deburred trunk point cloud can then be expressed according to formula (3):
{trunk} = {p_i ∈ IM(m, n) | num(IM(m, n)) > th}   (3)
The deburred trunk point cloud of each tree is shown in fig. 4(b); as can be seen from the figure, the burr points on the trunks are all effectively removed.
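The deburring of formulas (2)-(3) reduces to a grid-density filter. The sketch below assumes the 0.05 m cell width stated above; the trunk center of formula (4) then follows as the mean of the surviving planar coordinates.

```python
import numpy as np

def deburr(trunk_pts, cell=0.05):
    """Formulas (2)-(3): keep points in cells denser than the mean density."""
    xy = trunk_pts[:, :2]
    grid = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    counts = {}
    for g in map(tuple, grid):
        counts[g] = counts.get(g, 0) + 1
    th = np.mean(list(counts.values()))          # formula (2)
    keep = np.array([counts[tuple(g)] > th for g in grid])
    return trunk_pts[keep]                       # formula (3)

def trunk_center(trunk_pts):
    """Formula (4): planar center of the deburred trunk."""
    return deburr(trunk_pts)[:, :2].mean(axis=0)
```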
After the burr points are removed, the plane position of the central point of each trunk can be calculated according to the formula (4).
Loc_i(x, y) = (1/K_i) Σ_{j=1}^{K_i} (trunk_i.x_j, trunk_i.y_j)   (4)
In the formula, Loc_i(x, y) is the planar coordinate of the i-th trunk center, (trunk_i.x_j, trunk_i.y_j) are the planar coordinates of the points of the i-th deburred trunk, and K_i is the total number of points in the i-th trunk point cloud.
After the trunk center points are calculated, the trunk centers are used as clustering centers, the point clouds are subjected to nearest clustering in the horizontal direction, the initial segmentation result of the vegetation is obtained, and the formula is as follows:
cluster_i = {p_i ∈ ptc | dis_xy(p_i, Loc_i) < dis_xy(p_i, Loc_j), j ≠ i, j ∈ [1, K]}   (5)
in the formula, clusteriIs by LociIs an initial classification cluster of a cluster center, ptc is a cloud set of vegetation points, disxy(. cndot.) is the planar distance between two points, and K is the number of cluster centers, i.e., the number of trunks in the region.
And S22, performing crown point cloud projection transformation based on principal component analysis.
After the initial trunk-centered segmentation, the different trees are largely separated, but the initial result still exhibits under-segmentation; low vegetation in particular is difficult to detect effectively. To obtain a higher-precision single-plant extraction result, the invention therefore performs further optimized segmentation on the basis of the initial segmentation result.
In the present invention, the crown and the trunk are extracted separately. To avoid the influence of the trunk or the short shrub on the crown extraction, the present invention first removes these points, as shown in formula (6).
canopy_i = {p_k ∈ cluster_i, k ∈ [1, nc_i] | p_k.z > max(trunk_i.z)}   (6)

In the formula, canopy_i represents the crown point cloud, p_k is any point in the initial classification cluster cluster_i, nc_i is the number of points in the initial classification cluster, p_k.z is the elevation of point p_k, and max(trunk_i.z) is the maximum elevation of the trunk point cloud trunk_i.
As shown in fig. 5(a), the crown point clouds of neighboring plants are easily segmented together. To extract single-plant vegetation with high precision, the under-segmented crown point cloud must be separated further. Generally, the horizontal projection of an accurately extracted single plant is close to a circle, while the horizontal projection of an under-segmented cluster containing several plants is closer to an ellipse. Fig. 5(b) shows the horizontal projection of fig. 5(a); the point cloud in this area is clearly distributed in an elliptical shape.
To separate the crown point cloud correctly, the invention first applies a projection transformation based on principal component analysis. As can be seen from fig. 6(a), compared with the point distributions along the x and y directions, the points are discriminated much more clearly along the major axis of the ellipse, so the under-segmented crown point clouds are easier to separate there. Fig. 6(b) shows the point cloud projected onto the ellipse major axis F_1; as can be seen, the two trees that were clustered together are more easily separated after the projection.
The direction of the ellipse major axis can generally be defined as the direction of the first principal component. To prevent isolated points from interfering with the principal component calculation, the invention first counts the neighboring points of each point in an initial classification cluster, marks points with few neighbors as isolated points, and removes them. Then, for each initial classification cluster, the covariance tensor of the cluster is computed according to formula (1), together with its eigenvalues and eigenvectors. The direction of the eigenvector corresponding to the largest eigenvalue is defined as the ellipse major axis, and the point cloud is projected onto this direction. The transformation process is described by formula (7):
score = X · coeff   (7)
where score is the principal component transformation result, X is an n × 2 matrix whose rows are the x and y coordinates of the points p_k in the initial classification cluster canopy_i, n is the total number of points in the classification cluster, and coeff is the eigenvector matrix of the covariance matrix of the classification cluster.
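A sketch of formula (7), assuming (as stated above) that isolated points have already been removed; the eigenvector with the largest eigenvalue of the planar covariance matrix plays the role of the ellipse major axis F_1.

```python
import numpy as np

def pca_projection(canopy_pts):
    """Formula (7): project planar coordinates onto the principal axes;
    the first column of the result lies along the ellipse major axis."""
    X = canopy_pts[:, :2] - canopy_pts[:, :2].mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X.T))   # eigenvalues ascending
    coeff = vecs[:, ::-1]                      # columns sorted descending
    score = X @ coeff                          # formula (7)
    return score[:, 0]                         # 1-D major-axis coordinates
```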
And S23, determining the number of classification clusters through Gaussian kernel density estimation.
From the foregoing, it can be seen that the vegetation point cloud segmentation performed by using the trunk center is prone to an under-segmentation phenomenon, that is, there may be a plurality of trees in the same segmentation object. As can be seen from fig. 6(b), two independent trees exist in the split object. It can also be seen that to achieve the optimized segmentation of the crown point cloud, the number of classification clusters in each initial segmented object needs to be determined first, i.e. each initial segmented object consists of several trees.
As can be seen from fig. 6(b), the point density tends to peak at the center of each tree and to decrease from the center toward both sides. The number of classification clusters can therefore be determined by detecting the number of local maxima of the point density. Figs. 7(a) and (b) are histograms of the point density distribution in fig. 6(b), differing only in the statistical interval used along the ellipse major axis F_1. To detect the local maxima of the point density accurately, the invention uses a kernel density estimation method to compute the probability density distribution of each initial segmentation object. The kernel density estimate is defined by formula (8):
f̂_h(x) = (1/(n·h)) Σ_{i=1}^{n} K((x − x_i)/h)   (8)
In the formula, n is the number of points of each initial segmentation object, h is the bandwidth, and K is the kernel function. A Gaussian kernel is adopted for the probability density estimation, expressed as formula (9):
K(x) = (1/√(2π)) · exp(−x²/2)   (9)
the bandwidth parameter h has a large influence on the result of gaussian kernel density estimation, and fig. 8(a) and (b) are gaussian kernel density estimation curves calculated by different bandwidth parameters for the point cloud in fig. 8 (b). In order to realize accurate Gaussian probability density estimation, the invention adopts a Silverman rule of thumb to carry out self-adaptive calculation on the bandwidth, and the formula is expressed as follows:
h_i = σ_i · (4 / ((d + 2)·n))^{1/(d+4)},  σ_i = MAD / 0.6745   (10)
In the formula, h_i is the bandwidth of the i-th dimension, d is the number of dimensions (d = 1 in the invention), and n is the total number of points. σ_i is a robust estimate of the standard deviation obtained from the median absolute deviation MAD; the constant 0.6745 makes the estimate unbiased under a normal distribution.
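A sketch of formulas (8)-(10): a 1-D Gaussian kernel density on the projected coordinates with the robust Silverman bandwidth, and the component count read off as the number of local maxima. The 512-point evaluation grid is an assumed discretization.

```python
import numpy as np

def silverman_bandwidth(v, d=1):
    """Formula (10): robust Silverman rule with sigma = MAD / 0.6745."""
    sigma = np.median(np.abs(v - np.median(v))) / 0.6745
    return sigma * (4.0 / ((d + 2) * len(v))) ** (1.0 / (d + 4))

def gaussian_kde(v, grid, h):
    """Formulas (8)-(9): kernel density estimate with a Gaussian kernel."""
    u = (grid[:, None] - v[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(v) * h * np.sqrt(2 * np.pi))

def count_components(v, resolution=512):
    """Number of mixture components = number of local density maxima."""
    grid = np.linspace(v.min(), v.max(), resolution)
    f = gaussian_kde(v, grid, silverman_bandwidth(v))
    peaks = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])
    return int(peaks.sum())
```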
And S3, determining the number of mixture components of each part of the initial segmentation using principal component transformation and kernel density estimation, and performing Gaussian mixture model separation based on that number to obtain the crown separation result.
As can be seen from fig. 8, the kernel density curve of point clouds from different plants clustered together can be regarded as a superposition of different Gaussian distributions, so the optimized segmentation of the different plants can be achieved by separating the superposed Gaussian models with different parameters. Using formulas (8) and (10), the Gaussian kernel density curve of the point cloud in fig. 6(a) along the first principal component direction can be computed, and by detecting the number of local maxima it is determined that the initial segmentation object contains a total of 2 classification clusters, as shown in fig. 9(a). A Gaussian mixture model separation method is then adopted to divide the mixed cluster point cloud into 2 classes.
In general, assuming that the point cloud contains points of different classes, the density function of the Gaussian mixture distribution is given by formula (11):
P(V) = Σ_{k=1}^{S} λ_k · G_k(V | u_k, δ_k)   (11)
where V is the feature vector; in the invention, V is the principal component transformation result score. S is the number of mixture components, λ_k is a weight coefficient representing the prior probability of each component, (u_k, δ_k) are the parameters of each Gaussian distribution, namely the mean and the variance, and G_k(·) is the Gaussian density function given by formula (12):
G(V | u, δ) = (1/(√(2π)·δ)) · exp(−(V − u)²/(2δ²))   (12)
estimating parameters of the Gaussian mixture model by adopting an Expectation Maximization (EM) algorithm, and specifically comprising the following four steps:
i initializing Gaussian mixture distribution parameters including λk、μkAnd deltak,k=1,2,…,N;
Ii, E: calculating the probability P (S) of each mixture componentk|Vi);
Figure BDA0002756377380000092
In the formula, SkRepresenting a k-th type point cloud set, ViRepresenting the feature vector of the ith point primitive.
Iii M step: updating a Gaussian mixture distribution parameter λk、μkAnd deltak,k=1,2,…,N;
Figure BDA0002756377380000093
Figure BDA0002756377380000094
Figure BDA0002756377380000095
Iv, checking whether convergence occurs or not, if yes, stopping iteration, and outputting parameters of the Gaussian mixture model; otherwise, updating the mixed distribution parameters and continuing iteration.
The EM algorithm is executed iteratively. The convergence condition is that the change of the mixture distribution parameters between two successive iterations is smaller than a threshold, or that the maximum number of iterations is reached. After the EM algorithm stops, each point is assigned to the class with the maximum posterior probability. Fig. 9(b) shows the optimized segmentation result of the mixed point cloud in fig. 6(a).
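The EM loop of steps i-iv, written as a compact 1-D sketch of the standard algorithm (the quantile initialization and the small variance floor are implementation assumptions); K is the component count obtained from the kernel density peaks.

```python
import numpy as np

def em_gmm_1d(v, K, iters=200, tol=1e-6):
    """Formulas (11)-(16): EM for a 1-D Gaussian mixture, returning the
    hard class assignment of each point by maximum posterior."""
    n = len(v)
    lam = np.full(K, 1.0 / K)                          # step i: priors
    mu = np.quantile(v, (np.arange(K) + 0.5) / K)      # spread-out means
    var = np.full(K, v.var() / K + 1e-9)
    for _ in range(iters):
        # E step, formula (13): responsibilities P(S_k | V_i)
        g = lam / np.sqrt(2 * np.pi * var) \
            * np.exp(-(v[:, None] - mu) ** 2 / (2 * var))
        r = g / g.sum(axis=1, keepdims=True)
        # M step, formulas (14)-(16)
        Nk = r.sum(axis=0)
        lam = Nk / n
        mu_new = (r * v[:, None]).sum(axis=0) / Nk
        var = (r * (v[:, None] - mu_new) ** 2).sum(axis=0) / Nk + 1e-12
        converged = np.abs(mu_new - mu).max() < tol    # step iv
        mu = mu_new
        if converged:
            break
    return r.argmax(axis=1)
```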
And S4, performing optimized merging of over-segmented vegetation based on the point-density center of gravity.
Vegetation point cloud segmentation and extraction through Gaussian mixture model separation may produce over-segmentation, i.e., the same tree is divided into two or more classification clusters. Moreover, when the initial segmentation is performed based on the planar trunk center positions, the same tree may well be split into several trees. Such over-segmentation not only leaves the extracted individual vegetation incomplete but also introduces excessive errors.
Generally, over-segmented vegetation point clouds are close together in planar position. Many researchers merge nearby clusters by computing the planar distance between the highest points of the classification clusters; others decide whether to merge clusters by computing the mean planar coordinates of each cluster. Both methods yield effective merging under ideal conditions (the highest point of the tree is its apex, and the tree grows uniformly and symmetrically). In nature, however, vegetation shapes vary widely with light and water conditions, so neither the highest point nor the mean point represents the center of each classification cluster well. To make the optimized merging more robust, the invention merges nearby clusters by computing the point-density center of gravity of each classification cluster.
As can be seen from fig. 10, for any classification cluster the point cloud distribution is dense in the vertical direction through its center position. Therefore, the planar center of gravity of each cluster's points can represent the center position of the classification cluster. The invention defines the center of gravity (x̄, ȳ) as the weighted average of the horizontally projected points with the point density distribution as the weight, expressed as formula (17):

(x̄, ȳ) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(i, j) · (x̄_ij, ȳ_ij)
(x̄_ij, ȳ_ij) = mean{ (x_q, y_q) ∈ grid(i, j) }
P(i, j) = num(i, j) / Σ_{i=1}^{m} Σ_{j=1}^{n} num(i, j)   (17)
In the formula, m and n are the maximum horizontal and vertical grid indices after gridding the horizontally projected point cloud, as shown in fig. 10. (x̄_ij, ȳ_ij) is the mean planar coordinate of grid cell (i, j), where (x_q, y_q) is any point within cell (i, j) and mean(·) is the mean operation. P(i, j) is the weight of cell (i, j), and num(i, j) represents the number of points in cell (i, j).
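A sketch of formula (17), with the grid cell size as an assumed parameter:

```python
import numpy as np

def density_center(pts, cell=0.5):
    """Formula (17): planar center of gravity weighted by per-cell
    point density after horizontal projection and gridding."""
    xy = pts[:, :2]
    grid = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    cells = {}
    for q, g in zip(xy, map(tuple, grid)):
        cells.setdefault(g, []).append(q)
    center = np.zeros(2)
    for members in cells.values():
        members = np.asarray(members)
        w = len(members) / len(xy)          # P(i, j) = num(i, j) / total
        center += w * members.mean(axis=0)  # weighted mean cell coordinate
    return center
```

Clusters whose centers of gravity lie within a merge distance can then be combined, which is how the over-segmented crowns are merged in step S4.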
And S5, acquiring the trunk point clouds corresponding to the crowns from top to bottom based on the vertical continuity principle, and completing the single-tree extraction.
In the invention, trunk detection is performed with a transductive transfer learning method. However, the trunks extracted this way are often too few, the omission error is large, and they cannot be matched one-to-one with the crown point clouds after the subsequent optimized segmentation. To detect and extract complete individual vegetation, the invention further provides a top-down trunk extraction method based on the crown point cloud.
The optimized segmentation of the crown point cloud is achieved through Gaussian mixture model separation, yielding the crown point cloud of an individual plant, as shown in fig. 11(a). The trunk point cloud below this crown point cloud must now be acquired. First, the horizontal projection range of the crown point cloud, [Canopy_i.x_min, Canopy_i.x_max] and [Canopy_i.y_min, Canopy_i.y_max], is computed. Then, from the remaining point set {left_pts} obtained with formula (6), the points within the horizontal projection range of the crown point cloud are selected, expressed as:

within_pts_i = {p ∈ {left_pts} | Canopy_i.x_min ≤ p.x ≤ Canopy_i.x_max, Canopy_i.y_min ≤ p.y ≤ Canopy_i.y_max}
In the formula, Canopy_i denotes the i-th crown point cloud and within_pts_i is the point cloud below that crown. Merging the two point clouds gives the vegetation point cloud shown in fig. 11(b). As can be seen from fig. 11(b), in addition to the crown point cloud the result also contains some branch-fork points and scattered discrete points, typically produced by low shrubs under the canopy. To obtain an accurate trunk point cloud, these points must be rejected.
First, the point cloud within_pts_i is voxelized, as shown in fig. 11(c). For any voxel, if at least one point falls inside it, its value is set to 1. As established above, the trunk point cloud generally has good continuity in the vertical direction. Therefore, the accurate trunk point cloud is obtained by counting the number of voxels with value 1 in the vertical column of each horizontal cell and rejecting the points in columns whose count is below a threshold.
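A hedged sketch of this top-down step (voxel size and the vertical-run threshold are assumptions): clip the leftover points to the crown footprint, then keep only the voxel columns with long vertical runs.

```python
import numpy as np

def trunk_below_crown(crown_pts, left_pts, voxel=0.1, min_levels=8):
    """Clip {left_pts} to the crown's horizontal range, then keep the
    vertically continuous voxel columns (the trunk)."""
    lo = crown_pts[:, :2].min(axis=0)
    hi = crown_pts[:, :2].max(axis=0)
    inside = np.all((left_pts[:, :2] >= lo) & (left_pts[:, :2] <= hi), axis=1)
    cand = left_pts[inside]                 # within_pts_i
    if len(cand) == 0:
        return cand
    idx = np.floor((cand - cand.min(axis=0)) / voxel).astype(int)
    levels = {}
    for ix, iy, iz in idx:
        levels.setdefault((ix, iy), set()).add(iz)
    keep = np.array([len(levels[(ix, iy)]) >= min_levels
                     for ix, iy, _ in idx])
    return cand[keep]
```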
To verify the proposed method, experimental analysis was performed on an international benchmark terrestrial LiDAR public test dataset (http://laser.fi/tls-benchmark-results/). The dataset was acquired by the Finnish Geodetic Institute (FGI) with a Leica HDS6100 scanner, with the aim of helping researchers explore the application potential of TLS technology in forest resource investigation. The dataset is located in the Evo region of Finland (61.19° N, 25.11° E) and covers 24 experimental plots containing many different vegetation types, with varying stem densities and abundant crown types. Each experimental plot has a fixed size of 32 m × 32 m, and point cloud data were collected in both 'single-station' and 'multi-station' modes. According to the complexity of the vegetation distribution, the plots are classified into three categories: simple, medium, and complex. Of the 24 plots, point cloud data for 6 are publicly available. These 6 data groups are used for the experimental analysis; their distribution characteristics are shown in Table 2. Fig. 12 shows point clouds of the three different types of vegetation areas.
TABLE 2 Experimental data distribution characteristics
The following method is adopted for the precision calculation; the specific quantities are defined in Table 3:
TABLE 3 calculation method for single plant vegetation extraction precision
TP (true positive): an extracted tree that matches a reference tree;
FN (false negative): a reference tree that is not detected by the method;
FP (false positive): an extracted tree that does not correspond to any reference tree.
The quality of the vegetation extraction method is evaluated quantitatively with the completeness rate Com, the correctness rate Corr, and the mean accuracy Mean_acc. The completeness rate reflects the method's ability to detect vegetation, and the correctness rate reflects the accuracy of the detected trees. The mean accuracy measures the combined probability that a randomly selected extracted tree is correctly detected and that a randomly selected reference tree is detected by the method. The three indices are computed as follows:
Com = TP / (TP + FN)   (18)
Corr = TP / (TP + FP)   (19)
Mean_acc = 2 · Com · Corr / (Com + Corr)   (20)
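Under the Table 3 counts these indices reduce to recall/precision-style ratios, with Mean_acc as their harmonic mean; a sketch under those assumed definitions:

```python
def completeness(tp, fn):                    # formula (18)
    return tp / (tp + fn)

def correctness(tp, fp):                     # formula (19)
    return tp / (tp + fp)

def mean_accuracy(tp, fp, fn):               # formula (20)
    com, corr = completeness(tp, fn), correctness(tp, fp)
    return 2 * com * corr / (com + corr)
```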
two single-wood point clouds with labeled information are used as a source domain of the transfer learning. The point cloud data has been accurately separated into branches and leaves by Moorthy et al using the open source software CloudCompare. The point cloud data for these two trees are acquired by Riegl VZ-400 and Riegl VZ-1000 ground laser scanners, respectively. As shown in fig. 13(a) and (b), the branch point cloud is a light point in the graph, and the leaf point cloud is a dark point in the graph. In migration learning, branch points and leaf points tend to have significantly different geometric characteristics, although tree species may differ in source and target domains. Typically, tree points appear linear, while leaf points appear scattered. By calculating five geometric feature vectors of each point, a migration learning model of the source domain can be established. The established model is then applied to the target domains of the six scenarios mentioned above to distinguish the branch points and leaf points in the point cloud of the singletree. After the trunk points are optimized by the method provided by the invention, the trunk points of each scene can be extracted in a single measuring station mode or a multi-measuring station mode, as shown in fig. 14. As can be seen from the figure, a simple scene (fig. 14(a-d)) tends to extract more trunks than a complex scene (fig. 14 (i-l)). This is because trees in complex scenes are more dense and complex, as shown in fig. 12(e) and (f). In a complex scene, the linear geometric characteristics of the trunk points and the leaf points are not obviously different. Thus, many trunks cannot be detected. In addition, it can be found that more trunks are extracted from the multi-station mode point cloud than from the single-station mode point cloud. This is because the point cloud data obtained in the multi-station mode is more complete. Thus, the linear characteristics of the trunk may be more pronounced. However, the extraction of the trunk is still not very good, especially in difficult scenes. The extraction of the trunk often contains large spurious errors. However, the trunk extracted in the present invention is only used as a cluster center of the initial segmentation. The under-segmentation result can be optimized through separation of a following Gaussian mixture model, and a more accurate result is obtained.
After extracting the trunks, the initial cluster centers are obtained by projecting the trunk points onto the horizontal plane, and the initial segmentation result follows from these cluster centers. As mentioned above, the initial single-tree segmentation tends to be under-segmented, since the extracted trunk result typically contains omission errors. With the method of steps S2-S4, the under-segmented crowns can be accurately separated. Then, the trunk points corresponding to each crown are extracted with the top-down method according to the vertical continuity principle. Fig. 15 shows part of the individual vegetation extracted with the method of the present invention.
As can be seen from FIG. 15, the method of the present invention can obtain a better extraction result of the individual vegetation. The method firstly adopts a transfer learning method to obtain the trunk point cloud. Then, using the extracted trunk center, crown points of each tree are extracted on the basis of the initial segmentation. After the crowns are accurately extracted, the tree trunk corresponding to each crown is extracted by a top-down method. Thus, the process of the present invention can be viewed as a combination of bottom-up and top-down processes.
The results of the prior-art methods and of the proposed method are compared against the reference data. The extracted trees and the reference tree positions are shown in fig. 16. Although some trees are not detected, most of the extracted trees are correct, and only a few trees are falsely detected. In addition, compared with single-station point clouds, multi-station point clouds allow more single trees to be extracted accurately, because the multi-station mode provides more complete tree points. Another point worth noting is that as the forest scene becomes more complex, fewer single trees can be effectively extracted. As shown in fig. 16, more single trees can be extracted in the simple scenes (fig. 16(a)-(d)) than in the medium and complex scenes (fig. 16(e)-(l)). This is because the forest density of the medium and complex scenes is much greater than that of the simple scenes, as shown in Table 2; dense trees are not easily extracted. Furthermore, as shown in fig. 12, trees in a simple scene are intuitively easier to separate.
To evaluate the method quantitatively, the completeness rate, correctness rate, and mean accuracy of the six scenes are computed according to formulas (18)-(20). Meanwhile, two classical single-tree extraction methods from the prior art are tested and compared against the accuracy indices of the proposed method. The first classical method is a marker-based watershed segmentation method, in which the tree tops are detected with a variable-size window whose size is estimated from a regression curve between crown size and tree height; the detected tree tops are then used as markers for watershed segmentation to prevent over-segmentation. The second classical method separates individual trees according to their horizontal spacing. Generally, the horizontal distance between tree tops is greater than the horizontal distance between points at the bottoms of trees, so single-tree point clouds can be grown step by step from the tree tops according to the relative spacing between trees: points of the same tree are relatively closely spaced and are added progressively, while points of different trees are farther apart and are not added. These two methods were chosen because their principles are simple and because both have been implemented in commercial software by researchers, so the single-tree extraction results can be compared objectively.
The accuracy indices for the six scenes are shown in Table 4; each scene includes single-station and multi-station modes. As Table 4 shows, the single trees extracted by the proposed method have high correctness: the correctness rate of almost all scenes is greater than 80%, with an average of 87.68%. The average completeness rate of the method is 37.33%. Compared with the correctness and completeness rates, the mean accuracy is a more balanced index: it is 69.85% for the simple scenes, 57.36% for the medium scenes, and 18.57% for the complex scenes. The same conclusion as in fig. 16 can therefore be drawn: the performance of the proposed method degrades as the forest environment becomes complex.
TABLE 4 comparison of three types of accuracy indexes
Figs. 17-19 compare the completeness rate, correctness rate, and mean accuracy of the proposed method with the two prior-art methods. In terms of completeness rate, the proposed method obtains better extraction results in all cases except plot_5_SS and plot_6_SS, and the experimental results show that its average completeness rate is much higher than those of the other two methods. The proposed method is also superior in correctness: with the proposed method, the correctness rate of almost all scenes exceeds 80%, while the maximum correctness of the other two methods is below 40%. Combining the completeness and correctness shown in figs. 17 and 18, the proposed method achieves more single-tree extractions while ensuring high extraction correctness. In terms of mean accuracy (fig. 19), the proposed method again performs better than the other two methods. In addition, the mean accuracy of all three methods decreases as the forest environment changes from simple to complex.
In summary, the single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation provided by the invention combines bottom-up and top-down processing. In the bottom-up stage, transfer learning first classifies the trunk points in the point cloud; the extracted trunk points serve as the cluster centers of the initial segmentation, and the crown of each individual tree is accurately extracted through principal component analysis transformation, kernel density estimation, and Gaussian mixture model separation. In the top-down stage, the extracted crown points serve as the basis for trunk extraction, and the trunk points are extracted according to the vertical continuity principle. Experimental results show that the average correctness rate of the method reaches 87.68% over six scenes in both single-station and multi-station modes, far exceeding two prior-art methods; the method is also superior to the two other methods in completeness rate and mean accuracy. Therefore, the invention can detect more trees while maintaining high tree extraction precision.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A single-plant vegetation extraction method based on transfer learning and Gaussian mixture model separation, characterized by comprising the following steps:
S1, performing trunk detection based on transductive transfer learning to obtain trunk points;
S2, performing nearest-neighbor clustering on the basis of the trunk point cloud to obtain an initial segmentation result;
S3, determining the number of mixture components of each part of the initial segmentation by adopting principal component transformation and kernel density estimation, and realizing Gaussian mixture model separation based on the number of mixture components to obtain a crown separation result;
S4, performing optimized merging of over-segmented vegetation based on the point-density center of gravity;
and S5, acquiring the trunk point clouds corresponding to the crowns from top to bottom based on the vertical continuity principle, and completing the single-tree extraction.
2. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 1, wherein step S1 specifically comprises:
the method comprises the steps of utilizing point cloud data of existing trunk and leaf marking information as a source domain, building a training model for each sample in the source domain, and transferring and applying the trained model to a target domain lacking sample marking information to realize the separation of trunk and leaf point clouds in the target domain.
3. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 2, wherein in step S1, the training model is established from computed geometric feature vectors: the covariance tensor of the local region around each point is computed to obtain 5 features in total, namely linearity, planarity, scattering, surface curvature and eigen-entropy, thereby realizing branch and leaf separation.
4. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 3, wherein in step S1, the 5 features of linearity, planarity, scattering, surface curvature and eigen-entropy are calculated as follows:
taking the current point p_x as the center, all points within distance r are retrieved to form the neighboring point set S_x = {p1, p2, …, pk}; this point set is used to construct the neighborhood covariance tensor C_x, calculated as:

C_x = (1/k) Σ_{i=1}^{k} (p_i − p̄)(p_i − p̄)^T,  where p̄ = (1/k) Σ_{i=1}^{k} p_i
from covariance tensor CxThree eigenvalues lambda are obtained by calculation1>λ2>λ3> 0 and corresponding feature vector e1,e2,e3Normalizing the eigenvalues such that λ123The following five eigenvectors were constructed from the three eigenvalues as 1:
linearity: v1=(λ12)/λ1
Surface property: v2=(λ23)/λ1
Dispersing property: v3=λ31
Surface curvature: v4=λ3
Intrinsic entropy:
Figure FDA0002756377370000021
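A minimal NumPy sketch of the feature computation in claim 4, assuming a brute-force radius search and the eigenvalue normalization stated above; the random test cloud is a placeholder.

```python
import numpy as np

def neighborhood_features(points, center, r):
    """Five covariance eigenvalue features of claim 4 for one query point."""
    # Neighbor set S_x: all points within distance r of the current point.
    nbrs = points[np.linalg.norm(points - center, axis=1) < r]
    # Covariance tensor C_x of the neighborhood.
    centered = nbrs - nbrs.mean(axis=0)
    C = centered.T @ centered / len(nbrs)
    # Eigenvalues sorted lam1 >= lam2 >= lam3, normalized to sum to 1.
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]
    lam = lam / lam.sum()
    l1, l2, l3 = lam
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "scattering": l3 / l1,
        "surface_curvature": l3,
        "intrinsic_entropy": -np.sum(lam * np.log(lam)),
    }

# A stretched random cloud behaves like a linear (trunk-like) neighborhood.
pts = np.random.default_rng(1).normal(size=(200, 3)) * [5.0, 0.1, 0.1]
print(neighborhood_features(pts, pts[0], r=3.0))
```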
5. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 1, wherein step S2 specifically comprises:
S21, voxelizing the point cloud and removing scattered misjudged points according to the characteristic that the trunk point cloud is strongly continuous along the trunk direction;
s22, performing crown point cloud projection transformation based on principal component analysis;
and S23, determining the number of classification clusters through Gaussian kernel density estimation.
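For step S23, the number of classification clusters can be read off as the number of local maxima of a Gaussian kernel density estimate over the projected (PCA score) coordinates. The following sketch uses scipy's gaussian_kde and a finite-difference peak test as assumed stand-ins for the patent's exact estimator.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import argrelextrema

def count_clusters_by_kde(scores):
    """Count local maxima of a Gaussian KDE over 1-D projection scores."""
    grid = np.linspace(scores.min(), scores.max(), 512)
    density = gaussian_kde(scores)(grid)
    peaks = argrelextrema(density, np.greater)[0]  # indices of local maxima
    return max(1, len(peaks))

# Two merged crowns yield a bimodal score distribution, hence two peaks.
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 400)])
print(count_clusters_by_kde(scores))  # typically 2
```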
6. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 5, wherein in step S21 the point density constraint threshold is set by horizontally projecting the trunk point clouds and computing the average number of points per grid cell:
th = mean( num(IM(m, n)) ),  m ∈ [1, M], n ∈ [1, N]
m = floor(trunk_i.x),  n = floor(trunk_i.y)
where th is the point density constraint threshold, IM is the two-dimensional grid formed by the horizontal projection of the trunk point clouds, num(·) is the number of points in each two-dimensional grid cell, M and N are the maximum grid indices in the horizontal and vertical directions, mean(·) is the mean value, trunk_i.x and trunk_i.y are the x and y coordinates of each point p_i in the trunk point cloud, m and n are the grid coordinates of point p_i in the two-dimensional grid, and floor(·) denotes rounding down; the trunk point cloud after removing burr points is expressed as:
{trunk} = {p_i ∈ IM(m, n) | num(IM(m, n)) > th}
after the burr points are removed, the plane position of the center point of each trunk is calculated as:
Loc_i(x, y) = (1/K_i) · Σ_{j=1}^{K_i} (trunk_i.x_j, trunk_i.y_j)
where Loc_i(x, y) is the plane coordinate of the i-th trunk center, (trunk_i.x_j, trunk_i.y_j) are the plane coordinates of the points of the i-th trunk after deburring, and K_i is the total number of points of the i-th trunk;
after the trunk center points are calculated, each trunk center is used as a cluster center and the point cloud is clustered to the nearest center in the horizontal direction, giving the initial segmentation of the vegetation:
cluster_i = {p_i ∈ ptc | dis_xy(p_i, Loc_i) < dis_xy(p_i, Loc_j), j ≠ i, j ∈ [1, K]}
where cluster_i is the initial classification cluster with Loc_i as its cluster center, ptc is the vegetation point cloud set, dis_xy(·) is the plane distance between two points, and K is the number of cluster centers, i.e. the number of trunks in the region.
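A compact sketch of the deburring and nearest-clustering logic of claim 6, assuming unit (1 m) grid cells so that floor of the plane coordinates gives the grid indices directly; all sizes are illustrative.

```python
import numpy as np

def deburr_and_centers(trunk_xy_list):
    """Remove sparse 'burr' cells per trunk, then return each trunk center Loc_i."""
    centers = []
    for xy in trunk_xy_list:                    # xy: (K_i, 2) plane coords of one trunk
        cells = np.floor(xy).astype(int)        # m, n = floor(x), floor(y)
        _, inv, counts = np.unique(cells, axis=0, return_inverse=True,
                                   return_counts=True)
        th = counts.mean()                      # point-density constraint threshold
        kept = xy[counts[inv] > th]             # keep points in dense cells only
        centers.append(kept.mean(axis=0))       # Loc_i: mean plane coordinate
    return np.array(centers)

def nearest_cluster(ptc_xy, centers):
    """Assign every vegetation point to its nearest trunk center in the plane."""
    d = np.linalg.norm(ptc_xy[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)                     # cluster index per point

# Example with two synthetic trunks near (0.5, 0.5) and (10.5, 10.5).
rng = np.random.default_rng(4)
t1 = rng.normal((0.5, 0.5), 0.3, size=(300, 2))
t2 = rng.normal((10.5, 10.5), 0.3, size=(300, 2))
labels = nearest_cluster(np.vstack([t1, t2]), deburr_and_centers([t1, t2]))
```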
7. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 6, wherein in step S22, before the projection transformation, the trunk points are removed using the following formula to obtain the crown point clouds:
Canopy_i = {p_k ∈ cluster_i | p_k.z > max(trunk_i.z), k ∈ [1, nc_i]}
where Canopy_i is the i-th crown point cloud, p_k is any point in the initial classification cluster cluster_i, nc_i is the number of points in that cluster, p_k.z is the elevation of point p_k, and max(trunk_i.z) is the maximum elevation of the trunk point cloud trunk_i.
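Under the reading reconstructed above, claim 7 retains the points of each initial cluster lying above the trunk's maximum elevation. A one-function sketch:

```python
import numpy as np

def crown_points(cluster_xyz, trunk_xyz):
    """Canopy_i: points of cluster_i above the maximum trunk elevation."""
    z_max = trunk_xyz[:, 2].max()              # max(trunk_i.z)
    return cluster_xyz[cluster_xyz[:, 2] > z_max]
```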
8. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 1, wherein in step S3, if the point cloud contains N different types of points, the density function of the Gaussian mixture distribution is:
P(V | S) = Σ_{k=1}^{N} λ_k · G_k(V | u_k, σ_k)
where V is the feature vector, specifically the result of the principal component analysis transformation, i.e. V = score; S denotes the mixture components; λ_k is a weight coefficient representing the prior probability of each mixture component; (u_k, σ_k) are the parameters of the Gaussian distribution, namely the mean and variance; and G_k(·) is the Gaussian density function.
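The separation in claim 8 amounts to fitting a Gaussian mixture with the KDE-derived component count and assigning each point to its most probable component. sklearn's GaussianMixture, used below, estimates λ_k, u_k and σ_k by expectation-maximization; it is a standard stand-in, not necessarily the patent's solver.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def separate_crowns(scores, n_components):
    """Split 1-D PCA scores into n_components Gaussian mixture parts."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    labels = gmm.fit_predict(scores.reshape(-1, 1))
    # gmm.weights_ ~ lambda_k, gmm.means_ ~ u_k, gmm.covariances_ ~ sigma_k
    return labels, gmm

rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)])
labels, gmm = separate_crowns(scores, n_components=2)
print(gmm.weights_.round(2), gmm.means_.ravel().round(2))
```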
9. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 1, wherein in step S4 the gravity center position p̄(x, y) is defined as the weighted average, with the point density distribution as the weight, after horizontal projection of the point cloud:
p̄(x, y) = Σ_{i=1}^{m} Σ_{j=1}^{n} P(i, j) · p̄_{ij}(x, y)
p̄_{ij}(x, y) = mean(x_q, y_q), (x_q, y_q) ∈ grid(i, j)
P(i, j) = num(i, j) / Σ_{i=1}^{m} Σ_{j=1}^{n} num(i, j)
where m and n are the maximum grid indices in the horizontal and vertical directions after gridding the horizontal projection of the point cloud, p̄_{ij}(x, y) is the mean plane coordinate of grid cell (i, j), (x_q, y_q) is any point within grid cell (i, j), mean(·) is the mean calculation, P(i, j) is the weight of grid cell (i, j), and num(i, j) is the number of points within grid cell (i, j).
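A direct NumPy transcription of the three gravity-center formulas of claim 9, again assuming unit grid cells:

```python
import numpy as np

def density_gravity_center(xy):
    """Weighted mean of per-cell centroids, weighted by per-cell point counts."""
    cells = np.floor(xy).astype(int)
    uniq, inv, counts = np.unique(cells, axis=0, return_inverse=True,
                                  return_counts=True)
    # Mean plane coordinate of every occupied grid cell (i, j).
    cell_means = np.array([xy[inv == k].mean(axis=0) for k in range(len(uniq))])
    weights = counts / counts.sum()            # P(i, j) = num(i, j) / total
    return (weights[:, None] * cell_means).sum(axis=0)
```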
10. The method for extracting single-plant vegetation based on transfer learning and Gaussian mixture model separation according to claim 1, wherein in step S5 the horizontal projection range [Canopy_i.x_min, Canopy_i.x_max] × [Canopy_i.y_min, Canopy_i.y_max] of each crown point cloud is first calculated; then, from the obtained residual point cloud set {left_pts}, the points within the horizontal projection range of the crown are obtained using the following formula:
within_pts_i = {p ∈ {left_pts} | Canopy_i.x_min ≤ p.x ≤ Canopy_i.x_max, Canopy_i.y_min ≤ p.y ≤ Canopy_i.y_max}
where Canopy_i is the i-th crown point cloud and within_pts_i is the point cloud below that crown.
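Finally, a sketch of the top-down trunk recovery of claim 10 and the vertical-continuity test of step S5. The horizontal-range filter follows the formula above; the continuity check (unbroken occupied height bins below the crown, with an assumed bin size) is one plausible reading, since the claim itself does not fix the test.

```python
import numpy as np

def points_under_crown(canopy_xyz, left_pts):
    """within_pts_i: residual points inside the crown's horizontal range."""
    xmin, ymin = canopy_xyz[:, :2].min(axis=0)
    xmax, ymax = canopy_xyz[:, :2].max(axis=0)
    m = ((left_pts[:, 0] >= xmin) & (left_pts[:, 0] <= xmax) &
         (left_pts[:, 1] >= ymin) & (left_pts[:, 1] <= ymax))
    return left_pts[m]

def vertically_continuous(pts_z, z_top, bin_size=0.5):
    """Assumed test: occupied height bins run unbroken down from the crown base."""
    if len(pts_z) == 0:
        return False
    bins = np.unique(np.floor((z_top - pts_z) / bin_size).astype(int))
    return bool(np.all(np.diff(bins) == 1))    # no gap between consecutive bins
```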
CN202011203898.4A 2020-11-02 2020-11-02 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation Active CN112347894B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011203898.4A CN112347894B (en) 2020-11-02 2020-11-02 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011203898.4A CN112347894B (en) 2020-11-02 2020-11-02 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation

Publications (2)

Publication Number Publication Date
CN112347894A true CN112347894A (en) 2021-02-09
CN112347894B CN112347894B (en) 2022-05-20

Family

ID=74356046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011203898.4A Active CN112347894B (en) 2020-11-02 2020-11-02 Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation

Country Status (1)

Country Link
CN (1) CN112347894B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8655070B1 * 2009-11-04 2014-02-18 Google Inc. Tree detection from aerial imagery
CN107274417A * 2017-07-05 2017-10-20 电子科技大学 Individual tree segmentation method based on airborne laser point cloud aggregation
CN109325431A * 2018-09-12 2019-02-12 内蒙古大学 Method and device for detecting vegetation coverage along sheep grazing paths in grassland
CN109977802A * 2019-03-08 2019-07-05 武汉大学 Crop classification and recognition method under strong background noise
CN110348478A * 2019-06-04 2019-10-18 西安理工大学 Tree extraction method for outdoor point cloud scenes based on combined shape classification
CN110223314A * 2019-06-06 2019-09-10 电子科技大学 Individual tree segmentation method based on three-dimensional point cloud distribution of tree crowns
CN111507194A * 2020-03-20 2020-08-07 东华理工大学 Terrestrial LiDAR branch and leaf point cloud separation method based on fractal-dimension supervised learning
CN111667529A * 2020-05-25 2020-09-15 东华大学 Plant point cloud leaf segmentation and phenotypic trait measurement method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WENKAI LI et al.: "A New Method for Segmenting Individual Trees from the Lidar Point Cloud", Photogramm. Eng. Remote Sens. *
ZHONGHUA SU et al.: "Extracting Wood Point Cloud of Individual Trees Based on Geometric Features", IEEE Geoscience and Remote Sensing Letters *
胡海瑛 et al.: "Airborne LiDAR point cloud classification based on multi-primitive feature vector fusion", Chinese Journal of Lasers *
邢万里 et al.: "Individual tree segmentation algorithm for TLS point cloud data based on voxel-based layer-by-layer clustering", Journal of Central South University of Forestry & Technology *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139569A (en) * 2021-03-04 2021-07-20 山东科技大学 Target classification detection method, device and system
CN113139569B (en) * 2021-03-04 2022-04-22 山东科技大学 Target classification detection method, device and system
CN113204998A (en) * 2021-04-01 2021-08-03 武汉大学 Airborne point cloud forest ecological estimation method and system based on single wood scale
CN113204998B (en) * 2021-04-01 2022-03-15 武汉大学 Airborne point cloud forest ecological estimation method and system based on single wood scale
CN114743008A (en) * 2022-06-09 2022-07-12 西南交通大学 Single plant vegetation point cloud data segmentation method and device and computer equipment
WO2024009352A1 (en) * 2022-07-04 2024-01-11 日本電信電話株式会社 Location estimation device, location estimation method, and location estimation program
CN117079130A (en) * 2023-08-23 2023-11-17 广东海洋大学 Intelligent information management method and system based on mangrove habitat
CN117079130B (en) * 2023-08-23 2024-05-14 广东海洋大学 Intelligent information management method and system based on mangrove habitat

Also Published As

Publication number Publication date
CN112347894B (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN112347894B (en) Single plant vegetation extraction method based on transfer learning and Gaussian mixture model separation
Yang et al. An individual tree segmentation method based on watershed algorithm and three-dimensional spatial distribution analysis from airborne LiDAR point clouds
Malambo et al. Automated detection and measurement of individual sorghum panicles using density-based clustering of terrestrial lidar data
CN104091321B Multi-level point-set feature extraction method applicable to terrestrial laser radar point cloud classification
Li et al. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data
CN106295124B Method for comprehensive analysis of gene subgraph likelihood probabilities using multiple image detection techniques
CN110992341A (en) Segmentation-based airborne LiDAR point cloud building extraction method
CN102324032B (en) Texture feature extraction method for gray level co-occurrence matrix in polar coordinate system
CN110378909A Individual tree segmentation method for laser point clouds based on Faster R-CNN
Pirotti et al. A comparison of tree segmentation methods using very high density airborne laser scanner data
CN113269825B (en) Forest breast diameter value extraction method based on foundation laser radar technology
CN108052886A Automatic programmed counting method for Puccinia striiformis urediospores
Li et al. SPM-IS: An auto-algorithm to acquire a mature soybean phenotype based on instance segmentation
CN102855485A (en) Automatic wheat earing detection method
CN108764157A Building laser foot-point extraction method and system based on normal vector Gaussian distribution
Hui et al. Multi-level self-adaptive individual tree detection for coniferous forest using airborne LiDAR
Tamim et al. A simple and efficient approach for coarse segmentation of Moroccan coastal upwelling
CN112669363A (en) Urban green land three-dimensional green volume calculation method
CN110348478B (en) Method for extracting trees in outdoor point cloud scene based on shape classification and combination
CN104573701B Automatic detection method for corn tassels
CN114092814A (en) Unmanned plane navel orange tree image target identification and statistics method based on deep learning
CN109359680A (en) Explosion sillar automatic identification and lumpiness feature extracting method and device
CN111860359A (en) Point cloud classification method based on improved random forest algorithm
Zhu et al. Research on deep learning individual tree segmentation method coupling RetinaNet and point cloud clustering
CN106326927A (en) Shoeprint new class detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant