CN113222917B - DBI tree vertex detection method of airborne laser radar point cloud data CHM


Info

Publication number
CN113222917B
CN113222917B (application CN202110470600.4A)
Authority
CN
China
Prior art keywords
dbi
canopy
model
foreground
canopy height
Prior art date
Legal status
Active
Application number
CN202110470600.4A
Other languages
Chinese (zh)
Other versions
CN113222917A (en)
Inventor
周国清 (Zhou Guoqing)
穆叶煊 (Mu Yexuan)
Current Assignee
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date
Filing date
Publication date
Application filed by Guilin University of Technology
Priority to CN202110470600.4A
Publication of CN113222917A
Application granted
Publication of CN113222917B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002: Inspection of images, e.g. flaw detection (G06T 7/00 Image analysis)
    • G06F 18/23213: Non-hierarchical clustering techniques using statistics or function optimisation with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/24: Classification techniques
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/10032: Satellite or aerial image; remote sensing
    • G06T 2207/10044: Radar image
    • G06T 2207/30181: Earth observation
    • G06T 2207/30188: Vegetation; agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of LiDAR data processing and discloses a DBI tree vertex detection method for the canopy height model (CHM) of airborne LiDAR point cloud data. A method for detecting tree vertices from DBI-controlled foreground pixels of the canopy height model is proposed, comprising: 1. standardizing the marker pixel generation conditions of the canopy height model; 2. filtering out pseudo foreground pixels using the gray-level variation characteristics of the canopy height model; 3. introducing the similarity judgment factor DBI for DBI-K screening of crown vertices. The invention provides a treetop position detection method for the canopy height model of airborne LiDAR point cloud data that effectively solves the threshold dependence of traditional window-based tree vertex detection and improves tree vertex identification accuracy.

Description

DBI tree vertex detection method for airborne LiDAR point cloud data CHM

Technical Field

The invention relates to the field of laser radar (Light Detection and Ranging, LiDAR) data processing, and in particular to a DBI (Davies-Bouldin Index) tree vertex detection method for the canopy height model (Canopy Height Model, CHM) of airborne LiDAR point cloud data.

Technical Background

LiDAR penetrates the forest canopy strongly, which is very beneficial for forest inventory and the acquisition of stand parameters, and enables accurate identification of treetop positions and individual tree crowns. Airborne LiDAR technology provides a theoretical basis and technical support for the development of precision forestry and has broad application prospects in forest management and ecosystem research.

At present, the complexity of the canopy height model images generated from airborne LiDAR point clouds leads to inaccurate tree vertex identification in the field of crown position recognition. In existing traditional methods, researchers frequently detect treetops by searching for local maxima within a fixed window size. However, common window-based tree vertex detection methods are easily affected by variation in crown diameter: if the window is too small, trees with large crown radii are assigned multiple tree vertices; conversely, if the window is too large, trees with crown radii smaller than the specified window size are not assigned any tree vertex.

In summary, in order to solve the threshold dependence and inaccurate identification of the traditional window-based tree vertex detection method, it is necessary to propose a new tree vertex identification method. The invention generates foreground marker pixels according to the gray-level characteristics of the canopy height model and introduces the similarity judgment factor DBI (a ratio of within-cluster scatter to between-cluster distance), which both removes the threshold dependence of traditional window-based detection of tree vertices on the canopy height model and improves the identification accuracy of crown vertices.

Summary of the Invention

The purpose of the present invention is to provide a treetop position detection method for the canopy height model of airborne LiDAR point cloud data that can effectively solve the threshold dependence of the traditional window-based tree vertex detection method and improve tree vertex identification accuracy.

To achieve the purpose of the invention, a DBI treetop position detection method for the CHM of airborne LiDAR point cloud data is proposed and realized with the following technical scheme: a method for generating tree vertices from DBI-controlled foreground pixels based on the canopy height model, comprising standardizing the marker pixel generation conditions of the canopy height model, filtering out pseudo foreground pixels using the gray-level variation characteristics of the canopy height model, and introducing the similarity judgment factor DBI for DBI-K (DBI-Kmeans) screening of crown vertices. The marker pixel generation conditions of the canopy height model are: (1) the gray-level values of the pixels outside a marker are all lower than those inside the marker; (2) the pixels of the foreground image form a connected component; (3) the pixels inside the same marker have the same gray-level value. In the filtering of pseudo foreground pixels using the gray-level variation characteristics of the canopy height model, the computed foreground marker pixels are further processed in combination with the gray-level variation of the canopy height model image, and background points that appear among the foreground markers are filtered out according to the gray-level differences of the image. In the DBI-K (DBI-Kmeans) screening of crown vertices, DBI-K clustering is performed on the foreground image; when DBI takes its minimum value, the optimal cluster centers are obtained and taken as the treetops of individual trees, and the algorithm terminates.

The beneficial effects of the invention are as follows. A DBI-controlled foreground-pixel tree vertex generation method based on the canopy height model is adopted, including standardizing the marker pixel generation conditions of the canopy height model, filtering out pseudo foreground pixels using the gray-level variation characteristics of the canopy height model, and introducing the similarity judgment factor DBI for DBI-K (DBI-Kmeans) screening of crown vertices; this both solves the threshold dependence of the traditional window-based tree vertex detection method and improves tree vertex identification accuracy. On the one hand, standardizing the marker pixel generation conditions of the canopy height model and using the local maxima of the canopy height model image to determine the regions of interest, which serve as foreground pixels and markers, avoids the threshold dependence of window-based detection. On the other hand, filtering out pseudo foreground pixels using the gray-level variation characteristics of the canopy height model overcomes the problem that texture and noise contaminate the foreground markers with pseudo maxima (false foreground markers). Finally, introducing the similarity judgment factor DBI for DBI-K screening of crown vertices resolves the ambiguity of tree vertices in the canopy height model and achieves effective identification of tree vertices.

Extracting feature-based foreground marker pixels directly would contaminate the foreground markers with pseudo maxima because of texture and noise.

Brief Description of the Drawings

Fig. 1 is the point cloud data filtering and classification diagram.

Fig. 2 shows the DEM, DSM and CHM generated from the point cloud data.

Fig. 3 is the schematic diagram of the present invention.

Fig. 4 shows the foreground image and foreground markers of the canopy height model generated by the present invention under the standard conditions.

Fig. 5 shows the foreground markers after the pseudo foreground pixels have been filtered out by the present invention.

Fig. 6 shows the crown vertices screened by DBI-K in the present invention.

Detailed Description of Embodiments

Specific embodiments of the present invention are further described below with reference to the accompanying drawings.

Embodiment:

Step 1) With reference to Fig. 1, the point cloud data are filtered and classified. Preserving the terrain information of the forest sample plot is very important; on this premise, in order to achieve a better filtering effect, the invention adopts a progressive TIN densification filtering algorithm. The steps for separating the ground points are as follows:

(1) Set the maximum terrain slope: the maximum terrain slope shown in the point cloud is set to 80°;

(2) Set the iteration angle: the allowable angle between a point to be classified and the currently known ground points is set to 8°;

(3) Set the iteration distance: the distance threshold between a point to be classified and the triangulated network is set to 1.6 m.

Using the above method, the point cloud of the experimental plot is filtered and classified; the separated ground points are displayed in brown and the separated vegetation points in green.
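The filtering itself is carried out with the progressive TIN algorithm in the point cloud software; as a companion illustration only, the sketch below reads back an already classified LAS file and splits ground from vegetation points. laspy, the file name plot_filtered.las and the use of ASPRS class code 2 for ground are assumptions for illustration, not details given in the patent.

```python
# Sketch only: read an already filtered/classified LAS file and split classes.
# laspy, the file name and ASPRS class 2 = ground are assumptions for illustration.
import laspy
import numpy as np

las = laspy.read("plot_filtered.las")
cls = np.asarray(las.classification)
xyz = np.column_stack((las.x, las.y, las.z))

ground = xyz[cls == 2]       # ground points (displayed in brown in Fig. 1)
vegetation = xyz[cls != 2]   # remaining returns treated as vegetation (green in Fig. 1)

print(len(ground), "ground points,", len(vegetation), "vegetation points")
```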

Step 2) With reference to Fig. 2, the DEM, DSM and CHM are generated from the filtered and classified point cloud. The DSM and DEM of the experimental forest area are generated from the non-ground laser points and the ground-reflected laser points, respectively, using TIN interpolation. After repeated experiments in LiDAR360 it was found that the generated triangulation is smoother when the critical edge length is set to 1 m and the surface contains more detail when the insertion buffer is set to 2 m; the weight is set to 2. Subtracting the DEM from the DSM yields the canopy height model, as shown in the following formula:

CHM = DSM - DEM    (1)
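A minimal sketch of formula (1) on co-registered DSM and DEM rasters is given below; rasterio, the file names and the clamping of small negative differences to zero are assumptions added for illustration, not part of the patent.

```python
# Sketch of formula (1): CHM = DSM - DEM on co-registered rasters.
# rasterio, the file names and the clamping of negatives are assumptions.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as src:
    dsm = src.read(1).astype("float32")
    profile = src.profile
with rasterio.open("dem.tif") as src:
    dem = src.read(1).astype("float32")

chm = np.clip(dsm - dem, 0.0, None)   # formula (1); negative residuals set to 0

profile.update(dtype="float32", count=1)
with rasterio.open("chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```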

Step 3) With reference to Fig. 3, tree vertices are generated from DBI-controlled foreground pixels based on the canopy height model. In the marker pixel generation of the canopy height model under the standard conditions, the canopy height model is input and its foreground marker pixels are generated according to the three standard conditions. In the filtering of pseudo foreground pixels using the gray-level variation characteristics of the canopy height model, the initial foreground marker pixels of the canopy height model are input and the pseudo foreground marker pixels are filtered out of image C according to the local-maximum gray-level principle. In the DBI-K screening of crown vertices with the introduced similarity judgment factor DBI, the foreground marker pixels from which the pseudo markers have been removed are input and used as the initial cluster centers; new cluster centers are generated according to the cluster-center formula, and the DBI values before and after the update are compared; when DBI reaches its minimum, the optimal cluster centers are obtained and the algorithm terminates. The cluster centers at the minimum DBI are output and taken as the tree vertices of individual trees.

Step 4) With reference to Fig. 4, the marker pixels of the canopy height model are generated under the standard conditions. The initial foreground markers of the canopy height model are generated according to three standard conditions: (1) the gray-level values of the pixels outside a marker are all lower than those inside the marker; (2) the pixels of the foreground image form a connected component; (3) the pixels inside the same marker have the same gray-level value. The local maxima of the image determine the regions of interest, which serve as the foreground pixels and markers.
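One possible realization of these marker conditions, sketched under assumptions: local maxima of the CHM raster are taken as foreground marker pixels and grouped into connected components with scipy.ndimage. The 3x3 neighborhood and the 2 m height floor are illustrative choices, not values specified in the patent.

```python
# Sketch: local maxima of the CHM as initial foreground markers (conditions (1)-(3)).
# The 3x3 window and the 2 m height floor are illustrative assumptions.
import numpy as np
from scipy import ndimage

def initial_foreground_markers(chm: np.ndarray) -> np.ndarray:
    local_max = ndimage.maximum_filter(chm, size=3)   # highest value in each 3x3 window
    is_marker = (chm == local_max) & (chm > 2.0)      # local maximum, ignoring low pixels
    labeled, n_markers = ndimage.label(is_marker)     # each connected component = one marker
    return labeled                                    # 0 = background, 1..n_markers = markers
```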

Step 5) With reference to Fig. 5, pseudo foreground pixels are filtered out using the gray-level variation characteristics of the canopy height model. The crown regions of the gray-level canopy height model image are brighter than other regions, and the relative canopy height can be read from the shading of the gray image: the larger the gray value, the higher the canopy, so treetops (foreground markers) are pixels at local gray-level maxima. The gray-level canopy height model is represented by the gray values of the eight pixels adjacent to the central pixel. A gray image requires the R, G and B values to be equal (R = G = B = 255 is white, R = G = B = 0 is black). When the sum of the R, G and B component values of a canopy height model foreground pixel is a maximum and the R, G and B component values are equal, the marker pixel value is set to 0; in all other cases the marker pixel value is set to 1. This is expressed by the following formula:

C′(x, y) = 0, if C_R + C_G + C_B is a local maximum and C_R = C_G = C_B; C′(x, y) = 1, otherwise    (2)

where C_R, C_G and C_B denote the R, G and B component values at a pixel of the canopy height model image C after the initial foreground marking. Applying this formula converts image C into a new image C′, which is taken as the final foreground marker image.
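A sketch of formula (2), assuming the initially marked CHM image C is available as an (H, W, 3) RGB array; the 3x3 window stands in for the eight-neighborhood of the central pixel described above.

```python
# Sketch of formula (2): remove pseudo foreground markers from the RGB rendering C
# of the marked CHM. C is assumed to be an (H, W, 3) uint8 array.
import numpy as np
from scipy import ndimage

def filter_pseudo_markers(C: np.ndarray) -> np.ndarray:
    R = C[..., 0].astype(int)
    G = C[..., 1].astype(int)
    B = C[..., 2].astype(int)
    s = R + G + B
    local_max = ndimage.maximum_filter(s, size=3)        # maximum over the 8-neighborhood
    is_marker = (s == local_max) & (R == G) & (G == B)   # gray pixel with maximal brightness
    return np.where(is_marker, 0, 1).astype(np.uint8)    # formula (2): 0 = marker, 1 = other
```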

Step 6) With reference to Fig. 6, DBI-K (DBI-Kmeans) screens the crown vertices. On the basis of the foreground marker pixels generated from the canopy height model, the marker pixels are further screened with the similarity judgment factor: the input two-dimensional gray image of the canopy height model is clustered, the DBI similarity judgment factor is added to the K-means clustering method, and the cluster centers m and the number of cluster centers K within the foreground image are further determined.

Assume that the foreground image of the given sample canopy height model contains the data set A = {a_i}. The n samples in A are initially partitioned into K different clusters such that the similarity within a cluster is high and the similarity between clusters is low. The core of the K-means clustering method can therefore be described as clustering the data set A into a set B of K clusters, B = {b_i}. The within-cluster center of each cluster subset is:

m_i = (1/N_i) Σ_{a ∈ b_i} a    (3)

where a denotes a sample data object in A and N_i denotes the number of samples in cluster b_i.

When the similarity judgment factor DBI takes its minimum value, the DBI value is output and the optimal number of clusters K is obtained:

DBI = (1/K) Σ_{i=1}^{K} max_{j ≠ i} [ (W_i + W_j) / D_{i,j} ]    (4)

where a_i denotes a data object in the i-th cluster, m_i denotes the center of the i-th cluster, N_i denotes the number of data objects in the i-th cluster, W_i denotes the dispersion of the data objects within the i-th cluster, W_j denotes the dispersion of the data objects within the j-th cluster, and D_{i,j} denotes the Euclidean distance between the centroids of the i-th and j-th clusters.

In formula (4), D_{i,j} denotes the between-cluster distance, expressed as follows:

D_{i,j} = ||m_i - m_j||    (5)

In formula (4), W_i denotes the within-cluster distance, expressed as follows:

W_i = (1/N_i) Σ_{a ∈ b_i} ||a - m_i||    (6)
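For reference, a direct NumPy implementation of formulas (3) to (6) is sketched below; variable names follow the text, and the function itself is an illustration rather than code from the patent.

```python
# Sketch: direct implementation of formulas (3)-(6) for a given labeling of A.
import numpy as np

def davies_bouldin_index(A: np.ndarray, labels: np.ndarray) -> float:
    ks = np.unique(labels)
    K = len(ks)
    if K < 2:
        return 0.0
    m = np.array([A[labels == k].mean(axis=0) for k in ks])              # formula (3)
    W = np.array([np.linalg.norm(A[labels == k] - m[i], axis=1).mean()   # formula (6)
                  for i, k in enumerate(ks)])
    dbi = 0.0
    for i in range(K):
        ratios = [(W[i] + W[j]) / np.linalg.norm(m[i] - m[j])            # formulas (4) and (5)
                  for j in range(K) if j != i]
        dbi += max(ratios)
    return dbi / K                                                       # formula (4)
```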

If DBI_new < DBI_last, a new set of cluster centers is formed; otherwise the algorithm terminates. When the iteration ends, the tree vertices and their number K are determined: the cluster centers produced at termination are taken as the treetop points, the cluster number K is taken as the number of individual trees in the target image, and the screened cluster-center pixels are taken as the seed points (treetop points) used to delineate each crown.

m′ = {m_1, …, m_K}, accepted while DBI_new < DBI_last && m′ ≠ m    (7)

where && is the logical AND, m′ is the new set of cluster centers and m is the previous set of cluster centers m_1 to m_K. When DBI takes its minimum value, the optimal solution is obtained and the number and positions of the tree vertices are determined.
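The iteration can be sketched as follows, under assumptions: the filtered marker pixel coordinates seed K-means, candidate K values are swept (how they are enumerated is not spelled out in the patent), and the clustering with the minimum DBI gives the treetop points. scikit-learn's KMeans and davies_bouldin_score (the same index as formulas (4) to (6)) are used for illustration.

```python
# Sketch of DBI-K screening: marker pixels seed K-means and the K with the
# smallest DBI gives the treetop points. The sweep over K is an assumption.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score   # same index as formulas (4)-(6)

def dbi_k_treetops(A: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """A: (n, 2) foreground pixel coordinates; seeds: (s, 2) filtered marker pixels."""
    best_dbi, best_centers = np.inf, None
    for K in range(2, len(seeds) + 1):
        km = KMeans(n_clusters=K, init=seeds[:K], n_init=1).fit(A)   # markers as initial centers
        dbi = davies_bouldin_score(A, km.labels_)
        if dbi < best_dbi:                                           # keep the minimum-DBI solution
            best_dbi, best_centers = dbi, km.cluster_centers_
    return best_centers   # treetop points T_point; K = len(best_centers) single trees
```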

The above describes only preferred embodiments of the invention with reference to the accompanying drawings. It should be noted that those skilled in the art can make various variations and modifications within the scope of the appended claims, and such improvements and variations shall also be regarded as falling within the protection scope of the invention.

Claims (3)

1. A DBI tree vertex detection method for airborne laser radar point cloud data CHM, comprising the following specific steps:
step 1) filtering and classifying the point cloud data and generating a digital elevation model (DEM), a digital surface model (DSM) and a canopy height model (CHM);
step 2) standardizing the foreground marker pixel generation conditions of the canopy height model according to the pixel gray levels of the canopy height model (CHM) to generate the initial foreground markers of the canopy height model, specifically comprising:
(1) the gray-level values of the pixels outside a marker of the canopy height model image are all lower than those inside the marker;
(2) the pixel points of the foreground image of the canopy height model image form a connected component;
(3) the pixel points inside the same marker of the canopy height model image have the same gray-level value;
generating the canopy height model image C after initial foreground marking according to the 3 standard conditions;
step 3) filtering and removing pseudo foreground pixels, based on the initial foreground markers of the canopy height model, by using the gray-value variation characteristics of the eight pixels neighboring the central pixel;
step 4) introducing a similarity judgment factor DBI and screening the crown vertices of the canopy height model foreground markers with the DBI-K method, specifically comprising:
clustering the input two-dimensional gray-level image of the canopy height model, and generating the cluster centers m and the number of cluster centers K of the canopy height model foreground image through multiple iterations of a K-means clustering method that introduces the similarity judgment factor DBI; if DBI_new < DBI_last, forming a new cluster center, otherwise terminating the algorithm; taking the cluster centers m produced when the iteration terminates as the treetop points T_point, taking the number of cluster centers K as the number of individual trees in the target image, and using the screened cluster-center pixels to describe the treetop point T_point of each crown; the tree vertices are then labeled and displayed visually in three dimensions.
2. The method according to claim 1, characterized in that step 1) is specifically:
in order to ensure the integrity of the terrain information and the filtering and classification effect of the laser radar point cloud, the method adopts a progressive TIN densification filtering algorithm; the experimental sample plot data are filtered and classified by setting a maximum slope, an iteration angle and an iteration distance; the filtered and classified point cloud is displayed in color; the filtered and classified non-ground laser points and ground-reflected laser points are processed with the TIN method to generate the DSM and the DEM of the experimental forest area respectively; and the CHM is obtained by subtracting the DEM from the DSM.
3. The method according to claim 1, characterized in that step 3) is specifically:
the method uses the R, G, B values to obtain the gray-level image of the canopy height model; when the sum of the C_R, C_G, C_B component values of a foreground pixel of the canopy height model is a maximum and the R, G, B component values are equal, the pixel value of the foreground pixel is marked as 0, and in all other cases the pixel value is 1, resulting in a new image C′ as the final foreground marker image.
CN202110470600.4A 2021-04-29 2021-04-29 DBI tree vertex detection method of airborne laser radar point cloud data CHM Active CN113222917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110470600.4A CN113222917B (en) 2021-04-29 2021-04-29 DBI tree vertex detection method of airborne laser radar point cloud data CHM

Publications (2)

Publication Number Publication Date
CN113222917A (en) 2021-08-06
CN113222917B (en) 2022-06-14

Family

ID=77089898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110470600.4A Active CN113222917B (en) 2021-04-29 2021-04-29 DBI tree vertex detection method of airborne laser radar point cloud data CHM

Country Status (1)

Country Link
CN (1) CN113222917B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023178A (en) * 2016-05-16 2016-10-12 浙江工业大学 Method for detecting single tree in remote sensing data based on gradient direction clustering
CN106845399A (en) * 2017-01-18 2017-06-13 北京林业大学 A kind of method that use hierarchical cluster mode extracts individual tree information from LiDAR point cloud
CN109164459A (en) * 2018-08-01 2019-01-08 南京林业大学 A kind of method that combination laser radar and high-spectral data classify to forest species
CN110389369A (en) * 2019-07-30 2019-10-29 南京林业大学 Canopy Point Cloud Acquisition Method Based on RTK-GPS and Mobile 2D Laser Scanning
CN110992375A (en) * 2019-11-29 2020-04-10 桂林理工大学 Chord angle discrimination clustering single tree segmentation method for layer-by-layer LiDAR point cloud
CN111739031A (en) * 2020-06-19 2020-10-02 华南农业大学 A crop canopy segmentation method based on depth information

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPR301401A0 (en) * 2001-02-09 2001-03-08 Commonwealth Scientific And Industrial Research Organisation Lidar system and method
CN104820830B (en) * 2015-05-08 2018-01-02 南京林业大学 A kind of wood recognition method based on Full wave shape LiDAR canopy section models
CN107274417B (en) * 2017-07-05 2020-06-16 电子科技大学 A single tree segmentation method based on the aggregation relationship of airborne laser point clouds
CN109902686B (en) * 2019-01-22 2021-01-15 中国科学院植物研究所 Forest single tree parameter extraction method
CN110196432B (en) * 2019-04-28 2021-04-02 湖南工学院 Deciduous forest tree-level parameter determination method based on small-spot airborne radar
CN110223314B (en) * 2019-06-06 2021-09-24 电子科技大学 A single tree segmentation method based on the three-dimensional point cloud distribution of tree canopy
CN110288594B (en) * 2019-07-02 2021-06-04 河北农业大学 A method for analyzing plant canopy structure traits
CN111428784B (en) * 2020-03-23 2024-06-18 湖南工学院 Robust segmentation method for determining deciduous forest tree level parameters by using airborne laser radar
CN111462134A (en) * 2020-03-31 2020-07-28 武汉大学 Single-tree segmentation method and system for fusion of high-resolution remote sensing images and lidar point clouds
CN112163458A (en) * 2020-09-04 2021-01-01 江苏东晟辉科技开发有限公司 Ground feature classification method based on integration of CASI hyperspectrum and airborne LiDAR
CN112669333B (en) * 2021-01-11 2024-07-12 四川测绘地理信息局测绘技术服务中心 Single wood information extraction method


Also Published As

Publication number Publication date
CN113222917A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN111898688B (en) Airborne LiDAR data tree classification method based on three-dimensional deep learning
Wang et al. A multiscale and hierarchical feature extraction method for terrestrial laser scanning point cloud classification
CN113591766B (en) Multi-source remote sensing tree species identification method for unmanned aerial vehicle
CN106296695B (en) Adaptive threshold natural target image segmentation extraction algorithm based on conspicuousness
CN111476170A (en) Remote sensing image semantic segmentation method combining deep learning and random forest
CN105427309B (en) The multiple dimensioned delamination process of object-oriented high spatial resolution remote sense information extraction
CN105631892B (en) It is a kind of that detection method is damaged based on the aviation image building of shade and textural characteristics
CN102005034A (en) Remote sensing image segmentation method based on region clustering
CN108241871A (en) Laser point cloud and image fusion data classification method based on multi-features
CN107291855A (en) A kind of image search method and system based on notable object
CN106203448B (en) A scene classification method based on nonlinear scale space
CN106548141A (en) A kind of object-oriented farmland information extraction method based on the triangulation network
Deng et al. Cloud detection in satellite images based on natural scene statistics and gabor features
CN110348478B (en) Method for extracting trees in outdoor point cloud scene based on shape classification and combination
CN115147746B (en) Saline-alkali geological identification method based on unmanned aerial vehicle remote sensing image
CN106446925A (en) Dolphin identity recognition method based on image processing
CN114581771B (en) Method for detecting collapse building by high-resolution heterogeneous remote sensing
CN116543325A (en) Artificial intelligence automatic recognition method and system for crops based on UAV images
CN110070545B (en) A Method for Automatically Extracting Urban Built-up Areas from Urban Texture Feature Density
CN109447111A (en) A kind of remote sensing supervised classification method based on subclass training sample
Zheng et al. YOLOv4-lite–based urban plantation tree detection and positioning with high-resolution remote sensing imagery
CN111860359B (en) A Point Cloud Classification Method Based on Improved Random Forest Algorithm
CN108931825A (en) A kind of remote sensing image clouds thickness detecting method based on atural object clarity
CN105023269B (en) A kind of vehicle mounted infrared image colorization method
CN106529600A (en) SVM-based recognition method of building angular points in high-resolution optical image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant