CN117437537B - Building target level change detection method and system based on airborne LiDAR point cloud data

Building target level change detection method and system based on airborne LiDAR point cloud data

Info

Publication number
CN117437537B
CN117437537B (application CN202311245296.9A)
Authority
CN
China
Prior art keywords
point cloud
point
building
cloud data
change
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311245296.9A
Other languages
Chinese (zh)
Other versions
CN117437537A (en)
Inventor
张振超
纪松
戴晨光
汪汉云
季虹良
张永生
李力
张磊
牛雁飞
周汝琴
王鹏
张英健
杜跃飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force filed Critical Information Engineering University of PLA Strategic Support Force
Priority to CN202311245296.9A priority Critical patent/CN117437537B/en
Publication of CN117437537A publication Critical patent/CN117437537A/en
Application granted granted Critical
Publication of CN117437537B publication Critical patent/CN117437537B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/176 Urban or other man-made structures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of building change detection, and in particular to a building target level change detection method and system based on airborne LiDAR point cloud data. Airborne LiDAR point cloud data of different time phases are acquired, and point set features at the level of individual building targets are obtained from the point cloud data through point cloud data processing, the point set features comprising spherical point set features and patch point set features. Whether a building point cloud patch has changed is judged by comparing the point set features at the same position in the different time phases, and building change information is merged on the basis of the judged change results, so that the detected building information in the point cloud data is acquired and output, the detected building change types comprising newly built buildings, demolished buildings and unchanged buildings. By fusing single-point features and change features, the invention realizes coarse-to-fine change detection of building target level point clouds, suppresses the influence of point cloud differences and point cloud noise, and improves detection accuracy and robustness.

Description

Building target level change detection method and system based on airborne LiDAR point cloud data
Technical Field
The invention relates to the technical field of building change detection, in particular to a building target level change detection method and system based on airborne LiDAR point cloud data.
Background
Lidar is a sensor that uses laser echoes for ranging and orientation and identifies targets by their position, radial velocity and reflection characteristics; it involves dedicated transmitting, scanning, receiving and signal-processing techniques. Light Detection and Ranging (LiDAR) is a technology that, based on the basic principle of laser radar, scans terrain or ground objects and records range and other information to acquire high-precision, high-density laser point cloud data, echo intensity and spectral information. The range is obtained by measuring the round-trip travel time of the laser to the target, combined with the speed of light and the atmospheric refractive index, from which the position coordinates of the target can be calculated. Compared with visible-light remote sensing, laser LiDAR is an active remote sensing technique: it actively emits laser pulses to probe ground objects and can therefore operate around the clock. The laser beam is narrow and monochromatic, offers higher resolution and sensitivity, has strong anti-interference capability, and is little affected by weather, ground background and sky background. In terms of data acquisition, amplitude, frequency and phase information can be obtained, and dynamic targets can be measured and identified. According to the platform carrying the sensor, laser LiDAR can be classified into satellite-borne, airborne, mobile (vehicle-mounted) and terrestrial laser scanning. Airborne LiDAR mounts a LiDAR system on an aircraft to acquire three-dimensional data over a wide area. The advantages of airborne laser scanning include: first, large-scale laser point cloud data can be acquired efficiently; data post-processing mainly converts the measured laser travel times into three-dimensional target point coordinates (X, Y, Z) and is itself efficient; second, airborne laser scanning performs remote sensing of the ground from altitudes of 500 m to 3000 m and can be widely used for large-area urban spatial mapping and three-dimensional geographic information acquisition; third, an airborne laser scanning system can operate day and night; fourth, the accuracy of airborne laser point clouds can reach the centimeter level, meeting the requirements of large-scale mapping. At present, airborne laser scanning is widely applied to topographic mapping, forest resource investigation, agricultural monitoring, urban planning, emergency response and other fields.
Airborne LiDAR point cloud change detection is a technique that compares and analyzes airborne LiDAR point cloud data of two time phases over the same region in order to obtain information on changes of the ground objects in that region. It is also a hot research direction in current laser point cloud data processing and can be applied to change detection of land use/land cover information, geographic information updating, dynamic monitoring of resources, environment and disasters, urban building change monitoring, target tracking, vegetation growth supervision, biomass evaluation and so on. With the widespread acquisition and use of airborne point cloud data, research on airborne LiDAR point cloud change detection is becoming increasingly important. An airborne LiDAR point cloud is a collection of unordered three-dimensional coordinates (X, Y, Z). A point cloud is permutation-invariant: changing the storage order of the points does not change the object they describe. Besides the three-dimensional coordinates, point clouds obtained by airborne laser scanning can also record information such as echo intensity and number of echoes. The difficulties of airborne LiDAR point cloud change detection are as follows. (1) Laser point cloud data of the same area acquired at different time phases are likely to differ in point count, point density, point accuracy, noise level and spatial distribution, and point clouds are permutation-invariant; point cloud change detection therefore cannot compare pixels of different phases at the same position (pixel-to-pixel) as two-dimensional remote sensing image change detection does, but can only perform cluster-to-cluster or point-to-cluster change detection. (2) Relevant changes, false changes and irrelevant changes must be distinguished: relevant changes are the change types of interest, and airborne LiDAR point cloud change detection usually focuses on surface deformation, building changes (new construction or demolition) and the like; false changes occur when the target did not actually change between the two phases but a change is detected because of data quality problems or algorithm defects, such as spurious linear detections along building edges in the change detection result; irrelevant changes are changes that really happened and are reflected in the two-phase point cloud data but are not of interest, for example falling leaves in different seasons, moving pedestrians, water-surface waves, or containers shifting in a port.
Compared with two-dimensional change detection on remote sensing images, airborne LiDAR point cloud change detection has the following advantages: first, three-dimensional geometric information is not affected by illumination or viewing-angle changes, and the change detection result is presented in three-dimensional space; second, reliable registration can be achieved between point cloud data sets, which provides an important guarantee for highly reliable change detection. Airborne LiDAR point cloud change detection methods can be divided into two categories: geometric comparison and geometric-spectral analysis. Elevation differencing is the most straightforward method, detecting change information by computing the vertical distance between two point clouds or DSMs. Euclidean distance methods compute the surface-to-surface distance between two three-dimensional data sets to indicate change. Projection-based comparison is commonly used for change detection between multi-view images and three-dimensional point clouds: after the three-dimensional object is projected onto an image, changes are detected by comparing the similarity of the two-phase objects on the image. Post-optimization methods first obtain initial change locations through geometric comparison and then refine the results layer by layer using other available data sources and feature information. Direct feature fusion directly fuses change information such as geometric, spectral and textural changes, typically by change vector analysis (CVA), Dempster-Shafer evidence theory, or supervised classifiers (e.g. SVM, random forest). Post-classification comparison is another common change detection approach, whose basic idea is to first perform ground-object classification or target detection and then detect changes by comparing the labels. However, existing airborne laser point cloud change detection methods still have shortcomings in building change detection: methods based on hand-crafted feature extraction extract features from the point cloud data of the two phases separately and then apply change vector analysis (CVA) to the features of the earlier and later phases, or classify the point clouds of the two phases separately and compare the semantic labels at the same position to obtain change information; such methods involve heavy computation and limited robustness and are difficult to apply industrially.
Disclosure of Invention
Therefore, the invention provides a building target level change detection method and system based on airborne LiDAR point cloud data, which address the problems of heavy computation, poor robustness and limited suitability for industrial application found when existing airborne laser point cloud change detection methods are used for building change detection.
According to the design scheme provided by the invention, on one hand, a building target level change detection method based on airborne LiDAR point cloud data is provided, which comprises the following steps:
Acquiring airborne laser LiDAR point cloud data in different time phases, and acquiring point set features of a single target level of a building in the point cloud data through point cloud data processing, wherein the point set features comprise spherical point set features and patch point set features;
Judging whether the building point cloud patches have changed by comparing the point set features at the same positions in different time phases, and merging building change information based on the judged change results to acquire and output the detected building information in the point cloud data, wherein the detected building change types comprise newly built buildings, demolished buildings and unchanged buildings.
As the building target level change detection method based on the airborne LiDAR point cloud data, the invention further obtains the point set characteristics of a building single target level in the point cloud data through the point cloud data processing, and comprises the following steps:
Firstly, filtering the airborne laser LiDAR point cloud data of the different time phases respectively by a progressive triangulated irregular network (TIN) densification method based on gradient information, separating ground points from non-ground points, and filtering out non-building point clouds using the point cloud elevation and the ground point elevation;
Then, segmenting the filtered point cloud data by surface-growing-based point cloud segmentation to obtain large-block point cloud patches of a preset large size, small-block point cloud patches of a preset small size, and undivided point cloud of the buildings;
And then, merging the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by using a cluster analysis method to obtain a spherical point set and a patch point set of a single building target level.
As the building target level change detection method based on the airborne LiDAR point cloud data of the present invention, further, the airborne laser LiDAR point cloud data in different phases is respectively filtered, including:
firstly, constructing point cloud data square grids with the side length of L, selecting the lowest point in each grid as an initial seed point, and constructing an irregular triangular network for the initial seed point;
Then, for each laser point in the point cloud data, if the distance from the point to the nearest triangular facet and the angles it forms with that facet are smaller than preset thresholds, the point is judged to be a ground point; this is repeated until all the point cloud data have been traversed;
then, a new triangle net is constructed by utilizing the ground points, the local gradient of each point is calculated, and a smooth triangle net is obtained by eliminating discontinuous triangle points.
As the building target level change detection method based on airborne LiDAR point cloud data of the present invention, further, filtering out non-building point clouds using the point cloud elevation and the ground point elevation comprises the following steps:
firstly, constructing a digital elevation model DEM based on a ground point set;
Then, for each laser point, subtracting the DEM elevation at the position of the laser point from the elevation of the laser point to obtain the elevation difference of the laser point, the elevation difference representing the height of the laser point above the ground surface;
and then, screening the non-ground point set based on the height difference and a preset height difference threshold value to remove non-building point clouds.
As the building target level change detection method based on airborne LiDAR point cloud data of the present invention, further, segmenting the filtered point cloud data by surface-growing-based point cloud segmentation includes:
Firstly, extracting point clusters positioned on the same plane from point cloud data by utilizing 3D Hough transformation, and forming the extracted point clusters into a seed patch;
then, analyzing the points around each seed patch with a surface growing algorithm, and adding to the seed patch those surrounding points whose distance to the nearest point of the target patch is smaller than a preset distance and whose perpendicular distance to the patch is smaller than a preset perpendicular distance;
and then, computing the covariance matrix and eigenvalues of each seed patch from the point coordinates, and using preset mathematical expressions of the eigenvalues that characterize the overall shape of the patch to obtain the large-block point cloud patches of the preset large size, the small-block point cloud patches of the preset small size, and the undivided point cloud of the building.
As the building target level change detection method based on the airborne LiDAR point cloud data, the invention further utilizes a cluster analysis method to combine the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud, and comprises the following steps:
firstly, establishing a KD tree structure based on point cloud data, and constructing a 3D tree based on three-dimensional coordinates of point cloud and searching by adopting a K nearest neighbor method;
then, merging the large-block point cloud surface patches and the small-block point cloud surface patches of the adjacent buildings belonging to the same building target based on the 3D tree;
and then, judging whether points in the undivided point cloud and adjacent points belong to the same point cluster or not by searching the nearest adjacent points around each point cloud so as to cluster the undivided point cloud in a discrete cluster distribution point set.
The building target level change detection method based on the airborne LiDAR point cloud data, provided by the invention, further comprises the steps of constructing a 3D tree based on the three-dimensional coordinates of the point cloud and searching by adopting a K nearest neighbor method, wherein the method comprises the following steps:
Searching for an approximate nearest neighbor along the search path of the binary tree: to reach the leaf node lying in the same subspace as the query point, the value of the query point in each splitting dimension is compared with that of the splitting node, and the search descends into the left or the right subtree branch accordingly; the search path is then traced back, and for each node on it it is judged whether the other child subspace may contain a data point closer to the query point; if so, the search jumps into that subspace and its nodes are added to the search path; the approximate nearest neighbor is refined iteratively in this way until the search path is empty, yielding the nearest neighbor of each point.
As the building target level change detection method based on the airborne LiDAR point cloud data, the invention further judges whether the building point cloud surface patch changes by comparing the point set characteristics of the same position in different time phases, and comprises the following steps:
Starting from a large building point cloud patch in one time phase, the vertical elevation change of each point to the other time phase is calculated; the proportion of points whose elevation change exceeds a preset threshold is counted, and whether the building has changed is judged according to whether this point proportion exceeds a preset proportion threshold; for the building point cloud data judged as changed, the elevation change is then evaluated point by point so as to detect newly built, demolished and unchanged buildings from the point cloud data.
As the building target level change detection method based on the airborne LiDAR point cloud data, the building change information is combined based on the judged change result, and the method comprises the following steps:
Merging building change detection results of the same type within the same time phase as well as the change detection results of the different time phases; and, based on the neighborhood of the roof point cloud in the large-block point cloud patches, if a point in the neighborhood lies within a preset range below the roof point set and its horizontal distance is smaller than a preset value, merging it into the building as a wall point below the roof point cloud, so that complete change information of each single building is obtained through this information merging.
Further, the invention also provides a building target level change detection system based on airborne LiDAR point cloud data, which comprises: a data acquisition module and a target detection module, wherein,
The data acquisition module is used for acquiring airborne laser LiDAR point cloud data in different time phases, and acquiring point set characteristics of a single target level of a building in the point cloud data through point cloud data processing, wherein the point set characteristics comprise spherical point set characteristics and patch point set characteristics;
The target detection module is used for judging whether the building point cloud patches have changed by comparing the point set features at the same positions in different time phases, and merging building change information based on the judged change results so as to acquire and output the detected building information in the point cloud data, wherein the detected building change types comprise newly built buildings, demolished buildings and unchanged buildings.
The invention has the beneficial effects that:
By fusing single-point features and change features, the invention realizes coarse-to-fine change detection of building target level point clouds and can perform stable change detection on laser LiDAR point clouds. By analyzing the geometric features and spatial distribution patterns of the laser points, it is well suited to change detection tasks in urban areas of medium building density, and it outputs three-dimensional point-by-point change information covering four types: newly built, demolished, locally changed and unchanged. It can detect the change information of a single building as well as changes in local areas of a building, and therefore has strong applicability and considerable practical value. The scheme is robust: points with small elevation differences are removed after filtering, so that interference from low vegetation, cars and other ground objects is eliminated in advance; interference from sphere-like vegetation is removed through point clustering; change detection of a single building target is reduced to change detection of local areas, and the final change information is obtained coarse-to-fine by combining elevation change information and bi-temporal change information. A three-dimensional tree structure is built for the non-ground point cloud and K-nearest-neighbor search is used to improve the efficiency of neighborhood point search, so the method can run fully automatically and can be effectively applied to laser point cloud change detection tasks at the urban scale.
Drawings
FIG. 1 is a schematic illustration of a building target level change detection flow based on airborne LiDAR point cloud data in an embodiment;
FIG. 2 is a schematic diagram of three-dimensional change detection method classification in an embodiment;
FIG. 3 is a schematic illustration of a building target level change detection algorithm according to an embodiment;
FIG. 4 is a schematic diagram of the TIN densification decision used in point cloud filtering in an embodiment;
fig. 5 is a schematic flow of multi-level change detection in an embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and the technical scheme, in order to make the objects, technical schemes and advantages of the present invention more apparent.
To address the problems of heavy computation, poor robustness and limited suitability for industrial application of conventional airborne laser point cloud change detection methods described in the background art, an embodiment of the invention provides a building target level change detection method based on airborne LiDAR point cloud data which, as shown in fig. 1, comprises the following contents:
s101, acquiring airborne laser LiDAR point cloud data in different time phases, and acquiring point set features of a single target level of a building in the point cloud data through point cloud data processing, wherein the point set features comprise spherical point set features and patch point set features.
As shown in fig. 2 and 3, laser LiDAR point cloud data of an earlier and a later time phase are input, and the point cloud data are assumed to contain only (X, Y, Z) position information. The first stage is point cloud segmentation: firstly, the laser point cloud data of the two time phases are filtered separately to separate ground points from non-ground points; the normalized DSM (nDSM) is computed from the point cloud elevation Z and the ground point elevation, and non-building point clouds with small elevation differences are removed by nDSM elevation-difference screening; the remaining main body of non-ground points is then segmented by surface growing segmentation (Surface Growing Segmentation) to obtain large-block point cloud patches such as building roofs, small point cloud patches and undivided point clouds; the large patches, small patches and undivided point clouds are merged by neighborhood clustering analysis to obtain spherical point sets and patch point sets at the single-target level. The second stage is target level change analysis: the point set features at the same position in the two time phases are compared, and whether a point cloud patch has changed is judged comprehensively from elevation change, normal vector direction change and smoothness change; the building change information of the two time phases is merged to obtain three kinds of information: newly built buildings, demolished buildings and unchanged buildings.
Specifically, the point set characteristics of a single target level of a building in the point cloud data are acquired through the point cloud data processing, and the point set characteristics can be designed to comprise the following contents:
Firstly, filtering the airborne laser LiDAR point cloud data of the different time phases respectively by a progressive triangulated irregular network (TIN) densification method based on gradient information, separating ground points from non-ground points, and filtering out non-building point clouds using the point cloud elevation and the ground point elevation;
Then, segmenting the filtered point cloud data by surface-growing-based point cloud segmentation to obtain large-block point cloud patches of a preset large size, small-block point cloud patches of a preset small size, and undivided point cloud of the buildings;
And then, merging the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by using a cluster analysis method to obtain a spherical point set and a patch point set of a single building target level.
The filtering of the airborne laser LiDAR point cloud data in different time phases may include:
firstly, constructing point cloud data square grids with the side length of L, selecting the lowest point in each grid as an initial seed point, and constructing an irregular triangular network for the initial seed point;
Then, for each laser point in the point cloud data, if the distance from the point to the nearest triangular facet and the angles it forms with that facet are smaller than preset thresholds, the point is judged to be a ground point; this is repeated until all the point cloud data have been traversed;
then, a new triangle net is constructed by utilizing the ground points, the local gradient of each point is calculated, and a smooth triangle net is obtained by eliminating discontinuous triangle points.
The process of separating ground points from non-ground points in an airborne laser point cloud is also referred to as "point cloud filtering". The method used here takes good account of the local details of the terrain, introduces global gradient information, effectively suppresses point cloud noise and preserves local terrain break lines. In a specific implementation, the algorithm steps can be designed as follows:
(1) Select initial seed points. Square grid cells with side length L are constructed, and the lowest point in each cell is selected as an initial seed point. A triangulated irregular network TIN (Triangular Irregular Network) is constructed from the seed points;
(2) Progressive TIN densification. As shown in fig. 4, each laser point is accepted as a ground point if its distance d to the nearest triangular facet and the angles α_i (i = 1, 2, 3) it forms with that facet are all smaller than the corresponding thresholds. This continues until all laser points have been processed;
(3) And constructing a new triangular net by using the ground points, and calculating the local gradient of each point. Removing discontinuous triangular points to obtain a smooth triangular net;
(4) Steps 2 to 3 can, for example, be repeated three times to obtain a smooth triangulated network. It should be noted that the number of repetitions may be adjusted according to the specific implementation scenario.
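For illustration, a minimal Python/NumPy sketch of the progressive TIN densification filter outlined in steps (1) to (4) is given below. It is a sketch under assumptions, not the implementation of this disclosure: the grid size, distance threshold, angle threshold and number of rounds are placeholder values, the slope-based smoothing of step (3) is omitted, and SciPy's Delaunay triangulation stands in for the TIN construction.

import numpy as np
from scipy.spatial import Delaunay

def tin_filter(xyz, grid_size=20.0, d_max=1.0, angle_max_deg=15.0, n_rounds=3):
    """Label ground points by progressive TIN densification (sketch)."""
    xyz = np.asarray(xyz, dtype=float)
    # step (1): lowest point per square grid cell as the initial seed points
    cell = np.floor((xyz[:, :2] - xyz[:, :2].min(axis=0)) / grid_size).astype(int)
    seeds = {}
    for i, key in enumerate(map(tuple, cell)):
        if key not in seeds or xyz[i, 2] < xyz[seeds[key], 2]:
            seeds[key] = i
    ground = np.zeros(len(xyz), dtype=bool)
    ground[list(seeds.values())] = True
    # steps (2)-(4): repeatedly densify the TIN built over the current ground set
    for _ in range(n_rounds):
        tin = Delaunay(xyz[ground][:, :2])
        g = xyz[ground]
        for i in np.where(~ground)[0]:
            p = xyz[i]
            s = int(tin.find_simplex(p[:2]))
            if s < 0:                        # outside the TIN: leave as non-ground
                continue
            tri = g[tin.simplices[s]]        # the three facet vertices
            n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
            n /= np.linalg.norm(n) + 1e-12
            d = abs(np.dot(p - tri[0], n))   # distance from the point to the facet plane
            edges = np.linalg.norm(p - tri, axis=1) + 1e-12
            angles = np.degrees(np.arcsin(np.clip(d / edges, 0.0, 1.0)))
            if d < d_max and np.all(angles < angle_max_deg):
                ground[i] = True             # accept the point as ground and densify
    return ground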
Filtering and removing non-building point clouds by point cloud elevation and ground point elevation may include:
firstly, constructing a digital elevation model DEM based on a ground point set;
Then, for each laser point, subtracting the DEM elevation at the position of the laser point from the elevation of the laser point to obtain the elevation difference of the laser point, the elevation difference representing the height of the laser point above the ground surface;
and then, screening the non-ground point set based on the height difference and a preset height difference threshold value to remove non-building point clouds.
The purpose of elevation-difference screening is to remove the non-building point sets with small elevation differences, so that their interference with the subsequent building change detection is suppressed. The specific screening method is as follows: the digital elevation model DEM (Digital Elevation Model) is obtained by interpolating the ground point set produced by the filtering in the previous step; the elevation difference ΔZ_i of each laser point is the point elevation Z_i minus the DEM elevation at that location, i.e. ΔZ_i = Z_i - Z_DEM(X_i, Y_i).
The physical meaning of the elevation difference ΔZ_i of each laser point is the height of that point above the ground surface. The non-ground point set is screened according to ΔZ_i: points with ΔZ_i < T_Z are regarded as non-building points; such low points may be low vegetation, cars, sculptures, road boundaries, guardrails, bus stops, etc. Removing these low points in advance reduces irrelevant interference and improves the robustness of the change detection algorithm. Here T_Z is the elevation-difference screening threshold, taken as the lowest building height in the experimental area, for example 3.0 m.
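A minimal sketch of this elevation-difference (nDSM) screening is shown below, assuming SciPy's griddata is used for the DEM interpolation; only the 3.0 m threshold follows the text, and the function and parameter names are illustrative.

import numpy as np
from scipy.interpolate import griddata

def ndsm_screen(ground_xyz, nonground_xyz, t_z=3.0):
    """Keep the non-ground points lying at least t_z meters above the interpolated DEM."""
    dem = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                   nonground_xyz[:, :2], method='linear')
    # points outside the convex hull of the ground set get NaN; fall back to nearest neighbor
    nan = np.isnan(dem)
    if np.any(nan):
        dem[nan] = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                            nonground_xyz[nan, :2], method='nearest')
    dz = nonground_xyz[:, 2] - dem           # elevation difference dZ_i = Z_i - Z_DEM
    return nonground_xyz[dz >= t_z], dz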
The segmentation of the filtered point cloud data by surface-growing-based point cloud segmentation can comprise:
Firstly, extracting point clusters positioned on the same plane from point cloud data by utilizing 3D Hough transformation, and forming the extracted point clusters into a seed patch;
then, analyzing the points around each seed patch with a surface growing algorithm, and adding to the seed patch those surrounding points whose distance to the nearest point of the target patch is smaller than a preset distance and whose perpendicular distance to the patch is smaller than a preset perpendicular distance;
And then, computing the covariance matrix and eigenvalues of each seed patch from the point coordinates, and using preset mathematical expressions of the eigenvalues that characterize the overall shape of the patch to obtain the large-block point cloud patches of the preset large size, the small-block point cloud patches of the preset small size, and the undivided point cloud of the building.
After the ground points and the low non-building points have been filtered from the airborne laser point cloud, the remaining point set mainly comprises three major categories: building roofs, the parts of wall surfaces higher than 3.0 m, and tall vegetation, plus a small number of interfering points such as street lamp poles and scaffolding. To further extract the building point cloud, an extraction method can be designed according to the geometric characteristics of buildings. Since roof point clouds are geometrically characterized mainly by smooth planes or curved surfaces, a surface-growing-based segmentation method is employed to segment planar patch points from the non-ground point cloud above 3.0 m as candidate building points.
The surface-growing point cloud segmentation can be implemented as follows: first, a 3D Hough transform is used to extract "seed patches" from the point cloud. These patches are clusters of points lying in the same plane, mainly roofs and walls. Because of noise or holes in the point cloud, one roof is often split into several point cloud patches, which complicates the data processing. Therefore, the points around each "seed patch" are analyzed by the surface growing algorithm, and only points whose distance to the nearest point in the existing patch is less than D and whose perpendicular distance to the patch is less than D_0 are added to the patch; this is the "growing" process. Scattered points and vegetation points are left unsegmented because they do not belong to any plane.
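A minimal sketch of the growing step is given below; it assumes a seed patch and its fitted plane are already available (the 3D Hough transform that produces the seeds is not reproduced here), and the parameter names d_neigh and d_perp stand in for the thresholds D and D_0.

import numpy as np
from scipy.spatial import cKDTree

def grow_patch(xyz, seed_idx, plane_point, plane_normal, d_neigh=1.0, d_perp=0.2):
    """Grow one planar seed patch over the point set xyz (sketch)."""
    tree = cKDTree(xyz)
    n = plane_normal / np.linalg.norm(plane_normal)
    in_patch = np.zeros(len(xyz), dtype=bool)
    in_patch[list(seed_idx)] = True
    frontier = list(seed_idx)
    while frontier:
        new_frontier = []
        for i in frontier:
            # candidates closer than d_neigh to a point already in the patch
            for j in tree.query_ball_point(xyz[i], r=d_neigh):
                if in_patch[j]:
                    continue
                # perpendicular distance from the candidate to the patch plane
                if abs(np.dot(xyz[j] - plane_point, n)) < d_perp:
                    in_patch[j] = True
                    new_frontier.append(j)
        frontier = new_frontier
    return np.where(in_patch)[0]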
To extract roof patches from the above segmentation result, eigen-features of each patch are calculated. Suppose a patch contains N points with coordinates (X_i, Y_i, Z_i), i ∈ [1, N]. The patch covariance matrix M and its three eigenvalues λ_1, λ_2, λ_3 (λ_1 ≥ λ_2 ≥ λ_3) can be obtained from the three-dimensional coordinates. Mathematical expressions derived from the eigenvalues characterize the geometry of the patch. Roof patches are screened with two features: the tilt angle θ of the fitted plane and the plane-fit residual σ, where d_i is the distance of each point to the fitted plane. Patches meeting either of the following conditions are then excluded according to prior knowledge: (1) θ > 70°, i.e. vertical wall surfaces or railings; (2) σ > 0.2, i.e. patches with many noise points. After this feature screening, the candidate roof point cloud patches are obtained.
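A minimal sketch of this eigenvalue-based screening follows; the 70° and 0.2 thresholds come from the text, whereas the exact form of the plane-fit residual σ (taken here as the root-mean-square of the point-to-plane distances d_i) is an assumption.

import numpy as np

def is_roof_patch(patch_xyz, max_tilt_deg=70.0, max_sigma=0.2):
    """Screen a patch by tilt angle and plane-fit residual (sketch)."""
    p = patch_xyz - patch_xyz.mean(axis=0)
    cov = np.cov(p.T)                            # covariance matrix M of the patch
    eigval, eigvec = np.linalg.eigh(cov)         # eigenvalues in ascending order
    lam1, lam2, lam3 = eigval[::-1]              # lambda1 >= lambda2 >= lambda3 as in the text
    normal = eigvec[:, 0]                        # eigenvector of the smallest eigenvalue
    tilt = np.degrees(np.arccos(abs(normal[2]))) # tilt angle theta of the fitted plane
    d = p @ normal                               # point-to-plane distances d_i
    sigma = float(np.sqrt(np.mean(d ** 2)))      # plane-fit residual (assumed RMS form)
    return tilt <= max_tilt_deg and sigma <= max_sigma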
The method for combining the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by utilizing the cluster analysis method can comprise the following steps:
firstly, establishing a KD tree structure based on point cloud data, and constructing a 3D tree based on three-dimensional coordinates of point cloud and searching by adopting a K nearest neighbor method;
then, merging the large-block point cloud surface patches and the small-block point cloud surface patches of the adjacent buildings belonging to the same building target based on the 3D tree;
and then, judging whether points in the undivided point cloud and adjacent points belong to the same point cluster or not by searching the nearest adjacent points around each point cloud so as to cluster the undivided point cloud in a discrete cluster distribution point set.
Constructing and searching the 3D tree may include: searching for an approximate nearest neighbor along the search path of the binary tree, that is, reaching the leaf node lying in the same subspace as the query point by comparing the value of the query point in each splitting dimension with that of the splitting node and descending into the left or right subtree branch accordingly; tracing back the search path and judging, for each node on it, whether the other child subspace may contain a data point closer to the query point, and if so, jumping into that subspace and adding its nodes to the search path; and iteratively refining the approximate nearest neighbor in this way until the search path is empty, so that the nearest neighbor of each point is obtained.
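For illustration, a minimal sketch of building the 3D tree and querying the nearest neighbor of every point is shown below; it uses SciPy's cKDTree in place of a hand-written binary tree with path backtracking, and the random coordinates are only a stand-in for the non-ground points.

import numpy as np
from scipy.spatial import cKDTree

xyz = np.random.rand(1000, 3) * 100.0   # stand-in for the non-ground point coordinates (X, Y, Z)
tree = cKDTree(xyz)                     # 3D tree built on the three-dimensional coordinates
dist, idx = tree.query(xyz, k=2)        # k=2 because the first neighbor of a point is itself
nearest_dist = dist[:, 1]               # Euclidean distance to the nearest other point
nearest_idx = idx[:, 1]                 # index of that nearest neighbor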
The purpose of neighborhood clustering analysis is to combine and cluster non-ground points segmented by more than 3.0m to form a single building target and spherical vegetation point cluster. The input point set has completed the segmentation of the building roof, and the main steps in the implementation process of the specific analysis algorithm can be described as follows:
(1) 3D tree construction: a KD tree (KD-Tree) data structure is built over the point cloud, and the K nearest neighbor method (KNN) is adopted to speed up the search; since every point has three dimensions (X, Y, Z), the tree built is a 3D tree. Point-to-point distances are measured with the Euclidean metric. The search over the 3D tree proceeds as follows: first, the binary tree is traversed to find an approximate nearest neighbor along the search path, that is, the leaf node lying in the same subspace as the query point, by comparing the value of the query point in each splitting dimension with that of the splitting node and descending into the left subtree if it is smaller or equal, otherwise into the right subtree, until a leaf node is reached; secondly, the search path is backtracked, and for each node on it it is judged whether the other child subspace may contain a data point closer to the query point; if so, the search jumps into that subspace and its nodes are added to the search path. These two steps are repeated until the search path is empty, and finally the nearest neighbor of each point is obtained.
(2) Roof sheets are consolidated into a single building object: judging adjacent building roof sheets, if the horizontal distance of the nearest point in the two roof sheets is smaller than a threshold Th and the vertical distance is smaller than a threshold Tv, considering that the two roof sheets belong to one building target, and combining the two building targets. Through the step, the fine structure (such as a chimney and a self-built framework) on the building can be also judged as the building, so that the accuracy of the subsequent building change detection is improved.
(3) The scattered points are clustered into spherical vegetation targets: most of the undivided points are vegetation points, and tree crowns appear geometrically as discrete, clustered point sets. For the remaining laser points, excluding the building points, the nearest points around each point are searched, and if the distance between two adjacent points is smaller than the threshold Td, they are considered to belong to the same point cluster, so that the unclassified point cloud is clustered. The size of each cluster is then checked, and a cluster containing fewer than 10 points is regarded as noise and discarded. Ideally, each tree crown forms one point cluster. In practice, because adjacent crowns are too close together or the point density is low, some crown clusters are missed and some crowns merge into one cluster, but this does not affect the accuracy of the subsequent building change detection.
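A minimal sketch of this neighborhood clustering of the remaining undivided points is given below; the distance threshold Td and the minimum cluster size of 10 follow the text, while the breadth-first grouping itself is an assumed implementation.

import numpy as np
from scipy.spatial import cKDTree

def cluster_scattered_points(xyz, t_d=1.5, min_size=10):
    """Group undivided points into clusters; clusters smaller than min_size become noise."""
    tree = cKDTree(xyz)
    labels = np.full(len(xyz), -1, dtype=int)    # -1: unvisited, -2: noise
    next_label = 0
    for start in range(len(xyz)):
        if labels[start] != -1:
            continue
        members, frontier = [start], [start]
        labels[start] = next_label
        while frontier:
            i = frontier.pop()
            for j in tree.query_ball_point(xyz[i], r=t_d):
                if labels[j] == -1:              # unvisited neighbor within Td joins the cluster
                    labels[j] = next_label
                    members.append(j)
                    frontier.append(j)
        if len(members) < min_size:
            labels[np.asarray(members)] = -2     # too few points: treated as noise and discarded
        else:
            next_label += 1
    return labels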
S102, judging whether the building point cloud patches have changed by comparing the point set features at the same positions in different time phases, and merging building change information based on the judged change results to acquire and output the detected building information in the point cloud data, wherein the detected building change types comprise newly built buildings, demolished buildings and unchanged buildings.
The method comprises the following steps of comparing point set characteristics of the same position in different time phases to judge whether the building point cloud surface patch changes or not, wherein the method can be designed to comprise the following steps:
Starting from a large building point cloud patch in one time phase, the vertical elevation change of each point to the other time phase is calculated; the proportion of points whose elevation change exceeds a preset threshold is counted, and whether the building has changed is judged according to whether this point proportion exceeds a preset proportion threshold; for the building point cloud data judged as changed, the elevation change is then evaluated point by point so as to detect newly built, demolished and unchanged buildings from the point cloud data.
In the experimental area to which the scheme applies, building changes can be divided into four types: newly built, demolished, locally changed and unchanged. A locally changed building is a single building, viewed on its two-dimensional footprint, in which two or more kinds of change occur simultaneously. For example, part of a building is demolished while the rest is unchanged; or a building is locally unchanged while the rest is raised. In such cases the change situation of the individual building target is complicated, and the change of each laser point has to be considered point by point.
The purpose of the change vector analysis is to integrate the change information at the target level and at the single-point level to obtain a comprehensive change detection result, which is realized through a multi-step decision on each building patch, as shown in fig. 5. First, starting from a building roof patch in one time phase, the vertical elevation change of each point to the other time phase is calculated, and the proportion of points whose elevation change exceeds a threshold is counted. If this proportion of changed points in the roof point cloud of the single building is not more than 80%, the building is considered unchanged and the analysis ends; if it is greater than 80%, the next step of point-by-point elevation change judgment is carried out, and each roof point is assigned to one of three types: newly built, demolished or unchanged. The process thus moves from the target level to the single-point level, which improves the robustness of the algorithm against local noise points.
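A minimal sketch of this two-level decision is given below: the share of points with a significant vertical elevation change first decides at the target level whether the building changed at all (the 80% share follows the text), and a changed building is then labeled point by point. The elevation-change threshold, the plan-view nearest-neighbor comparison and the sign convention separating "new" from "demolished" are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def analyze_roof_patch(patch_xyz, other_phase_xyz, dz_thresh=1.5, ratio_thresh=0.8):
    """Target-level test followed by point-by-point labeling of one roof patch (sketch)."""
    tree_xy = cKDTree(other_phase_xyz[:, :2])
    _, nn = tree_xy.query(patch_xyz[:, :2], k=1)   # nearest other-phase point in plan view
    dz = patch_xyz[:, 2] - other_phase_xyz[nn, 2]  # vertical elevation change of each point
    changed = np.abs(dz) > dz_thresh
    if changed.mean() <= ratio_thresh:             # target level: the building is unchanged
        return 'unchanged', None
    # point level: the sign convention (which side means new vs. demolished) is an assumption
    labels = np.where(dz > dz_thresh, 'demolished',
                      np.where(dz < -dz_thresh, 'new', 'unchanged'))
    return 'changed', labels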
The building change information is combined based on the determined change result, and can be designed to comprise the following contents:
Merging building change detection results of the same type within the same time phase as well as the change detection results of the different time phases; and, based on the neighborhood of the roof point cloud in the large-block point cloud patches, if a point in the neighborhood lies within a preset range below the roof point set and its horizontal distance is smaller than a preset value, merging it into the building as a wall point below the roof point cloud, so that complete change information of each single building is obtained through this information merging.
The purpose of the double-phase change information combination is to combine the two-way multi-level change detection results to obtain the final building change information of the target level. Objects that need to be merged can be summarized mainly as follows:
(1) Building change detection results within the same time phase may be discontinuous, so that the change of one building is split into several small local pieces of change information; in this step the detection results belonging to the same change category are merged and, at the same time, noise interference is removed;
(2) Building changes in the two phases are combined. The change information obtained by respectively carrying out change detection in the previous step from the old time phase and the new time phase is necessarily different. For example, starting from an old phase roof sheeting, it is impossible to detect the new building location; starting from the new phase roof sheet, it is not possible to detect the removed building location. Therefore, the change detection results from two time phases are combined to obtain complete change information;
(3) At this stage the change detection result contains only the roof point cloud, so the vertical wall points of the building are also incorporated into the whole building by downward growth. The specific implementation is as follows: the points in the neighborhood of the roof point cloud are examined, and if a point lies within a certain range below the roof point set and its horizontal distance is smaller than the threshold, it is classified as a wall point below the roof patch (see the sketch below).
Through the information combination of the three sub-steps, the complete change information of a single building can be obtained; the method can also effectively detect the change of the local area of the building.
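A minimal sketch of the downward growth in sub-step (3) above is shown below; the horizontal-distance and vertical-range thresholds are assumed values, not values taken from this disclosure.

import numpy as np
from scipy.spatial import cKDTree

def attach_wall_points(roof_xyz, candidate_xyz, max_horiz=0.5, max_below=30.0):
    """Return indices of candidate points classified as wall points below a roof patch (sketch)."""
    tree_xy = cKDTree(roof_xyz[:, :2])
    d_xy, nn = tree_xy.query(candidate_xyz[:, :2], k=1)  # nearest roof point in plan view
    drop = roof_xyz[nn, 2] - candidate_xyz[:, 2]         # how far below the roof the point lies
    is_wall = (d_xy < max_horiz) & (drop > 0.0) & (drop < max_below)
    return np.where(is_wall)[0]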
Further, based on the above method, the embodiment of the present invention further provides a building target level change detection system based on airborne LiDAR point cloud data, including: a data acquisition module and a target detection module, wherein,
The data acquisition module is used for acquiring airborne laser LiDAR point cloud data in different time phases, and acquiring point set characteristics of a single target level of a building in the point cloud data through point cloud data processing, wherein the point set characteristics comprise spherical point set characteristics and patch point set characteristics;
The target detection module is used for judging whether the building point cloud surface patches are changed by comparing the point set characteristics of the same positions in different time phases, merging building change information based on the judged change results so as to acquire and output the information detected by the building in the point cloud data, wherein the type of the information detected by the building comprises a newly built building, building dismantling and building invariance.
The relative arrangement of the components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The elements and method steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination thereof, and the elements and steps of the examples have been generally described in terms of functionality in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different methods for each particular application, but such implementation is not considered to be beyond the scope of the present invention.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the above methods may be performed by a program that instructs associated hardware, and that the program may be stored on a computer readable storage medium, such as: read-only memory, magnetic or optical disk, etc. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits, and accordingly, each module/unit in the above embodiments may be implemented in hardware or may be implemented in a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
Finally, it should be noted that the above examples are only specific embodiments of the present invention used to describe it in detail, and are not intended to limit its scope. Although the invention has been described in detail with reference to the foregoing examples, those skilled in the art will understand that it is not limited thereto: anyone skilled in the art may, within the technical scope of this disclosure, modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features, and such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the invention and are intended to fall within its scope of protection. Therefore, the protection scope of the present invention shall be defined by the claims.

Claims (7)

1. The building target level change detection method based on the airborne LiDAR point cloud data is characterized by comprising the following steps of:
Acquiring airborne laser LiDAR point cloud data of different time phases, and acquiring point set features at the level of individual building targets in the point cloud data through point cloud data processing, wherein the point set features comprise spherical point set features and patch point set features; acquiring the point set features at the level of individual building targets in the point cloud data through the point cloud data processing comprises the following steps: firstly, filtering the airborne laser LiDAR point cloud data of the different time phases respectively by a progressive triangulated irregular network (TIN) densification method based on gradient information, separating ground points from non-ground points, and filtering out non-building point clouds using the point cloud elevation and the ground point elevation; then, segmenting the filtered point cloud data by surface-growing-based point cloud segmentation to obtain large-block point cloud patches of a preset large size, small-block point cloud patches of a preset small size, and undivided point cloud of the buildings; then, merging the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by a cluster analysis method to obtain spherical point sets and patch point sets at the level of individual building targets;
Judging whether a building point cloud patch has changed by comparing the point set features at the same position in different time phases, and merging building change information based on the judged change results, so as to acquire and output the building detection information in the point cloud data, wherein the types of the building detection information comprise newly built buildings, demolished buildings and unchanged buildings; wherein judging whether a building point cloud patch has changed by comparing the point set features at the same position in different time phases comprises: starting from a large-block building point cloud patch of one time phase, calculating, for each point, the elevation change to the other time phase in the vertical direction; using the proportion of points whose elevation has changed, and judging whether the building has changed according to whether this point proportion exceeds a preset proportion threshold; and judging the elevation change point by point on the changed building point cloud data, so as to detect newly built, demolished and unchanged buildings from the point cloud data; and merging building change information based on the judged change results comprises: merging building change detection results of the same type within the same time phase and across different time phases; and, based on the neighborhood of the roof point cloud in the large-block point cloud patches, merging the points in the neighborhood into the building as wall points below the roof if they lie within a preset threshold range below the roof point cloud set and their horizontal distance is smaller than a preset value, so as to obtain complete single-building change information through information merging.
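The per-point elevation comparison recited in claim 1 can be illustrated with a minimal sketch. Assuming two epochs of building points are available as NumPy arrays (patch_t1 for one patch of the first time phase, cloud_t2 for the second), the Python fragment below takes the vertical difference to the horizontally nearest point of the other epoch and flags the patch as changed when the proportion of changed points exceeds a threshold; the array names and the numeric thresholds dz_thresh and ratio_thresh are illustrative assumptions, not values prescribed by the patent.

```python
# Minimal sketch (not the patented implementation): per-point vertical
# elevation comparison between two epochs.
import numpy as np
from scipy.spatial import cKDTree

def patch_changed(patch_t1, cloud_t2, dz_thresh=2.0, ratio_thresh=0.5):
    """Return True if the building patch is judged 'changed'.

    For every point of the epoch-1 patch, the nearest epoch-2 point in the
    horizontal (XY) plane is found and the vertical (Z) difference taken;
    the patch is flagged as changed when the fraction of points whose
    elevation change exceeds dz_thresh is larger than ratio_thresh.
    """
    tree = cKDTree(cloud_t2[:, :2])           # index epoch-2 points by XY
    _, idx = tree.query(patch_t1[:, :2])      # nearest XY neighbour per point
    dz = np.abs(patch_t1[:, 2] - cloud_t2[idx, 2])
    changed_ratio = np.mean(dz > dz_thresh)
    return changed_ratio > ratio_thresh
```

Applying the same test in both temporal directions is one simple way to separate newly built from demolished buildings, since the sign of the elevation change differs between the two cases.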
2. The method for detecting the change of the target level of the building based on the airborne LiDAR point cloud data according to claim 1, wherein respectively filtering the airborne laser LiDAR point cloud data of the different time phases comprises:
firstly, constructing square grids of side length L over the point cloud data, selecting the lowest point in each grid as an initial seed point, and constructing an irregular triangulated network from the initial seed points;
then, for each laser point in the point cloud data, judging whether the distance from the laser point to the nearest triangular facet and the angle subtended with that facet are both smaller than preset thresholds, and if so, classifying the laser point as a ground point, until all the point cloud data have been traversed;
and then, constructing a new triangulated network from the ground points, calculating the local gradient of each point, and eliminating discontinuous triangulation points to obtain a smooth triangulated network.
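Claim 2 outlines a progressive TIN densification filter. The sketch below is a simplified single pass under assumed parameters (cell, d_max, ang_max_deg): it seeds a TIN with the lowest point per grid cell and accepts a point as ground when its distance and angle to the containing triangle stay below the thresholds. The iterative densification and the gradient-based smoothing of the claim are omitted, so this is only an illustration of the idea.

```python
# Simplified, single-iteration sketch of progressive TIN densification.
import numpy as np
from scipy.spatial import Delaunay

def tin_filter_once(points, cell=20.0, d_max=1.0, ang_max_deg=10.0):
    """points: Nx3 array. Returns a boolean mask of points accepted as ground."""
    # 1. grid seeding: lowest point per cell becomes an initial ground seed
    ij = np.floor(points[:, :2] / cell).astype(int)
    seeds = {}
    for k, key in enumerate(map(tuple, ij)):
        if key not in seeds or points[k, 2] < points[seeds[key], 2]:
            seeds[key] = k
    seed_idx = np.fromiter(seeds.values(), dtype=int)

    # 2. TIN over the seeds (Delaunay triangulation in XY)
    tin = Delaunay(points[seed_idx, :2])
    simplex = tin.find_simplex(points[:, :2])

    ground = np.zeros(len(points), dtype=bool)
    ground[seed_idx] = True
    for k in range(len(points)):
        if ground[k] or simplex[k] < 0:
            continue
        tri = points[seed_idx[tin.simplices[simplex[k]]]]     # 3 triangle vertices
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        n /= np.linalg.norm(n)
        d = abs(np.dot(points[k] - tri[0], n))                # distance to facet plane
        v = points[k] - tri                                   # lines to the 3 vertices
        ang = np.degrees(np.arcsin(np.clip(d / np.linalg.norm(v, axis=1), 0.0, 1.0))).max()
        if d < d_max and ang < ang_max_deg:
            ground[k] = True
    return ground
```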
3. The method for detecting the change of the target level of the building based on the on-board LiDAR point cloud data according to claim 1, wherein the non-building point cloud is filtered and removed through the point cloud elevation and the ground point elevation, and the method comprises the following steps:
firstly, constructing a digital elevation model DEM based on a ground point set;
then, for each laser point, subtracting the DEM elevation at the position of the laser point from the elevation of the laser point to obtain the elevation difference of the laser point, the elevation difference representing the height of the laser point above the ground surface;
and then, screening the non-ground point set based on the height difference and a preset height difference threshold value to remove non-building point clouds.
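Claim 3 removes non-building points by comparing each point's elevation with a DEM built from the ground points. A minimal sketch follows, assuming a simple gridded DEM of mean ground elevation per cell and a hypothetical height threshold h_min; the grid resolution and threshold value are assumptions for illustration only.

```python
# Sketch of height normalisation against a gridded DEM built from ground points.
import numpy as np

def filter_by_height(non_ground, ground, cell=2.0, h_min=2.5):
    """Keep non-ground points whose height above the local DEM exceeds h_min."""
    def keys(xy):
        return np.floor(xy / cell).astype(int)

    # DEM: mean ground elevation per grid cell
    dem = {}
    for key, z in zip(map(tuple, keys(ground[:, :2])), ground[:, 2]):
        dem.setdefault(key, []).append(z)
    dem = {k: np.mean(v) for k, v in dem.items()}

    keep = []
    for p, key in zip(non_ground, map(tuple, keys(non_ground[:, :2]))):
        if key in dem and (p[2] - dem[key]) > h_min:
            keep.append(p)
    return np.asarray(keep)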
4. The method for detecting the change of the target level of the building based on the airborne LiDAR point cloud data according to claim 1, wherein segmenting the filtered point cloud data by surface-growing-based point cloud segmentation comprises the following steps:
Firstly, extracting point clusters positioned on the same plane from point cloud data by utilizing 3D Hough transformation, and forming the extracted point clusters into a seed patch;
then, analyzing the points around each seed patch by using a surface-growing algorithm, and adding into the seed patch those surrounding points whose distance to the nearest point of the target patch is smaller than a preset distance value and whose vertical distance to the patch is smaller than a preset vertical distance;
and then, computing the covariance matrix and eigenvalues of each seed patch from the point coordinates, and characterizing the aggregate shape of the patch through preset eigenvalue expressions, so as to obtain the large-block point cloud patches of the building corresponding to a preset large-block size, the small-block point cloud patches corresponding to a preset small-block size, and the undivided point cloud.
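The eigenvalue test of claim 4, which characterises the aggregate shape of a grown patch, might look roughly like the sketch below; the planarity ratio and the size cut-off separating large-block from small-block patches are assumed placeholders, and the 3D Hough seeding and the surface growing themselves are not shown.

```python
# Sketch of an eigenvalue-based patch shape test (illustrative thresholds).
import numpy as np

def classify_patch(patch_pts, planar_ratio=0.05, large_size=200):
    """patch_pts: Nx3 points of one grown patch.
    Returns 'large', 'small' or 'unsegmented'."""
    cov = np.cov(patch_pts.T)                     # 3x3 covariance of the coordinates
    evals = np.sort(np.linalg.eigvalsh(cov))      # eigenvalues in ascending order
    # planar patch: smallest eigenvalue much smaller than the next one
    is_planar = evals[0] < planar_ratio * evals[1]
    if not is_planar:
        return 'unsegmented'
    return 'large' if len(patch_pts) >= large_size else 'small'
```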
5. The method for detecting the target level change of the building based on the airborne LiDAR point cloud data according to claim 1, wherein the step of merging the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by using a cluster analysis method comprises the following steps:
firstly, establishing a KD tree structure based on point cloud data, and constructing a 3D tree based on three-dimensional coordinates of point cloud and searching by adopting a K nearest neighbor method;
then, merging the large-block point cloud surface patches and the small-block point cloud surface patches of the adjacent buildings belonging to the same building target based on the 3D tree;
and then, judging whether a point in the undivided point cloud and its neighboring points belong to the same point cluster by searching the nearest neighboring points around each point, so as to cluster the undivided point cloud, which is distributed as discrete clusters of points.
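Claim 5 groups the undivided points with a KD tree neighbour search. A minimal Euclidean-clustering sketch under an assumed search radius and minimum cluster size is given below; it labels points reachable through chains of neighbours within the radius, which is one common way to realise the described cluster analysis, not necessarily the patented one.

```python
# Minimal Euclidean clustering of left-over points using a KD tree radius search.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def euclidean_clusters(points, radius=1.5, min_size=10):
    """Group Nx3 points into clusters of mutually close points; -1 marks noise."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = current
        while queue:
            k = queue.popleft()
            for nb in tree.query_ball_point(points[k], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    # discard tiny clusters as noise
    sizes = np.bincount(labels)
    labels[np.isin(labels, np.where(sizes < min_size)[0])] = -1
    return labels
```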
6. The method for detecting the change of the target level of the building based on the on-board LiDAR point cloud data according to claim 5, wherein the method for constructing the 3D tree based on the three-dimensional coordinates of the point cloud and searching by adopting a K nearest neighbor method comprises the following steps:
searching for an approximate nearest neighbor along a search path in the binary tree, wherein the search path is set to the left subtree branch or the right subtree branch by comparing the value of the query point with that of the splitting node in the splitting dimension, until the leaf node lying in the same subspace as the query point is reached; backtracking along the search path, judging whether the other child subspace of each node on the path may contain a data point closer to the query point, and if so, jumping into that child subspace to search and adding its nodes to the search path; and iteratively searching for approximate nearest neighbors until the search path is empty, thereby obtaining the nearest neighbor of each point.
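Claim 6 describes the classic kd-tree descend-and-backtrack nearest-neighbour search. The compact sketch below follows that description (descend by comparing the split dimension, then backtrack and visit the other child subspace only if the splitting hyperplane is closer than the current best distance); it is a generic textbook implementation rather than the patented code.

```python
# Educational kd-tree nearest-neighbour search with backtracking.
import numpy as np

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build_kdtree(points, depth=0):
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kdtree(points[:mid], depth + 1),
                  build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    if node is None:
        return best
    d = np.linalg.norm(query - node.point)
    if best is None or d < best[0]:
        best = (d, node.point)
    # descend: choose the child subspace on the query's side of the split
    near, far = ((node.left, node.right)
                 if query[node.axis] < node.point[node.axis]
                 else (node.right, node.left))
    best = nearest(near, query, best)
    # backtrack: the far subspace can only hold a closer point if the
    # splitting hyperplane is nearer than the current best distance
    if abs(query[node.axis] - node.point[node.axis]) < best[0]:
        best = nearest(far, query, best)
    return best

# example: dist, pt = nearest(build_kdtree(pts), np.array([9.0, 2.0, 1.0]))
```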
7. A building target level change detection system based on airborne LiDAR point cloud data, comprising: a data acquisition module and a target detection module, wherein,
The data acquisition module is used for acquiring airborne laser LiDAR point cloud data of different time phases, and acquiring point set features at the single-building target level in the point cloud data through point cloud data processing, wherein the point set features comprise spherical point set features and patch point set features; acquiring the point set features at the single-building target level in the point cloud data through point cloud data processing comprises the following steps: firstly, filtering the airborne laser LiDAR point cloud data of the different time phases respectively by a gradient-information-based progressive triangulated irregular network (TIN) densification method to separate ground points from non-ground points, and filtering out non-building point clouds by means of the point cloud elevation and the ground point elevation; then, segmenting the filtered point cloud data by surface-growing-based point cloud segmentation to obtain large-block point cloud patches of the building corresponding to a preset large-block size, small-block point cloud patches corresponding to a preset small-block size, and undivided point cloud; and then, merging the large-block point cloud patches, the small-block point cloud patches and the undivided point cloud by a cluster analysis method to obtain the spherical point set and the patch point set at the single-building target level;
The target detection module is used for judging whether a building point cloud patch has changed by comparing the point set features at the same position in different time phases, and merging building change information based on the judged change results, so as to acquire and output the building detection information in the point cloud data, wherein the types of the building detection information comprise newly built buildings, demolished buildings and unchanged buildings; wherein judging whether a building point cloud patch has changed by comparing the point set features at the same position in different time phases comprises: starting from a large-block building point cloud patch of one time phase, calculating, for each point, the elevation change to the other time phase in the vertical direction; using the proportion of points whose elevation has changed, and judging whether the building has changed according to whether this point proportion exceeds a preset proportion threshold; and judging the elevation change point by point on the changed building point cloud data, so as to detect newly built, demolished and unchanged buildings from the point cloud data; and merging building change information based on the judged change results comprises: merging building change detection results of the same type within the same time phase and across different time phases; and, based on the neighborhood of the roof point cloud in the large-block point cloud patches, merging the points in the neighborhood into the building as wall points below the roof if they lie within a preset threshold range below the roof point cloud set and their horizontal distance is smaller than a preset value, so as to obtain complete single-building change information through information merging.
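The roof/wall merging rule recited in claims 1 and 7 (attach points that lie within a preset range below a roof patch and within a preset horizontal distance of it) can be sketched as follows; the thresholds xy_dist and z_range and the input array names are assumptions made for illustration, not values from the patent.

```python
# Sketch of the roof/wall merging idea: points below a roof patch and
# horizontally close to it are attached to that building.
import numpy as np
from scipy.spatial import cKDTree

def merge_wall_points(roof_pts, candidate_pts, xy_dist=1.0, z_range=30.0):
    """Return the roof patch merged with the qualifying candidate points.

    A candidate is merged when its horizontal distance to the nearest roof
    point is below xy_dist and it lies at most z_range below that roof point.
    """
    tree = cKDTree(roof_pts[:, :2])
    d_xy, idx = tree.query(candidate_pts[:, :2])
    dz = roof_pts[idx, 2] - candidate_pts[:, 2]
    mask = (d_xy < xy_dist) & (dz > 0) & (dz < z_range)
    return np.vstack([roof_pts, candidate_pts[mask]])
```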
CN202311245296.9A 2023-09-26 2023-09-26 Building target level change detection method and system based on airborne LiDAR point cloud data Active CN117437537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311245296.9A CN117437537B (en) 2023-09-26 2023-09-26 Building target level change detection method and system based on airborne LiDAR point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311245296.9A CN117437537B (en) 2023-09-26 2023-09-26 Building target level change detection method and system based on airborne LiDAR point cloud data

Publications (2)

Publication Number Publication Date
CN117437537A (en) 2024-01-23
CN117437537B (en) 2024-07-12

Family

ID=89557376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311245296.9A Active CN117437537B (en) 2023-09-26 2023-09-26 Building target level change detection method and system based on airborne LiDAR point cloud data

Country Status (1)

Country Link
CN (1) CN117437537B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015042772A1 (en) * 2013-09-24 2015-04-02 中国科学院自动化研究所 Remote sensing image salient object change detection method
CN110570428A (en) * 2019-08-09 2019-12-13 浙江合信地理信息技术有限公司 method and system for segmenting roof surface patch of building from large-scale image dense matching point cloud

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104049245B (en) * 2014-06-13 2017-01-25 中原智慧城市设计研究院有限公司 Urban building change detection method based on LiDAR point cloud spatial difference analysis
CN108074232B (en) * 2017-12-18 2021-08-24 辽宁工程技术大学 Voxel segmentation-based airborne LIDAR building detection method
JPWO2022162859A1 (en) * 2021-01-29 2022-08-04

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015042772A1 (en) * 2013-09-24 2015-04-02 中国科学院自动化研究所 Remote sensing image salient object change detection method
CN110570428A (en) * 2019-08-09 2019-12-13 浙江合信地理信息技术有限公司 method and system for segmenting roof surface patch of building from large-scale image dense matching point cloud

Also Published As

Publication number Publication date
CN117437537A (en) 2024-01-23

Similar Documents

Publication Publication Date Title
Lee et al. Fusion of lidar and imagery for reliable building extraction
CN110781827A (en) Road edge detection system and method based on laser radar and fan-shaped space division
Goodwin et al. Characterizing urban surface cover and structure with airborne lidar technology
CN114764871B (en) Urban building attribute extraction method based on airborne laser point cloud
Aplin 2 Comparison of simulated IKONOS and SPOT HRV imagery for
Jiangui et al. A method for main road extraction from airborne LiDAR data in urban area
Lin et al. Noise point detection from airborne lidar point cloud based on spatial hierarchical directional relationship
Süleymanoğlu et al. Comparison of filtering algorithms used for DTM production from airborne lidar data: A case study in Bergama, Turkey
Sarıtaş et al. Enhancing Ground Point Extraction in Airborne LiDAR Point Cloud Data Using the CSF Filter Algorithm
CN117932333A (en) Urban building height extraction method considering different terrain scenes
CN117437537B (en) Building target level change detection method and system based on airborne LiDAR point cloud data
WO2011085433A1 (en) Acceptation/rejection of a classification of an object or terrain feature
KR101737889B1 (en) filtering and extraction of feature boundary method from terrestrial lidar data using data mining techniques and device thereof
Zaletnyik et al. LIDAR waveform classification using self-organizing map
CN112907567B (en) SAR image ordered artificial structure extraction method based on spatial reasoning method
CN112381029B (en) Method for extracting airborne LiDAR data building based on Euclidean distance
Shokri et al. POINTNET++ Transfer Learning for Tree Extraction from Mobile LIDAR Point Clouds
CN114063107A (en) Ground point cloud extraction method based on laser beam
Ma et al. Discrimination of residential and industrial buildings using LiDAR data and an effective spatial-neighbor algorithm in a typical urban industrial park
Zhang Photogrammetric point clouds: quality assessment, filtering, and change detection
Kim et al. Automatic generation of digital building models for complex structures from LiDAR data
Pendyala et al. Comparative Study of Automatic Urban Building Extraction Methods from Remote Sensing Data
Narwade et al. Automatic road extraction from airborne LiDAR: A review
Wang et al. Progressive TIN Densification with Connection Analysis for Urban Lidar Data
Wang et al. Segmentation of Closely Packed Buildings from Airborne LiDAR Point Clouds

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant