CN114120110A - Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene - Google Patents

Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene

Info

Publication number
CN114120110A
CN114120110A
Authority
CN
China
Prior art keywords
point cloud
scene
point
classification
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111383336.7A
Other languages
Chinese (zh)
Other versions
CN114120110B (en)
Inventor
秦楠楠 (Qin Nannan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Purple Mountain Observatory of CAS
Original Assignee
Purple Mountain Observatory of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Purple Mountain Observatory of CAS filed Critical Purple Mountain Observatory of CAS
Priority to CN202111383336.7A priority Critical patent/CN114120110B/en
Publication of CN114120110A publication Critical patent/CN114120110A/en
Application granted granted Critical
Publication of CN114120110B publication Critical patent/CN114120110B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention provides a multi-granularity calculation method for airborne laser point cloud classification in hybrid scenes, comprising the following steps: selecting a classification neighborhood point set and a scene neighborhood point set centered on a sampling point; training a point cloud global feature extraction model by unsupervised learning to realize coarse-grained scene perception; embedding spatial context information into a point cloud semantic segmentation model through an attention-based feature fusion strategy, defining a multitask loss function that accounts for both ground clearance and category, and supervising the category and ground clearance of each point in the classification neighborhood point set, thereby realizing fine-grained point cloud semantic segmentation and ground clearance prediction; and refining the fine-grained ground classification result through graph-cut-optimized point cloud segmentation and iterative adsorption to a ground triangulated irregular network. The invention decomposes the difficult problem of classifying mixed-scene point clouds into a combination of three comparatively simple problems, effectively reducing overall complexity and enabling robust, fine-grained classification of point clouds across different complex scenes.

Description

Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene
Technical Field
The invention belongs to the field of laser scanning data processing, and particularly relates to a multi-granularity calculation method for airborne laser point cloud classification of a hybrid scene.
Background
Airborne Laser Scanning (ALS) is an important means of rapidly acquiring large-scale three-dimensional geospatial data. It plays an important role in major national needs such as smart cities, global mapping and global change studies, and is now widely applied in production and scientific research, including urban three-dimensional reconstruction, forestry surveys, shallow-water bathymetry, archaeological surveying and deep space exploration. As a key foundation for many ALS applications, point cloud classification has broad practical value. To date, however, the problem remains only partially solved in both industry and academia, and has become a key bottleneck restricting the efficiency and degree of automation of three-dimensional geographic data processing.
As a classic problem in photogrammetry and remote sensing, point cloud classification has been studied extensively. Early research focused on designing hand-crafted features with strong discriminative power, while current research increasingly adopts end-to-end deep learning techniques.
Point cloud classification methods based on traditional machine learning generally comprise two steps: first, hand-crafted features are extracted from local neighborhood points; second, classifiers (such as support vector machines, random forests, Gaussian mixture models and artificial neural networks) are trained and applied point by point. However, such methods use only the isolated local neighborhood geometry of each point and ignore contextual correlations over larger spatial extents, causing severe noise-like misclassification in the results. Spatial context information can better resolve these inconsistencies, so some studies have introduced probabilistic graphical models (such as Markov random fields and conditional random fields) into point cloud classification and obtained good denoising effects. Nevertheless, the hand-crafted features these methods rely on cannot adequately characterize the complex and variable terrain in ALS point clouds, which limits their final classification performance. Deep learning techniques can learn deep features with stronger representational power from raw data and are increasingly applied to three-dimensional point cloud classification. According to the input representation adopted, existing deep-learning-based point cloud classification models can be divided into: classification models based on feature-map representations, voxel representations, point representations, and hybrid representations.
In general, existing point cloud classification methods can achieve high accuracy in simple scenes. However, an actual survey area usually covers many mixed, complex scenes with the following characteristics: (1) a mixture of multiple single scenes (such as dense building areas, large building areas, terraced fields, gentle slopes and steep slopes); (2) abrupt terrain changes (in local areas such as scarps, ridges and valleys); (3) complex object structures in terms of size, shape and mutual relations (such as flat-roofed houses, linear power lines and vegetation of varied forms); and (4) large size differences both between different object classes and within the same class (such as houses versus power lines, or large plants versus ordinary houses). The severe terrain fluctuations and complex object structures in mixed scenes pose great challenges to algorithmic robustness and fineness, so even the most advanced point cloud classification algorithms still cannot fully meet practical production requirements.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the invention provides a multi-granularity calculation method for airborne laser point cloud classification in hybrid scenes, which decomposes the classification of mixed-scene point clouds into a combination of three comparatively simple problems (scene knowledge learning, knowledge-guided classification and ground misclassification correction), and achieves robust, high-accuracy classification of point clouds from different complex scenes.
In order to achieve the purpose, the invention adopts the following technical scheme:
a multi-granularity calculation method for hybrid scene airborne laser point cloud classification comprises the following steps:
step 1: randomly selecting a sampling point from the whole point cloud, and respectively searching a classification neighborhood point set and a scene neighborhood point set by taking the sampling point as a center, wherein the range of the scene neighborhood point set is larger than that of the classification neighborhood point set;
step 2: training a point cloud global feature extraction model in an unsupervised learning mode, and extracting spatial context information from a scene neighborhood point set to realize coarse-grained scene perception;
step 3: embedding the spatial context information extracted from the scene neighborhood point set into a point cloud semantic segmentation model using an attention-based feature fusion strategy, defining a multitask loss function that accounts for both ground clearance and category, and simultaneously supervising the category and ground clearance of each point in the classification neighborhood point set, thereby realizing fine-grained point cloud semantic segmentation and ground clearance prediction;
step 4: through graph-cut-optimized point cloud segmentation and iterative adsorption to a ground triangulated irregular network, accurately identifying pseudo ground points in the point cloud semantic segmentation result and recovering wrongly segmented real ground points, thereby realizing fine-grained refinement of the ground classification result;
step 5: repeating steps 1 to 4 until the union of the classification neighborhood point sets of all sampling points covers the whole point cloud, finally obtaining the classification result of the entire point cloud.
Further, step 2 comprises the steps of:
s2.1, collecting a plurality of point cloud blocks containing different scene categories;
S2.2, taking each point cloud block as a processing unit, respectively generating the multi-view feature map and three-dimensional voxels corresponding to each block;
s2.3, inputting the multi-view feature map into a multi-view feature extraction module to extract multi-view features, and inputting the three-dimensional voxels into a voxel feature extraction module to extract voxel features;
s2.4, fusing the multi-view features and the voxel features by adopting a feature fusion strategy based on an attention mechanism to obtain strengthened multi-view features;
s2.5, combining the strengthened multi-view features into scene context features through a view pooling layer;
S2.6, based on unsupervised metric learning, automatically generating pseudo labels for all point cloud blocks through feature clustering, while performing supervised training of all neural network layers in S2.3-S2.5 using the pseudo labels;
and S2.7, combining the trained neural network layers used for generating the final scene context feature in the S2.3-S2.5 into a point cloud scene context feature extraction model, and extracting coarse-grained spatial context information of a scene neighborhood point set.
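The pseudo-label generation of S2.6 can be illustrated with a minimal clustering sketch. The patent does not specify the clustering algorithm, so the plain k-means below is only one possible choice; the helper name `kmeans_pseudo_labels` and the fixed initial centroids (used for determinism) are assumptions. Each point cloud block's scene-context feature vector is simply assigned a cluster index, which then serves as its pseudo label:

```python
import numpy as np

def kmeans_pseudo_labels(features, centroids, n_iter=10):
    """Cluster feature vectors and return each sample's cluster index as
    its pseudo label.  A minimal stand-in for the patent's feature
    clustering step; initial centroids are supplied for determinism."""
    centroids = centroids.copy()
    for _ in range(n_iter):
        # Assign each feature vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned features.
        for k in range(len(centroids)):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return labels

# Two well-separated groups of (toy) scene-context features.
feats = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
init = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = kmeans_pseudo_labels(feats, init)
```

In the full pipeline, these per-block cluster indices would supervise the network layers of S2.3-S2.5 in place of human annotations.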
Further, in S2.3, the multi-view feature extraction module includes a plurality of two-dimensional convolutional network branches, and the voxel feature extraction module is constructed based on a sub-manifold sparse convolutional layer.
Further, step 3 comprises the steps of:
S3.1, extracting local point cloud features of the classification neighborhood point set through kernel point convolution (KPConv) layers;
s3.2, extracting spatial context information from a scene neighborhood point set by using the point cloud scene context feature extraction model constructed in the S2.7, and fusing the spatial context information and the point cloud local features by adopting a feature fusion strategy based on an attention mechanism to obtain enhanced point cloud semantic segmentation features;
s3.3, defining a multitask loss function considering both the ground clearance and the type, and constructing a point cloud semantic segmentation model by using a plurality of convolution layers, a type output layer and a ground clearance output layer;
s3.4, performing supervision training on the point cloud semantic segmentation model constructed in the S3.3 based on the marked training set;
S3.5, obtaining semantic segmentation and ground clearance estimation results of the classification neighborhood point set by using the trained point cloud semantic segmentation model.
further, the multitask loss function considering the ground clearance and the category is as follows:
Loss_α = α·Loss_pred + (1 − α)·Loss_hag
where Loss_α denotes the total loss function, α denotes a weight, Loss_pred denotes the cross-entropy loss function commonly used in classification tasks, and Loss_hag denotes a ground clearance (height above ground) loss function formed from the difference between the estimated and true ground clearance.
Further, step 4 comprises the steps of:
s4.1, performing Delaunay triangulation on the ground points in the point cloud semantic segmentation result of the classification neighborhood point set to obtain a triangulation network of the ground points, and obtaining neighborhood index information of each ground point according to the connection relation between the points in the triangulation network;
s4.2, constructing a Markov random field energy function:
E = E_data + β·E_smooth
where the data term E_data is formed from the probability that each point in the point cloud semantic segmentation result belongs to the ground category together with each point's ground clearance; the smoothing term E_smooth is formed, according to the neighborhood index information, from the height difference and normal-vector angle between each point and its neighbors; and β denotes a weight;
S4.3, optimizing the Markov random field energy function with the maximum-flow/minimum-cut algorithm from graph theory, thereby identifying pseudo ground points in the point cloud semantic segmentation result.
Compared with the prior art, the invention has the following advantages:
(1) an unsupervised scene context feature learning technique is adopted, with no need to predefine scene categories or construct labeled datasets, which avoids the difficulty of defining some terrain scenes and saves the cost of building scene datasets;
(2) by adopting an attention-based feature fusion strategy and embedding large-range scene context information into the point cloud semantic segmentation model, scene-associated features can be selectively highlighted to improve recognition accuracy, and sufficient spatial context is provided for accurately recognizing large objects, improving robustness;
(3) a multitask loss function accounting for ground clearance is designed, which suppresses misclassification of elevated object points and strengthens the segmentation model's ability to distinguish object subcategories with different ground clearances (such as high and low vegetation);
(4) a ground point refinement method based on graph cut optimization is provided, which can obtain an optimal separation surface between ground points and pseudo ground points and accurately detect pseudo ground points with abnormal ground clearance.
Drawings
FIG. 1 is a schematic block diagram of the process of the present invention;
FIG. 2 is a schematic diagram of a scene neighborhood point set and a classification neighborhood point set of random sampling points in the invention;
FIG. 3 is a schematic diagram of unsupervised point cloud scene context feature learning in the present invention;
FIG. 4 is a schematic diagram of scene-context-guided point cloud semantic segmentation and ground clearance prediction in the present invention;
FIG. 5 is a diagram illustrating data items and smoothing items used by the graph cut optimization module of the present invention;
FIG. 6 is an exemplary diagram of a complex scene point cloud classification result according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the method of the present invention mainly comprises the following steps:
step 1: randomly selecting a sampling point from the whole point cloud, and searching a group of square scene neighborhood point sets with the side length of 150 meters and a group of square classification neighborhood point sets with the side length of 50 meters respectively by taking the sampling point as a center, as shown in fig. 2.
Step 2: train a point cloud global feature extraction model in an unsupervised manner and extract spatial context information (scene knowledge) from the large-range scene neighborhood point set to realize coarse-grained, large-range scene perception. As shown in fig. 3, this comprises the following sub-steps:
(1) collecting a large number of square point cloud blocks with a side length of 150 meters, covering different scene categories;
(2) taking each point cloud block as a processing unit, converting it into a multi-view feature map and three-dimensional voxels;
(3) feeding the multi-view feature map of each point cloud block into a multi-view feature extraction module (composed of several two-dimensional convolutional network branches) to extract multi-view features, and the corresponding three-dimensional voxels into a voxel feature extraction module (built on submanifold sparse convolutional layers) to extract global morphology features;
(4) using an attention-based feature fusion strategy in which the voxel global morphology features adaptively guide the highlighting of key view features among the multiple views, and fusing the multi-view features with the voxel global features to obtain enhanced multi-view features;
(5) the enhanced multi-view features are further combined through a view pooling layer to obtain scene context features;
(6) based on unsupervised metric learning, automatically generating pseudo labels through feature clustering while training each neural network layer shown in fig. 3 with the pseudo labels as supervision information;
(7) combining the trained neural network layers used in steps (3) to (5) into a point cloud scene context feature extraction model, and extracting coarse-grained spatial context information (scene knowledge) from the scene neighborhood point set.
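The attention-based fusion of sub-steps (4)-(5) might be sketched as follows. The patent does not detail the actual network layers, so this is an illustrative reading under assumptions: the voxel global feature scores each view by dot-product similarity, a softmax over the scores re-weights the per-view features, and the voxel feature is added back residual-style. The name `attention_fuse` is hypothetical:

```python
import numpy as np

def attention_fuse(view_feats, voxel_feat):
    """Re-weight per-view features with attention scores derived from the
    voxel global feature, then add the voxel feature back (a schematic
    reading of the patent's attention-based fusion)."""
    # Score each view by its similarity to the voxel global feature.
    scores = view_feats @ voxel_feat
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax attention weights
    # Strengthened multi-view features: attention-scaled views plus the
    # broadcast voxel feature (residual-style fusion).
    return view_feats * weights[:, None] + voxel_feat[None, :]

views = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 views, 2-D feats
voxel = np.array([1.0, 0.0])                            # voxel global feature
fused = attention_fuse(views, voxel)
```

Views aligned with the voxel global feature receive larger attention weights, which is the "selective highlighting of key view features" the description refers to; a view pooling layer would then combine the rows of `fused` into a single scene context feature.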
Step 3: using an attention-based feature fusion strategy, embed large-range scene knowledge into a point cloud semantic segmentation model, design a multitask loss function accounting for ground clearance, and simultaneously supervise the category and ground clearance of every point in the small-range classification neighborhood point set, realizing fine-grained point cloud semantic segmentation and ground clearance estimation. As shown in fig. 4, this comprises the following sub-steps:
(1) passing the classification neighborhood point set through a series of kernel point convolution layers to extract deep local point cloud features;
(2) using an attention-based feature fusion strategy, letting the large-range spatial context information (scene knowledge) extracted from the scene neighborhood point set in step 2 adaptively guide the highlighting of key local point cloud features, and fusing the spatial context features with the local features to obtain enhanced point cloud semantic segmentation features;
(3) combining the neural network layers used in the steps (1) to (2), and constructing a point cloud semantic segmentation model shown in fig. 4 by using a plurality of convolution layers, a category output layer and a ground clearance output layer;
(4) the ground-clearance-aware multitask loss function is defined as follows:
Loss_α = α·Loss_pred + (1 − α)·Loss_hag
where Loss_α denotes the total loss function, α denotes a weight, Loss_pred denotes the cross-entropy loss function commonly used in classification tasks, and Loss_hag denotes a ground clearance loss function formed from the difference between the estimated and true ground clearance. The point cloud semantic segmentation model shown in fig. 4 is trained with the total loss Loss_α on a labeled training set.
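A minimal numerical sketch of the multitask loss above: cross entropy on per-point class probabilities plus a ground clearance term. The patent only requires that Loss_hag be built from the difference between estimated and true ground clearance; the mean absolute error used here is one possible choice, and the function name is illustrative:

```python
import numpy as np

def multitask_loss(probs, labels, hag_pred, hag_true, alpha=0.7):
    """Loss_alpha = alpha * Loss_pred + (1 - alpha) * Loss_hag, where
    Loss_pred is mean per-point cross entropy over class probabilities
    and Loss_hag is the mean absolute height-above-ground error."""
    n = len(labels)
    loss_pred = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    loss_hag = np.mean(np.abs(hag_pred - hag_true))
    return alpha * loss_pred + (1.0 - alpha) * loss_hag

probs = np.array([[0.9, 0.1], [0.2, 0.8]])   # per-point class probabilities
labels = np.array([0, 1])                    # true class indices
hag_pred = np.array([0.1, 5.0])              # predicted height above ground
hag_true = np.array([0.0, 5.5])              # true height above ground
loss = multitask_loss(probs, labels, hag_pred, hag_true, alpha=0.5)
```

With α = 1 the loss reduces to plain cross entropy; lowering α shifts the supervision toward the ground clearance regression, which is what lets the model separate subcategories that differ mainly in height.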
(5) processing the classification neighborhood point set with the trained point cloud semantic segmentation model to obtain high-precision semantic segmentation and ground clearance estimation results.
Step 4: through graph-cut-optimized point cloud segmentation and iterative adsorption to a ground triangulated irregular network, accurately identify pseudo ground points in the semantic segmentation result of the classification neighborhood point set, recover wrongly segmented real ground points, and refine the fine-grained ground classification result. Identifying pseudo ground points via graph-cut-optimized point cloud segmentation comprises the following steps:
(1) performing Delaunay triangulation on the ground points in the semantic segmentation result of the classification neighborhood point set to obtain a triangulated network of ground points (an undirected connected graph), and deriving neighborhood index information for each ground point from the connections in the network (two points are neighbors of each other if an edge connects them).
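Sub-step (1) maps directly onto `scipy.spatial.Delaunay`. The sketch below (with the hypothetical helper name `ground_neighbors`) builds the neighborhood index by collecting the edges of every triangle in the triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

def ground_neighbors(xy):
    """Delaunay-triangulate 2-D ground points and return, for each point
    index, the set of point indices it shares a triangle edge with (the
    neighborhood index information of the description)."""
    tri = Delaunay(xy)
    neighbors = {i: set() for i in range(len(xy))}
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            neighbors[u].add(v)
            neighbors[v].add(u)
    return neighbors

# Four ground points forming a unit square: two triangles, five edges.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
nbrs = ground_neighbors(pts)
```

The resulting adjacency is symmetric by construction, which is what the undirected smoothing term of the energy function in sub-step (2) requires.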
(2) Constructing a Markov random field energy function as follows:
E = E_data + β·E_smooth
where the data term E_data is formed from the probability that each point in the point cloud semantic segmentation result belongs to the ground category together with each point's ground clearance (as shown in fig. 5); the smoothing term E_smooth is formed, according to the neighborhood index information, from the height difference and normal-vector angle between each point and its neighbors; and β denotes a weight.
(3) optimizing the energy function with the maximum-flow/minimum-cut algorithm from graph theory, thereby identifying pseudo ground points in the point cloud semantic segmentation result.
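Sub-steps (2)-(3) amount to a binary (ground versus pseudo-ground) labelling solved by an s-t minimum cut. The sketch below uses `networkx.minimum_cut` with simplified data and smooth terms: the ground probability and a height-above-ground excess form the data costs, and an inverse height-difference weight forms the smooth cost. The threshold `hag_max`, the cost formulas and the helper name are illustrative assumptions, not values from the patent:

```python
import networkx as nx

def label_pseudo_ground(p_ground, hag, edges, beta=1.0, hag_max=0.5):
    """Label candidate ground points as ground or pseudo-ground via an
    s-t minimum cut over a Markov-random-field-style energy.
    p_ground: per-point ground probability; hag: height above ground;
    edges: (i, j, height_difference) neighbor pairs from the TIN."""
    g = nx.DiGraph()
    for i, (p, h) in enumerate(zip(p_ground, hag)):
        # Data term: calling i pseudo-ground costs its ground probability;
        # keeping it ground costs more the higher it sits above hag_max.
        g.add_edge('s', i, capacity=p)
        g.add_edge(i, 't', capacity=(1.0 - p) + max(h - hag_max, 0.0))
    for i, j, dh in edges:
        # Smooth term: neighbors at similar heights are expensive to split.
        w = beta / (1.0 + dh)
        g.add_edge(i, j, capacity=w)
        g.add_edge(j, i, capacity=w)
    _, (_, sink_side) = nx.minimum_cut(g, 's', 't')
    return {i for i in sink_side if i != 't'}   # pseudo-ground indices

# Two true ground points and one elevated pseudo-ground candidate.
pseudo = label_pseudo_ground([0.9, 0.9, 0.6], [0.1, 0.1, 3.0],
                             [(0, 1, 0.05), (1, 2, 2.9)])
# pseudo == {2}: the elevated point is cut away from the ground side.
```

The smooth capacities make it cheap to cut between points separated by a large height jump, which is how the optimization finds a separation surface along terrain discontinuities rather than splitting smooth ground.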
Step 5: repeat steps 1 to 4 over the whole point cloud until the union of the classification neighborhood point sets of all sampling points covers the entire point cloud. As shown in fig. 6, the left side is the input point cloud and the right side is the final point cloud semantic segmentation result.
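Step 5's sampling-until-coverage loop can be sketched as below, with the per-sample pipeline of steps 1 to 4 abstracted into a `classify_block` callback (both names are hypothetical). Drawing each new sampling point from the not-yet-covered subset guarantees termination, since every iteration covers at least its own center:

```python
import numpy as np

def classify_full_cloud(points, classify_block, half_side=25.0, seed=0):
    """Repeat the per-sample pipeline on random square classification
    neighborhoods until every point is covered; return per-point labels."""
    rng = np.random.default_rng(seed)
    labels = np.full(len(points), -1)           # -1 marks uncovered points
    while (labels == -1).any():
        # Next sampling point: any point not yet covered.
        center = points[rng.choice(np.flatnonzero(labels == -1))]
        d = np.abs(points[:, :2] - center[:2])
        mask = (d[:, 0] <= half_side) & (d[:, 1] <= half_side)
        labels[mask] = classify_block(points[mask])
    return labels

# Toy run: a dummy classifier labelling everything class 1.
cloud = np.random.default_rng(1).uniform(0.0, 200.0, size=(50, 3))
out = classify_full_cloud(cloud, lambda pts: np.ones(len(pts), dtype=int))
```

Later neighborhoods overwrite earlier labels where they overlap; whether to keep the first or last prediction in overlap regions is a design choice the patent does not pin down.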
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (6)

1. A multi-granularity calculation method for hybrid scene airborne laser point cloud classification is characterized by comprising the following steps:
step 1: randomly selecting a sampling point from the whole point cloud, and respectively searching a classification neighborhood point set and a scene neighborhood point set by taking the sampling point as a center, wherein the range of the scene neighborhood point set is larger than that of the classification neighborhood point set;
step 2: training a point cloud global feature extraction model in an unsupervised learning mode, and extracting spatial context information from a scene neighborhood point set to realize coarse-grained scene perception;
step 3: embedding the spatial context information extracted from the scene neighborhood point set into a point cloud semantic segmentation model using an attention-based feature fusion strategy, defining a multitask loss function that accounts for both ground clearance and category, and simultaneously supervising the category and ground clearance of each point in the classification neighborhood point set, thereby realizing fine-grained point cloud semantic segmentation and ground clearance prediction;
step 4: through graph-cut-optimized point cloud segmentation and iterative adsorption to a ground triangulated irregular network, accurately identifying pseudo ground points in the point cloud semantic segmentation result and recovering wrongly segmented real ground points, thereby realizing fine-grained refinement of the ground classification result;
step 5: repeating steps 1 to 4 until the union of the classification neighborhood point sets of all sampling points covers the whole point cloud, finally obtaining the classification result of the entire point cloud.
2. The multi-granularity calculation method for hybrid scene airborne laser point cloud classification as claimed in claim 1, wherein step 2 comprises the steps of:
s2.1, collecting a plurality of point cloud blocks containing different scene categories;
S2.2, taking each point cloud block as a processing unit, respectively generating the multi-view feature map and three-dimensional voxels corresponding to each block;
s2.3, inputting the multi-view feature map into a multi-view feature extraction module to extract multi-view features, and inputting the three-dimensional voxels into a voxel feature extraction module to extract voxel features;
s2.4, fusing the multi-view features and the voxel features by adopting a feature fusion strategy based on an attention mechanism to obtain strengthened multi-view features;
s2.5, combining the strengthened multi-view features into scene context features through a view pooling layer;
S2.6, based on unsupervised metric learning, automatically generating pseudo labels for all point cloud blocks through feature clustering, while performing supervised training of all neural network layers in S2.3-S2.5 using the pseudo labels;
and S2.7, combining the trained neural network layers used for generating the final scene context feature in the S2.3-S2.5 into a point cloud scene context feature extraction model, and extracting coarse-grained spatial context information of a scene neighborhood point set.
3. The multi-granularity calculation method for hybrid scene airborne laser point cloud classification as claimed in claim 2, wherein in S2.3 the multi-view feature extraction module comprises a plurality of two-dimensional convolutional network branches, and the voxel feature extraction module is constructed based on submanifold sparse convolutional layers.
4. The multi-granularity calculation method for hybrid scene airborne laser point cloud classification as claimed in claim 2, wherein step 3 comprises the steps of:
S3.1, extracting local point cloud features of the classification neighborhood point set through kernel point convolution layers;
s3.2, extracting spatial context information from a scene neighborhood point set by using the point cloud scene context feature extraction model constructed in the S2.7, and fusing the spatial context information and the point cloud local features by adopting a feature fusion strategy based on an attention mechanism to obtain enhanced point cloud semantic segmentation features;
s3.3, defining a multitask loss function considering both the ground clearance and the type, and constructing a point cloud semantic segmentation model by using a plurality of convolution layers, a type output layer and a ground clearance output layer;
s3.4, performing supervision training on the point cloud semantic segmentation model constructed in the S3.3 based on the marked training set;
S3.5, obtaining semantic segmentation and ground clearance estimation results of the classification neighborhood point set by using the trained point cloud semantic segmentation model.
5. The multi-granularity calculation method for hybrid scene airborne laser point cloud classification as claimed in claim 4, wherein the multitask loss function accounting for ground clearance and category is:
Loss_α = α·Loss_pred + (1 − α)·Loss_hag
where Loss_α denotes the total loss function, α denotes a weight, Loss_pred denotes the cross-entropy loss function commonly used in classification tasks, and Loss_hag denotes a ground clearance (height above ground) loss function formed from the difference between the estimated and true ground clearance.
6. The multi-granularity calculation method for hybrid scene airborne laser point cloud classification as claimed in claim 4, wherein step 4 comprises the steps of:
s4.1, performing Delaunay triangulation on the ground points in the point cloud semantic segmentation result of the classification neighborhood point set to obtain a triangulation network of the ground points, and obtaining neighborhood index information of each ground point according to the connection relation between the points in the triangulation network;
s4.2, constructing a Markov random field energy function:
E = E_data + β·E_smooth
where the data term E_data is formed from the probability that each point in the point cloud semantic segmentation result belongs to the ground category together with each point's ground clearance; the smoothing term E_smooth is formed, according to the neighborhood index information, from the height difference and normal-vector angle between each point and its neighbors; and β denotes a weight;
and S4.3, optimizing a Markov random field energy function by adopting a maximum flow-minimum cut algorithm in the graph theory, and further identifying pseudo ground points in the point cloud semantic segmentation result.
CN202111383336.7A 2021-11-22 2021-11-22 Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene Active CN114120110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111383336.7A CN114120110B (en) 2021-11-22 2021-11-22 Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111383336.7A CN114120110B (en) 2021-11-22 2021-11-22 Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene

Publications (2)

Publication Number Publication Date
CN114120110A true CN114120110A (en) 2022-03-01
CN114120110B CN114120110B (en) 2022-08-05

Family

ID=80438990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111383336.7A Active CN114120110B (en) 2021-11-22 2021-11-22 Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene

Country Status (1)

Country Link
CN (1) CN114120110B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511682A (en) * 2022-04-19 2022-05-17 清华大学 Three-dimensional scene reconstruction method and device based on laser radar and electronic equipment
CN114596420A (en) * 2022-03-16 2022-06-07 中关村科学城城市大脑股份有限公司 Laser point cloud modeling method and system applied to urban brain
CN114639024A (en) * 2022-03-03 2022-06-17 江苏方天电力技术有限公司 Automatic laser point cloud classification method for power transmission line
CN115424022A (en) * 2022-11-03 2022-12-02 南方电网数字电网研究院有限公司 Power transmission corridor ground point cloud segmentation method and device and computer equipment
CN115830597A (en) * 2023-01-05 2023-03-21 安徽大学 Domain self-adaptive remote sensing image semantic segmentation method from local to global based on pseudo label generation
CN115861571A (en) * 2023-01-18 2023-03-28 武汉大学 Semantic-aware triangulation network model building entity reconstruction method
CN115862013A (en) * 2023-02-09 2023-03-28 南方电网数字电网研究院有限公司 Attention mechanism-based power transmission and distribution scene point cloud semantic segmentation model training method
CN116363099A (en) * 2023-03-31 2023-06-30 西南大学 Fine granularity analysis method, device, medium and equipment for three-dimensional point cloud data
CN116863432A (en) * 2023-09-04 2023-10-10 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9633483B1 (en) * 2014-03-27 2017-04-25 Hrl Laboratories, Llc System for filtering, segmenting and recognizing objects in unconstrained environments
CN109493344A (en) * 2018-09-18 2019-03-19 北京大学 A kind of semantic segmentation method of large-scale city three-dimensional scenic
US20190108639A1 (en) * 2017-10-09 2019-04-11 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Semantic Segmentation of 3D Point Clouds
CN110097556A (en) * 2019-04-29 2019-08-06 东南大学 Large-scale point cloud semantic segmentation algorithm based on PointNet
CN110110802A (en) * 2019-05-14 2019-08-09 南京林业大学 Airborne laser point cloud classification method based on high-order condition random field
CN110197215A (en) * 2019-05-22 2019-09-03 深圳市牧月科技有限公司 A kind of ground perception point cloud semantic segmentation method of autonomous driving
US20200301013A1 (en) * 2018-02-09 2020-09-24 Bayerische Motoren Werke Aktiengesellschaft Methods and Apparatuses for Object Detection in a Scene Based on Lidar Data and Radar Data of the Scene
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
US20210201569A1 (en) * 2019-12-31 2021-07-01 Lyft, Inc. Map Feature Extraction Using Overhead View Images
US11099275B1 (en) * 2020-04-29 2021-08-24 Tsinghua University LiDAR point cloud reflection intensity complementation method and system
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9633483B1 (en) * 2014-03-27 2017-04-25 Hrl Laboratories, Llc System for filtering, segmenting and recognizing objects in unconstrained environments
US20190108639A1 (en) * 2017-10-09 2019-04-11 The Board Of Trustees Of The Leland Stanford Junior University Systems and Methods for Semantic Segmentation of 3D Point Clouds
US20200301013A1 (en) * 2018-02-09 2020-09-24 Bayerische Motoren Werke Aktiengesellschaft Methods and Apparatuses for Object Detection in a Scene Based on Lidar Data and Radar Data of the Scene
CN109493344A (en) * 2018-09-18 2019-03-19 北京大学 A kind of semantic segmentation method of large-scale city three-dimensional scenic
CN110097556A (en) * 2019-04-29 2019-08-06 东南大学 Large-scale point cloud semantic segmentation algorithm based on PointNet
CN110110802A (en) * 2019-05-14 2019-08-09 南京林业大学 Airborne laser point cloud classification method based on high-order condition random field
CN110197215A (en) * 2019-05-22 2019-09-03 深圳市牧月科技有限公司 A kind of ground perception point cloud semantic segmentation method of autonomous driving
US20210201569A1 (en) * 2019-12-31 2021-07-01 Lyft, Inc. Map Feature Extraction Using Overhead View Images
CN111815776A (en) * 2020-02-04 2020-10-23 山东水利技师学院 Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
US20210279950A1 (en) * 2020-03-04 2021-09-09 Magic Leap, Inc. Systems and methods for efficient floorplan generation from 3d scans of indoor scenes
US11099275B1 (en) * 2020-04-29 2021-08-24 Tsinghua University LiDAR point cloud reflection intensity complementation method and system

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
YULAN GUO et al.: "Deep Learning for 3D Point Clouds: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 June 2020 (2020-06-23) *
ZHIDONG LIANG: "3D Instance Embedding Learning with a Structure-Aware Loss Function", 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 29 October 2020 (2020-10-29) *
秦楠楠: "A spatial data registration algorithm based on principal component analysis", Engineering Journal of Wuhan University, vol. 45, no. 04, 30 August 2012 (2012-08-30) *
贾迪 et al.: "A survey of image matching methods", Journal of Image and Graphics, no. 05, 16 May 2019 (2019-05-16) *
赖旭东 et al.: "An iterative small-footprint LiDAR waveform decomposition method", Journal of Infrared and Millimeter Waves, vol. 32, no. 04, 15 August 2013 (2013-08-15) *
赵传 et al.: "Airborne LiDAR point cloud classification based on deep residual network", Acta Geodaetica et Cartographica Sinica, no. 02, 15 February 2020 (2020-02-15) *
韩姗姗 et al.: "Building classification of oblique photogrammetric 3D point clouds based on DGCNN semantic segmentation", Standardization of Surveying and Mapping, no. 03, 25 September 2020 (2020-09-25) *
顾军华 et al.: "A survey of segmentation methods based on point cloud data", Journal of Yanshan University, no. 02, 31 March 2020 (2020-03-31) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114639024A (en) * 2022-03-03 2022-06-17 江苏方天电力技术有限公司 Automatic laser point cloud classification method for power transmission line
CN114596420A (en) * 2022-03-16 2022-06-07 中关村科学城城市大脑股份有限公司 Laser point cloud modeling method and system applied to urban brain
CN114511682A (en) * 2022-04-19 2022-05-17 清华大学 Three-dimensional scene reconstruction method and device based on laser radar and electronic equipment
CN115424022A (en) * 2022-11-03 2022-12-02 南方电网数字电网研究院有限公司 Power transmission corridor ground point cloud segmentation method and device and computer equipment
CN115424022B (en) * 2022-11-03 2023-03-03 南方电网数字电网研究院有限公司 Power transmission corridor ground point cloud segmentation method and device and computer equipment
CN115830597A (en) * 2023-01-05 2023-03-21 安徽大学 Domain self-adaptive remote sensing image semantic segmentation method from local to global based on pseudo label generation
CN115861571A (en) * 2023-01-18 2023-03-28 武汉大学 Semantic-aware triangulation network model building entity reconstruction method
CN115862013A (en) * 2023-02-09 2023-03-28 南方电网数字电网研究院有限公司 Attention mechanism-based power transmission and distribution scene point cloud semantic segmentation model training method
CN116363099A (en) * 2023-03-31 2023-06-30 西南大学 Fine granularity analysis method, device, medium and equipment for three-dimensional point cloud data
CN116863432A (en) * 2023-09-04 2023-10-10 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning
CN116863432B (en) * 2023-09-04 2023-12-22 之江实验室 Weak supervision laser travelable region prediction method and system based on deep learning

Also Published As

Publication number Publication date
CN114120110B (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN114120110B (en) Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene
Liu et al. Cylinder detection in large-scale point cloud of pipeline plant
CN110619282A (en) Automatic extraction method for unmanned aerial vehicle orthoscopic image building
CN114004938B (en) Urban scene reconstruction method and device based on mass data
Li et al. A GCN-based method for extracting power lines and pylons from airborne LiDAR data
Özdemir et al. Aerial point cloud classification with deep learning and machine learning algorithms
CN115512216A (en) City functional area fine recognition method coupling block space-time characteristics and ensemble learning
CN114926469A (en) Semantic segmentation model training method, semantic segmentation method, storage medium and terminal
CN114549537A (en) Unstructured environment point cloud semantic segmentation method based on cross-modal semantic enhancement
Hu et al. Geometric feature enhanced line segment extraction from large-scale point clouds with hierarchical topological optimization
Xu et al. Classification of 3-D point clouds by a new augmentation convolutional neural network
Zhang et al. A dual attention neural network for airborne LiDAR point cloud semantic segmentation
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
Qin et al. Deep learning for filtering the ground from ALS point clouds: A dataset, evaluations and issues
He Automated 3D building modelling from airborne LiDAR data
Lei et al. Automatic identification of street trees with improved RandLA-Net and accurate calculation of shading area with density-based iterative α-shape
Gupta et al. Tree annotations in LiDAR data using point densities and convolutional neural networks
Zhou 3D urban modeling from city-scale aerial LiDAR data
CN114494586B (en) Lattice projection deep learning network broadleaf branch and leaf separation and skeleton reconstruction method
Li et al. PointLAE: A Point Cloud Semantic Segmentation Neural Network via Multifeature Aggregation for Large-Scale Application
CN113673619B (en) Geographic big data space latent pattern analysis method based on topology analysis
CN114565730A (en) Method and device for processing map vector sample and computer readable storage medium
CN112634447A (en) Outcrop rock stratum layering method, device, equipment and storage medium
Fang et al. Semantic supported urban change detection using ALS point clouds
Yastikli et al. Automatic 3D building model generations with airborne LiDAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 210023 No. 10, Yuanhua Road, Qixia District, Nanjing, Jiangsu Province

Applicant after: PURPLE MOUNTAIN OBSERVATORY, CHINESE ACADEMY OF SCIENCES

Address before: 210008 No. 2 West Beijing Road, Gulou District, Jiangsu, Nanjing

Applicant before: PURPLE MOUNTAIN OBSERVATORY, CHINESE ACADEMY OF SCIENCES

GR01 Patent grant