CN115375902B - Multi-spectral laser radar point cloud data-based over-point segmentation method - Google Patents

Multi-spectral laser radar point cloud data-based over-point segmentation method

Info

Publication number
CN115375902B
CN115375902B CN202211320284.3A
Authority
CN
China
Prior art keywords
point
points
point cloud
super
spectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211320284.3A
Other languages
Chinese (zh)
Other versions
CN115375902A (en)
Inventor
王青旺
王铭野
王盼新
沈韬
陶智敏
宋健
汪志锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Zhengtu Information Technology Co ltd
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202211320284.3A priority Critical patent/CN115375902B/en
Publication of CN115375902A publication Critical patent/CN115375902A/en
Application granted granted Critical
Publication of CN115375902B publication Critical patent/CN115375902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a superpoint segmentation method based on multispectral laser radar point cloud data, and belongs to the technical field of multispectral laser radar point cloud segmentation. The invention first models the multispectral laser radar point cloud as a KD-tree and stores the radar points in the tree nodes; it then performs nearest-neighbor segmentation on the point cloud to form initial superpoints; finally, it measures the similarity of point pairs inside adjacent superpoints, designs a point-exchange mechanism between superpoints, completes the point exchange between superpoints according to the similarity measure, and generates a multispectral point cloud superpoint set. The method can effectively exploit the spatial geometric structure and the spectral information contained in the multispectral laser radar point cloud to generate a superpoint set in which geometric structure and spectral information are highly consistent, and it effectively reduces the time complexity of subsequent tasks and their demand on computing resources.

Description

Multi-spectral laser radar point cloud data-based over-point segmentation method
Technical Field
The invention relates to a superpoint segmentation method based on multispectral laser radar point cloud data, and belongs to the technical field of multispectral laser radar point cloud segmentation.
Background
A multispectral LiDAR system can synchronously acquire three-dimensional spatial distribution information and spectral information of a scene, providing richer feature information for remote sensing scene interpretation tasks. In multispectral LiDAR processing tasks, the huge number of points corresponding to a remote sensing scene limits the processing speed, so superpoint segmentation is necessary preparatory work: the spectral features in multispectral LiDAR can assist more accurate segmentation of the point cloud and improve the processing efficiency of subsequent tasks.
Currently, there is no segmentation method dedicated to multispectral point clouds. Existing point cloud segmentation methods use only the spatial geometric structure of the point cloud and do not consider the spectral information of the target, so the spectral information of the objects inside a segmented superpoint can differ greatly. A superpoint is a compact representation of scattered points and can replace the original points in computation (such as feature computation and convolution filtering), which enlarges the receptive field of the operation and improves spatial generalization. However, during superpoint segmentation each superpoint inevitably contains points of several categories, and some points receive wrong labels, which degrades the performance of subsequent tasks. Therefore, how to use the spatial (geometric) and spectral information in multispectral LiDAR to perform more accurate superpoint segmentation for subsequent tasks is the technical problem to be solved.
Disclosure of Invention
The invention aims to provide a superpoint segmentation method based on multispectral laser radar point cloud data, which solves the problem of inconsistent point categories inside a superpoint when superpoint segmentation is performed by traditional methods.
The technical scheme of the invention is as follows: in a superpoint segmentation method based on multispectral laser radar point cloud data, the multispectral laser radar point cloud is first modeled as a KD-tree, and the radar points are stored in the tree nodes; nearest-neighbor segmentation is then performed on the point cloud to form initial superpoints; finally, similarity measurement is performed on point pairs inside adjacent superpoints, a point-exchange mechanism between superpoints is designed, the point exchange between superpoints is completed according to the similarity measure, and a multispectral point cloud superpoint set is generated.
The method comprises the following specific steps:
Step 1: model the multispectral laser radar point cloud as a KD-tree according to Euclidean distance, compute the variance of each feature dimension of the point cloud according to the following formula, and record the dimension with the largest variance as k:

σ_k² = (1/N) Σ_{i=1}^{N} (x_{i,k} − μ_k)²
Step 2: denote the N points contained in the multispectral point cloud as P = {p_1, p_2, …, p_N}, arrange the multispectral radar points in ascending order of their k-th feature value, and compute the median m of the set of k-th feature values of the multispectral point cloud;
Step 3: divide the multispectral point cloud into two parts according to the k-th feature of the points, specifically: the points whose k-th feature value is greater than m form one point set and those less than or equal to m form another point set, and the two sets are stored in the first-generation leaf nodes of the KD-tree;
Step 4: repeat Steps 1 to 3 on the two point sets obtained in Step 3 until the point sets can no longer be divided;
Step 5: define each of the N point sets of the divided KD-tree in turn as one superpoint, denoted S = {s_1, s_2, …, s_N};
Step 6: measure the difference between each point and its nearest neighbor points, and select the closest point for fusion to form a new superpoint; repeat until the number of points contained in a superpoint reaches a preset size;
Step 7: compute the distance L_1 between each point in a target superpoint and the center point of that superpoint, then compute the distance L_2 between the point and the center point of an adjacent superpoint; if L_2 < L_1, the point is assigned to the adjacent superpoint.
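The recursive construction in Steps 1 to 5 can be sketched as follows. This is a minimal illustration: the helper names and the minimum leaf size are assumptions, not part of the patent.

```python
# Sketch of Steps 1-5: recursively split the point cloud at the median of
# the highest-variance feature dimension, as in a KD-tree, until the leaf
# point sets cannot be divided further. Every leaf set becomes one initial
# superpoint. Helper names and the minimum leaf size are assumptions.

def variance(values):
    """Population variance (1/N) * sum((x - mu)^2) of one feature dimension."""
    n = len(values)
    mu = sum(values) / n
    return sum((v - mu) ** 2 for v in values) / n

def kd_leaf_sets(points, min_size=2):
    """Split `points` (rows of features: X, Y, Z, spectra, ...) into
    leaf point sets by recursive median splits (Steps 1-4)."""
    if len(points) <= min_size:
        return [points]                              # cannot be divided
    dims = len(points[0])
    # Step 1: feature dimension k with the largest variance
    k = max(range(dims), key=lambda d: variance([p[d] for p in points]))
    # Step 2: ascending order of the k-th feature value, median m
    pts = sorted(points, key=lambda p: p[k])
    m = pts[len(pts) // 2][k]
    # Step 3: k-th value > m forms one set, <= m the other
    left = [p for p in pts if p[k] <= m]
    right = [p for p in pts if p[k] > m]
    if not left or not right:                        # degenerate split: stop
        return [points]
    # Step 4: repeat on both point sets
    return kd_leaf_sets(left, min_size) + kd_leaf_sets(right, min_size)

# Step 5: each leaf point set is one initial superpoint s_i
cloud = [[0.0, 0.0, 0.0, 0.1], [0.1, 0.0, 0.0, 0.2],
         [5.0, 5.0, 0.0, 0.9], [5.1, 5.0, 0.0, 0.8]]
superpoints = kd_leaf_sets(cloud, min_size=2)
```

Each returned leaf set plays the role of one initial superpoint s_i for the merging in Step 6.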
Step 6 is specifically as follows:
in Step 6, the similarity between two points is calculated according to the following formula:
[equation image not reproduced: the Step 6 similarity measure between points p and q]
where p and q are two points, a parameter balancing the importance of the normal vectors in the similarity measure is set to 1, n_p and n_q are the normal vectors of p and q, λ denotes the spectral vector of the multispectral point cloud, R_1 is a parameter constraining the spatial extent of a superpoint and is set to 5, and R_2 is a parameter constraining the spectral range of a superpoint and is set to 10.
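Since the similarity formula itself appears only as an image, the sketch below assumes a SLIC/VCCS-style combination of the quantities the text does name: spatial distance constrained by R_1, spectral distance constrained by R_2, and normal-vector agreement weighted by the balance parameter set to 1. The function name and the exact functional form are assumptions, not the patent's formula.

```python
import math

# Hypothetical sketch of the Step 6 similarity. The patent's exact formula
# is only given as an image, so this assumed, SLIC/VCCS-style form combines
# the quantities the text names: spatial distance scaled by R1, spectral
# distance scaled by R2, and normal-vector agreement weighted by a balance
# parameter (set to 1). Smaller D means more similar.
def similarity(p, q, alpha=1.0, r1=5.0, r2=10.0):
    (xp, lp, np_), (xq, lq, nq) = p, q            # (xyz, spectra, unit normal)
    d_xyz = sum((a - b) ** 2 for a, b in zip(xp, xq))
    d_spec = sum((a - b) ** 2 for a, b in zip(lp, lq))
    dot = sum(a * b for a, b in zip(np_, nq))     # n_p . n_q, unit normals
    return math.sqrt(d_xyz / r1 ** 2 + d_spec / r2 ** 2
                     + alpha * (1.0 - dot) ** 2)

p = ((0.0, 0.0, 0.0), (0.2, 0.4, 0.1), (0.0, 0.0, 1.0))
q = ((1.0, 0.0, 0.0), (0.2, 0.4, 0.1), (0.0, 0.0, 1.0))
d = similarity(p, q)    # only the spatial term differs here
```

With identical spectra and normals, only the spatial term contributes, so nearby points of the same material score as highly similar.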
Step 7 is specifically as follows:
in Step 7, the measure L is computed according to the following formula:
[equation image not reproduced: the Step 7 measure L]
where λ_p and λ_q denote the spectral vectors of the multispectral LiDAR point cloud at points p and q, a parameter balancing the importance of the geometric distance between points in the measure is set to 1, and a parameter balancing the importance of the normal vectors in the measure is set to 1.
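The Step 7 exchange can be sketched as follows. Because the measure L appears only as an image, an assumed weighted sum of squared spatial and spectral distances to a superpoint's center point is used here, with both balance parameters set to 1 as in the text; all function names are hypothetical.

```python
# Hypothetical sketch of the Step 7 point exchange. The exact measure L is
# only given as an image, so an assumed weighted sum of squared spatial and
# spectral distances to a superpoint's center point is used, with both
# balance parameters set to 1 as in the text. A point is an (xyz, spectra) pair.
def center(superpoint):
    n = len(superpoint)
    xyz = [sum(p[0][i] for p in superpoint) / n for i in range(3)]
    spec = [sum(p[1][i] for p in superpoint) / n
            for i in range(len(superpoint[0][1]))]
    return xyz, spec

def measure_l(point, ctr, w_geo=1.0, w_spec=1.0):
    d_xyz = sum((a - b) ** 2 for a, b in zip(point[0], ctr[0]))
    d_spec = sum((a - b) ** 2 for a, b in zip(point[1], ctr[1]))
    return w_geo * d_xyz + w_spec * d_spec

def exchange(s_a, s_b):
    """Move every point of s_a with L2 < L1 into the adjacent superpoint s_b."""
    c_a, c_b = center(s_a), center(s_b)
    keep, grown = [], list(s_b)
    for p in s_a:
        l1 = measure_l(p, c_a)          # distance to own center point
        l2 = measure_l(p, c_b)          # distance to adjacent center point
        (grown if l2 < l1 else keep).append(p)
    return keep, grown

s_a = [((0.0, 0.0, 0.0), (0.1,)), ((0.2, 0.0, 0.0), (0.1,)),
       ((4.9, 0.0, 0.0), (0.9,))]
s_b = [((5.0, 0.0, 0.0), (0.9,)), ((5.2, 0.0, 0.0), (0.9,))]
s_a, s_b = exchange(s_a, s_b)           # the stray point joins s_b
```

Applying the exchange between every pair of adjacent superpoints until no point moves yields the final superpoint set.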
When a traditional method performs superpoint segmentation, multispectral laser radar points belonging to different classes of ground objects may be segmented into the same superpoint. In the present invention, spatial geometric information and spectral information are combined when dividing the superpoints, which ensures as far as possible that the multispectral laser radar points contained in one superpoint come from the same ground object, assists more accurate segmentation of the point cloud, and improves the processing efficiency of subsequent tasks.
The invention has the following beneficial effects: compared with the prior art, the method mainly addresses the phenomenon that radar points of different classes of ground objects are divided into the same superpoint by traditional point cloud segmentation methods. By combining spatial geometric information and spectral information when dividing superpoints, it strengthens the consistency of the spatial and spectral information of the points within a superpoint, and effectively reduces the time complexity of subsequent tasks and their demand on computing resources.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
FIG. 2 is a visualization of the multispectral radar point cloud in an embodiment of the invention;
FIG. 3 is a visualization of the superpoint segmentation of three scenes in an embodiment of the present invention;
FIG. 4 is a visualization of the superpoint segmentation of the whole scene in an embodiment of the present invention.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
Example 1: as shown in FIG. 1, a superpoint segmentation method based on multispectral lidar point cloud data includes the following specific steps:
Step S1: model the multispectral laser radar point cloud as a KD-tree according to Euclidean distance, compute the variance of each feature dimension of the point cloud according to the following formula, and record the dimension with the largest variance as k:

σ_k² = (1/N) Σ_{i=1}^{N} (x_{i,k} − μ_k)²
Step S2: denote the N points contained in the multispectral point cloud as P = {p_1, p_2, …, p_N}, arrange the multispectral radar points in ascending order of their k-th feature value, and compute the median m of the set of k-th feature values of the multispectral point cloud;
Step S3: divide the multispectral point cloud into two parts according to the k-th feature of the points, specifically: the points whose k-th feature value is greater than m form one point set and those less than or equal to m form another point set, and the two sets are stored in the first-generation leaf nodes of the KD-tree;
and step S4: repeating the step for the two subsets obtained in the step3 until the two subsets can not be divided;
step S5: will be provided withNTo be dividedKDDefining the points in the tree structure point cloud set as a super point in turn, and recording as S =s 1 , s 2 , …, s N };
Step S6: measure the similarity between each point and its nearest neighbor points according to the following formula, with the parameter weighting the normal vectors set to 1, R_1 set to 5 and R_2 set to 10; select the closest point for fusion to form a new superpoint, and repeat until the number of points inside a superpoint reaches the preset size:
[equation image not reproduced: the Step 6 similarity measure]
Step S7: compute the distance L_1 between each point in a superpoint and the center point of that superpoint according to the following formula, and then compute the distance L_2 between the point and the center point of an adjacent superpoint, with the parameter weighting the geometric distance between points set to 1 and the parameter weighting the normal vectors set to 1; if L_2 < L_1, the point is assigned to the adjacent superpoint:
[equation image not reproduced: the Step 7 measure L]
The segmentation results are shown in FIGS. 2 to 4, and the segmentation accuracy for each class of ground object is shown in Table 1.
The method is further described by the following experiment:
1. experimental data
Houston University dataset: the scene is a part of the University of Houston campus; three-band point cloud data were acquired by an Optech Titan lidar at wavelengths of 1550 nm, 1064 nm and 532 nm. According to the height, material and semantic information of the land cover, the study area is divided into 8 categories: bare land, cars, commercial buildings, grassland, roads, power lines, residential buildings and trees. Recall, precision and F-score of the point classes within each superpoint are adopted as the evaluation indices.
2. Contents of the experiment
In the experiment, all points of the whole dataset are used as input, superpoint segmentation is performed with the method of the invention, and the segmentation result is shown in FIG. 2. The results are evaluated with the indices defined by the following formulas; Table 1 gives the segmentation recall, precision and F-score of the method of the invention for the different ground objects:

recall = TP / (TP + FN)
precision = TP / (TP + FP)
F-score = 2 · precision · recall / (precision + recall)
Table 1: evaluation data
[Table images not reproduced: per-class recall, precision and F-score of the segmentation.]
where TP is the number of positive-class points divided into positive-class superpoints, FP is the number of negative-class points divided into positive-class superpoints, and FN is the number of positive-class points divided into negative-class superpoints.
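These evaluation indices follow directly from the TP, FP and FN counts; a short sketch with illustrative counts (not taken from Table 1):

```python
# The evaluation indices computed from TP, FP and FN as defined above.
# The example counts are illustrative only, not taken from Table 1.
def scores(tp, fp, fn):
    recall = tp / (tp + fn)                     # TP / (TP + FN)
    precision = tp / (tp + fp)                  # TP / (TP + FP)
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, f_score

r, p, f = scores(tp=80, fp=10, fn=20)
```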
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, and various changes can be made without departing from the spirit and scope of the present invention.

Claims (3)

1. A superpoint segmentation method based on multispectral laser radar point cloud data, characterized in that: first, the multispectral laser radar point cloud is modeled as a KD-tree, and the radar points are stored in the tree nodes; nearest-neighbor segmentation is then performed on the point cloud to form initial superpoints; finally, similarity measurement is performed on point pairs inside adjacent superpoints, a point-exchange mechanism between superpoints is designed, the point exchange between superpoints is completed according to the similarity measurement, and a multispectral point cloud superpoint set is generated;
the method comprises the following specific steps:
Step 1: model the multispectral laser radar point cloud as a KD-tree according to Euclidean distance, compute the variance of each feature dimension of the point cloud according to the following formula, and record the dimension with the largest variance as k:

σ_k² = (1/N) Σ_{i=1}^{N} (x_{i,k} − μ_k)²

where μ is the mean of the data in the respective feature dimension, N is the number of points of the point cloud in that dimension, X, Y and Z are the spatial features of the points contained in the multispectral laser radar point cloud, and λ denotes the spectral features of the points contained in the multispectral laser radar point cloud;
Step 2: denote the N points contained in the multispectral point cloud as P = {p_1, p_2, …, p_N}, arrange the multispectral radar points in ascending order of their k-th feature value, and compute the median m of the set of k-th feature values of the multispectral point cloud;
Step3: according to the point cloudkThe dimensional features divide the multi-spectral point cloud into two parts, specifically: first, thekDimensional eigenvalue is greater thanmThe points of (A) constitute a set of points, less than or equal tomForm another set of points, which are stored inKDIn a first generation leaf node of the tree;
Step 4: repeat Steps 1 to 3 on the two point sets obtained in Step 3 until they can no longer be divided;
Step 5: define each of the N point sets of the divided KD-tree in turn as one superpoint, denoted S = {s_1, s_2, …, s_N};
Step 6: measure the difference between each point and its nearest neighbor points, and select the closest point for fusion to form a new superpoint; repeat until the number of points contained in a superpoint reaches a preset size;
Step 7: compute the distance L_1 between each point in a target superpoint and the center point of that superpoint, then compute the distance L_2 between the point and the center point of an adjacent superpoint; if L_2 < L_1, the point is assigned to the adjacent superpoint.
2. The superpoint segmentation method based on multispectral laser radar point cloud data according to claim 1, characterized in that Step 6 is specifically:
in Step 6, the similarity between two points is calculated according to the following formula:
[equation image not reproduced: the Step 6 similarity measure between points p and q]
where p and q are two points, a parameter balancing the importance of the normal vectors in the similarity measure is set to 1, n_p and n_q are the normal vectors of p and q, λ denotes the spectral vector of the multispectral point cloud, R_1 is a parameter constraining the spatial extent of a superpoint and is set to 5, and R_2 is a parameter constraining the spectral range of a superpoint and is set to 10.
3. The method according to claim 2, characterized in that Step 7 is specifically:
in Step 7, the measure L is computed according to the following formula:
[equation image not reproduced: the Step 7 measure L]
where λ_p and λ_q denote the spectral vectors of the multispectral LiDAR point cloud at points p and q, a parameter balancing the importance of the geometric distance between points in the measure is set to 1, and a parameter balancing the importance of the normal vectors in the measure is set to 1.
CN202211320284.3A 2022-10-26 2022-10-26 Multi-spectral laser radar point cloud data-based over-point segmentation method Active CN115375902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211320284.3A CN115375902B (en) 2022-10-26 2022-10-26 Multi-spectral laser radar point cloud data-based over-point segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211320284.3A CN115375902B (en) 2022-10-26 2022-10-26 Multi-spectral laser radar point cloud data-based over-point segmentation method

Publications (2)

Publication Number Publication Date
CN115375902A CN115375902A (en) 2022-11-22
CN115375902B true CN115375902B (en) 2023-03-24

Family

ID=84072986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211320284.3A Active CN115375902B (en) 2022-10-26 2022-10-26 Multi-spectral laser radar point cloud data-based over-point segmentation method

Country Status (1)

Country Link
CN (1) CN115375902B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882593A (en) * 2020-07-23 2020-11-03 首都师范大学 Point cloud registration model and method combining attention mechanism and three-dimensional graph convolution network
CN113311449A (en) * 2021-05-21 2021-08-27 中国科学院空天信息创新研究院 Hyperspectral laser radar vegetation blade incident angle effect correction method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10520482B2 (en) * 2012-06-01 2019-12-31 Agerpoint, Inc. Systems and methods for monitoring agricultural products
US9230168B2 (en) * 2013-07-31 2016-01-05 Digitalglobe, Inc. Automatic generation of built-up layers from high resolution satellite image data
CN107085710B (en) * 2017-04-26 2020-06-02 长江空间信息技术工程有限公司(武汉) Single-tree automatic extraction method based on multispectral LiDAR data
CN108241871A (en) * 2017-12-27 2018-07-03 华北水利水电大学 Laser point cloud and visual fusion data classification method based on multiple features
CN112183247B (en) * 2020-09-14 2023-08-08 广东工业大学 Laser point cloud data classification method based on multispectral image
CN112130169B (en) * 2020-09-23 2022-09-16 广东工业大学 Point cloud level fusion method for laser radar data and hyperspectral image
WO2022174263A1 (en) * 2021-02-12 2022-08-18 Magic Leap, Inc. Lidar simultaneous localization and mapping
CN113989685A (en) * 2021-10-25 2022-01-28 辽宁工程技术大学 Method for land cover classification of airborne multispectral LiDAR data based on super voxel
CN115061150A (en) * 2022-04-14 2022-09-16 昆明理工大学 Building extraction method based on laser radar point cloud data pseudo-waveform feature processing
CN114972628A (en) * 2022-04-14 2022-08-30 昆明理工大学 Building three-dimensional extraction method based on multispectral laser radar point cloud data
CN114969030A (en) * 2022-04-29 2022-08-30 上海精测半导体技术有限公司 Method for creating index tree of spectrum library and method for searching spectrum library


Also Published As

Publication number Publication date
CN115375902A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
EP3955158A1 (en) Object detection method and apparatus, electronic device, and storage medium
CN108846352B (en) Vegetation classification and identification method
Jorstad et al. NeuroMorph: a toolset for the morphometric analysis and visualization of 3D models derived from electron microscopy image stacks
Ji et al. A novel simplification method for 3D geometric point cloud based on the importance of point
CN104616349B (en) Scattered point cloud data based on local surface changed factor simplifies processing method
CN112348867B (en) Urban high-precision three-dimensional terrain construction method and system based on LiDAR point cloud data
CN113916130B (en) Building position measuring method based on least square method
RU2674326C2 (en) Method of formation of neural network architecture for classification of object taken in cloud of points, method of its application for teaching neural network and searching semantically alike clouds of points
Kowalczuk et al. Classification of objects in the LIDAR point clouds using Deep Neural Networks based on the PointNet model
Semerjian A new variational framework for multiview surface reconstruction
Bayu et al. Semantic segmentation of lidar point cloud in rural area
CN115375902B (en) Multi-spectral laser radar point cloud data-based over-point segmentation method
CN111782739A (en) Map updating method and device
CN112634447B (en) Outcrop stratum layering method, device, equipment and storage medium
CN110222742B (en) Point cloud segmentation method, device, storage medium and equipment based on layered multi-echo
CN116994012A (en) Map spot matching system and method based on ecological restoration
CN110954133A (en) Method for calibrating position sensor of nuclear distance fuzzy clustering orthogonal spectral imaging
CN113850304A (en) High-accuracy point cloud data classification segmentation improvement algorithm
CN104463924A (en) Digital elevation terrain model generation method based on scattered point elevation sample data
CN115131571A (en) Building local feature point identification method based on six fields of point cloud pretreatment
Akar Improving the accuracy of random forest‐based land‐use classification using fused images and digital surface models produced via different interpolation methods
CN114511571A (en) Point cloud data semantic segmentation method and system and related components
CN105787493A (en) BIM-based method for intelligent extraction of setting-out feature points
He et al. Minimum spanning tree based stereo matching using image edge and brightness information
Sun et al. Research on target classification method for dense matching point cloud based on improved random forest algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240409

Address after: 510000, 4th Floor, No.4 and No.6 Yadun Street, Dongxiao Road, Haizhu District, Guangzhou City, Guangdong Province (Location: Room 426)

Patentee after: Guangdong Zhengtu Information Technology Co.,Ltd.

Country or region after: China

Address before: 650093 No. 253, Xuefu Road, Wuhua District, Yunnan, Kunming

Patentee before: Kunming University of Science and Technology

Country or region before: China