CN114004740A - Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud - Google Patents

Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud Download PDF

Info

Publication number
CN114004740A
CN114004740A (application CN202111647124.5A)
Authority
CN
China
Prior art keywords
point cloud
wall line
point
data
building
Prior art date
Legal status
Granted
Application number
CN202111647124.5A
Other languages
Chinese (zh)
Other versions
CN114004740B (en)
Inventor
高文飞
王磊
王辉
Current Assignee
Shandong Rongling Technology Group Co ltd
Original Assignee
Shandong Rongling Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Rongling Technology Group Co ltd filed Critical Shandong Rongling Technology Group Co ltd
Priority to CN202111647124.5A
Publication of CN114004740A
Application granted
Publication of CN114004740B
Status: Active

Classifications

    • G06T3/067
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of building wall line extraction, and in particular discloses a building wall line extraction method based on unmanned aerial vehicle (UAV) laser radar point clouds. The collected mass point cloud data are preprocessed, and terrain filtering, surface feature extraction, and related steps then yield a data source suited to building feature extraction. The method automatically extracts and draws the wall lines of buildings of different shapes, and simplifies and regularizes the rough wall lines extracted initially to finally obtain the true building outline. The data processing pipeline is simple and orderly, the extraction accuracy is high, and the results are precise; the method solves the problem of inaccurate building wall line extraction in the prior art and is well suited to wide popularization and application.

Description

Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud
Technical Field
The invention relates to the technical field of building wall line extraction, in particular to a building wall line extraction method based on unmanned aerial vehicle laser radar point cloud.
Background
In recent years, with the advent of concepts such as digital cities, digital earth, and virtual reality, there is a strong demand for the spatial three-dimensional information of ground features. Whether for digital cities, cultural relic protection, or urban street reconstruction, building model reconstruction is indispensable. It generally requires basic data such as the actual dimensions of a building and detailed information about its characteristic components, i.e. the building's facade and side elevation drawings. Limited by the available technology, traditional methods for acquiring three-dimensional information of a building scene rely mainly on total stations and close-range photogrammetry. Total station survey is a scattered-point measurement that is time-consuming, labor-intensive, and inefficient; close-range photogrammetry cannot directly produce a three-dimensional model of a ground object, and it struggles to extract homonymous (corresponding) points in areas with little gray-level variation, which degrades the geometric accuracy of building modeling and the completeness of the three-dimensional information.
Unmanned aerial vehicles are safe, fast, and low-cost, and are not restricted by shooting angle; combined with laser radar, these advantages can be exploited to the full. LiDAR-based approaches are widely recognized as the most efficient way to obtain digital surface models, and there is precedent for reconstructing local urban areas by radar means. However, an airborne laser radar carried on a UAV platform is easily disturbed by airflow jitter, dust particles in the air, the surface reflectivity of the measured object, and swaying trees in the environment, so the raw data contain a large amount of high-frequency noise alongside the sharp detail of urban buildings; denoising before use is therefore particularly important.
Many scholars have studied building wall line extraction from point cloud data. Some densify the laser point cloud by quadratic interpolation to generate DSM data and then extract building wall lines with image processing methods. Researchers at Wuhan University have extracted building wall lines by combining laser scanning data with aerial image data; this approach can recover wall line information from scattered, unorganized point clouds, but performs poorly on multi-storey buildings. Meng Feng et al. resample the point cloud into images of different scales, segment the images, and extract building wall lines with the Canny algorithm.
Analysis of the existing algorithms shows that building wall lines can indeed be extracted from massive discrete data, but point cloud data are noisy, voluminous, highly redundant, and unordered, all of which degrade wall line extraction precision. Extracting wall lines purely with digital image processing methods can also lose part of the building information.
Disclosure of Invention
To remedy the defects of the prior art, the invention provides a building wall line extraction method based on UAV laser radar point clouds that achieves high accuracy with simple operation steps.
The invention is realized by the following technical scheme:
a building wall line extraction method based on unmanned aerial vehicle laser radar point cloud comprises the following steps:
(1) collecting point cloud data;
(2) preprocessing point cloud data, obtaining three-dimensional point cloud data by resolving and splicing the data, and then performing normalization processing;
(3) directly projecting the point cloud to a two-dimensional plane to generate a point cloud characteristic diagram;
(4) extracting depth features of the point cloud feature map, and performing feature fusion and classification to obtain point cloud data of all building walls;
(5) extracting a rough wall line with a bilateral-filtering Canny algorithm, and then performing line detection and fitting with the Hough transform to extract the wall line accurately.
The collected mass point cloud data are preprocessed, and terrain filtering, surface feature extraction, and related steps then yield a data source suited to building feature extraction; the method automatically extracts and draws the wall lines of buildings of different shapes, and simplifies and regularizes the rough wall lines extracted initially to finally obtain the true building outline.
The more preferable technical scheme of the invention is as follows:
in the step (1), a UAV carries the laser radar; flight-strip registration and coordinate system conversion are performed with the UAV's GPS and inertial navigation system, and the multi-frame point cloud data are spliced together.
Further preferably, the unmanned aerial vehicle is an M100 quad-rotor UAV and the laser radar is a Thson 16-line laser radar; the spliced multi-frame point cloud data are stored in the convertible standard LiDAR format (.las) and can be converted into formats such as .pcd and .txt as required.
In the step (2), resolving and splicing the data by using Cyclone software to obtain three-dimensional point cloud data of the whole area;
in LiDAR360 software, ground points are extracted from the survey-area point cloud with a progressive TIN densification filtering algorithm; the whole point cloud is divided into ground points and non-ground points, a Digital Elevation Model (DEM) is generated in the software by interpolating the ground point data, and the non-ground points are normalized against the DEM to obtain their normalized elevation values, eliminating the influence of terrain relief on wall line extraction;
the z component of the point cloud normal vector, the intensity, and the normalized elevation are selected as the red, green, and blue channels of the point cloud feature map.
Further preferably, the z component, intensity, and normalized elevation (denoted f_zv, f_in, and f_ht respectively) are normalized as follows:

f' = (f - f_min) / (f_max - f_min)

where f denotes the feature value of a single point, f ∈ {f_zv, f_in, f_ht}, f' is the normalized feature value, and f_max and f_min are respectively the maximum and minimum of the corresponding feature over the whole point cloud.
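The min-max normalization above can be sketched in a few lines (a minimal NumPy illustration, not code from the patent; the function name is ours):

```python
import numpy as np

def normalize_feature(f):
    """Min-max normalize one feature channel: f' = (f - f_min) / (f_max - f_min).

    Applied independently to each of the three channels (normal-vector z
    component, intensity, normalized elevation) before the point cloud
    feature map is built.
    """
    f = np.asarray(f, dtype=float)
    f_min, f_max = f.min(), f.max()
    return (f - f_min) / (f_max - f_min)
```

Each channel is rescaled to [0, 1], so the three channels become comparable despite their different orders of magnitude.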
In the step (3), the point cloud is directly projected to a two-dimensional plane, the neighborhood range of the points of the feature map to be generated is divided into grids, the plane coordinate of each grid point is calculated according to the plane coordinate of the point, the elevations of all the grid points are set as the elevations of the point, and then the feature value of the point closest to each grid point in the neighborhood range is used as the feature value of each grid point, so that the point cloud feature map is obtained.
In the step (4), depth features are extracted with a 50-layer residual network (ResNet-50) pre-trained on ImageNet; setting the grid to different sizes, such as 64 × 64, 128 × 128, and 256 × 256, yields feature maps at 3 different scales, which are then fused;
and classifying by using the neural network models of the two fully-connected layers to finally obtain point cloud data containing the building wall.
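The two-fully-connected-layer classifier can be sketched as a plain forward pass (illustrative NumPy only; the layer sizes and random weights are placeholders, not the trained network described here):

```python
import numpy as np

def classify(features, w1, b1, w2, b2):
    # Fused multi-scale feature vector -> FC layer 1 (ReLU) -> FC layer 2 -> softmax
    h = np.maximum(features @ w1 + b1, 0.0)       # first fully connected layer
    logits = h @ w2 + b2                          # second fully connected layer
    e = np.exp(logits - logits.max())             # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
feats = rng.normal(size=8)                        # stand-in for a fused feature vector
w1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)     # 2 classes: wall / non-wall
probs = classify(feats, w1, b1, w2, b2)
```

In practice the weights would be trained with the cross-entropy loss given below, and the feature vector would come from the fused ResNet-50 feature maps.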
Further preferably, the loss over the point cloud data is the cross-entropy loss:

L = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_ik · log(p_ik)

where N and K are respectively the total number of training samples and the number of classes; y_ik ∈ {0, 1} takes the value 1 when sample i belongs to class k and 0 otherwise; and p_ik is the predicted probability that sample i belongs to class k.
In the step (5), a gridded DSM depth image is generated from the point cloud data by point-by-point interpolation and denoised with bilateral filtering, specifically:
① The spatial domain and the value (range) domain are combined so that geometric position and gray level are considered simultaneously, giving the filter

g(i, j) = Σ_{(x,y)} f(x, y) · w(i, j, x, y) / Σ_{(x,y)} w(i, j, x, y)

w(i, j, x, y) = exp(-((i - x)² + (j - y)²) / (2σ_d²)) · exp(-(f(i, j) - f(x, y))² / (2σ_r²))

where f(x, y) is the input image, g(i, j) is the output image, and the weighting coefficients w(i, j, x, y) depend on both the spatial domain and the value domain; the two variances σ_d and σ_r can simply be set equal.
② The gradient magnitude and direction are calculated:

G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²)

θ(x, y) = arctan(G_y(x, y) / G_x(x, y))

where G(x, y) is the amplitude, i.e. the edge strength of the image, and θ(x, y) is the azimuth, i.e. the gradient direction.
③ Background data and target data are distinguished, and the threshold is determined with the maximum between-class variance method to obtain a wall line.
The image produced by the bilateral-filtering Canny algorithm is binarized to generate a binary image, and straight-line features are then obtained with the Hough transform.
Using the point cloud data of buildings in the image, a point cloud distribution matrix is generated by projection and blocking. Graying then yields a grayscale image, and binarization yields a point cloud distribution binary image. Straight lines are detected with the Hough transform algorithm, the equations of all lines are obtained, the line intersections are solved, and the intersections on the same line are connected in order to form straight-line segments that serve as candidate wall lines. Each candidate wall line is then superimposed on the binary image: if no point cloud lies between its two endpoints, every pixel coordinate along it has the value 0 in the binary image. Each candidate line is therefore bisected step by step, the pixel coordinates of each segment are looked up in the binary image, and if all values are 0 the candidate is rejected; otherwise it is retained as a final wall line.
The invention also provides a system adopting the building wall line extraction method, which comprises the following steps:
an acquisition module;
a preprocessing module;
a classification module;
an extraction module;
and a packaging module.
The invention also provides an electronic device carrying the system, comprising a memory, a processor, and computer instructions stored in the memory and run on the processor; when the computer instructions are run by the processor, the building wall line extraction method is carried out.
The invention also provides a computer readable storage medium for storing computer instructions, and the computer instructions are executed by a processor to complete the building wall line extraction method.
Compared with the prior art, the invention has the beneficial effects that:
According to the method, a UAV collects massive point cloud data; the data are preprocessed to generate a point cloud feature map, multi-scale features are extracted and fused to obtain the point cloud data containing buildings, and wall lines are then extracted from those data, first with a bilateral-filtering Canny algorithm and then with Hough transform detection and fitting, finally yielding high-precision wall lines.
The invention has the advantages of simple and orderly data processing mode, high data extraction accuracy and accurate obtained result, solves the problem of inaccurate extraction of the building wall line in the prior art, and is suitable for wide popularization and application.
Drawings
The invention will be further described with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of the operation of the present invention;
FIG. 2 is a schematic view of a DSM depth image structure of the present invention;
FIG. 3 is a schematic diagram of a bilateral filtering image according to the present invention;
FIG. 4 is a schematic diagram of the Hough transform results of the present invention;
FIG. 5 is a schematic diagram of the invention after the results are superimposed;
FIG. 6 is a graph showing the results of the fusion of features of the present invention;
FIG. 7 is a diagram illustrating the results after classification according to the present invention.
Detailed Description
The invention is further described with reference to the following figures and examples. The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Example (b): the embodiment provides a building wall line extraction method based on unmanned aerial vehicle laser radar point cloud;
as shown in fig. 1, the method for extracting the building wall line based on the laser radar point cloud of the unmanned aerial vehicle includes:
s1: collecting point cloud data;
s2: preprocessing point cloud data; the method comprises the following steps:
resolving and splicing data by using Cyclone software to obtain three-dimensional point cloud data of the whole area;
secondly, in LiDAR360 software, ground points are extracted from the survey-area point cloud with a progressive TIN densification filtering algorithm, and the whole point cloud is divided into ground points and non-ground points; a digital terrain model (Digital Terrain Model, DTM) is then obtained by interpolating the ground point data, and the non-ground points are normalized against the DTM to obtain their normalized elevation values.
The z component of the point cloud normal vector, the intensity, and the normalized elevation (denoted f_zv, f_in, and f_ht respectively) are selected as the red, green, and blue channels of the point cloud feature map. Because f_zv, f_in, and f_ht differ in order of magnitude, they must be normalized before the point cloud feature map is generated. The z component, intensity, and normalized elevation are each normalized as follows:

f' = (f - f_min) / (f_max - f_min)

where f denotes the feature value of a single point, f ∈ {f_zv, f_in, f_ht}, f' is the normalized feature value, and f_max and f_min are respectively the maximum and minimum of the corresponding feature over the whole point cloud.
S3: generating a point cloud characteristic map; directly projecting the point cloud to a two-dimensional plane, dividing a neighborhood range of points of a feature map to be generated into n x n grids, calculating a plane coordinate of each grid point according to the plane coordinate of the points, setting the elevations of all the grid points as the elevations of the points, and taking a feature value of a point closest to each grid point in the neighborhood range as a feature value of each grid point so as to obtain a point cloud feature map;
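This nearest-point rasterization can be sketched as follows (a NumPy illustration under simplifying assumptions: one feature channel, a brute-force nearest-neighbour search instead of a spatial index):

```python
import numpy as np

def point_cloud_to_feature_map(xy, feat, n=64):
    """Project points onto an n x n grid over their bounding box; each grid
    point takes the feature value of the nearest point in the cloud."""
    xy = np.asarray(xy, dtype=float)
    feat = np.asarray(feat, dtype=float)
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    xs = np.linspace(mins[0], maxs[0], n)
    ys = np.linspace(mins[1], maxs[1], n)
    gx, gy = np.meshgrid(xs, ys)
    centers = np.stack([gx.ravel(), gy.ravel()], axis=1)
    # brute-force nearest neighbour; a KD-tree would be used at scale
    d = np.linalg.norm(centers[:, None, :] - xy[None, :, :], axis=2)
    return feat[d.argmin(axis=1)].reshape(n, n)
```

Running this once per channel (normalized z component, intensity, normalized elevation) gives the three channels of the feature map.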
s4: extracting depth features by using a 50-layer residual network (ResNet 50) pre-trained on ImageNet, and setting the grids to different values, such as 64 × 64, 128 × 128,256 × 256; obtaining 3 feature maps with different scales, and then fusing the features. Classifying by using the neural network models of two full-connection layers to finally obtain point cloud data containing the building wall, wherein the loss is cross entropy loss, and the formula is as follows:
L = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_ik · log(p_ik)

where N and K are respectively the total number of training samples and the number of classes; y_ik ∈ {0, 1} takes the value 1 when sample i belongs to class k and 0 otherwise; and p_ik is the predicted probability that sample i belongs to class k.
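The loss can be computed directly from one-hot labels and predicted probabilities (a minimal NumPy sketch, assuming p already contains valid softmax probabilities):

```python
import numpy as np

def cross_entropy_loss(p, y):
    """L = -(1/N) * sum_i sum_k y_ik * log(p_ik).

    p: (N, K) predicted class probabilities; y: (N, K) one-hot labels."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(-(y * np.log(p)).sum() / p.shape[0])
```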
S5: the method comprises the steps of firstly extracting a wall line by using a bilateral filtering Canny algorithm, and then carrying out linear detection and fitting by using Hough change, thereby accurately extracting the wall line. The method comprises the following specific steps:
(1) and generating a gridded DSM depth image for the point cloud data by a point-by-point interpolation method, denoising by utilizing bilateral filtering, and outputting a weighted combination of values of pixels depending on values of neighborhood pixels.
Firstly, combining a space domain and a value domain, simultaneously considering the geometric position and the gray scale, and finally designing a filter.
g(i, j) = Σ_{(x,y)} f(x, y) · w(i, j, x, y) / Σ_{(x,y)} w(i, j, x, y)

w(i, j, x, y) = exp(-((i - x)² + (j - y)²) / (2σ_d²)) · exp(-(f(i, j) - f(x, y))² / (2σ_r²))

where f(x, y) is the input image, g(i, j) is the output image, and the weighting coefficients w(i, j, x, y) depend on both the spatial domain and the value domain; the two variances σ_d and σ_r can simply be set equal.
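A direct, unoptimized rendering of this filter (NumPy; the window radius and the two sigmas are arbitrary illustrative choices):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_d=1.0, sigma_r=1.0):
    """Bilateral filter: each output pixel is a weighted average of its
    neighborhood, with weights combining spatial and gray-level closeness."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    pad = np.pad(np.asarray(img, dtype=float), radius, mode='edge')
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_d**2))   # domain weight
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: penalize gray-level differences from the center
            w = spatial * np.exp(-(patch - pad[i + radius, j + radius])**2
                                 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

The double loop keeps the correspondence with the formula obvious; a production version would vectorize it or call an optimized library routine.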
② The gradient magnitude and direction are calculated:

G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²)

θ(x, y) = arctan(G_y(x, y) / G_x(x, y))

where G(x, y) is the amplitude, i.e. the edge strength of the image, and θ(x, y) is the azimuth, i.e. the gradient direction.
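These two quantities can be sketched with simple central differences standing in for the derivative operator, which the text does not fix (NumPy illustration):

```python
import numpy as np

def gradient_magnitude_direction(img):
    """Return gradient magnitude G = sqrt(Gx^2 + Gy^2) and
    direction theta = arctan2(Gy, Gx) of a grayscale image."""
    img = np.asarray(img, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central difference in x
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # central difference in y
    G = np.hypot(gx, gy)          # amplitude, i.e. edge strength
    theta = np.arctan2(gy, gx)    # azimuth, i.e. gradient direction
    return G, theta
```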
③ The threshold is determined with the maximum between-class variance method:
background data and target data are distinguished, and the threshold that maximizes the between-class variance is selected. This yields a wall line, but the line may still be broken or discontinuous.
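The maximum between-class variance (Otsu) search can be sketched as follows (NumPy; 8-bit gray levels assumed):

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the threshold maximizing the between-class variance
    w0 * w1 * (m0 - m1)^2 between background and target pixels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = float((np.arange(256) * hist).sum())
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += int(hist[t])                 # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0                    # target pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```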
(2) Wall line refinement by Hough transform
The image produced by the bilateral-filtering Canny algorithm is binarized to generate a binary image, and straight-line features are then obtained with the Hough transform. Using the point cloud data of buildings in the image, the coordinates of the building points are obtained and a point cloud distribution matrix is generated; graying then yields a grayscale image, and binarization yields a point cloud distribution binary image. Straight lines are detected with the Hough transform algorithm, the equations of all lines are obtained, the line intersections are solved, and the intersections on the same line are connected in order to form straight-line segments that serve as candidate wall lines. Each candidate wall line is superimposed on the binary image; if no point cloud lies between its two endpoints, every pixel coordinate along it has the value 0 in the binary image. Each candidate line is therefore bisected step by step, the pixel coordinates of each segment are looked up in the binary image, and if all values are 0 the candidate is rejected; otherwise it is retained as the final wall line.
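The occupancy test that accepts or rejects a candidate wall line can be sketched by sampling pixels along the segment, which visits the same coordinates as the step-by-step bisection described above (NumPy illustration; the sample count is an arbitrary choice):

```python
import numpy as np

def keep_candidate_wall_line(binary, p0, p1, samples=64):
    """Return True if any point-cloud pixel lies on the candidate segment
    p0 -> p1 in the binary point cloud distribution image, else reject it."""
    (x0, y0), (x1, y1) = p0, p1
    t = np.linspace(0.0, 1.0, samples)
    xs = np.round(x0 + t * (x1 - x0)).astype(int)
    ys = np.round(y0 + t * (y1 - y0)).astype(int)
    return bool(binary[ys, xs].any())
```

Candidate segments produced by the Hough transform whose path crosses no occupied pixel are spurious intersections and are dropped; the rest are kept as wall lines.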
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A building wall line extraction method based on unmanned aerial vehicle laser radar point cloud, characterized by comprising the following steps: (1) collecting point cloud data; (2) preprocessing the point cloud data, obtaining three-dimensional point cloud data by resolving and splicing the data, and then performing normalization processing; (3) directly projecting the point cloud onto a two-dimensional plane to generate a point cloud feature map; (4) extracting depth features from the point cloud feature map, and performing feature fusion and classification to obtain the point cloud data of all building walls; (5) extracting a rough wall line with a bilateral-filtering Canny algorithm, and then performing line detection and fitting with the Hough transform to extract the wall line accurately.
2. The building wall line extraction method of claim 1, wherein: in the step (1), an unmanned aerial vehicle is used for carrying a laser radar, the unmanned aerial vehicle GPS and the inertial navigation system are used for carrying out navigation band registration and coordinate system conversion, and multi-frame point cloud data are spliced.
3. The building wall line extraction method of claim 1, wherein: in the step (2), the data are resolved and spliced with Cyclone software to obtain three-dimensional point cloud data of the whole area; in LiDAR360 software, ground points are extracted from the survey-area point cloud with a progressive TIN densification filtering algorithm, the whole point cloud is divided into ground points and non-ground points, a digital elevation model is generated in the software by interpolating the ground point data, and the non-ground points are normalized against the DEM to obtain their normalized elevation values; the z component of the point cloud normal vector, the intensity, and the normalized elevation are selected as the red, green, and blue channels of the point cloud feature map.
4. The building wall line extraction method of claim 1, wherein: in the step (3), the point cloud is directly projected to a two-dimensional plane, the neighborhood range of the points of the feature map to be generated is divided into grids, the plane coordinate of each grid point is calculated according to the plane coordinate of the point, the elevations of all the grid points are set as the elevations of the point, and then the feature value of the point closest to each grid point in the neighborhood range is used as the feature value of each grid point, so that the point cloud feature map is obtained.
5. The building wall line extraction method of claim 1, wherein: in the step (4), extracting depth features by adopting 50 layers of residual error networks pre-trained on ImageNet, setting the grids into different values to obtain feature maps of 3 different scales, and then fusing the features; and classifying by using the neural network models of the two fully-connected layers to finally obtain point cloud data containing the building wall.
6. The building wall line extraction method of claim 1, wherein: in the step (5), a gridded DSM depth image is generated from the point cloud data by point-by-point interpolation and denoised with bilateral filtering, specifically: ① the spatial domain and the value domain are combined so that geometric position and gray level are considered simultaneously, giving the filter

g(i, j) = Σ_{(x,y)} f(x, y) · w(i, j, x, y) / Σ_{(x,y)} w(i, j, x, y)

w(i, j, x, y) = exp(-((i - x)² + (j - y)²) / (2σ_d²)) · exp(-(f(i, j) - f(x, y))² / (2σ_r²))

where f(x, y) is the input image, g(i, j) is the output image, and the weighting coefficients w(i, j, x, y) depend on both the spatial domain and the value domain; the two variances σ_d and σ_r can simply be set equal; ② the gradient magnitude and direction are calculated:

G(x, y) = sqrt(G_x(x, y)² + G_y(x, y)²)

θ(x, y) = arctan(G_y(x, y) / G_x(x, y))

where G(x, y) is the amplitude, i.e. the edge strength of the image, and θ(x, y) is the azimuth, i.e. the gradient direction; ③ background data and target data are distinguished, and the threshold is determined with the maximum between-class variance method to obtain a wall line.
7. The building wall line extraction method of claim 1, wherein: in the step (5), the image produced by the bilateral-filtering Canny algorithm is binarized to generate a binary image, and straight-line features are then obtained with the Hough transform; using the point cloud data of buildings in the image, the coordinates of the building points are obtained and a point cloud distribution matrix is generated; graying then yields a grayscale image, and binarization yields a point cloud distribution binary image; straight lines are detected with the Hough transform algorithm, the equations of all lines are obtained, the line intersections are solved, and the intersections on the same line are connected in order to form straight-line segments serving as candidate wall lines; each candidate wall line is superimposed on the binary image, and if no point cloud lies between its two endpoints, every pixel coordinate along it has the value 0 in the binary image; each candidate line is bisected step by step, the pixel coordinates of each segment are looked up in the binary image, and if all values are 0 the candidate is rejected, otherwise it is retained as the final wall line.
8. The building wall line extraction method of claim 2, wherein: the unmanned aerial vehicle is an M100 quad-rotor UAV, the laser radar is a Thson 16-line laser radar, and the spliced multi-frame point cloud data adopt the convertible standard LiDAR storage format.
9. The building wall line extraction method of claim 3, wherein: the z component, intensity, and normalized elevation (denoted f_zv, f_in, and f_ht respectively) are normalized as follows:

f' = (f - f_min) / (f_max - f_min)

where f denotes the feature value of a single point, f ∈ {f_zv, f_in, f_ht}, f' is the normalized feature value, and f_max and f_min are respectively the maximum and minimum of the corresponding feature over the whole point cloud.
10. The building wall line extraction method of claim 5, wherein: the loss on the point cloud data is the cross-entropy loss, given as follows:

$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} y_{ik}\,\log(p_{ik})$$

where N and K are the total number of training samples and the number of classes respectively; y_{ik} ∈ {0,1} takes the value 1 when sample i belongs to class k and 0 otherwise; and p_{ik} denotes the predicted probability that sample i belongs to class k.
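The cross-entropy loss of claim 10 can be computed as in this minimal NumPy sketch (illustrative only; `cross_entropy` and the toy values below are my own, not from the patent):

```python
import numpy as np

def cross_entropy(y_onehot, p, eps=1e-12):
    """Mean cross-entropy over N samples and K classes:
    L = -(1/N) * sum_i sum_k y_ik * log(p_ik)."""
    p = np.clip(p, eps, 1.0)         # guard against log(0)
    return -np.mean(np.sum(y_onehot * np.log(p), axis=1))

# two samples, three classes; rows of y are one-hot ground-truth labels
y = np.array([[1, 0, 0],
              [0, 1, 0]], dtype=float)
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
loss = cross_entropy(y, p)
print(round(loss, 4))                # prints 0.2899
```

In a training loop the probabilities p would come from a softmax over the network's per-point class scores.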
CN202111647124.5A 2021-12-31 2021-12-31 Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud Active CN114004740B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111647124.5A CN114004740B (en) 2021-12-31 2021-12-31 Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud

Publications (2)

Publication Number Publication Date
CN114004740A true CN114004740A (en) 2022-02-01
CN114004740B CN114004740B (en) 2022-04-12

Family

ID=79932363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111647124.5A Active CN114004740B (en) 2021-12-31 2021-12-31 Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud

Country Status (1)

Country Link
CN (1) CN114004740B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103202A (en) * 2010-12-01 2011-06-22 武汉大学 Semi-supervised classification method for airborne laser radar data fusing images
US20190197311A1 (en) * 2017-12-26 2019-06-27 Harbin Institute Of Technology Evaluation Method of Solar Energy Utilization Potential in Urban High-density Areas Based on Low-altitude Photogrammetry
CN111583263A (en) * 2020-04-30 2020-08-25 北京工业大学 Point cloud segmentation method based on joint dynamic graph convolution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANIL KATIYAR et al.: "Automated Defect Detection in Physical Components using Machine Learning", IEEE Xplore *
Li Qian et al.: "Method for constructing a DSM based on airborne LiDAR point cloud and building outlines", Remote Sensing for Land & Resources *
Yuan Chenxin et al.: "Building outline extraction based on LiDAR point cloud data", Geotechnical Investigation & Surveying *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170397A (en) * 2022-02-09 2022-03-11 四川省安全科学技术研究院 Rapid mathematical modeling method for irregular discrete element simulation model based on real terrain
CN114170397B (en) * 2022-02-09 2022-04-29 四川省安全科学技术研究院 Rapid mathematical modeling method for irregular discrete element simulation model based on real terrain

Also Published As

Publication number Publication date
CN114004740B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
Chen et al. Automatic building information model reconstruction in high-density urban areas: Augmenting multi-source data with architectural knowledge
Lari et al. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data
San et al. Building extraction from high resolution satellite images using Hough transform
CN103703490A (en) Device for generating three-dimensional feature data, method for generating three-dimensional feature data, and recording medium on which program for generating three-dimensional feature data is recorded
Yan et al. Estimation of building height using a single street view image via deep neural networks
Karsli et al. Automatic building extraction from very high-resolution image and LiDAR data with SVM algorithm
CN116452852A (en) Automatic generation method of high-precision vector map
CN114004740B (en) Building wall line extraction method based on unmanned aerial vehicle laser radar point cloud
Jiangui et al. A method for main road extraction from airborne LiDAR data in urban area
Li et al. New methodologies for precise building boundary extraction from LiDAR data and high resolution image
Kim et al. Tree and building detection in dense urban environments using automated processing of IKONOS image and LiDAR data
Zhang et al. Building footprint and height information extraction from airborne LiDAR and aerial imagery
CN117115683A (en) Remote sensing extraction method and system for dangerous rock falling rocks under vegetation coverage
Gorbachev et al. Digital processing of aerospace images
Zhu A pipeline of 3D scene reconstruction from point clouds
Cömert et al. Object based building extraction and building period estimation from unmanned aerial vehicle data
CN115063698A (en) Automatic identification and information extraction method and system for slope surface deformation crack
Susetyo et al. Automatic building model extraction using LiDAR data
Atik et al. Comparison of automatic feature extraction methods for building roof planes by using airborne lidar data and high resolution satellite image
Xu Application of remote sensing image data scene generation method in smart city
CN114677435A (en) Point cloud panoramic fusion element extraction method and system
Polat LIDAR Derived 3d City Modelling
Büyüksalih Building Zone Regulation Compliance Using LIDAR Data: Real-Life Tests in İstanbul
Lafarge et al. Modeling urban landscapes from point clouds: a generic approach
Yu et al. A cue line based method for building modeling from LiDAR and satellite imagery

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant