CN111354083A - Progressive building extraction method based on original laser point cloud - Google Patents
Progressive building extraction method based on original laser point cloud
- Publication number: CN111354083A (application CN202010108337.XA)
- Authority
- CN
- China
- Prior art keywords
- point
- building
- point cloud
- plane
- contour line
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T3/06 — Geometric image transformations in the plane of the image; topological mapping of higher dimensional structures onto lower dimensional surfaces
- G06T17/05 — Three dimensional [3D] modelling; geographic models
- G06V10/757 — Image or video pattern matching; matching configurations of points or features
- G06T2207/10028 — Image acquisition modality; range image; depth image; 3D point clouds
- G06T2207/20061 — Special algorithmic details; transform domain processing; Hough transform
Abstract
The invention discloses a progressive building extraction method based on an original laser point cloud, comprising the following steps. Step 1: apply progressive mathematical morphology filtering to the original laser point cloud to separate out the non-ground points. Step 2: detect planes among the non-ground points with an improved 3D Hough transform algorithm, use the planes to extract the building point cloud, and project the building point cloud onto a two-dimensional plane. Step 3: determine the key points of the building point cloud with an adjacent-point azimuth threshold method. Step 4: fit the key points of the building point cloud with RANSAC to obtain the initial contour lines of the building. Step 5: regularize the initial contour lines with a regularization algorithm to obtain the regularized building contour lines. The invention provides a complete extraction pipeline from the original point cloud to the building contour lines: processing can start directly from the acquired laser radar point cloud data, and the type, range and precision of the extracted point cloud can be adjusted as required.
Description
Technical Field
The invention relates to the technical field of geographic information, in particular to a progressive building extraction method based on original laser point cloud.
Background
Buildings, which form the main body of a city, are important elements of urban three-dimensional modeling, and extracting building contour lines is a key step in that modeling; building contour information is also significant for supervising illegal construction, updating maps, appraising real estate, building smart cities, and the like. At present, building contour extraction and regularization mainly rely on high-resolution remote sensing images, airborne laser radar (LiDAR) data, or a combination of the two. Traditional image-based building contour extraction is usually disturbed by various ground objects, technically difficult, and not very efficient. Many scholars therefore extract building outlines by combining high-resolution images with laser radar point cloud data, for example by generating a digital elevation model (DEM) of the urban surface from high-resolution satellite images together with airborne laser scanning data, or by extracting buildings from fused LiDAR point clouds and orthoimages based on a graph-cut algorithm. Although combining the two data sources can compensate for the shortcomings of a single source and achieves a good extraction effect, the data volume and workload to be processed increase greatly, and existing research focuses on extracting building parcels rather than the specific contour lines. How to extract building contour information from LiDAR point cloud data automatically and with high precision is therefore an important research question.
Among LiDAR-only approaches, one line of work detects building edge points with a point-to-point distance threshold, divides the boundary into several segments with a tracking algorithm, and finally applies an improved angle-based line extraction algorithm in which the extracted straight lines are automatically adjusted to the main direction of the building; related work also combines multiband edge and local information in the building boundary extraction process.
Disclosure of Invention
The invention aims to provide a progressive building extraction method based on an original laser point cloud, solving the problem that existing building contour extraction methods are inefficient.
In order to solve the technical problems, the invention adopts the following technical scheme:
A progressive building extraction method based on an original laser point cloud comprises the following steps:
Step 1: apply progressive mathematical morphology filtering to the original laser point cloud to separate out the non-ground points;
Step 2: detect planes among the non-ground points with an improved 3D Hough transform algorithm, use the planes to extract the building point cloud, and project the building point cloud onto a two-dimensional plane;
Step 3: determine the key points of the building point cloud with an adjacent-point azimuth threshold method;
Step 4: fit the key points of the building point cloud with RANSAC to obtain the initial contour lines of the building;
Step 5: regularize the initial contour lines of the building with a regularization algorithm to obtain the regularized building contour lines.
Preferably, in step 1, progressive mathematical morphology filtering is applied to the original laser point cloud: the size of the filtering window is enlarged step by step, and while the opening operation is performed with the gradually increasing window, measurements of above-ground objects are removed size by size and replaced by the minimum elevation within the structuring element.
Preferably, the specific method for extracting the building point cloud with the improved 3D Hough transform algorithm in step 2 is as follows:
21) in the three-dimensional Cartesian coordinate system OXYZ, a plane is uniquely determined by the angle theta between the projection of its normal vector n onto the xoy plane and the positive x axis, the angle phi between n and the xoy plane, and the distance rho from the origin to the plane;
22) in the three-dimensional parameter space O-theta-phi-rho, each plane of the original coordinate system is represented as a point;
23) countless planes pass through a point P of the three-dimensional point cloud, but once theta and phi are fixed, rho is determined, so the triples (theta, phi, rho) of all planes through P form a curved surface in the parameter space O-theta-phi-rho; with N points there are N such surfaces, and a triple (theta, phi, rho) at an intersection peak of the surfaces is regarded as a plane existing in the laser radar point cloud; the occurring triples (theta_n, phi_n, rho_n) are then accumulated in an accumulator;
24) finally, in the accumulated result of the accumulator, a local peak point represents a plane existing in the point cloud, and a plane-area threshold decides whether it is a building plane or some other non-ground object;
25) projecting the three-dimensional point cloud in the OXYZ space into an OXY two-dimensional plane.
Preferably, the specific method for determining the key points of the building point cloud by using the adjacent point azimuth threshold method in step 3 is as follows:
31) connect the adjacent first and second edge points, calculate the azimuth of the segment, and preset a perpendicular-distance threshold for adjacent points, from which the allowed variation range of the azimuth is determined;
32) if the azimuth of the line connecting the first and third points is not within the variation range, the second point is regarded as a key point; connect the second and third points and repeat the procedure to judge whether the third point is a key point;
33) if the azimuth of the line connecting the first and third points is within the variation range, the second point is not a key point; connect the first and third points and repeat the procedure to judge whether the third point is a key point;
34) repeat this cycle until all edge points have been judged.
Preferably, the specific method for obtaining the initial contour line of the building by using RANSAC fitting in step 4 is as follows:
41) determining the number of contour-line segments and the edge points of each segment according to the obtained key points;
42) randomly drawing 4 non-collinear samples from the point set of one segment of edge points, computing the transformation matrix H, and recording it as model M;
43) computing the errors between all data of the edge-point set and the model M, and adding points whose error is smaller than a threshold to the inlier set I;
44) if the current inlier set I has more elements than the best inlier set I_best, updating I_best to I and updating the iteration count k; if the number of iterations exceeds k, exiting; otherwise increasing the iteration count by 1 and repeating the above steps.
Preferably, the specific method for regularizing the initial contour line of the building by using a regularization algorithm in the step 5 is as follows:
51) firstly, determining the longest building contour line, taking the longest building contour line as the positive starting direction, and numbering the longest building contour line as 1;
52) numbering the remaining contour lines clockwise, wherein the odd numbered contour lines are drawn in a positive direction, and the even numbered contour lines are drawn in a negative direction;
53) calculating the positive and negative length-weighted average directions D and D⁻ respectively;
54) making the positive and negative weighted average directions mutually perpendicular;
55) the fitted contours are rotated to the correct orientation around their midpoints, respectively.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides a complete set of complete extraction process from the original point cloud to the building contour line, which can directly start processing from the acquired laser radar point cloud data and adjust the type, range and precision of the extracted point cloud according to requirements; by adopting various improved and innovative algorithms, such as improved 3D-Hough conversion (3D-Hough Transform) and a key point extraction algorithm, the efficiency of the extraction process and the precision of the contour line of the building are improved; the extraction effect on the regularized buildings with vertical adjacent edges is better.
Drawings
Fig. 1 is a flow chart of automatic extraction of a building contour based on original point cloud data in the present invention.
Fig. 2 is a diagram illustrating the effect of separating ground points and non-ground points in the data range according to the embodiment of the present invention.
FIG. 3 is a diagram illustrating the effect of extracting the point cloud of the building within the data range according to the embodiment of the present invention.
FIG. 4 is a schematic diagram of the principle of the keypoint extraction algorithm of the present invention.
FIG. 5 is a comparison graph of the results of the embodiment of the present invention and the remote sensing image profile.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In this embodiment, a built-up county area in Yunnan Province is taken as the research object; an unmanned aerial vehicle carrying a LiDAR point cloud acquisition system (an M600 Pro platform with a Riegl scanner) surveys the area to obtain the original point cloud data of the ground surface of the target region. The proposed progressive building extraction method based on the original laser point cloud comprises the following steps:
step 1, separating non-ground points by using progressive mathematical morphology filtering on original laser point clouds;
step 2, detecting a plane in a non-ground point by adopting improved 3D-Hough conversion, extracting a building point cloud according to the plane, and projecting the building point cloud to a two-dimensional plane;
step 3, determining key points of the building point cloud by adopting an adjacent point azimuth threshold method;
step 4, based on the key points, fitting by using an RANSAC algorithm to obtain an initial contour line of the building;
step 5, regularize the initial contour lines of the building with a regularization algorithm to obtain the regularized building contour lines.
The progressive morphological filtering described above removes objects of different sizes protruding from the ground, such as buildings, trees and utility poles, from the lidar data by performing opening operations with gradually increasing filtering windows on the laser point cloud data, thereby obtaining a relatively smooth terrain surface; that is, above-ground object measurements are removed size by size and replaced with the minimum elevation inside the structuring element as the filtering window grows. An opening operation, however, often creates a surface lying below the true terrain measurements, which can wrongly remove measurements at the tops of high terrain. Since building edges show discontinuous, abrupt elevation changes while terrain varies continuously, a further judgement can be made with an elevation-difference threshold:
ΔH_p = |H_p − H′_p| (1)
Building: ΔH_p > H_pt (2)
Ground: ΔH_p ≤ H_pt (3)
where H_p is the elevation of point p before filtering, H′_p is the elevation of point p after filtering, and H_pt is a preset elevation-difference threshold. The filtering separation effect for the area is shown in FIG. 2: the grey part is ground points and the remaining parts are non-ground points. The lowest building elevation in the area is used as the height-difference threshold to ensure that the building point cloud is fully preserved. The change of the filtering window can be expressed by the linear equation
w_k = 2kb + 1 (4)
where b is the initial window size and k is an increasing integer. Enlarging the window this way keeps each filtering window symmetric about its centre point and simplifies the opening operation.
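The opening operation with the window law w_k = 2kb + 1 of equation (4) and the elevation-difference test of equations (1)–(3) can be sketched as follows on a rasterized elevation grid; the function name, the grid representation and the parameter defaults are illustrative assumptions, not part of the patent:

```python
import numpy as np

def progressive_morphological_filter(z_grid, b=1, max_k=4, dh_threshold=2.0):
    """Flag ground cells on a rasterized elevation grid.

    Applies morphological openings (erosion then dilation) with windows
    w_k = 2*k*b + 1 and marks cells whose elevation drops by more than
    dh_threshold as non-ground (buildings, trees, poles, ...).
    """
    ground_mask = np.ones_like(z_grid, dtype=bool)
    surface = z_grid.copy()
    rows, cols = surface.shape
    for k in range(1, max_k + 1):
        w = 2 * k * b + 1            # equation (4): w_k = 2kb + 1
        half = w // 2
        # erosion: minimum elevation inside the structuring window
        padded = np.pad(surface, half, mode="edge")
        eroded = np.empty_like(surface)
        for i in range(rows):
            for j in range(cols):
                eroded[i, j] = padded[i:i + w, j:j + w].min()
        # dilation of the eroded surface completes the opening
        padded_e = np.pad(eroded, half, mode="edge")
        opened = np.empty_like(surface)
        for i in range(rows):
            for j in range(cols):
                opened[i, j] = padded_e[i:i + w, j:j + w].max()
        # equations (1)-(3): large elevation drop => non-ground object
        ground_mask &= (surface - opened) <= dh_threshold
        surface = opened
    return ground_mask
```

The brute-force loops stand in for an optimized grey-scale opening; the test below drops a 10 m "building" on flat terrain and checks that only its cells are flagged non-ground.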
In step 2, planes among the non-ground points are detected with the improved 3D Hough transform:
21) in the three-dimensional Cartesian coordinate system OXYZ, a plane is uniquely determined by the angle theta between the projection of its normal vector n onto the xoy plane and the positive x axis, the angle phi between n and the xoy plane, and the distance rho from the origin to the plane; the plane is expressed by the formula
rho = x·cos(phi)·cos(theta) + y·cos(phi)·sin(theta) + z·sin(phi)
22) in the three-dimensional parameter space O-theta-phi-rho, each plane of the original coordinate system is represented as a point (theta, phi, rho);
23) countless planes pass through a point P of the three-dimensional point cloud, but once theta and phi are fixed, rho is also determined, so the triples (theta, phi, rho) form a curved surface in the parameter space O-theta-phi-rho; with N points there are N such surfaces, and the triples (theta, phi, rho) at the intersection peaks of the surfaces are the planes present in the laser radar point cloud.
In actual operation, to increase the speed and efficiency of the algorithm, the variation ranges of theta and phi are divided discretely; with n_theta and n_phi denoting the numbers of segments into which they are divided, each occurring triple (theta, phi, rho) is accumulated in an accumulator, a process also called voting. The number of candidate planes per point is thereby limited to n_theta·n_phi, i.e. for every (theta, phi) bin each point finds the rho satisfying the plane formula and votes n_theta·n_phi times; with N points in total, N·n_theta·n_phi votes are cast. Finally, in the accumulated result of the accumulator, the local peak points represent the planes existing in the point cloud.
24) Finally, in the accumulated result of the accumulator, a local peak point represents a plane existing in the point cloud, and a plane-area threshold decides whether it is a building plane or some other non-ground object; in this embodiment the area threshold is the minimum roof area of 6 square metres. The final building point cloud extraction result is shown in FIG. 3, where the grey part is the building point cloud and the black part is the non-building ground point cloud;
25) projecting the three-dimensional point cloud in the OXYZ space into an OXY two-dimensional plane.
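The voting scheme of steps 21)–24) can be sketched as follows, with the plane parameterized as rho = x·cos(phi)·cos(theta) + y·cos(phi)·sin(theta) + z·sin(phi); the bin counts, the vote threshold and the brute-force loops are simplifying assumptions of this sketch, not values from the patent:

```python
import numpy as np

def hough_planes(points, n_theta=36, n_phi=18, n_rho=50, min_votes=30):
    """Detect planes in a point cloud by voting in a (theta, phi, rho) accumulator.

    theta: azimuth of the normal's projection onto the xoy plane;
    phi:   angle between the normal and the xoy plane;
    rho:   distance from the origin to the plane.
    Accumulator bins with at least min_votes votes are returned as planes.
    """
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
    # |rho| <= |x| + |y| + |z| bounds the rho axis of the accumulator
    rho_max = np.abs(points).sum(axis=1).max() + 1e-9
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for x, y, z in points:
        for i, th in enumerate(thetas):
            for j, ph in enumerate(phis):
                rho = (x * np.cos(ph) * np.cos(th)
                       + y * np.cos(ph) * np.sin(th)
                       + z * np.sin(ph))
                k = int((rho + rho_max) / (2 * rho_max) * (n_rho - 1))
                acc[i, j, k] += 1          # one vote per (theta, phi) bin
    peaks = np.argwhere(acc >= min_votes)
    return [(thetas[i], phis[j], (k / (n_rho - 1)) * 2 * rho_max - rho_max)
            for i, j, k in peaks]
```

For a horizontal roof patch at z = 5 the accumulator peaks at a near-vertical normal (|phi| ≈ π/2) with |rho| ≈ 5, up to the rho bin width.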
In step 3, determining key points of the building point cloud by adopting an adjacent point azimuth threshold method:
31) Referring to FIG. 4, let adjacent building edge points be P1, P2, P3, P4. Starting from P1, connect P1 and P2, where β12 is the azimuth of segment P1-P2, d is a preset perpendicular-distance constant for P1-P2 (d = 1 m in this embodiment), and Δβ12 is the azimuth tolerance derived from d;
32) let β = β13, βa = β12 − Δβ12, βb = β12 + Δβ12; if β ∉ (βa, βb), point P2 is a key point: connect P2-P3 to form β23, set β = β24, βa = β23 − Δβ23, βb = β23 + Δβ23, and repeat the above steps to judge whether point P3 is a key point;
33) if β ∈ (βa, βb), P2 is not a key point: connect P1-P3 to form β13, set β = β14, βa = β13 − Δβ13, βb = β13 + Δβ13, and repeat the above steps to judge whether P3 is a key point.
34) Repeat this loop until all edge points have been judged. With N edge points, the algorithm can be expressed as:
n = 1; m = 2; x = 1; y = 3; βa; βb;
Define()
{ βa = βnm − Δβnm; βb = βnm + Δβnm; return β = βxy; }
Function(β) {
If β ∉ (βa, βb) and m ≤ N + 1 { Pm is a key point; n = m; x = m; m = m + 1; y = y + 1; Function(Define()); }
Else If β ∈ (βa, βb) and m ≤ N + 1 { Pn+1 is not a key point; m = m + 1; y = y + 1; Function(Define()); }
}
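The neighbour-azimuth test of steps 31)–34) can be sketched as follows; deriving the azimuth tolerance Δβ from the preset perpendicular distance d via an arctangent, and the iterative (rather than recursive) form, are assumptions of this sketch:

```python
import math

def extract_keypoints(points, d=1.0):
    """Keep only the corner ('key') points of an ordered 2D edge-point chain.

    anchor..prev defines the current straight run; a following point whose
    azimuth from the anchor leaves the tolerance range marks prev as a corner.
    """
    if len(points) < 3:
        return list(points)
    keypoints = [points[0]]
    anchor = points[0]   # start of the current straight run
    prev = points[1]     # current key-point candidate
    for cur in points[2:]:
        dx, dy = prev[0] - anchor[0], prev[1] - anchor[1]
        base = math.atan2(dy, dx)            # azimuth beta of anchor->prev
        dist = math.hypot(dx, dy)
        delta = math.atan2(d, dist)          # tolerance from perpendicular distance d
        beta = math.atan2(cur[1] - anchor[1], cur[0] - anchor[0])
        # wrap the azimuth difference into (-pi, pi] before comparing
        diff = abs((beta - base + math.pi) % (2 * math.pi) - math.pi)
        if diff > delta:
            keypoints.append(prev)           # direction changed: prev is a corner
            anchor = prev
        prev = cur
    keypoints.append(prev)                   # last edge point closes the chain
    return keypoints
```

On an L-shaped chain of edge points, only the endpoints and the corner survive.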
In step 4, the contour lines are fitted with the RANSAC algorithm: determine the number of contour-line segments and the edge points of each segment from the obtained key points; randomly draw 4 non-collinear samples from the edge-point set of one segment, compute the transformation matrix H, and record it as model M; compute the errors between all data of the edge-point set and the model M, and add points whose error is below a threshold to the inlier set I; if the current inlier set I has more elements than the best inlier set I_best, update I_best to I and update the iteration count k; if the number of iterations exceeds k, exit; otherwise increase the iteration count by 1 and repeat the above steps; the model whose I_best is largest is the optimal model.
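A simplified RANSAC sketch of the contour-segment fitting described above; it uses a minimal 2-point line model in place of the patent's 4-sample transformation matrix H, so the function and parameter names are illustrative only:

```python
import math
import random

def ransac_line(points, threshold=0.1, max_iter=200, seed=0):
    """Fit one contour segment as a 2D line a*x + b*y + c = 0 with RANSAC.

    Repeatedly samples a minimal model, scores every edge point against it,
    and keeps the model with the largest inlier set (the I_best of the patent).
    """
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        a, b = y2 - y1, x1 - x2                      # line normal (a, b)
        norm = math.hypot(a, b)
        if norm == 0:                                # degenerate sample
            continue
        c = -(a * x1 + b * y1)
        # inliers: points within `threshold` of the candidate line
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
            best_model = (a / norm, b / norm, c / norm)
    return best_model, best_inliers
```

With ten points on y = 2x plus two outliers, the optimal model collects exactly the ten collinear points as inliers.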
In step 5, the contour line based on the length weighted direction algorithm is regularized:
51) firstly, determining the longest building contour line, taking the longest building contour line as the positive starting direction, and numbering the longest building contour line as 1;
52) numbering the remaining contour lines clockwise, wherein the odd numbered contour lines are drawn in a positive direction, and the even numbered contour lines are drawn in a negative direction;
53) the positive and negative weighted average directions D and D⁻ are calculated with the formula
D = (1/L)·Σ_i l_i·d_i
where D is the weighted average direction, l_i is the length of the i-th directional contour line, L is the sum of the lengths of the directional contour lines, and d_i is the direction of the i-th directional contour line (D⁻ is computed in the same way over the negative-direction contour lines);
54) the positive and negative weighted average directions are made mutually perpendicular, so that their angles with the X axis satisfy
α = σ1 + σ2 − 90 (8)
where σ1 and σ2 are the angles between the weighted positive and negative directions and the X axis in the plane coordinate system, α is the total correction value shared between the two directions, l and l⁻ are the total lengths of the positive- and negative-direction contour lines, and L is the length of all contour lines;
55) the fitted contour segments are each rotated about their midpoints to the corrected directions.
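The length-weighted direction idea of steps 51)–55) can be sketched as follows; this simplified variant snaps every segment to one of two perpendicular axes derived from a single length-weighted direction, rather than computing D and D⁻ separately and correcting by α as the patent does:

```python
import math

def regularize_contour(segments):
    """Snap contour segments to two perpendicular dominant directions.

    segments: list of ((x1, y1), (x2, y2)) in traversal order.  The longest
    segment fixes a provisional positive direction, the length-weighted mean
    deviation refines it, and each segment is rotated about its midpoint
    onto the nearest of the two perpendicular axes.
    """
    def length(s):
        return math.hypot(s[1][0] - s[0][0], s[1][1] - s[0][1])

    def direction(s):  # undirected angle in [0, pi)
        return math.atan2(s[1][1] - s[0][1], s[1][0] - s[0][0]) % math.pi

    base = direction(max(segments, key=length))
    num = den = 0.0
    for s in segments:
        # deviation of this segment from the nearest of the two axes
        dev = ((direction(s) - base + math.pi / 4) % (math.pi / 2)) - math.pi / 4
        num += length(s) * dev
        den += length(s)
    main = base + num / den              # refined length-weighted direction

    out = []
    for s in segments:
        dev = ((direction(s) - main + math.pi / 4) % (math.pi / 2)) - math.pi / 4
        target = direction(s) - dev      # nearest perpendicular axis
        mx = (s[0][0] + s[1][0]) / 2.0
        my = (s[0][1] + s[1][1]) / 2.0
        half = length(s) / 2.0
        dx, dy = math.cos(target), math.sin(target)
        # rotate the segment about its midpoint, preserving its length
        out.append(((mx - half * dx, my - half * dy),
                    (mx + half * dx, my + half * dy)))
    return out
```

For a slightly skewed rectangle, all output segments end up mutually parallel or perpendicular while keeping their midpoints.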
The regularized building contour result is shown in FIG. 5. To verify the validity and correctness of the method for extracting building contour lines from original point cloud data, the true contours of two buildings were extracted from high-definition satellite imagery of the region, and the error of the result was analysed quantitatively with the average contour-line angle difference, the average contour-area difference and the average contour-line length difference. As the comparison figure shows, the contour lines computed by the algorithm nearly coincide with the image contour lines, and both the long and short sides are computed accurately. To express the accuracy more precisely, three quantities reflect the error between the actual and computed contour lines: the average angle difference is the mean difference between corresponding corner angles, the average area difference is the difference of the enclosed areas divided by the number of sides, and the average length difference is the mean difference between corresponding side lengths. The final results show that the average angle error of the extracted building contour lines is within 1 degree per corner, the average area error is about 4.09 m² per side, and the average length error is about 0.38 m per side, demonstrating that the method can extract building contour lines from original point cloud data accurately, effectively and with high precision.
Reference throughout this specification to "one embodiment," "another embodiment," "an embodiment," "a preferred embodiment," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment described generally in this application. The appearances of the same phrase in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the scope of the invention to effect such feature, structure, or characteristic in connection with other embodiments.
Although the invention has been described herein with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More specifically, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, other uses will also be apparent to those skilled in the art.
Claims (6)
1. A progressive building extraction method based on original laser point cloud is characterized in that: the method comprises the following steps:
step 1: adopting progressive mathematical morphology filtering to the original laser point cloud to separate non-ground points;
step 2: detecting planes among the non-ground points with an improved 3D Hough transform algorithm, using the planes to extract the building point cloud, and projecting the building point cloud onto a two-dimensional plane;
step 3: determining key points of the building point cloud with an adjacent-point azimuth threshold method;
step 4: fitting the key points of the building point cloud with RANSAC to obtain initial contour lines of the building;
step 5: regularizing the initial contour lines of the building with a regularization algorithm to obtain the regularized building contour lines.
2. The progressive building extraction method based on original laser point cloud of claim 1, characterized in that: in step 1, progressive mathematical morphology filtering is applied to the original laser point cloud, the size of the filtering window is enlarged step by step, and while the opening operation is performed with the gradually increasing window, above-ground object measurements are removed step by step and replaced by the minimum elevation within the structuring element.
3. The progressive building extraction method based on original laser point cloud of claim 1, characterized in that: the specific method for extracting the building point cloud with the improved 3D Hough transform algorithm in step 2 is as follows:
21) in the three-dimensional Cartesian coordinate system OXYZ, a plane is uniquely determined by the angle theta between the projection of its normal vector n onto the xoy plane and the positive x axis, the angle phi between n and the xoy plane, and the distance rho from the origin to the plane;
22) in the three-dimensional parameter space O-theta-phi-rho, each plane of the original coordinate system is represented as a point;
23) countless planes pass through a point P of the three-dimensional point cloud, but once theta and phi are fixed, rho is determined, so the triples (theta, phi, rho) form a curved surface in the parameter space O-theta-phi-rho; with N points there are N such surfaces, and a triple (theta, phi, rho) at an intersection peak of the surfaces is regarded as a plane existing in the laser radar point cloud; the occurring triples (theta_n, phi_n, rho_n) are then accumulated in an accumulator;
24) finally, in the accumulated result of the accumulator, the local peak point represents a plane existing in the point cloud, and whether the plane is a building plane or other non-ground objects is judged according to a plane area threshold value;
25) projecting the three-dimensional point cloud in the OXYZ space into an OXY two-dimensional plane.
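Steps 21) to 25) amount to voting in a (θ, φ, ρ) accumulator. A minimal sketch follows, assuming a regular angular grid and a simple vote threshold in place of true local-peak detection; the function name and all parameters are illustrative, not from the patent.

```python
import numpy as np

def hough_planes(points, n_theta=36, n_phi=19, rho_res=0.5, min_votes=60):
    """Detect planes by voting in the (theta, phi, rho) parameter space.

    For every discretized normal direction n = (cos(phi)cos(theta),
    cos(phi)sin(theta), sin(phi)), each point determines one distance
    rho = p . n, i.e. one cell on its parameter-space surface; cells
    collecting at least min_votes votes are reported as planes.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
    tt, pp = np.meshgrid(thetas, phis, indexing='ij')
    normals = np.stack([np.cos(pp) * np.cos(tt),
                        np.cos(pp) * np.sin(tt),
                        np.sin(pp)], axis=-1).reshape(-1, 3)
    rho = points @ normals.T                      # (N, n_theta * n_phi)
    rho_idx = np.round(rho / rho_res).astype(int)
    offset = rho_idx.min()
    acc = np.zeros((normals.shape[0], rho_idx.max() - offset + 1), dtype=int)
    for d in range(acc.shape[0]):                 # accumulate votes per direction
        np.add.at(acc[d], rho_idx[:, d] - offset, 1)
    planes = []
    for d, r in zip(*np.nonzero(acc >= min_votes)):
        t, p = divmod(d, n_phi)
        planes.append((thetas[t], phis[p], (r + offset) * rho_res))
    return planes
```

A horizontal roof at height z = 5 votes, for every θ, into the same ρ bin at φ = π/2, which is why such planes dominate the accumulator.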
4. The progressive building extraction method based on original laser point cloud of claim 1, characterized in that: the specific method for determining the key points of the building point cloud by the adjacent-point azimuth threshold method in step 3 is as follows:
31) connect the adjacent first and second edge points, calculate the azimuth of the resulting segment, and preset a perpendicular-distance threshold for adjacent points, which determines the admissible range of azimuth variation;
32) if the azimuth of the line connecting the first and third points falls outside this range, the second point is regarded as a key point; connect the second and third points and repeat the above steps to judge whether the third point is a key point;
33) if the azimuth of the line connecting the first and third points falls within this range, the second point is not a key point; connect the first and third points and repeat the above steps to judge whether the third point is a key point;
34) repeat this cycle until all edge points have been judged.
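Steps 31) to 34) can be sketched as the following walk over ordered edge points. This is an illustration only: converting the perpendicular-distance threshold into an admissible azimuth deviation via asin is an assumption about the claim's "variation range", and the function name is invented.

```python
import math

def key_points(edge_pts, dist_thresh=0.5):
    """Reduce an ordered run of edge points to its key (corner) points.

    An anchor point is connected to the next edge point to fix a reference
    azimuth; the perpendicular-distance threshold for adjacent points is
    converted into an admissible azimuth deviation. If the azimuth from
    the anchor to the point after next stays inside that range, the middle
    point lies on the same straight stretch and is dropped; otherwise it
    is kept as a key point and becomes the new anchor.
    """
    kept = [edge_pts[0]]
    anchor = edge_pts[0]
    for i in range(1, len(edge_pts) - 1):
        mid, nxt = edge_pts[i], edge_pts[i + 1]
        base_az = math.atan2(mid[1] - anchor[1], mid[0] - anchor[0])
        az = math.atan2(nxt[1] - anchor[1], nxt[0] - anchor[0])
        seg_len = math.hypot(nxt[0] - anchor[0], nxt[1] - anchor[1])
        # admissible azimuth variation implied by the distance threshold
        max_dev = math.asin(min(1.0, dist_thresh / max(seg_len, 1e-9)))
        dev = abs((az - base_az + math.pi) % (2 * math.pi) - math.pi)
        if dev > max_dev:
            kept.append(mid)   # direction changed: mid is a key point
            anchor = mid
    kept.append(edge_pts[-1])
    return kept
```

On an L-shaped run of points, only the endpoints and the corner survive; collinear intermediate points are dropped.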
5. The progressive building extraction method based on original laser point cloud of claim 1, characterized in that: the specific method for obtaining the initial contour line of the building by RANSAC fitting in step 4 comprises the following steps:
41) determine the number of contour-line segments and the edge points of each segment from the obtained key points;
42) randomly draw 4 non-collinear samples from the point set of one segment of edge points, calculate the transformation matrix H, and denote it model M;
43) calculate the error between each point of the edge-point set and the model M, and add every point whose error is below the threshold to the inlier set I;
44) if the current inlier set I has more elements than the best inlier set I_best, update I_best to I and update the number of iterations k; if the number of iterations exceeds k, exit; otherwise increase the iteration count by 1 and repeat the above steps.
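The RANSAC loop of steps 41) to 44) can be sketched as follows. Since the claim's 4-sample matrix H is not further specified, this illustration substitutes the usual minimal 2-point line model for a single contour segment while keeping the same inlier-set and best-set update logic; the fixed iteration budget stands in for the claim's adaptive k.

```python
import math
import random

def ransac_line(pts, err_thresh=0.1, max_iter=200, seed=0):
    """RANSAC loop for one contour segment, using a minimal 2-point line."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(max_iter):
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        a, b = y2 - y1, x1 - x2              # normal of the line through the samples
        norm = math.hypot(a, b)
        if norm == 0.0:
            continue                         # degenerate sample, redraw
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)               # line: a*x + b*y + c = 0
        inliers = [p for p in pts
                   if abs(a * p[0] + b * p[1] + c) < err_thresh]
        if len(inliers) > len(best_inliers):  # update I_best := I
            best_model, best_inliers = (a, b, c), inliers
    return best_model, best_inliers
```

With a handful of gross outliers among edge points on one straight wall, the returned inlier set recovers exactly the collinear points.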
6. The progressive building extraction method based on original laser point cloud of claim 1, characterized in that: the specific method for regularizing the initial contour line of the building with the regularization algorithm in step 5 is as follows:
51) first determine the longest building contour line, take it as the positive starting direction, and number it 1;
52) number the remaining contour lines clockwise; odd-numbered contour lines are assigned to the positive direction and even-numbered ones to the negative direction;
53) calculate the weighted average directions D+ and D- of the positive and negative groups respectively;
54) make the positive and negative weighted average directions mutually perpendicular;
55) rotate each fitted contour line about its midpoint to the corresponding regularized direction.
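Steps 51) to 55) can be sketched as follows, under two stated assumptions: the "weighted average direction" is taken as length-weighted, and the perpendicularity of D+ and D- is enforced by estimating one shared direction for both groups (the negative group rotated by 90 degrees before averaging).

```python
import math

def regularize_contours(segments):
    """Regularize fitted contour segments into two perpendicular directions.

    segments: consecutive contour lines ((x1, y1), (x2, y2)) in clockwise
    order. Numbering starts at the longest segment (number 1, the positive
    reference direction); odd-numbered segments form the positive group
    and even-numbered ones the negative group. A single length-weighted
    direction is estimated (negative group shifted 90 degrees, so D+ and
    D- are perpendicular by construction), then every segment is rotated
    about its midpoint onto its group direction.
    """
    def length(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def azimuth(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])

    start = max(range(len(segments)), key=lambda i: length(*segments[i]))
    ordered = segments[start:] + segments[:start]
    ref = azimuth(*ordered[0])
    num = den = 0.0
    for i, (p, q) in enumerate(ordered):
        # deviation from the group direction, folded into (-90, 90] degrees
        d = azimuth(p, q) - ref - (0.0 if i % 2 == 0 else math.pi / 2)
        d = (d + math.pi / 2) % math.pi - math.pi / 2
        w = length(p, q)
        num += w * d
        den += w
    pos_dir = ref + num / den            # D+; D- = D+ + 90 degrees
    out = []
    for i, (p, q) in enumerate(ordered):
        target = pos_dir + (0.0 if i % 2 == 0 else math.pi / 2)
        cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        dx = math.cos(target) * length(p, q) / 2
        dy = math.sin(target) * length(p, q) / 2
        out.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return out
```

Applied to a slightly skewed rectangle, consecutive output segments come out exactly perpendicular while each keeps its midpoint and length.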
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010108337.XA CN111354083B (en) | 2020-02-21 | 2020-02-21 | Progressive building extraction method based on original laser point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354083A true CN111354083A (en) | 2020-06-30 |
CN111354083B CN111354083B (en) | 2022-09-27 |
Family
ID=71194084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010108337.XA Active CN111354083B (en) | 2020-02-21 | 2020-02-21 | Progressive building extraction method based on original laser point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111354083B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105844629A (en) * | 2016-03-21 | 2016-08-10 | 河南理工大学 | Automatic segmentation method for point cloud of facade of large scene city building |
US20170161945A1 (en) * | 2015-12-04 | 2017-06-08 | Autodesk, Inc. | Keypoint-based point-pair-feature for scalable automatic global registration of large rgb-d scans |
CN108765568A (en) * | 2018-06-04 | 2018-11-06 | 河海大学 | A kind of multi-level building quick three-dimensional reconstructing method based on laser radar point cloud |
- 2020-02-21: CN application CN202010108337.XA, patent CN111354083B, status Active
Non-Patent Citations (9)
Title |
---|
Elyta Widyaningrum et al.: "Automatic Building Outline Extraction from ALS Point Clouds by Ordered Points Aided Hough Transform", Remote Sensing * |
Wei Shen et al.: "A New Algorithm of Building Boundary Extraction Based on LIDAR Data", IEEE Xplore * |
Xu-Qing Zhang et al.: "Building Contour Extraction Based on LiDAR Point Cloud", EDP Sciences * |
Zhou Pinghua et al.: "Semi-automatic 3D building reconstruction from airborne LiDAR point clouds", Science of Surveying and Mapping * |
You Hongjian et al.: "Research status of building extraction from airborne laser scanning data", Science of Surveying and Mapping * |
Zhang Dongdong et al.: "Airborne LiDAR data processing and ground object extraction", Surveying and Mapping Information and Engineering * |
Zhu Yimin et al.: "Building outline extraction based on airborne LiDAR data", Bulletin of Surveying and Mapping * |
Li Qian et al.: "A method for constructing DSM based on airborne LiDAR point clouds and building outlines", Remote Sensing for Land and Resources * |
Shen Wei et al.: "Research on building contour extraction and regularization algorithms based on LIDAR data", Journal of Remote Sensing * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112270758A (en) * | 2020-10-29 | 2021-01-26 | 山东科技大学 | Building room contour line extraction method based on ceiling point cloud segmentation |
CN112270758B (en) * | 2020-10-29 | 2022-10-14 | 山东科技大学 | Building room contour line extraction method based on ceiling point cloud segmentation |
CN114526720A (en) * | 2020-11-02 | 2022-05-24 | 北京四维图新科技股份有限公司 | Positioning processing method, device, equipment and storage medium |
CN114526720B (en) * | 2020-11-02 | 2024-04-16 | 北京四维图新科技股份有限公司 | Positioning processing method, device, equipment and storage medium |
CN112683221A (en) * | 2020-12-21 | 2021-04-20 | 深圳集智数字科技有限公司 | Building detection method and related device |
CN112683221B (en) * | 2020-12-21 | 2022-05-17 | 深圳集智数字科技有限公司 | Building detection method and related device |
CN116540872A (en) * | 2023-04-28 | 2023-08-04 | 中广电广播电影电视设计研究院有限公司 | VR data processing method, device, equipment, medium and product |
CN116540872B (en) * | 2023-04-28 | 2024-06-04 | 中广电广播电影电视设计研究院有限公司 | VR data processing method, device, equipment, medium and product |
Also Published As
Publication number | Publication date |
---|---|
CN111354083B (en) | 2022-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110570428B (en) | Method and system for dividing building roof sheet from large-scale image dense matching point cloud | |
CN109243289B (en) | Method and system for extracting parking spaces of underground garage in high-precision map manufacturing | |
CN111354083B (en) | Progressive building extraction method based on original laser point cloud | |
Wei | Building boundary extraction based on lidar point clouds data | |
CN103727930B (en) | A kind of laser range finder based on edge matching and camera relative pose scaling method | |
CN111007531A (en) | Road edge detection method based on laser point cloud data | |
CN111508073B (en) | Method for extracting roof contour line of three-dimensional building model | |
CN115372989A (en) | Laser radar-based long-distance real-time positioning system and method for cross-country automatic trolley | |
CN114419085A (en) | Automatic building contour line extraction method and device, terminal device and storage medium | |
CN114332366A (en) | Digital city single house point cloud facade 3D feature extraction method | |
CN111242000A (en) | Road edge detection method combining laser point cloud steering | |
Dai et al. | Road extraction from high-resolution satellite images based on multiple descriptors | |
CN112014856A (en) | Road edge extraction method and device suitable for cross road section | |
CN115690138A (en) | Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud | |
CN110288620B (en) | Image matching method based on line segment geometric features and aircraft navigation method | |
CN113409332B (en) | Building plane segmentation method based on three-dimensional point cloud | |
CN113012206B (en) | Airborne and vehicle-mounted LiDAR point cloud registration method considering eave characteristics | |
CN112258570B (en) | Method and system for measuring and calculating water surface width of river full water area | |
CN112232248A (en) | Method and device for extracting plane features of multi-line LiDAR point cloud data | |
CN117029870A (en) | Laser odometer based on road surface point cloud | |
CN115965712A (en) | Building two-dimensional vector diagram construction method, system, equipment and storage medium | |
CN117253205A (en) | Road surface point cloud rapid extraction method based on mobile measurement system | |
CN116012737A (en) | High-speed construction monitoring method and system based on unmanned aerial vehicle laser and vision fusion | |
Zhang et al. | A road extraction method based on high resolution remote sensing image | |
CN111696054B (en) | Rubber dam body detection method based on full-polarization SAR image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||