CN112132969B - Vehicle-mounted laser point cloud building target classification method - Google Patents


Info

Publication number
CN112132969B
CN112132969B (application CN202010902655.3A)
Authority
CN
China
Prior art keywords: grid, point, point cloud, grids, building
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010902655.3A
Other languages
Chinese (zh)
Other versions
CN112132969A (en)
Inventor
李少先
李玉兵
赵常伟
谢欣鹏
颜敏
张庆轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan Real Estate Surveying And Mapping Research Institute (Jinan Housing Safety Inspection And Appraisal Center)
Original Assignee
Jinan Real Estate Surveying And Mapping Research Institute (Jinan Housing Safety Inspection And Appraisal Center)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan Real Estate Surveying And Mapping Research Institute (Jinan Housing Safety Inspection And Appraisal Center)
Priority to CN202010902655.3A
Publication of CN112132969A
Application granted
Publication of CN112132969B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a vehicle-mounted laser point cloud building target classification method, belonging to the technical field of ground feature information extraction for mobile measurement systems. The main implementation steps are as follows: establish a three-dimensional grid from the coordinate extremes of the point cloud and a preset grid size; compute a height coverage value for each grid column by weighting grid densities by height along the Z axis; extract a complete building point cloud candidate area through threshold judgment and neighborhood expansion; extract the main building facades of the candidate area using HOG features computed on the projection density; compute the normal vectors and dimensionality features of the remaining points over optimal neighborhoods, and merge the points with horizontal normals and planar shape into the main facade point cloud; remove residual tree-crown points and miscellaneous points around buildings with a statistical filter; and cluster rapidly based on grid spacing to obtain a refined building target point cloud.

Description

Vehicle-mounted laser point cloud building target classification method
Technical Field
The invention discloses a vehicle-mounted laser point cloud building target classification method, belonging to the technical field of ground feature information extraction for mobile measurement systems.
Background
In recent years, the country has organized a series of research, exploration and experiments concerning digital city construction. In digital city construction, buildings are the main targets of three-dimensional city modeling, and building facade information is a key attribute of buildings. The accuracy of facade extraction directly affects the accuracy of subsequent building model reconstruction. Therefore, how to accurately and rapidly extract target data from massive data, providing accurate input for the following model reconstruction work, has been a research hotspot in recent years.
A vehicle-mounted laser scanning system can acquire accurate three-dimensional surface information of roads, roadside buildings, trees and other ground features while moving at high speed. It has become an important means of rapidly acquiring spatial data and provides data support for digital city construction.
The prior art mainly has the following defects. The projection density method is commonly adopted for building extraction from vehicle-mounted point cloud data: the projection density signature of a building facade is obvious and easy to compute, but the density of the same building may vary considerably across different parts or due to incomplete scanning. With a large grid size, the density distinction between buildings and trees becomes small; a small grid size severely slows processing; and since building densities differ within the same area, it is difficult to set a unified threshold for facade extraction.
Disclosure of Invention
The invention discloses a vehicle-mounted laser point cloud building target classification method, aiming to solve the problem that vehicle-mounted laser scanning systems in the prior art cannot accurately extract building-related data.
A vehicle-mounted laser point cloud building target classification method comprises the following steps:
s1, acquiring a three-dimensional space range of vehicle-mounted point cloud data, interactively determining scale information, establishing a point cloud grid index, counting density and height of each grid and maximum height information in a current area, generating two-dimensional grids for recording a height coverage value, comparing and extracting corresponding grids through a threshold value to form a building point cloud candidate area, extracting other grids in the adjacent area of the grid of the candidate area as the supplement of an original area, and repairing tiny holes of the area;
s2, acquiring a point cloud two-dimensional space range of a candidate area, establishing a grid index, counting grid density, converting to a gray space to generate a characteristic image, obtaining a density HOG characteristic through gradient analysis of a pixel neighborhood, and extracting a building main body elevation; establishing a K-d tree index for the point cloud which is not extracted, calculating the optimal neighborhood, normal vector and dimension feature of each point by adopting a dimension feature method, and merging the point which satisfies the normal vector level and is in the shape of a plane feature with the main body vertical point cloud extracted before;
s3, filtering the peripheral miscellaneous points of the building and the residual crown points by adopting a statistical filter, establishing a mesoscale two-dimensional grid, clustering adjacent grids with point clouds according to whether the distance between the grids meets the set threshold requirement, and obtaining the correct fine building point clouds by limiting the minimum clustering number.
Step S1 comprises the following sub-steps:
s1.1, establishing grid indexes to organize point cloud data, wherein the method comprises the following steps of;
s1.1.1, acquiring a minimum circumscribed rectangular frame of the point cloud, wherein the minimum circumscribed rectangular frame is expressed as: xmin, ymin, zmin, xmax, ymax, zmax;
s1.1.2, interactively inputting the dimensions dx, dy and dz of the grid along the row, column and layer directions, and calculating the row number RowNum, the column number ColNum and the layer number LayerNum of the whole three-dimensional grid;
s1.1.3, calculating the position of the point falling into the corresponding grid according to the three-dimensional space coordinates of the point, and calculating the row number row, the column number col and the layer number layer of the scanning point p (xp, yp, zp) falling into the corresponding grid;
s1.2, traversing the three-dimensional grids from bottom to top, obtaining the lowest dotted grid layer number Lminij of each grid, calculating the layer number Lijk of each dotted grid, obtaining the maximum layer difference Lmax in the area, calculating the height Hijk of each dotted grid and the maximum height Hmax in the area, calculating the number Nijk of point clouds in each grid as the density of the grid, carrying out weighted summation on the density of the same grid, and obtaining the height coverage value Hcoverij, wherein the calculation formula is as follows:
s1.3, reserving a grid with a height coverage value larger than a set threshold value, and expanding the grid in eight neighborhoods to form a complete building point cloud candidate area.
Step S2 comprises the following sub-steps:
s2.1, detecting vertical face pixel points by using the density HOG characteristics, and performing vertical face crude extraction, wherein the method comprises the following steps of;
s2.1.1, acquiring a projection range of the point cloud of the candidate region on an xoy plane, namely xmin, ymin, xmax, ymax;
s2.1.2, interactively inputting grid sizes dx and dy, wherein dx and dy are the sizes of the grids along the row and column directions respectively, distributing the point clouds to the corresponding grids, and calculating the row numbers and column numbers of the grids where the point clouds are located;
s2.1.3, counting the number of points in each two-dimensional grid, presetting a row and col blank image as projection density, wherein the grid corresponds to the row and column numbers of pixels one by one, and compressing the grid density to 0-255 gray scale of the pixel as the gray value of the pixel; the compression calculation formula of the density to gray value is as follows:wherein I (x, y) is the gray value of the pixels in x rows and y columns, N row,col Density value of row column grid, N max Is the maximum density value in the area;
s2.1.4, calculating gradients of the pixel points (x, y) in the horizontal direction and the vertical direction;
s2.1.5, calculating gradient amplitude and gradient direction at the pixel points (x, y);
s2.1.6, traversing pixel points by using a 3 multiplied by 3 window, mapping gradient magnitudes of other pixel points except the center to a fixed angle range according to respective gradient directions, and generating a gradient direction distribution histogram;
s2.1.7, sequencing gradient values in each interval of the histogram to obtain a maximum value m1, excluding five intervals corresponding to the value and left and right adjacent intervals thereof, obtaining a maximum value m2 in the remaining three intervals, and carrying out difference between the sum of the obtained two values and the total gradient value in the neighborhood to obtain m3; setting HOG characteristics of a building elevation, and extracting building elevation points;
s2.2, extracting point cloud fine by adopting the neighborhood characteristics of the optimal neighborhood calculation point, wherein the method comprises the following steps of;
s2.2.1, establishing a K-d tree index, and constructing a local point cloud 3 multiplied by 3 covariance matrix C for all points in a K adjacent domain of the available points, wherein the local point cloud 3 multiplied by 3 covariance matrix C is as follows:wherein P is i Is the coordinates (x i ,y i ,z i ),P center Is the point cloud center point (x 0 ,y 0 ,z 0 );
S2.2.2. three eigenvalues of covariance matrix can be obtained by matrix solution, lambda 1 ≥λ 2 ≥λ 3
S2.2.3. selecting minimum radius r min Increment of radius r =10 cm Δ =20 cm, maximum radius r max =50cm, initializing the current radius r c =r min Continuously increasing the radius, calculating the characteristic value of the corresponding neighborhood, and obtaining the dimension characteristic and the entropy function of each neighborhood;
s2.2.4. Comparing dimension characteristics to determine the point classification, a 1D The point is a rod-shaped point at maximum, a 2D At maximum, the point is a planar point, a 3D The point is a spherical point at maximum;
s2.2.5, in the process of continuously increasing the radius, when the characteristics of a scanning point in a neighborhood of a certain radius are classified into spherical points, directly stopping searching of the optimal neighborhood of the point, and removing the point;
s2.2.6. minimum entropy function E f The corresponding neighborhood is determined to be the optimal neighborhood, and the feature classification calculated in the neighborhood is used as the accurate feature classification of the point;
s2.2.7, a feature vector corresponding to the minimum feature value is a normal vector of the point, whether the feature vector is perpendicular to the z-axis direction or not is taken as a judging condition, and a point with an excessively large included angle between the feature vector and the z-axis direction is removed.
Step S3 comprises the following sub-steps:
s3.1, performing statistical filtering to remove noise of the point cloud;
s3.2, performing point cloud rapid clustering based on a grid, wherein the method comprises the following steps of;
s3.2.1, establishing a two-dimensional grid, performing plane projection on the point cloud, distributing the point cloud into the corresponding grid, taking the grid as a clustering unit, and determining grid coordinates according to the row and column numbers of the grid;
s3.2.2, traversing all the point grids, and adding adjacent grids into a grid set where the grids are positioned by using horizontal distances among the grids as a clustering criterion;
s3.2.3, repeating the clustering step by using grids which do not perform clustering operation in the set until all grids in the set are clustered by using adjacent grids, and searching a grid with a certain point outside the set in the area to continue clustering until all points in the area participate in clustering;
and S3.3, taking the number of grids in an object as the judging condition, and considering the object a building when its grid count is larger than the set threshold.
Compared with the prior art, the invention has the beneficial effects that:
(1) The grid organizes the data, and the densities of the grids in the same column are weighted and summed with fixed height weights, giving the grid a new descriptive attribute, the height coverage value. Compared with using density or height alone, this attribute is more comprehensive: it extracts tall, high-projection objects such as building facades well, and while removing low ground features it also retains details such as eaves that are high but of insufficient density.
(2) Two processing modes, grid index and point index, are combined. The grid index is used to obtain grid densities, gradients are computed from density changes, the density features are converted into HOG features, and the main facade point cloud is rapidly extracted using grid neighborhood features. A point index is then built for the remaining data, facade points are extracted by analyzing point neighborhood features and added as a supplement, so that different facade parts of different buildings in the same area can all be extracted. This solves the problems that a threshold is hard to determine when buildings are extracted with grid density features alone, and that processing massive data is too slow when facade points are extracted with point neighborhood features alone.
(3) Clustering based on grid distance extracts the target point cloud rapidly, solving the slowness of clustering based on point distances.
Drawings
FIG. 1 is a flow chart of a method for classifying targets of a vehicle-mounted laser point cloud building;
fig. 2 is a diagram of an elevation projection density HOG feature statistical method.
Detailed Description
The invention is described in further detail below in connection with specific embodiments:
the flow chart of the vehicle-mounted laser point cloud building target classification method is shown in fig. 1, and mainly comprises the following steps:
1. Preprocessing the original point cloud data to generate the candidate area for building facade extraction.
The three-dimensional spatial range of the vehicle-mounted point cloud data, i.e. the minimum bounding box determined by xmin, ymin, zmin, xmax, ymax, zmax, is acquired, and the grid dimensions dx, dy and dz along the row, column and layer directions are interactively input, so that the row number RowNum, column number ColNum and layer number LayerNum of the whole three-dimensional grid are calculated (taking rows along y, columns along x and layers along z): RowNum = ⌈(ymax − ymin)/dy⌉, ColNum = ⌈(xmax − xmin)/dx⌉, LayerNum = ⌈(zmax − zmin)/dz⌉.
Traversing the point cloud, the position of each point in the three-dimensional grid is calculated; a scanning point p(xp, yp, zp) falls into the grid with row number row = ⌊(yp − ymin)/dy⌋, column number col = ⌊(xp − xmin)/dx⌋ and layer number layer = ⌊(zp − zmin)/dz⌋.
the grid index of the point cloud data is established through the method.
2) Calculating the grid features, acquiring the height coverage value, and generating the building point cloud candidate area.
Traversing the three-dimensional grid from bottom to top, the lowest occupied layer number Lmin_ij of each grid column is obtained, the layer number L_ijk of each occupied grid is calculated, and the maximum layer difference Lmax in the area is obtained, from which the height H_ijk of each occupied grid and the maximum height Hmax in the area are calculated.
traversing the grids to calculate the number N of point clouds in each grid ijk As the density of the grid, the density of the same bundle of grids is weighted and summed according to the height information calculated before to obtain a height coverage value Hcover ij The specific calculation formula is as follows:and extracting grids meeting the requirement of the height coverage value through threshold judgment to form a candidate area extracted by the building, and traversing the grids to expand to eight adjacent areas to repair the holes due to the fact that partial missing of the vertical surfaces possibly exists to form holes, so that the integrity of the area is ensured.
2. Facades are extracted from coarse to fine: the traditional projection density method is first applied to directly and coarsely extract the high-density grids of the candidate area, and the planar points in the remaining grids are then finely extracted.
1) A feature image is generated based on the projection density.
The xoy-plane projection range of the candidate-area point cloud, i.e. the minimum bounding rectangle determined by xmin, ymin, xmax, ymax, is acquired; the grid sizes dx and dy along the row and column directions are interactively input, the points are assigned to the corresponding grids, and the row and column number of the grid containing each point is calculated. The number of points in each two-dimensional grid is counted as the projection density. A blank image of row × col pixels is preset, with grids corresponding one-to-one to pixel row and column numbers, and the grid density is compressed to the uint8 gray range (0–255) as the pixel gray value. The density-to-gray compression formula is I(x, y) = 255 × N_row,col / N_max, where N_row,col is the density value of the grid at that row and column and N_max is the maximum density value in the area. The density feature image is obtained in this way.
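The density-to-gray conversion above can be sketched as follows; `density_image` and its arguments are illustrative names, not from the patent:

```python
import numpy as np


def density_image(rows, cols, shape):
    """Rasterise per-point 2-D grid indices into an 8-bit density image.

    Implements I(x, y) = 255 * N_row,col / N_max from the description,
    where N_row,col is the number of points falling into each cell.
    """
    counts = np.zeros(shape, dtype=np.int64)
    np.add.at(counts, (rows, cols), 1)      # N_row,col: points per 2-D cell
    n_max = max(int(counts.max()), 1)        # avoid division by zero
    return (255.0 * counts / n_max).astype(np.uint8)
```

The resulting image is the input to the gradient and HOG analysis of the next sub-step.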
2) Detecting the facade pixels using the HOG features.
The gradients at pixel (x, y) are calculated as Gx(x, y) = I(x + 1, y) − I(x − 1, y) and Gy(x, y) = I(x, y + 1) − I(x, y − 1), where Gx(x, y) and Gy(x, y) are the gradients of the image pixel in the horizontal and vertical directions. The gradient magnitude and gradient direction at pixel (x, y) are then G(x, y) = √(Gx(x, y)² + Gy(x, y)²) and α(x, y) = arctan(Gy(x, y) / Gx(x, y)), where G(x, y) and α(x, y) denote the gradient magnitude and gradient direction of the pixel.
The pixels are traversed with a 3 × 3 window, and the gradient magnitudes of the pixels other than the center are mapped into fixed angle intervals according to their gradient directions. As shown in fig. 2, starting from due east, interval 1 spans 0°–90°, interval 2 spans 45°–135°, and so on; the last interval spans 315°–45°. There are eight such overlapping intervals in total, which form the gradient direction distribution histogram.
The gradient sums of the intervals are counted and sorted to obtain the maximum value m1; the five intervals consisting of that interval and its two neighbors on each side are excluded; the maximum value m2 is obtained among the remaining three intervals; and the sum of the two values is subtracted from the total gradient value in the neighborhood to obtain m3. Building facade points are extracted using the HOG feature of a facade, which is specifically: m1 ≈ m2 > m3. The building facades are coarsely extracted by this method.
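A hypothetical sketch of the eight overlapping 45°-step intervals and the m1/m2/m3 statistic described above; the exact interval handling and the use of the plain magnitude total for m3 are assumptions:

```python
import numpy as np


def facade_score(magnitudes, directions_deg):
    """m1, m2, m3 of the gradient-direction histogram of one neighbourhood.

    Eight overlapping 90-degree intervals at a 45-degree step (as in Fig. 2):
    interval i covers [45*i, 45*i + 90) degrees, wrapping past 360.
    A facade pixel satisfies m1 ~= m2 > m3.
    """
    mags = np.asarray(magnitudes, dtype=float)
    dirs = np.asarray(directions_deg, dtype=float) % 360.0
    bins = np.zeros(8)
    for i in range(8):
        start = 45.0 * i
        in_bin = (dirs - start) % 360.0 < 90.0  # handles the 315-45 wrap
        bins[i] = mags[in_bin].sum()
    i1 = int(bins.argmax())
    m1 = bins[i1]
    # Exclude i1 and its two neighbours on each side (five intervals).
    excluded = {(i1 + d) % 8 for d in (-2, -1, 0, 1, 2)}
    m2 = max(bins[i] for i in range(8) if i not in excluded)
    m3 = mags.sum() - (m1 + m2)
    return m1, m2, m3
```

For a facade-like neighbourhood with gradients concentrated in two opposite directions, m1 and m2 capture the two dominant lobes and m3 is close to zero.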
3) Finely extracting the building facade point cloud.
Because projection densities differ between buildings, density varies across different parts of the same building, and coarse extraction by the density method alone leaves facades incomplete, a K-d tree index is built for the remaining unextracted points. For each point, the local 3 × 3 point cloud covariance matrix C is constructed from all points in its k-neighborhood: C = (1/k) Σ_{i=1}^{k} (P_i − P_center)(P_i − P_center)^T, where P_i is the coordinate (x_i, y_i, z_i) of the i-th neighbor and P_center is the neighborhood center point (x_0, y_0, z_0). The three eigenvalues of the covariance matrix, λ1 ≥ λ2 ≥ λ3, are obtained by eigen-decomposition. Since density differences in vehicle-mounted point clouds affect the calculation of neighborhood features, an optimal neighborhood size must be determined for each point. The minimum radius r_min = 10 cm, the radius increment r_Δ = 20 cm and the maximum radius r_max = 50 cm are selected, and the current radius is initialized to r_c = r_min. The radius is then increased step by step, the eigenvalues of the corresponding neighborhood are computed, and the dimensionality features and entropy function of each neighborhood are obtained.
The point classification is determined by comparing the dimensionality features: when a_1D is largest the point is linear (rod-shaped), when a_2D is largest it is planar, and when a_3D is largest it is spherical. During the radius increase, if the features of some radius neighborhood classify the point as spherical, the optimal-neighborhood search for that point stops immediately and the point is removed. The neighborhood with the minimum entropy function E_f is determined to be the optimal neighborhood, and the feature classification computed in that neighborhood is the accurate feature classification of the point.
The eigenvector corresponding to the minimum eigenvalue of the point's neighborhood covariance matrix is the normal vector of the point. Since the normal vector of a facade is generally perpendicular to the z-axis, a verticality threshold is set as the judging condition, and points whose normal vector deviates too far from the horizontal are removed.
Under the optimal neighborhood, with the normal vector and the dimensionality feature as constraints, the scanning points matching the building facade point features are extracted, and the obtained point cloud together with the earlier coarse-extraction point cloud is kept as the building facade point cloud extraction result.
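A sketch of the dimensionality-feature classification for one neighborhood. Since the patent's formula images are not reproduced in the source, the a_1D/a_2D/a_3D and entropy definitions below are the standard ones from the dimensionality-feature literature, assumed here rather than taken from the patent:

```python
import numpy as np


def dimensionality_features(neighbors):
    """Dimensionality features, entropy and class label of a 3-D neighbourhood.

    Assumed standard definitions, with s_i = sqrt(lambda_i):
      a1D = (s1 - s2)/s1,  a2D = (s2 - s3)/s1,  a3D = s3/s1,
      E_f = -sum(a * ln a)  (terms with a = 0 contribute 0).
    """
    pts = np.asarray(neighbors, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)            # C = 1/k sum (Pi-Pc)(Pi-Pc)^T
    eigvals = np.linalg.eigvalsh(cov)[::-1]           # lambda1 >= lambda2 >= lambda3
    s1, s2, s3 = np.sqrt(np.clip(eigvals, 0.0, None))
    a = np.array([(s1 - s2) / s1, (s2 - s3) / s1, s3 / s1])
    with np.errstate(divide="ignore", invalid="ignore"):
        ef = float(-np.nansum(np.where(a > 0, a * np.log(a), 0.0)))
    label = ("linear", "planar", "spherical")[int(a.argmax())]
    return a, ef, label
```

In the optimal-neighborhood search, this function would be evaluated at each radius and the radius with the smallest entropy kept.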
3. Rapid classification and identification based on the grid point cloud.
1) Point cloud denoising by statistical filtering.
Because the actual scene is complex, sparse tree-crown points and other noise points remain and interfere with the clustering. A statistical analysis is therefore performed on the neighborhood of each point, and non-conforming points are pruned. Specifically, the distance distribution from each point to its neighbors in the input data is computed: for each point, the mean distance to all its neighbors is calculated. The result is assumed to follow a Gaussian distribution whose shape is determined by the mean and standard deviation; points whose mean distance falls outside the standard range are then defined as outliers and removed from the data.
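A brute-force sketch of the statistical filter described above (a K-d tree would replace the all-pairs distance matrix in practice); `k` and `std_ratio` are illustrative parameter names:

```python
import numpy as np


def statistical_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal.

    For each point, the mean distance to its k nearest neighbours is
    computed; points whose mean distance exceeds
    (global mean + std_ratio * global std) are treated as outliers.
    """
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dists = np.linalg.norm(diff, axis=2)     # all-pairs distance matrix
    np.fill_diagonal(dists, np.inf)          # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]      # k nearest neighbour distances
    mean_d = knn.mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    keep = mean_d <= threshold
    return pts[keep], keep
```

This mirrors the common statistical-outlier-removal filter (e.g. as found in point cloud libraries), used here only as an illustration of the described procedure.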
2) Fast clustering based on grids.
A two-dimensional grid is established, the plane-projected points are assigned to the corresponding grids, the grid is taken as the clustering unit, and the grid coordinates are determined from the row and column numbers. The occupied grids are traversed: using the horizontal distance between grids as the clustering criterion, the occupied neighbors of each grid are searched, and when the horizontal distance between a neighbor and the grid is smaller than the distance threshold, the neighbor is added into the set containing the grid. After all neighbors of a grid have been judged, the clustering step is repeated for grids in the set that have not yet been processed, until every grid in the set has been clustered, completing that cluster. An occupied grid outside the set is then sought in the area and clustering continues, until all points in the area have participated in the clustering.
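The grid clustering can be sketched as a breadth-first expansion over occupied cells; using Chebyshev distance with a `max_gap` parameter as a stand-in for the patent's horizontal-distance threshold is an assumption:

```python
from collections import deque


def grid_cluster(occupied_cells, max_gap=1):
    """Cluster occupied 2-D grid cells.

    Two cells join the same cluster when their row and column offsets are
    both at most max_gap (Chebyshev distance), an assumed stand-in for
    the horizontal-distance threshold between grids.
    """
    remaining = set(occupied_cells)
    clusters = []
    while remaining:
        seed = remaining.pop()
        cluster, queue = {seed}, deque([seed])
        while queue:  # breadth-first grid expansion
            r, c = queue.popleft()
            for dr in range(-max_gap, max_gap + 1):
                for dc in range(-max_gap, max_gap + 1):
                    cell = (r + dr, c + dc)
                    if cell in remaining:
                        remaining.remove(cell)
                        cluster.add(cell)
                        queue.append(cell)
        clusters.append(cluster)
    return clusters
```

Because whole cells rather than individual points are merged, the number of distance checks is bounded by the (much smaller) number of occupied cells, which is the speed advantage claimed over point-distance clustering.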
3) Target identification of the clustered objects.
Non-building objects exist among the point cloud objects formed by grid clustering. The number of grids in an object is used as the judging condition: when the grid count is larger than the set threshold the object is considered a building; otherwise it is excluded as another class.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed; rather, the invention is intended to cover modifications, adaptations, additions and alternatives falling within its spirit and scope.

Claims (3)

1. The vehicle-mounted laser point cloud building target classification method is characterized by comprising the following steps of:
s1, acquiring a three-dimensional space range of vehicle-mounted point cloud data, interactively determining scale information, establishing a point cloud grid index, counting density and height of each grid and maximum height information in a current area, generating two-dimensional grids for recording a height coverage value, comparing and extracting corresponding grids through a threshold value to form a building point cloud candidate area, extracting other grids in the adjacent area of the grid of the candidate area as the supplement of an original area, and repairing tiny holes of the area;
s2, acquiring a point cloud two-dimensional space range of a candidate area, establishing a grid index, counting grid density, converting to a gray space to generate a characteristic image, obtaining a density HOG characteristic through gradient analysis of a pixel neighborhood, and extracting a building main body elevation; establishing a K-d tree index for the point cloud which is not extracted, calculating the optimal neighborhood, normal vector and dimension feature of each point by adopting a dimension feature method, and merging the point which meets the level of the normal vector and is in the shape of a plane feature with the main body vertical point cloud extracted before;
step S2 comprises the following sub-steps:
s2.1, detecting vertical face pixel points by using the density HOG characteristics, and performing vertical face crude extraction, wherein the method comprises the following steps of;
s2.1.1, acquiring a projection range of the point cloud of the candidate region on an xoy plane, namely xmin, ymin, xmax, ymax;
s2.1.2, interactively inputting grid sizes dx and dy, wherein dx and dy are the sizes of the grids along the row and column directions respectively, distributing the point clouds to the corresponding grids, and calculating the row numbers and column numbers of the grids where the point clouds are located;
s2.1.3, counting the number of points in each two-dimensional grid, presetting a row and col blank image as projection density, wherein the grid corresponds to the row and column numbers of pixels one by one, and compressing the grid density to 0-255 gray scale of the pixel as the gray value of the pixel; the compression calculation formula of the density to gray value is as follows:wherein I (x, y) is the gray value of the pixels in x rows and y columns, N row,col Density value of row column grid, N max Is the maximum density value in the area;
S2.1.4, calculating the gradients of each pixel (x, y) in the horizontal and vertical directions;
S2.1.5, calculating the gradient magnitude and gradient direction at each pixel (x, y);
S2.1.6, traversing the pixels with a 3 × 3 window; the gradient magnitudes of the pixels other than the centre are mapped into fixed angle intervals according to their gradient directions, generating a gradient-direction distribution histogram;
S2.1.7, sorting the gradient values of the histogram intervals to obtain the maximum value m1; excluding the interval of m1 together with its left and right adjacent intervals (three intervals in total), obtaining the maximum value m2 among the remaining five intervals; subtracting the sum of these two values from the total gradient value in the neighbourhood to obtain m3; setting the HOG features of a building facade and extracting building facade points;
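Sub-steps S2.1.1 through S2.1.7 can be sketched in NumPy as follows. The function names, point-array layout, and the 8-bin histogram are illustrative assumptions (the patent does not state the bin count); this is a sketch, not the patent's implementation:

```python
import numpy as np

def density_to_gray(points_xy, xmin, ymin, dx, dy, rows, cols):
    """S2.1.1-S2.1.3: grid the projected points, count per-cell density,
    and compress it to 0-255 gray values via I = 255 * N / N_max."""
    counts = np.zeros((rows, cols), dtype=np.int64)
    col = ((points_xy[:, 0] - xmin) / dx).astype(int)
    row = ((points_xy[:, 1] - ymin) / dy).astype(int)
    ok = (row >= 0) & (row < rows) & (col >= 0) & (col < cols)
    np.add.at(counts, (row[ok], col[ok]), 1)
    n_max = counts.max()
    if n_max == 0:
        return counts.astype(np.uint8)
    return (255.0 * counts / n_max).astype(np.uint8)

def density_hog(gray, r, c, n_bins=8):
    """S2.1.4-S2.1.7: gradients, 3x3 direction histogram, m1/m2/m3."""
    g = gray.astype(float)
    gx = np.zeros_like(g); gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]           # horizontal gradient
    gy[1:-1, :] = g[2:, :] - g[:-2, :]           # vertical gradient
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)  # direction
    hist = np.zeros(n_bins)
    for dr in (-1, 0, 1):                        # 3x3 window, centre excluded
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            b = min(int(ang[r + dr, c + dc] / (180.0 / n_bins)), n_bins - 1)
            hist[b] += mag[r + dr, c + dc]
    i1 = int(np.argmax(hist)); m1 = hist[i1]     # dominant direction
    masked = hist.copy()
    for i in (i1 - 1, i1, i1 + 1):               # exclude m1's bin + neighbours
        masked[i % n_bins] = -np.inf
    m2 = masked.max()                            # secondary direction
    m3 = hist.sum() - (m1 + m2)                  # residual gradient energy
    return m1, m2, m3
```

Facade pixels would then be selected by thresholding m1, m2, and m3 against the building-facade HOG feature set in S2.1.7.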
S2.2, performing fine point cloud extraction using neighbourhood features computed over each point's optimal neighbourhood, comprising the following steps:
S2.2.1, establishing a K-d tree index and, for all points in the k-neighbourhood of each available point, constructing the local 3 × 3 point cloud covariance matrix C = (1/k) * sum_i (P_i - P_center)(P_i - P_center)^T, where P_i is the coordinate (x_i, y_i, z_i) of the i-th neighbourhood point and P_center is the point cloud centre point (x_0, y_0, z_0);
S2.2.2, solving the covariance matrix to obtain its three eigenvalues λ1 ≥ λ2 ≥ λ3;
S2.2.3, selecting the minimum radius r_min = 10 cm, the radius increment r_Δ = 20 cm, and the maximum radius r_max = 50 cm; initializing the current radius r_c = r_min and then continuously increasing the radius, calculating the eigenvalues of the corresponding neighbourhood, and obtaining the dimensionality features and entropy function of each neighbourhood;
S2.2.4, comparing the dimensionality features to determine the point classification: when a_1D is largest the point is a pole-like (linear) point, when a_2D is largest it is a planar point, and when a_3D is largest it is a spherical point;
S2.2.5, during the continuous increase of the radius, if a scanning point's features in the neighbourhood of some radius classify it as a spherical point, the optimal-neighbourhood search for that point is stopped immediately and the point is removed;
S2.2.6, the neighbourhood corresponding to the minimum entropy function E_f is determined to be the optimal neighbourhood, and the feature classification calculated in that neighbourhood is used as the accurate feature classification of the point;
S2.2.7, the eigenvector corresponding to the minimum eigenvalue is the normal vector of the point; whether this vector is perpendicular to the z-axis direction is used as the judging condition, and points whose normal vector deviates too far from perpendicular to the z-axis are removed;
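A minimal sketch of the covariance, eigenvalue, and dimensionality-feature computation of S2.2.1-S2.2.6, using the common Demantke-style definitions a_1D = (s1 - s2)/s1, a_2D = (s2 - s3)/s1, a_3D = s3/s1 with s_i = sqrt(λ_i); these exact feature definitions are an assumption, as the patent does not spell them out:

```python
import numpy as np

def dimensionality_features(neigh):
    """Given an (n, 3) array of neighbourhood points, return the point
    class (0 = linear/pole, 1 = planar, 2 = spherical) and the entropy
    E_f used to pick the optimal neighbourhood radius."""
    center = neigh.mean(axis=0)
    d = neigh - center
    C = d.T @ d / len(neigh)                    # 3x3 covariance (S2.2.1)
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]  # λ1 >= λ2 >= λ3 (S2.2.2)
    s = np.sqrt(np.clip(lam, 0.0, None))
    a = np.array([(s[0] - s[1]) / s[0],         # a_1D: linear
                  (s[1] - s[2]) / s[0],         # a_2D: planar
                  s[2] / s[0]])                 # a_3D: spherical
    # the three features sum to 1, so they can be used as probabilities
    entropy = -np.sum(a * np.log(a + 1e-12))    # E_f (S2.2.3/S2.2.6)
    return int(np.argmax(a)), entropy           # classification (S2.2.4)
```

Repeating this over increasing radii and keeping the radius with minimum entropy gives the optimal neighbourhood of S2.2.6; the eigenvector of the smallest eigenvalue is the normal used in S2.2.7.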
S3, filtering stray points around the building and residual tree-crown points with a statistical filter; establishing a medium-scale two-dimensional grid; clustering adjacent occupied grids according to whether the distance between grids meets the set threshold; and obtaining the correct, refined building point cloud by imposing a minimum cluster size.
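The statistical filter of step S3 can be sketched as follows; the parameters k and alpha and the brute-force neighbour search are illustrative choices for clarity (in practice a K-d tree would be used), not values fixed by the patent:

```python
import numpy as np

def statistical_filter(points, k=8, alpha=1.0):
    """Statistical outlier removal: for each point compute the mean
    distance to its k nearest neighbours; points whose mean distance
    exceeds mu + alpha * sigma (global mean and standard deviation of
    those distances) are treated as stray points and dropped."""
    # pairwise distance matrix, O(n^2) for illustration only
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip the zero self-distance
    mu, sigma = mean_knn.mean(), mean_knn.std()
    keep = mean_knn <= mu + alpha * sigma   # statistical threshold
    return points[keep]
```

The filtered points then feed the grid clustering described in the sub-steps of step S3.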
2. The method for classifying vehicle-mounted laser point cloud building targets according to claim 1, wherein the step S1 comprises the following sub-steps:
S1.1, establishing a grid index to organize the point cloud data, comprising the following steps:
S1.1.1, acquiring the minimum bounding rectangular box of the point cloud, expressed as: xmin, ymin, zmin, xmax, ymax, zmax;
S1.1.2, interactively inputting the grid dimensions dx, dy, and dz along the row, column, and layer directions, and calculating the row count RowNum, column count ColNum, and layer count LayerNum of the whole three-dimensional grid;
S1.1.3, calculating from its three-dimensional coordinates the grid into which each point falls, i.e. the row number row, column number col, and layer number layer of the grid containing a scanning point p(xp, yp, zp);
S1.2, traversing the three-dimensional grids from bottom to top: obtaining the layer number Lmin_ij of the lowest occupied grid in each column, calculating the layer number L_ijk of each occupied grid and the maximum layer difference Lmax in the area, calculating the height H_ijk of each occupied grid and the maximum height Hmax in the area, calculating the number of points N_ijk in each grid as the grid density, and performing a weighted summation over the densities of the grids in the same column to obtain the height coverage value Hcover_ij;
S1.3, retaining the grids whose height coverage value is greater than the set threshold and expanding them within their eight neighbourhoods to form a complete building point cloud candidate area.
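Steps S1.2-S1.3 can be sketched as below under an assumed height-coverage definition: the patent's exact weighted-sum formula for Hcover_ij is given only as an image, so the per-layer weight used here is a stand-in assumption, not the patented formula:

```python
import numpy as np

def candidate_grids(counts3d, threshold):
    """Given a (rows, cols, layers) array of per-voxel point counts,
    weight each column's densities by an assumed per-layer weight, sum
    them to a height coverage value Hcover_ij, keep columns above the
    threshold, and expand by the 8-neighbourhood to repair small holes."""
    layers = counts3d.shape[2]
    weights = (np.arange(layers) + 1.0) / layers   # assumed layer weight
    hcover = (counts3d * weights).sum(axis=2)      # height coverage value
    mask = hcover > threshold                      # threshold comparison
    padded = np.pad(mask, 1)
    grown = np.zeros_like(mask)
    for dr in range(3):                            # eight-neighbourhood growth
        for dc in range(3):
            grown |= padded[dr:dr + mask.shape[0], dc:dc + mask.shape[1]]
    return grown
```

The returned boolean mask marks the candidate columns; the expansion step is what supplements the original area and closes small holes, as described in S1.3.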
3. The method for classifying vehicle-mounted laser point cloud building targets according to claim 1, wherein the step S3 comprises the following sub-steps:
s3.1, performing statistical filtering to remove noise of the point cloud;
S3.2, performing fast grid-based point cloud clustering, comprising the following steps:
S3.2.1, establishing a two-dimensional grid, projecting the point cloud onto the plane and assigning the points to the corresponding grids; the grid is the clustering unit, and its coordinates are determined by its row and column numbers;
S3.2.2, traversing all occupied grids and, using the horizontal distance between grids as the clustering criterion, adding adjacent grids to the set containing the current grid;
S3.2.3, repeating the clustering step with the grids in the set that have not yet performed the clustering operation, until every grid in the set has clustered its adjacent grids; then searching the area for an occupied grid outside the set and continuing the clustering until all points in the area have participated;
S3.3, taking the number of grids in an object as the judging condition: when the number of grids is greater than the set threshold, the object is considered to belong to the building class.
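The grid clustering of S3.2-S3.3 amounts to a flood fill over occupied cells. This sketch assumes an 8-neighbourhood as the inter-grid distance criterion; the cell size and minimum cluster size are illustrative parameters:

```python
from collections import deque

def grid_cluster(points_xy, cell, min_cells):
    """Project points into 2-D cells, flood-fill connected occupied
    cells, and keep clusters with at least min_cells cells (S3.3).
    Returns one list of point indices per surviving cluster."""
    cells = {}
    for i, (x, y) in enumerate(points_xy):      # S3.2.1: assign to grids
        cells.setdefault((int(x // cell), int(y // cell)), []).append(i)
    unvisited = set(cells)
    clusters = []
    while unvisited:                            # S3.2.3: restart from any
        seed = unvisited.pop()                  # grid outside current set
        comp, queue = [seed], deque([seed])
        while queue:                            # S3.2.2: grow via neighbours
            r, c = queue.popleft()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in unvisited:
                        unvisited.remove(nb)
                        comp.append(nb)
                        queue.append(nb)
        if len(comp) >= min_cells:              # S3.3: size filter
            clusters.append([i for g in comp for i in cells[g]])
    return clusters
```

Clusters smaller than min_cells are discarded as non-building objects; the remaining clusters form the refined building point cloud.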
CN202010902655.3A 2020-09-01 2020-09-01 Vehicle-mounted laser point cloud building target classification method Active CN112132969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010902655.3A CN112132969B (en) 2020-09-01 2020-09-01 Vehicle-mounted laser point cloud building target classification method


Publications (2)

Publication Number Publication Date
CN112132969A CN112132969A (en) 2020-12-25
CN112132969B true CN112132969B (en) 2023-10-10


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318266A (en) * 2014-10-19 2015-01-28 温州大学 Image intelligent analysis processing early warning method
CN105069843A (en) * 2015-08-22 2015-11-18 浙江中测新图地理信息技术有限公司 Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
CN106056614A (en) * 2016-06-03 2016-10-26 武汉大学 Building segmentation and contour line extraction method of ground laser point cloud data
CN108984599A (en) * 2018-06-01 2018-12-11 青岛秀山移动测量有限公司 A kind of vehicle-mounted laser point cloud road surface extracting method referred to using driving trace
CN110322497A (en) * 2019-06-18 2019-10-11 中国石油大学(华东) A kind of interactive point cloud object extraction method based on three-dimensional visualization

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8422825B1 (en) * 2008-11-05 2013-04-16 Hover Inc. Method and system for geometry extraction, 3D visualization and analysis using arbitrary oblique imagery
GB2543749A (en) * 2015-10-21 2017-05-03 Nokia Technologies Oy 3D scene rendering


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hamid-Lakzaeian et al. "Structural-based point cloud segmentation of highly ornate building facades for computational modelling". Automation in Construction. 2019, vol. 108, pp. 1-19. *
Peng Chen; Yu Bailang; Wu Bin; Wu Jianping. "Semi-automatic building facade extraction based on feature images of mobile laser scanning point clouds and SVM". Journal of Geo-information Science. 2016, (07), pp. 411-417. *
Yang Bisheng; Dong Zhen; Wei Zheng; Fang Lina; Li Hanwu. "A method for extracting complex building facades from vehicle-borne laser scanning data". Acta Geodaetica et Cartographica Sinica. 2013, (03), pp. 877-885. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant