CN113920360A - Road point cloud rod extraction and multi-scale identification method - Google Patents

Road point cloud rod extraction and multi-scale identification method

Info

Publication number
CN113920360A
CN113920360A
Authority
CN
China
Prior art keywords
rod
point cloud
voxel
shaped object
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111113789.8A
Other languages
Chinese (zh)
Inventor
王子阳
杨林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN202111113789.8A priority Critical patent/CN113920360A/en
Publication of CN113920360A publication Critical patent/CN113920360A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques

Abstract

The invention discloses a road point cloud rod extraction and multi-scale identification method, which comprises the following steps: (1) acquire point clouds of the road and roadside ground objects with a vehicle-mounted laser scanner, then solve and output the point cloud data in PCD format; (2) preprocess the point cloud data, i.e. apply cloth simulation filtering and a down-sampling algorithm on a point cloud processing platform to improve the efficiency of subsequent processing; (3) fully automatic segmentation of the rod-shaped object point cloud; (4) fusion of the multi-scale classification results. The invention solves the problems of low extraction efficiency and poor identification accuracy for rod-shaped objects.

Description

Road point cloud rod extraction and multi-scale identification method
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a road point cloud rod extraction and multi-scale identification method.
Background
At present, methods for extracting rod-shaped objects from road-scene point clouds fall into three main categories: methods based on the structural features of the rod-shaped object, methods that cluster first and then recognize, and methods based on template matching. Structural-feature methods separate the rod-shaped object from the point cloud according to its three-dimensional structure, commonly using geometric features such as height, echo intensity and thickness. Such methods place high demands on the geometry of the rod-shaped object: the object to be extracted must be regular in shape and differ markedly from the surrounding ground objects. Cluster-then-recognize methods first process the discrete point cloud, grouping points into classes or clusters by spatial distance, and then extract and recognize the rod-shaped object from the characteristics of each class or cluster. Template-matching methods exploit the fact that rod-shaped objects of the same kind share the same geometric attributes, matching ground objects in the original point cloud and extracting rod-shaped objects according to the pairwise similarity between them; they require highly robust matching features and are computationally inefficient.
Extraction of rod-shaped objects in road scenes suffers from two main shortcomings. First, existing methods do not account for the diversity and structural complexity of rod-shaped objects, so most of them either fail to extract rod-shaped objects accurately or impose strict prerequisites on the objects they can extract. Second, they place high demands on the quality of the raw data: the rod-shaped object point cloud of the road scene to be extracted must be clean and free of redundant data, which greatly reduces the practicality of the extraction method.
After the rod-shaped object point cloud has been extracted, the extracted objects usually still need to be identified to meet the requirements of road resource surveys, intelligent traffic construction and so on. The main approaches at present are semantic-rule-based identification, point-by-point machine-learning classification, and deep learning. Semantic methods manually formulate a set of semantic rules suited to a particular road environment and identify road rod-shaped objects against those rules; because one set of rules cannot transfer well to different environments, technicians must draw up different rules for different road scenes, which greatly reduces the generality of the approach. Point-by-point machine-learning identification considers only the features of points within a certain neighborhood and ignores the interaction of features across scales, so it performs well on point clouds whose local neighborhoods differ strongly but is unsatisfactory for rod-shaped objects with similar local-neighborhood features. Deep learning methods require large numbers of training samples, and models do not transfer between scenes. These limitations make the existing algorithms unsuitable for wide use.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a road point cloud rod extraction and multi-scale identification method that addresses the low extraction efficiency and poor identification accuracy of rod-shaped objects.
To solve the above technical problem, the invention provides a road point cloud rod extraction and multi-scale identification method comprising the following steps:
(1) acquiring point clouds of the road and roadside ground objects with a vehicle-mounted laser scanner, then solving and outputting the point cloud data in PCD format;
(2) preprocessing the point cloud data, i.e. applying cloth simulation filtering and a down-sampling algorithm on a point cloud processing platform to improve the efficiency of subsequent processing;
(3) fully automatic segmentation of the rod-shaped object point cloud;
(4) fusion of the multi-scale classification results.
Preferably, in step (3), the fully automatic segmentation of the rod-shaped object point cloud specifically follows three segmentation ideas to retain the rod-shaped object: the rod-shaped object is longitudinally continuous; the non-rod part of the rod-shaped object generally extends perpendicular to the rod part; and, for the detail parts of the rod-shaped object, the difference between the two codes of a one-way double-coding strategy is large.
Preferably, the longitudinal continuity of the rod-shaped object is used for longitudinal-continuity retention: first, the road-scene point cloud data of the large scene is divided into cubic voxel blocks of side 0.3 m; the divided voxel blocks are then traversed, each column from bottom to top; finally, the point-bearing voxel blocks in each column are counted, and if their number exceeds a certain threshold, the column is considered a potential rod and retained.
Preferably, since the non-rod part of the rod-shaped object generally extends perpendicular to the rod part, the non-rod part is retained as follows: because the non-rod part is mostly perpendicular to the extracted rod part, it is retained with a voxel-growing strategy perpendicular to the rod part; the rod voxels retained in the first step serve as initial seed voxels for voxel-based region growing, which stops when no contiguous point-bearing voxels remain in the transverse direction.
Preferably, because the difference between the two codes is large for the detail parts of the rod-shaped object, the one-way double-coding strategy retains the details of the rod-shaped object: the parts not directly connected to the rod part undergo a secondary retention with the one-way double-coding strategy; taking the voxel column as the research object, one of the two codes counts the voxels from the lowest to the highest voxel of the column, while the other starts from the first point-bearing voxel and counts only the voxels that contain points; point cloud data not yet retained is kept wherever the difference between the two codes exceeds a certain threshold, while low vegetation, for which the two codes are close, is filtered out to a certain extent.
Preferably, in step (4), the rod classification fusing the multi-scale classification results specifically comprises the following steps:
(41) acquiring the local point cloud features of the rod-shaped object;
(42) acquiring the global point cloud features of the rod-shaped object;
(43) fusing the classification results at the different scales.
Preferably, in step (41), the local point cloud features of the rod-shaped object are obtained as follows: first, a 14-dimensional point cloud feature vector is constructed from the structural characteristics of the rod-shaped object, comprising intensity, height difference, elevation variance, anisotropy, surface feature, spherical feature, omnivariance, linear feature, number of points in the cylinder, cylinder height difference, density, volume density, curvature and roughness; different labels are then assigned to the different rod-shaped objects in turn, the feature vectors and labels are combined, and a random forest classifier classifies the different rod-shaped object point clouds.
Preferably, in step (42), the segmented rod-shaped object point clouds are clustered so that different objects fall into different point cloud clusters, each cluster being a rod-shaped object point cloud. The clusters are then sliced and clustered, and, since different rod-shaped objects have different rod bodies, the ground point of each cluster is solved. Rod-shaped objects whose ground points lie closer together than a certain threshold are regarded as mutually overlapping. Supervoxels are generated for the overlapping area by fusing the supervoxel-generation results of two different constraint conditions; the rod-shaped object point cloud is divided into two parts according to the supervoxels, each part is treated as a single rod-shaped object point cloud, and the features of each single rod-shaped object are solved. The global features comprise the viewpoint feature histogram VFH, the geometric size of the outer bounding box, the voxel proportion and the average intensity; the computed global features are composed into feature vectors and fed to a random forest classifier, whose trained model classifies the scene to be recognized.
Preferably, in step (43), the recognition results at the different scales are fused according to their recognition accuracy, merging the better-performing parts of the two recognition processes and thereby improving on the recognition accuracy attainable at a single scale.
The invention has the following beneficial effects. Taking the point cloud acquired by a vehicle-mounted three-dimensional laser scanning system as its starting point, the invention improves the usual extraction and identification of rod-shaped object point clouds in road scenes according to the characteristics of road-scene point cloud data. For segmentation, the rod-shaped object is segmented using three constraints and growth conditions; compared with traditional methods that retain points only according to the geometric structure of the rod-shaped object, retention is more complete, the method is not limited by scene complexity, and the extraction is more generally applicable. For identification, the classification results at two different scales are fused, overcoming the limitation of traditional schemes that identify only from features at a single neighborhood scale, and improving identification accuracy to a certain extent.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic view of the acquisition apparatus of the present invention.
Fig. 3 is a front view of the present invention based on a longitudinally continuous shaft retention.
Fig. 4(a) is a top view of the voxel growing coordinate system of the present invention.
FIG. 4(b) is a schematic view of the voxel local growth according to the present invention.
FIG. 5 is a schematic diagram of a rod point cloud extracted according to the present invention.
FIG. 6 is a schematic view of a purified cloud of rod points according to the present invention.
FIG. 7 is a diagram illustrating the segmentation result of the overlapped region according to the present invention.
Detailed Description
A road point cloud rod extraction and multi-scale identification method comprises the following steps:
(1) acquiring point clouds of the road and roadside ground objects with a vehicle-mounted laser scanner, then solving and outputting the point cloud data in PCD format;
(2) preprocessing the point cloud data, i.e. applying cloth simulation filtering and a down-sampling algorithm on a point cloud processing platform to improve the efficiency of subsequent processing;
(3) fully automatic segmentation of the rod-shaped object point cloud;
(4) fusion of the multi-scale classification results.
The invention uses an Alpha3D vehicle-mounted mobile measurement system to acquire data on an urban road. The horizontal accuracy of the system is better than 0.030 m RMS, the vertical accuracy better than 0.025 m RMS, the laser pulse rate reaches 1,000,000 points per second, and the measurement accuracy reaches 2 mm. About 3 km of experimental data were collected, totalling 60,244,135 points and including natural rods, artificial rods, vehicles, low vegetation and so on. The scanning device is shown in fig. 2.
First, the scene point cloud of the experimental road is divided into cubic voxel blocks of side S = 0.3 m, as given by formulas 1 and 2 below.
Mx = ceil((xmax - xmin) / S)
Ny = ceil((ymax - ymin) / S)
Hz = ceil((zmax - zmin) / S)    (1)

M = ceil((point.x - xmin) / S)
N = ceil((point.y - ymin) / S)
H = ceil((point.z - zmin) / S)    (2)
In the formulas, xmax, xmin, ymax, ymin, zmax and zmin are the maximum and minimum coordinate values of the preprocessed point cloud, point.x, point.y and point.z are the x, y and z values of a given point, Mx, Ny and Hz denote the number of voxel grids along the X, Y and Z axes, and M, N and H denote the row number, column number and height position of a point within the voxel grid. The divided voxel data is then searched column by column; if the number of point-bearing voxel blocks in a column meets a certain threshold, the column is retained, and the retained point cloud constitutes the rod part of the rod-shaped object. A front view of the retention principle is shown in fig. 3.
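Formulas (1) and (2) and the column-retention test translate directly into code. The sketch below assumes Python; the helper names and the `min_filled` threshold value are illustrative, not from the patent.

```python
import math

def grid_shape(maxs, mins, S=0.3):
    """Number of voxel grids along the X, Y and Z axes (Mx, Ny, Hz),
    following formula (1); `maxs` and `mins` are (x, y, z) extrema."""
    return tuple(math.ceil((hi - lo) / S) for hi, lo in zip(maxs, mins))

def voxel_index(point, mins, S=0.3):
    """Row, column and height indices (M, N, H) of a point in the
    voxel grid, following formula (2)."""
    return tuple(math.ceil((c - lo) / S) for c, lo in zip(point, mins))

def retain_columns(points, mins, S=0.3, min_filled=10):
    """Longitudinal-continuity test: keep the (M, N) columns whose
    count of point-bearing voxels exceeds `min_filled` (threshold
    value assumed)."""
    columns = {}
    for p in points:
        M, N, H = voxel_index(p, mins, S)
        columns.setdefault((M, N), set()).add(H)
    return {col for col, hs in columns.items() if len(hs) > min_filled}
```

A tall stack of occupied voxels in one column survives this test, while a shallow patch such as a bush does not.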
Region growing of the transverse grid and the double-coding strategy. To reduce computation while retaining the non-rod parts, a transverse-grid region-growing algorithm is adopted. Its main idea is to establish a local spatial coordinate system with the longitudinally retained rod-part grid as reference and to query divergently into the four quadrants; according to the characteristic that the non-rod part is morphologically connected to the rod part, continuously diverging transverse grids around the reference grid are considered non-rod parts of the rod-shaped object and are retained. The algorithm is illustrated in figs. 4(a) and 4(b). After this step, parts higher than the rod, such as the head of a street lamp, and parts not directly connected to the rod, such as the lower half of a traffic light, are still not fully retained, so the invention applies a one-way double-coding strategy for a secondary retention of these parts. The mathematical model of the one-way double coding is written D(n1, n2), where n1 and n2 are, respectively, the number of voxel codes counted from bottom to top in a column and the number of codes of voxels that contain points. In detail: taking the voxel column as the research object, one of the two codes counts from the lowest voxel of the column up to the highest; the other starts from the first point-bearing voxel and increments only when a voxel containing points is encountered.
Point cloud data not yet retained is kept wherever the difference between the two codes exceeds a certain threshold; at the same time, since the two codes are close for low vegetation, such vegetation is filtered out to a certain extent. The rod-shaped object point cloud after these two steps is shown in fig. 5.
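One plausible reading of the one-way double code D(n1, n2) can be sketched as below: n1 counts voxel slots from the bottom of the column up to the highest occupied voxel, n2 counts only occupied voxels, and a column passes the secondary retention when the two codes differ by more than a threshold. The exact counting convention and the `gap_threshold` value are assumptions, not stated in the patent.

```python
def double_code(occupied_heights):
    """One-way double code D(n1, n2) for one voxel column.
    `occupied_heights` is the set of height indices of point-bearing
    voxels. n1 codes every slot from the column bottom up to the
    highest occupied voxel; n2 codes only the occupied voxels."""
    hs = sorted(occupied_heights)
    n1 = hs[-1] + 1      # slots from the grid floor to the top occupied voxel
    n2 = len(hs)         # occupied voxels only
    return n1, n2

def retain_detail(occupied_heights, gap_threshold=3):
    """Secondary retention: keep columns whose codes differ by more
    than `gap_threshold`, i.e. parts suspended above a gap (a lamp
    head, the lower half of a traffic light); low vegetation yields
    n1 close to n2 and is filtered out."""
    n1, n2 = double_code(occupied_heights)
    return (n1 - n2) > gap_threshold
```

A suspended part occupying heights 8 to 10 gives D(11, 3) and is retained; low vegetation at heights 0 to 2 gives D(3, 3) and is filtered.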
To obtain purer rod-shaped object point cloud data, the point cloud is purified by combining Euclidean-distance clustering with a projected height difference. First, points at similar distances are grouped into the same cluster by Euclidean-distance clustering; when a cluster contains too few points, its points are considered noise, and the point cloud is filtered cluster by cluster. This operation effectively removes outliers. After outlier removal, the three-dimensional points are projected to two dimensions along the Z axis, and the projected point cloud is divided into a grid finer than the voxels above; when the height range of the points in a grid cell is smaller than a threshold, the points are considered low vegetation and filtered out. The purified point cloud is shown in fig. 6.
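The outlier-removal half of the purification step can be sketched as follows. The naive O(n^2) single-linkage clustering and the parameter values are illustrative assumptions; a KD-tree-backed implementation would be used in practice.

```python
def euclidean_clusters(points, radius=0.5):
    """Naive single-linkage Euclidean clustering: points closer than
    `radius` end up in the same cluster (union-find over all pairs)."""
    n = len(points)
    labels = list(range(n))
    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path halving
            i = labels[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= radius ** 2:
                labels[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())

def drop_small_clusters(clusters, min_points=5):
    """Clusters with too few points are treated as outlier noise."""
    return [c for c in clusters if len(c) >= min_points]
```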
On the basis of the rod-shaped object point cloud segmentation, the category is determined and the attributes are output. First, a complete segmented rod point cloud is selected as the standard model and its R-neighborhood features are obtained, comprising 14 dimensions of attribute and geometric features: intensity, height difference, elevation variance, anisotropy, surface feature, spherical feature, omnivariance, linear feature, cylinder height difference, number of points in the cylinder, density, volume density, curvature and roughness, defined as a 14-dimensional feature vector.
The feature vectors are input to a random forest model to train a classifier of local-neighborhood features; feature computation over the whole R neighborhood is then performed for each extracted rod to form its feature vector, and the trained classifier performs rod classification in the local neighborhood.
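Three of the 14 local dimensions can be computed as below, as illustrative examples; the remaining dimensions (anisotropy, surface, spherical and linear features, curvature, roughness and so on) come from eigenvalue analysis of the neighborhood covariance matrix and from scanner attributes and are omitted here. The function name and the exact density formula are assumptions.

```python
import math

def local_features(cluster):
    """Sketch of three local features for a cluster of (x, y, z)
    points: cylinder height difference, elevation variance, and point
    density inside the cluster's bounding cylinder."""
    zs = [p[2] for p in cluster]
    height_diff = max(zs) - min(zs)                     # cylinder height difference
    mean_z = sum(zs) / len(zs)
    elev_var = sum((z - mean_z) ** 2 for z in zs) / len(zs)  # elevation variance
    cx = sum(p[0] for p in cluster) / len(cluster)
    cy = sum(p[1] for p in cluster) / len(cluster)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in cluster) or 1e-6
    density = len(cluster) / (math.pi * radius ** 2 * (height_diff or 1e-6))
    return {"height_diff": height_diff, "elev_var": elev_var, "density": density}
```

Vectors of such features, paired with class labels, are what the random forest classifier is trained on.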
Identification that relies only on the local neighborhood considers only the features the point cloud exhibits there, so the classification is not robust. The invention therefore also classifies rod-shaped objects using global point cloud features, as follows: first, Euclidean clustering groups the extracted rod-shaped objects of the large scene into point cloud clusters; ground points are then solved by slice clustering, and the distance between every two ground points is computed. If the distance between two ground points is too small, the two rod-shaped object entities are considered to overlap and are extracted together. For the extracted overlapping part, supervoxels are generated under two different constraint conditions and merged; the rod-like supervoxels among them are extracted, the spherical supervoxels are then assigned to the nearest rod-shaped object by distance, and the planar supervoxels are assigned after the spherical supervoxels have grown, following the same scheme. This separates the overlapping rod-shaped objects; the result of segmenting overlapping ground objects is shown in fig. 7.
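The ground-point overlap test that triggers the supervoxel splitting can be sketched as follows. Taking the lowest point of a cluster as its ground point and the 1 m threshold are assumptions for illustration.

```python
def ground_point(cluster):
    """Lowest point of a cluster of (x, y, z) tuples, used here as
    its ground (floor) point."""
    return min(cluster, key=lambda p: p[2])

def overlapping_pairs(clusters, threshold=1.0):
    """Pairs of cluster indices whose ground points lie closer than
    `threshold` in the horizontal plane, flagged as overlapping
    rod-shaped objects to be separated via supervoxels."""
    gps = [ground_point(c) for c in clusters]
    pairs = []
    for i in range(len(gps)):
        for j in range(i + 1, len(gps)):
            dx = gps[i][0] - gps[j][0]
            dy = gps[i][1] - gps[j][1]
            if (dx * dx + dy * dy) ** 0.5 < threshold:
                pairs.append((i, j))
    return pairs
```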
The clustered point clouds are processed according to the above conditions to obtain non-overlapping rod-shaped objects, and a number of complete rod-shaped object point clouds are selected from the collected data to solve the global features, which comprise the geometric size of the outer bounding box of the rod-shaped object, its average intensity and the proportions of the different supervoxels. As with the local training, these are fed to a random forest model to train a rod classifier based on global features; the global features of the segmented point clouds are then solved in the same way and put into the trained random forest model as prediction samples.
Predictions made from local-neighborhood features consider only the point cloud features within a certain neighborhood, so they are robust to missing data, but rod-shaped objects that are locally similar cannot be distinguished from neighborhood features alone. Compared with local features, recognition from global features is more robust for similar rod-shaped objects, but, because it considers the features of the whole point cloud, its results are not ideal for incomplete objects and spatially short trees. The invention therefore fuses the two recognition results; the experiments show that, compared with recognition at a single scale, this effectively improves recognition accuracy. The fused recognition results are shown in table 1 below, and accuracy is evaluated with the criteria of formula 3.
RDP = a / (a + b) × 100%
ADP = a / c × 100%    (3)
Here RDP denotes the correct extraction rate and ADP the complete extraction rate; a is the number of correct extractions, b the number of erroneous extractions and c the actual number. Verification shows that the identification accuracy of the invention reaches 96%, an improvement over existing methods.
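Formula 3 translates directly into code; the sketch below checks the definitions against figures taken from table 1.

```python
def rdp(a, b):
    """Correct extraction rate: correct / (correct + wrong), in percent."""
    return 100.0 * a / (a + b)

def adp(a, c):
    """Complete extraction rate: correct / actual, in percent."""
    return 100.0 * a / c
```

For the street lamp row (a = 24, b = 0, c = 26) this gives RDP = 100% and ADP = 92.3%, matching the table.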
TABLE 1 Recognition results after fusing the two scales

Category            Actual (c)  Correct (a)  Omitted (n)  Wrongly extracted (b)  RDP (%)  ADP (%)
Signboard                4           4            0                0               100      100
Low signboard            2           1            1                0               100       50
Low traffic light        2           1            1                1                50       50
Traffic light            4           3            1                2                60       75
Monitoring               7           6            1                1                85.7     85.7
Street lamp             26          24            2                0               100       92.3
Tree                   154         152            2                0               100       98.7
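A minimal sketch of the two-scale fusion that produces such results: choosing, per object, the label reported with the higher classifier confidence is an assumed rule, since the patent states only that the better-performing parts of the two recognition processes are combined.

```python
def fuse_predictions(local_pred, global_pred):
    """Fuse per-object predictions from the local-scale and
    global-scale classifiers. Each input maps an object id to a
    (label, confidence) pair with confidence in [0, 1]; the fused
    label is the one whose classifier was more confident."""
    fused = {}
    for obj in local_pred.keys() | global_pred.keys():
        cands = [p[obj] for p in (local_pred, global_pred) if obj in p]
        fused[obj] = max(cands, key=lambda lc: lc[1])[0]
    return fused
```

Under this rule, the local scale can dominate for incomplete or short objects it handles well, while the global scale resolves locally similar rod-shaped objects.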
Based on vehicle-mounted three-dimensional laser point cloud data, the method follows the prior knowledge that a road rod is continuous in the Z direction and that the non-rod part extends perpendicular to the rod part, and combines these with the one-way double-coding rule to extract the road rod point cloud effectively. These simple, intuitive extraction rules improve segmentation efficiency, and the complete extraction of the rod-shaped objects greatly facilitates subsequent classification. In the classification part, fusing the recognition results at different scales effectively reduces the limitation of point cloud recognition from single-scale features and improves recognition accuracy, making the result more stable, more general and more efficient.

Claims (9)

1. A road point cloud rod extraction and multi-scale identification method, characterized by comprising the following steps:
(1) acquiring point clouds of the road and roadside ground objects with a vehicle-mounted laser scanner, then solving and outputting the point cloud data in PCD format;
(2) preprocessing the point cloud data, i.e. applying cloth simulation filtering and a down-sampling algorithm on a point cloud processing platform to improve the efficiency of subsequent processing;
(3) fully automatic segmentation of the rod-shaped object point cloud;
(4) fusion of the multi-scale classification results.
2. The road point cloud rod extraction and multi-scale identification method of claim 1, characterized in that in step (3) the fully automatic segmentation of the rod-shaped object point cloud follows three segmentation ideas to retain the rod-shaped object: the rod-shaped object is longitudinally continuous; the non-rod part of the rod-shaped object generally extends perpendicular to the rod part; and, for the detail parts of the rod-shaped object, the difference between the two codes of a one-way double-coding strategy is large.
3. The road point cloud rod extraction and multi-scale identification method of claim 2, characterized in that the longitudinal continuity of the rod-shaped object is used for longitudinal-continuity retention: first, the road-scene point cloud data of the large scene is divided into cubic voxel blocks of side 0.3 m; the divided voxel blocks are then traversed, each column from bottom to top; finally, the point-bearing voxel blocks in each column are counted, and if their number exceeds a certain threshold, the column is considered a potential rod and retained.
4. The road point cloud rod extraction and multi-scale identification method of claim 2, characterized in that, the non-rod part of the rod-shaped object generally extending perpendicular to the rod part, the non-rod part is retained as follows: because the non-rod part is mostly perpendicular to the extracted rod part, it is retained with a voxel-growing strategy perpendicular to the rod part; the rod voxels retained in the first step serve as initial seed voxels for voxel-based region growing, which stops when no contiguous point-bearing voxels remain in the transverse direction.
5. The road point cloud rod extraction and multi-scale identification method of claim 2, characterized in that, the difference between the two codes being large for the detail parts of the rod-shaped object, the one-way double-coding strategy retains the details of the rod-shaped object: the parts not directly connected to the rod part undergo a secondary retention with the one-way double-coding strategy; taking the voxel column as the research object, one of the two codes counts the voxels from the lowest to the highest voxel of the column, while the other starts from the first point-bearing voxel and counts only the voxels that contain points; point cloud data not yet retained is kept wherever the difference between the two codes exceeds a certain threshold, while low vegetation, for which the two codes are close, is filtered out to a certain extent.
6. The method for extracting and multi-scale identifying the rod-shaped objects of the road point cloud as claimed in claim 1, wherein in the step (4), the rod-shaped object classification with the multi-scale classification results fused with each other specifically comprises the following steps:
(41) acquiring local point cloud characteristics of the rod-shaped object;
(42) obtaining the global point cloud characteristics of the shaft-shaped object,
(43) and fusing the classification results under different scales.
7. The method for extracting and multi-scale identifying a rod-shaped object of a road point cloud as claimed in claim 6, wherein in the step (41), the obtaining of the local point cloud features of the rod-shaped object is specifically as follows: firstly, constructing a 14-dimensional point cloud feature vector according to the structural characteristics of a rod, wherein the method comprises the following steps: the method comprises the steps of sequentially giving different labels to different rod-shaped objects according to the intensity, the height difference variance, each anisotropy, the surface characteristic, the spherical characteristic, the all-round difference, the linear characteristic, the number of points in a cylinder, the height difference of the cylinder, the density, the volume density, the curvature and the roughness, combining the characteristic vectors and the labels, and classifying different rod-shaped object point clouds by using a random forest classifier.
8. The method for extracting and multi-scale identifying the rod-shaped objects of the road point cloud as claimed in claim 6, wherein in the step (42), the segmented rod-shaped object point clouds are clustered so that different objects form different point cloud clusters; the clusters are then processed by slicing, and ground points are solved for each cluster according to the different pole bodies of the different rod-shaped objects; after the ground points are solved, rod-shaped objects whose ground points lie closer together than a given threshold are extracted and treated as mutually overlapping rod-shaped objects; supervoxels are generated in the overlapping area under two different constraint conditions, and the two supervoxel results are fused to obtain the final supervoxel point cloud of the overlapping area; the two rod-shaped objects are separated according to the supervoxels, each set of points is taken as a single rod-shaped object point cloud, and the features of each single rod-shaped object are obtained; wherein the global features comprise: the viewpoint feature histogram VFH, the geometric size of the outer bounding box, the voxel occupancy ratio and the average intensity; the computed global features are assembled into feature vectors, fed to a random forest classifier, and the trained model is used to classify the scene to be recognized.
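The simpler global features of claim 8 (bounding-box size, voxel occupancy ratio, average intensity) can be sketched as below. The 308-bin VFH descriptor requires a point cloud library such as PCL and is omitted; the array layout, function name, and voxel size are assumptions:

```python
# Global features for one segmented pole-like object (sketch, VFH omitted).
import numpy as np

def global_features(points, intensity, voxel_size=0.1):
    """points: (N, 3) array; intensity: (N,) array."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    bbox_dims = maxs - mins                      # outer bounding-box size
    # voxel occupancy ratio: occupied voxels / voxels in the bounding box
    idx = np.floor((points - mins) / voxel_size).astype(int)
    occupied = len({tuple(i) for i in idx})
    grid = np.prod(np.maximum(np.ceil(bbox_dims / voxel_size), 1))
    voxel_ratio = occupied / grid
    return np.concatenate([bbox_dims, [voxel_ratio, intensity.mean()]])

pts = np.array([[0, 0, 0], [0, 0, 1.0], [0, 0, 2.0], [0.05, 0, 3.0]])
feat = global_features(pts, np.array([10.0, 12.0, 11.0, 9.0]))
print(feat.shape)  # (5,): 3 bbox dims + voxel ratio + mean intensity
```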
9. The method for extracting and multi-scale identifying a rod-shaped object of a road point cloud as claimed in claim 6, wherein in the step (43), the identification results at the different scales are fused according to their identification precision, and the better parts of the two identification processes are merged, thereby improving on the identification accuracy attainable at any single scale.
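One plausible reading of the fusion in claim 9 is a per-object confidence vote between the two scales; the claim only states that the better parts are merged, so the class-probability rule below is an assumption:

```python
# Fuse local-scale and global-scale classifier outputs (sketch).
import numpy as np

def fuse(prob_local, prob_global):
    """prob_local, prob_global: (N, C) class-probability arrays from the
    local-scale and global-scale random forests.  Per object, keep the
    label from whichever classifier is more confident."""
    conf_l = prob_local.max(axis=1)
    conf_g = prob_global.max(axis=1)
    labels_l = prob_local.argmax(axis=1)
    labels_g = prob_global.argmax(axis=1)
    return np.where(conf_l >= conf_g, labels_l, labels_g)

pl = np.array([[0.9, 0.1], [0.4, 0.6]])   # local scale: confident on object 0
pg = np.array([[0.6, 0.4], [0.1, 0.9]])   # global scale: confident on object 1
print(fuse(pl, pg))  # [0 1]
```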
CN202111113789.8A 2021-09-23 2021-09-23 Road point cloud rod extraction and multi-scale identification method Pending CN113920360A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111113789.8A CN113920360A (en) 2021-09-23 2021-09-23 Road point cloud rod extraction and multi-scale identification method


Publications (1)

Publication Number Publication Date
CN113920360A true CN113920360A (en) 2022-01-11

Family

ID=79235837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111113789.8A Pending CN113920360A (en) 2021-09-23 2021-09-23 Road point cloud rod extraction and multi-scale identification method

Country Status (1)

Country Link
CN (1) CN113920360A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446343A (en) * 2020-12-07 2021-03-05 苏州工业园区测绘地理信息有限公司 Vehicle-mounted point cloud road rod-shaped object machine learning automatic extraction method integrating multi-scale features
CN112446343B (en) * 2020-12-07 2024-03-15 园测信息科技股份有限公司 Vehicle-mounted point cloud road shaft-shaped object machine learning automatic extraction method integrating multi-scale features
WO2023134142A1 (en) * 2022-01-13 2023-07-20 南京邮电大学 Multi-scale point cloud classification method and system
CN114399762A (en) * 2022-03-23 2022-04-26 成都奥伦达科技有限公司 Road scene point cloud classification method and storage medium
CN116310849A (en) * 2023-05-22 2023-06-23 深圳大学 Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics
CN116310849B (en) * 2023-05-22 2023-09-19 深圳大学 Tree point cloud monomerization extraction method based on three-dimensional morphological characteristics

Similar Documents

Publication Publication Date Title
CN113920360A (en) Road point cloud rod extraction and multi-scale identification method
CN106022381B (en) Automatic extraction method of street lamp pole based on vehicle-mounted laser scanning point cloud
CN107292276B (en) Vehicle-mounted point cloud clustering method and system
CN106709946B (en) LiDAR point cloud-based automatic multi-split conductor extraction and fine modeling method
CN112070769B (en) Layered point cloud segmentation method based on DBSCAN
CN106529431B (en) Road bank point based on Vehicle-borne Laser Scanning data automatically extracts and vectorization method
CN110210431B (en) Point cloud semantic labeling and optimization-based point cloud classification method
CN106528662A (en) Quick retrieval method and system of vehicle image on the basis of feature geometric constraint
CN112419505B (en) Automatic extraction method for vehicle-mounted point cloud road shaft by combining semantic rules and model matching
CN110047036B (en) Polar grid-based ground laser scanning data building facade extraction method
CN106874421A (en) Image search method based on self adaptation rectangular window
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
WO2023060632A1 (en) Street view ground object multi-dimensional extraction method and system based on point cloud data
CN114187310A (en) Large-scale point cloud segmentation method based on octree and PointNet ++ network
CN114387288A (en) Single standing tree three-dimensional information extraction method based on vehicle-mounted laser radar point cloud data
CN114004938A (en) Urban scene reconstruction method and device based on mass data
CN110210415A (en) Vehicle-mounted laser point cloud roadmarking recognition methods based on graph structure
CN113345094A (en) Electric power corridor safety distance analysis method and system based on three-dimensional point cloud
CN110348478B (en) Method for extracting trees in outdoor point cloud scene based on shape classification and combination
CN115063555A (en) Method for extracting vehicle-mounted LiDAR point cloud street tree growing in Gaussian distribution area
CN116258857A (en) Outdoor tree-oriented laser point cloud segmentation and extraction method
Xu et al. Instance segmentation of trees in urban areas from MLS point clouds using supervoxel contexts and graph-based optimization
CN111861946B (en) Adaptive multi-scale vehicle-mounted laser radar dense point cloud data filtering method
CN112200248B (en) Point cloud semantic segmentation method, system and storage medium based on DBSCAN clustering under urban road environment
CN113724400A (en) Oblique photography-oriented multi-attribute fusion building point cloud extraction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination