CN106780524B - Automatic extraction method for three-dimensional point cloud road boundary - Google Patents
- Publication number
- CN106780524B CN106780524B CN201610996779.6A CN201610996779A CN106780524B CN 106780524 B CN106780524 B CN 106780524B CN 201610996779 A CN201610996779 A CN 201610996779A CN 106780524 B CN106780524 B CN 106780524B
- Authority
- CN
- China
- Prior art keywords
- point
- points
- road boundary
- hyper
- boundary points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30256—Lane; Road marking
Abstract
The invention relates to the field of point cloud processing, and in particular discloses an automatic extraction method for three-dimensional point cloud road boundaries, comprising the following steps: S1, screening seed points from the whole three-dimensional point cloud data set P and performing supervoxel segmentation; S2, extracting boundary points between adjacent non-coplanar supervoxels using an α-shape algorithm; S3, extracting road boundary points using a graph-cut-based energy minimization algorithm; S4, removing outliers based on a Euclidean distance clustering algorithm; and S5, fitting the extracted road boundary points into a smooth curve.
Description
Technical Field
The invention relates to the field of point cloud processing, in particular to a method for automatically extracting a three-dimensional point cloud road boundary.
Background
Roads are a fundamental traffic infrastructure, and their digital management and construction are important for applications such as city planning, traffic management and navigation. As a rapidly developing surveying and mapping technology, vehicle-mounted laser scanning offers, compared with traditional surveying means, fast data acquisition, high data precision, non-contact active measurement and strong real-time performance. It can quickly acquire detailed three-dimensional spatial information of roads and roadside objects while the vehicle travels normally, and has clear advantages for acquiring road information distributed in strips.
Traditional road information acquisition mainly relies on two approaches: manual measurement and digital photogrammetry. Manual measurement can obtain accurate road coordinates and similar information, but it is slow, so the update cycle of road information is long. Digital photogrammetry has gradually matured alongside the development of computing and other technologies, but it is limited by factors such as image resolution, and the road feature information and precision extracted from images still need further improvement. The vehicle-mounted laser scanning system, which integrates a global positioning system, an inertial navigation system, a laser scanner and a CCD camera, has become a new means of acquiring three-dimensional spatial data. Vehicle-mounted laser scanning effectively saves measurement time, improves measurement efficiency, shortens the road information update cycle, avoids exposing surveyors to traffic hazards, and provides strong technical support for the survey and planning of urban space resources.
However, urban environments are usually complex: road accessories are numerous and varied, and mutual occlusion between scanned targets causes data loss, which makes automatic extraction of road boundaries challenging. In addition, varied roadside conditions (parked vehicles, surrounding vegetation, fences, etc.) further increase the difficulty of automatic road boundary extraction. Rapidly and automatically extracting road boundaries from massive point clouds is therefore demanding, but the technology has important economic and application value and is a research hotspot at home and abroad.
At present, most research on vehicle-mounted laser scanning data processing focuses on ground object point cloud classification, building facade information extraction and modeling, and road accessory facility extraction, while research on road boundary information extraction is relatively scarce; the main work can be divided into indirect extraction and direct extraction.
Indirect extraction methods first use point cloud attributes (height, intensity, wavelength, etc.) to generate a depth image, and then apply image processing techniques (cropping, fitting, filtering, etc.) to detect and extract the road boundary. Because the point cloud must be converted into a depth image before extraction, errors are inevitably introduced in the conversion process, and accurate road boundary results are difficult to obtain.
Direct extraction methods use road features (such as planes and curbs) to detect and extract road boundaries. A common approach extracts the road surface with random sample consensus (RANSAC) and then obtains the road boundary by a line-fitting algorithm; another detects curbs with Gaussian filtering or a sliding-window method. Direct extraction methods, however, are greatly limited in the range of scenes they handle. RANSAC-based road surface extraction struggles when the road undulates, and some details of the extracted road surface are lost; Gaussian filtering or sliding-window curb detection is often challenged by irregular road boundaries (such as walls or fences) or surrounding vegetation.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an automatic extraction method for three-dimensional point cloud road boundaries that operates directly on large-scale three-dimensional point clouds, applies to different scenes, computes quickly, is robust, and extracts road boundaries rapidly.
The specific technical scheme is as follows:
a three-dimensional point cloud road boundary automatic extraction method comprises the following steps:
s1, screening seed points from the obtained whole three-dimensional point cloud data set P and performing supervoxel segmentation;
s2, extracting boundary points between adjacent non-coplanar supervoxels using an α-shape algorithm;
s3, extracting road boundary points by using an energy minimization algorithm based on graph cut;
s4, removing outliers based on an Euclidean distance clustering algorithm;
and S5, fitting the extracted road boundary points into a smooth curve.
Preferably, the supervoxel segmentation procedure in step S1 is as follows:
S11, solving the fitting plane Tp(p_i):
For each input point p_i of the entire three-dimensional point cloud data set P, its tangent plane Tp(p_i) can be represented as a doublet formed by its center point o_i and normal vector n_i, i.e. Tp(p_i) = (o_i, n_i).
The distance from any point p in three-dimensional space to Tp(p_i) can be expressed as d(p, Tp(p_i)) = |n_i · (p − o_i)|.
Denoting the set of k nearest neighbors of p_i as Nb_k(p_i), the best-fit plane in the least-squares sense can be obtained by solving (o_i, n_i) = argmin Σ_{p_j ∈ Nb_k(p_i)} (n_i · (p_j − o_i))².
The fitted plane is then optimized using an iteratively reweighted least squares method: solving the weighted least-squares equation (o_i, n_i) = argmin Σ_{p_j ∈ Nb_k(p_i)} w_j (n_i · (p_j − o_i))², where the weight w_j decreases with the residual of p_j, yields the optimized fitting plane Tp(p_i).
This process is repeated on the plane Tp(p_i) until the algorithm converges.
S12, removing non-ground points:
note that the final component tangent plane Tp (p)i) Three eigenvalues of the covariance matrix of the point set of (a) are λ1,λ2And λ3And satisfy lambda1≥λ2≥λ3. Then point piSmoothness of(s) (p)i) Can be expressed as
The following two constraints are used to remove non-ground points:
A. removing points significantly above the road surface (z_i ≥ 5 m), where z_i is the height value of point p_i;
B. removing points whose normal vector makes an angle of more than 22.5° with the Z axis.
S13, calculating the supervoxels f_i:
Sort the point set P_g obtained after removing non-ground points by the smoothness of each point, and select points with high smoothness as seed points. Supervoxels are computed by region growing starting from the seed points. A supervoxel f_i is formally defined as the triplet formed by its point set P_i, center point o_i and normal vector n_i. For each seed point seed_i, let its initial supervoxel f_i have the point set {p_i}, with center point and normal vector Tp(p_i).o_i and Tp(p_i).n_i respectively. Region growing is then performed on f_i in breadth-first order. A candidate point p_j is added to the point set of f_i if (1) the distance from p_j to p_i is less than a threshold R_seed; (2) the angle between the vectors Tp(p_j).n_i and Tp(p_i).n_i is less than 22.5°; and (3) the distance from p_j to Tp(p_i) is less than a threshold e. When f_i can no longer expand, a plane is fitted to f_i.P_i by least squares, and f_i.n_i is updated to the normal vector of the fitted plane. On the basis of these initial facets, points are assigned to supervoxels by local K-means clustering, ensuring that the distance from each point to the supervoxel it belongs to is smaller than its distance to any other supervoxel.
The distance function here combines D_s, D_n and D_i — the Euclidean distance, normal vector distance and intensity distance respectively — weighted by the corresponding weights ω_s, ω_n and ω_i.
Preferably, the specific steps of extracting boundary points between adjacent non-coplanar supervoxels using the α-shape algorithm in step S2 are as follows:
After the point cloud is segmented into supervoxels, the α-shape algorithm can be used to extract the boundary points of each supervoxel, while the boundary points between two mutually coplanar supervoxels are removed: if the angle between the normal vectors of two supervoxels is less than 22.5°, the boundary points between them are deleted. The resulting boundary point set P_b includes road boundary points and non-road boundary points.
Preferably, step S3 uses an energy minimization algorithm based on graph cut to extract road boundary points, as follows:
and providing vehicle running track line data by a vehicle-mounted laser scanning system, and taking the track line data as an initial observation model of a graph cutting algorithm. The energy formula is defined as:
E(f)=Edata(f)+λ·Esmooth(f)
where P_b is the set of boundary points extracted in step S2; n is the potential (size) of the point set of the supervoxel to which p_i belongs; Δd_j is the distance from point p_j to the straight line L_{p_i}; Δd_i is the average distance from all points in the neighborhood of p_i to the line L_{p_i}; and σ1 is the average of these distances over all points. The line L_{p_i} is defined to pass through point p_i with the same direction as the trajectory line nearest to p_i.
(x_i, y_i, z_i) and (x_j, y_j, z_j) are the three-dimensional coordinates of points p_i and p_j respectively, and ‖p_i − p_j‖ is their Euclidean distance. The smooth term indicates that if points p_i and p_j are assigned consistent labels the cost is zero; otherwise the cost is B{p_i, p_j}.
Here σ2 is the spatial resolution of the point set P_b. Minimizing the above energy formula with the graph cut algorithm divides the boundary points into two classes: road boundary points and non-road boundary points.
Compared with the prior art, the scheme of the invention has the following advantages:
(1) The method operates directly on large-scale three-dimensional point clouds and provides a fast, effective and automatic solution for extracting and locating road boundaries. Very few parameters need to be set manually, reducing subjective human intervention. Compared with the prior art, the supervoxel segmentation and graph-cut-based energy minimization still extract road boundaries effectively in complex urban environments; the combined use of vehicle-mounted trajectory data in the computation overcomes occlusion and uneven density distribution in the point cloud data, making the result stable and robust, generalizing to different scenes, and easing practical application.
(2) The method fully exploits the basic attributes of the point cloud (spatial distance, geometric properties and intensity information) to perform supervoxel segmentation, and removes non-ground points to improve subsequent computational efficiency. Because seed points are ranked and selected preferentially, boundary information is well preserved in the supervoxel segmentation result, improving the robustness of the subsequent road boundary extraction algorithm.
(3) The method introduces, for the first time, a graph-cut-based energy minimization algorithm for this task: a graph cut model is built from the trajectory line data provided by the vehicle-mounted system combined with intrinsic characteristics of the road boundary, extracting the road boundary effectively and quickly.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 illustrates raw point cloud data according to an embodiment of the present invention;
FIG. 3 shows the result after processing.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Examples
The specific implementation of the road boundary extraction method based on vehicle-mounted laser scanning point cloud data provided by the invention is as follows (the overall flow is shown in FIG. 1):
S1, screening seed points from the obtained whole three-dimensional point cloud data set P and performing supervoxel segmentation (the original point cloud data of this embodiment is shown in FIG. 2);
supervoxel segmentation gathers adjacent points with consistent properties into super-points, reducing the complexity of data processing;
S11, solving the fitting plane Tp(p_i):
For each input point p_i of the entire three-dimensional point cloud data set P, its tangent plane Tp(p_i) can be represented as a doublet formed by its center point o_i and normal vector n_i, i.e. Tp(p_i) = (o_i, n_i).
The distance from any point p in three-dimensional space to Tp(p_i) can be expressed as d(p, Tp(p_i)) = |n_i · (p − o_i)|.
Denoting the set of k nearest neighbors of p_i as Nb_k(p_i), the best-fit plane in the least-squares sense can be obtained by solving (o_i, n_i) = argmin Σ_{p_j ∈ Nb_k(p_i)} (n_i · (p_j − o_i))².
The fitted plane is then optimized using an iteratively reweighted least squares method: solving the weighted least-squares equation (o_i, n_i) = argmin Σ_{p_j ∈ Nb_k(p_i)} w_j (n_i · (p_j − o_i))², where the weight w_j decreases with the residual of p_j, yields the optimized fitting plane Tp(p_i).
This process is repeated on the plane Tp(p_i) until the algorithm converges.
S12, removing non-ground points:
note that the final component tangent plane Tp (p)i) Three eigenvalues of the covariance matrix of the point set of (a) are λ1,λ2And λ3And satisfy lambda1≥λ2≥λ3. Then point piSmoothness of(s) (p)i) Can be expressed as
The following two constraints are used to remove non-ground points:
A. removing points significantly above the road surface (z_i ≥ 5 m), where z_i is the height value of point p_i;
B. removing points whose normal vector makes an angle of more than 22.5° with the Z axis.
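The two constraints above amount to a simple filter. A minimal numpy sketch, assuming per-point normals have already been estimated (the 5 m and 22.5° thresholds are the patent's values):

```python
import numpy as np

def remove_non_ground(points, normals, z_max=5.0, angle_max_deg=22.5):
    """Keep candidate ground points: height below z_max and normal
    within angle_max_deg of the vertical (Z) axis."""
    points = np.asarray(points, float)
    normals = np.asarray(normals, float)
    z = points[:, 2]
    # cosine of the angle between the normal and the Z axis;
    # abs() so flipped normals are treated the same
    cos_angle = np.abs(normals[:, 2]) / np.linalg.norm(normals, axis=1)
    keep = (z < z_max) & (cos_angle >= np.cos(np.radians(angle_max_deg)))
    return points[keep], normals[keep]
```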
S13, calculating the supervoxels f_i:
Sort the point set P_g obtained after removing non-ground points by the smoothness of each point, and select points with high smoothness as seed points. Supervoxels are computed by region growing starting from the seed points. A supervoxel f_i is formally defined as the triplet formed by its point set P_i, center point o_i and normal vector n_i. For each seed point seed_i, let its initial supervoxel f_i have the point set {p_i}, with center point and normal vector Tp(p_i).o_i and Tp(p_i).n_i respectively. Region growing is then performed on f_i in breadth-first order. A candidate point p_j is added to the point set of f_i if (1) the distance from p_j to p_i is less than a threshold R_seed; (2) the angle between the vectors Tp(p_j).n_i and Tp(p_i).n_i is less than 22.5°; and (3) the distance from p_j to Tp(p_i) is less than a threshold e. When f_i can no longer expand, a plane is fitted to f_i.P_i by least squares, and f_i.n_i is updated to the normal vector of the fitted plane. On the basis of these initial facets, points are assigned to supervoxels by local K-means clustering, ensuring that the distance from each point to the supervoxel it belongs to is smaller than its distance to any other supervoxel.
The distance function here combines D_s, D_n and D_i — the Euclidean distance, normal vector distance and intensity distance respectively — weighted by the corresponding weights ω_s, ω_n and ω_i.
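The point-to-supervoxel distance can be illustrated as a weighted combination of the three component distances. The exact combination and the weights ω_s, ω_n, ω_i are not given numerically in the text, so this sketch uses a linear mix with illustrative weights:

```python
import numpy as np

def supervoxel_distance(p, sv, w_s=0.4, w_n=0.4, w_i=0.2):
    """Combined distance from point p = (xyz, unit normal, intensity)
    to supervoxel sv, mixing Euclidean, normal-vector and intensity
    distances.  The linear mix and the weights are illustrative, not
    values from the patent."""
    xyz, n_p, i_p = p
    D_s = np.linalg.norm(xyz - sv["center"])       # Euclidean distance
    D_n = 1.0 - abs(np.dot(n_p, sv["normal"]))     # normal deviation
    D_i = abs(i_p - sv["intensity"])               # intensity difference
    return w_s * D_s + w_n * D_n + w_i * D_i
```

A point identical to the supervoxel's center, normal and intensity has distance zero; differing points score higher, which is what the local K-means assignment compares.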
S2, extracting boundary points between adjacent non-coplanar hyper-voxels by using a α -shape algorithm;
after the point cloud is segmented into superpixels, for each superpixel, an α -shape algorithm can be used to extract boundary points, and meanwhile, the boundary points between two superpixels which are coplanar with each other are removed, namely, if the included angle of normal vectors of the two superpixels is less than 22.5 degrees, the boundary points between the two superpixels are deletedbIncluding road boundary points and non-road boundary points.
The α-shape algorithm can be viewed as an extension of the convex hull: by adjusting the α parameter, it computes a finer enclosing shape that roughly describes the outline of a group of points in a plane or in space. Specifically, a circle of a fixed radius is rolled around the point set; whenever the circle passes through a pair of points and contains no other point inside it, those two points are boundary points of the shape.
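The rolling-circle criterion can be checked directly for a pair of points. A brute-force 2D sketch (not an efficient α-shape implementation): a pair is a boundary pair if some circle of radius α through both points contains no other point of the set.

```python
import numpy as np

def is_alpha_boundary_pair(p, q, points, alpha):
    """Check whether p and q form an edge of the 2D alpha-shape:
    some circle of radius alpha passing through both contains no other
    point.  Brute force over both candidate circle centers."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = np.linalg.norm(q - p)
    if d > 2 * alpha or d == 0:
        return False                         # no radius-alpha circle fits
    mid = (p + q) / 2
    h = np.sqrt(alpha**2 - (d / 2)**2)       # offset from midpoint to center
    perp = np.array([-(q - p)[1], (q - p)[0]]) / d
    for center in (mid + h * perp, mid - h * perp):
        inside = [np.linalg.norm(np.asarray(r, float) - center) < alpha - 1e-9
                  for r in points
                  if not (np.allclose(r, p) or np.allclose(r, q))]
        if not any(inside):
            return True                      # found an empty circle
    return False
```

For the corners of a unit square with its center point, adjacent corners form boundary pairs at a suitable α, while the diagonal does not.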
S3, extracting road boundary points by using an energy minimization algorithm based on graph cut;
Road boundary points are next extracted using a graph-cut-based energy minimization algorithm. The vehicle-mounted laser scanning system provides the vehicle trajectory line data, which is observed to be essentially consistent with the position and direction of the measured road. The trajectory data is therefore used as the initial observation model for the graph cut algorithm. The energy formula is defined as:
E(f)=Edata(f)+λ·Esmooth(f)
where P_b is the set of boundary points extracted in step S2; n is the potential (size) of the point set of the supervoxel to which p_i belongs; Δd_j is the distance from point p_j to the straight line L_{p_i}; Δd_i is the average distance from all points in the neighborhood of p_i to the line L_{p_i}; and σ1 is the average of these distances over all points. The line L_{p_i} is defined to pass through point p_i with the same direction as the trajectory line nearest to p_i.
(x_i, y_i, z_i) and (x_j, y_j, z_j) are the three-dimensional coordinates of points p_i and p_j respectively, and ‖p_i − p_j‖ is their Euclidean distance. The smooth term indicates that if points p_i and p_j are assigned consistent labels the cost is zero; otherwise the cost is B{p_i, p_j}.
Here σ2 is the spatial resolution of the point set P_b. Minimizing the above energy formula with the graph cut algorithm divides the boundary points into two classes: road boundary points and non-road boundary points.
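The energy E(f) = E_data(f) + λ·E_smooth(f) can be evaluated for a candidate labeling to see how the two terms trade off. This is a deliberately simplified illustration, not the patent's exact terms: the data term charges a "road boundary" label by how far the point's distance to the trajectory line deviates from an expected offset d_expected, charges a "non-road" label a flat cost tau, and the smooth term is a Potts penalty weighted by point proximity; d_expected, tau and sigma are illustrative parameters (λ = 32 follows the text).

```python
import numpy as np

def labeling_energy(points, labels, traj_pt, traj_dir,
                    d_expected=1.0, tau=0.5, lam=32.0, sigma=0.5):
    """Evaluate E(f) = E_data(f) + lam * E_smooth(f) for a candidate
    boundary labeling (1 = road boundary, 0 = non-road boundary) of 2D
    points.  Simplified sketch, not the patent's exact formulation."""
    pts = np.asarray(points, float)
    lab = np.asarray(labels)
    u = np.asarray(traj_dir, float)
    u = u / np.linalg.norm(u)
    v = pts - np.asarray(traj_pt, float)
    d = np.abs(v[:, 0] * u[1] - v[:, 1] * u[0])    # distance to trajectory line
    # data term: label 1 pays deviation from the expected road offset,
    # label 0 pays a flat cost tau
    e_data = np.where(lab == 1, np.abs(d - d_expected), tau).sum()
    # smooth term: Potts penalty between consecutive points, stronger
    # when the points are close together
    disagree = lab[:-1] != lab[1:]
    gap = np.linalg.norm(pts[:-1] - pts[1:], axis=1)
    e_smooth = (disagree * np.exp(-gap**2 / (2 * sigma**2))).sum()
    return e_data + lam * e_smooth
```

A graph cut (min-cut) solver would search over all labelings for the minimum of this energy; here one can at least verify that a plausible labeling scores lower than an implausible one.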
S4, removing outliers based on an Euclidean distance clustering algorithm;
and clustering the obtained road boundary points by using an Euclidean distance clustering algorithm, and deleting the category with small point number, namely deleting the category if the number of the points contained in one category is less than 5 after clustering.
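The Euclidean clustering step can be sketched as a breadth-first grouping of points within a fixed radius, discarding clusters below the minimum size (5 points in the text). A k-d tree would replace the brute-force neighbor scan in practice; the radius value here is illustrative.

```python
import numpy as np
from collections import deque

def euclidean_clusters(points, radius=0.5, min_size=5):
    """Group points whose mutual distances chain within radius, then
    drop clusters smaller than min_size (outlier removal)."""
    pts = np.asarray(points, float)
    unvisited = set(range(len(pts)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            near = [j for j in unvisited
                    if np.linalg.norm(pts[i] - pts[j]) <= radius]
            for j in near:
                unvisited.discard(j)     # claim before queueing
            queue.extend(near)
            cluster.extend(near)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```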
And S5, fitting the extracted road boundary points into a smooth curve.
The remaining classes are each fitted to a smooth curve, yielding the road boundaries. Cubic spline interpolation is used here to fit the lines.
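The final fitting step uses cubic spline interpolation; a self-contained natural cubic spline (second derivative zero at the endpoints) can be written with one linear solve. In practice a library routine such as scipy's CubicSpline would be used; this sketch avoids the dependency.

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Natural cubic spline interpolation: returns a callable s with
    s(x[i]) == y[i], C2-continuous, second derivative zero at the ends."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    # solve for the second derivatives M at the knots
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0               # natural boundary: M0 = Mn = 0
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    M = np.linalg.solve(A, rhs)

    def s(t):
        i = int(np.clip(np.searchsorted(x, t) - 1, 0, n - 1))
        dx = t - x[i]
        b = (y[i + 1] - y[i]) / h[i] - h[i] * (2 * M[i] + M[i + 1]) / 6
        return (y[i] + b * dx + M[i] / 2 * dx**2
                + (M[i + 1] - M[i]) / (6 * h[i]) * dx**3)
    return s
```

For a boundary polyline, x would be an arc-length parameter and the same fit applied to each coordinate.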
FIG. 3 shows the extracted road after processing.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (5)
1. An automatic extraction method for a three-dimensional point cloud road boundary, characterized by comprising the following steps:
s1, screening seed points from the obtained whole three-dimensional point cloud data set P and performing supervoxel segmentation;
s2, extracting boundary points between adjacent non-coplanar supervoxels using an α-shape algorithm;
s3, extracting road boundary points by using an energy minimization algorithm based on graph cut;
s4, removing outliers based on an Euclidean distance clustering algorithm;
s5, fitting the road boundary points with the outliers removed into a smooth curve;
the supervoxel segmentation in step S1 includes the following steps:
s11, solving the fitting plane Tp(P_i), where P_i is each input point of the entire three-dimensional point cloud data set P;
s12, removing non-ground points;
s13, calculating the supervoxels f_i;
and solving the fitting plane Tp(P_i) comprises the following specific steps:
for each input point P_i of the entire three-dimensional point cloud data set P, its tangent plane Tp(P_i) can be represented as a doublet formed by its center point o_i and normal vector n_i, namely Tp(P_i) = (o_i, n_i);
the distance from any point p in three-dimensional space to Tp(P_i) can be expressed as d(p, Tp(P_i)) = |n_i · (p − o_i)|;
denoting the set of k nearest neighbors of P_i as Nb_k(P_i), the best-fit plane in the least-squares sense can be obtained by solving (o_i, n_i) = argmin Σ_{P_j ∈ Nb_k(P_i)} (n_i · (P_j − o_i))²;
the fitted plane is then optimized using an iteratively reweighted least squares method: the optimized fitting plane Tp(P_i) is obtained by solving the weighted least-squares equation (o_i, n_i) = argmin Σ_{P_j ∈ Nb_k(P_i)} w_j (n_i · (P_j − o_i))², where the weight w_j decreases with the residual of P_j;
the above solving process is repeated on the optimized fitting plane Tp(P_i) until the algorithm for solving the weighted least-squares equation converges.
2. The automatic extraction method for a three-dimensional point cloud road boundary according to claim 1, characterized in that the step S2 of extracting boundary points between adjacent non-coplanar supervoxels using the α-shape algorithm comprises the following specific steps:
after the point cloud is segmented into supervoxels, the α-shape algorithm may be used to extract the boundary points of each supervoxel, while removing the boundary points between two mutually coplanar supervoxels; that is, if the angle between the normal vectors of two supervoxels is less than 22.5°, the boundary points between the two supervoxels are deleted, and the boundary point set P_b at this time includes road boundary points and non-road boundary points.
3. The method for automatically extracting the road boundary by the three-dimensional point cloud according to claim 1, wherein the method comprises the following steps:
s12, removing the non-ground points, the concrete steps are as follows:
denoting the three eigenvalues of the covariance matrix of the point set that finally forms the fitting plane Tp(P_i) as λ1, λ2 and λ3, with λ1 ≥ λ2 ≥ λ3, the smoothness S(P_i) of point P_i can be expressed in terms of these eigenvalues: the smaller λ3 is relative to λ1 + λ2 + λ3, the flatter, and hence smoother, the neighborhood of P_i;
the following two constraints are used to remove non-ground points:
A. removing points significantly above the road surface, i.e. points with z_i ≥ 5 m, where z_i is the height value of point P_i;
B. and removing the point of which the included angle between the normal vector and the Z axis is more than 22.5 degrees.
4. The method for automatically extracting the road boundary by the three-dimensional point cloud according to claim 2, wherein the method comprises the following steps:
s13, calculating the supervoxels f_i, comprising the following specific steps:
sorting the point set P_g obtained after removing non-ground points by the smoothness of each point, first selecting the points whose smoothness exceeds a preset value as seed points, and computing supervoxels by region growing starting from the seed points; a supervoxel f_i is formally defined as the triplet formed by its point set P_i, center point o_i and normal vector n_i; for each seed point seed_i, its initial supervoxel f_i has the point set {P_i}, with center point and normal vector Tp(P_i).o_i and Tp(P_i).n_i respectively; then region growing is performed on f_i in breadth-first order: each candidate point P_j is added to the point set of f_i if it simultaneously satisfies (1) the distance from P_j to P_i is less than a threshold R_seed, (2) the angle between the vectors Tp(P_j).n_i and Tp(P_i).n_i is less than 22.5°, and (3) the distance from P_j to Tp(P_i) is less than a threshold e; when f_i can no longer expand, a plane is fitted by least squares to the term f_i.P_i of the triplet f_i, and the term f_i.n_i of the triplet is updated to the normal vector of the fitted plane; on the basis of all the computed triplets f_i, points are assigned to supervoxels by local K-means clustering, ensuring that the distance from each point to the supervoxel it belongs to is smaller than its distance to any other supervoxel, wherein the distance function is defined as:
wherein D_s, D_n and D_i are the Euclidean distance, normal vector distance and intensity distance respectively, and ω_s, ω_n and ω_i are the weights corresponding to the three distance values.
5. The automatic extraction method for a three-dimensional point cloud road boundary according to claim 4, characterized in that step S3 extracts road boundary points using a graph-cut-based energy minimization algorithm, as follows:
taking the vehicle trajectory line data provided by the vehicle-mounted laser scanning system as the initial observation model, the boundary points are divided by a graph cut algorithm into the two categories {"road boundary points", "non-road boundary points"}; that is, the graph cut algorithm solves for a classification function f that assigns to each point a label f_p ∈ L, where L is the category set {"road boundary points", "non-road boundary points"}, such that the cost paid is minimal, i.e. the energy formula is minimized,
the energy formula is defined here as:
E(f)=Edata(f)+λ·Esmooth(f)
here E_data(f), the data term in the energy formula, is the error between the classification result and the initial observation model, i.e. the cost of assigning a label to each point during classification; E_smooth(f), the smooth term in the energy formula, is the degree of non-smoothness of the classification function f, specifically the cost of inconsistent classification results between each point and its neighboring points during classification; and λ is the weight of the smooth term E_smooth(f), set here empirically to 32; wherein,
where P_b is the set of boundary points extracted in step S2; n is the potential of the point set of the supervoxel to which P_i belongs; Δd_j is the distance from point P_j to the straight line L_{P_i}; Δd_i is the average distance from all points in the neighborhood of P_i to the line L_{P_i}; σ1 is the average of these distances over all points in the point set P_g; and the line L_{P_i} is defined to pass through point P_i with its direction vector in the same direction as the trajectory line nearest to P_i,
f_{P_i} is the label of P_i and f_{P_j} is the label of P_j; (x_i, y_i, z_i) and (x_j, y_j, z_j) are the three-dimensional coordinates of points P_i and P_j respectively, and ‖P_i − P_j‖ is their Euclidean distance; the smooth term indicates that if points P_i and P_j are assigned consistent labels the cost is zero, and otherwise the cost is B{P_i, P_j},
where σ2 is the spatial resolution of the point set P_b; finding the minimum of the above energy formula with the graph cut algorithm divides the boundary points into two classes, one being road boundary points and the other non-road boundary points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610996779.6A CN106780524B (en) | 2016-11-11 | 2016-11-11 | Automatic extraction method for three-dimensional point cloud road boundary |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610996779.6A CN106780524B (en) | 2016-11-11 | 2016-11-11 | Automatic extraction method for three-dimensional point cloud road boundary |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106780524A CN106780524A (en) | 2017-05-31 |
CN106780524B true CN106780524B (en) | 2020-03-06 |
Family
ID=58973201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610996779.6A Active CN106780524B (en) | 2016-11-11 | 2016-11-11 | Automatic extraction method for three-dimensional point cloud road boundary |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106780524B (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107390152B (en) * | 2017-07-14 | 2020-02-18 | 歌尔科技有限公司 | Calibration method and device for magnetometer and electronic equipment |
CN107516098B (en) * | 2017-07-30 | 2021-08-10 | 华南理工大学 | Target contour three-dimensional information extraction method based on edge curvature angle |
CN107657659A (en) * | 2017-08-14 | 2018-02-02 | 南京航空航天大学 | The Manhattan construction method for automatic modeling of scanning three-dimensional point cloud is fitted based on cuboid |
CN110044371A (en) * | 2018-01-16 | 2019-07-23 | 华为技术有限公司 | A kind of method and vehicle locating device of vehicle location |
CN110068834B (en) * | 2018-01-24 | 2023-04-07 | 北京京东尚科信息技术有限公司 | Road edge detection method and device |
CN108831146A (en) * | 2018-04-27 | 2018-11-16 | 厦门维斯云景信息科技有限公司 | Generate semi-automatic cloud method of three-dimensional high-definition mileage chart intersection lane |
CN108615242B (en) * | 2018-05-04 | 2021-07-27 | 重庆邮电大学 | High-speed guardrail tracking method |
CN109241978B (en) * | 2018-08-23 | 2021-09-07 | 中科光绘(上海)科技有限公司 | Method for rapidly extracting plane piece in foundation three-dimensional laser point cloud |
CN109299739A (en) * | 2018-09-26 | 2019-02-01 | 速度时空信息科技股份有限公司 | The method that vehicle-mounted laser point cloud is filtered based on the surface fitting of normal vector |
CN109635641B (en) * | 2018-11-01 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method, device and equipment for determining road boundary line and storage medium |
CN109584294B (en) * | 2018-11-23 | 2020-08-28 | 武汉中海庭数据技术有限公司 | Pavement point cloud extraction method and device based on laser point cloud |
CN109815943B (en) * | 2019-03-18 | 2021-02-09 | 北京石油化工学院 | Hazardous chemical storage stacking picture sample generation method and system |
CN110046661A (en) * | 2019-04-10 | 2019-07-23 | 武汉大学 | A kind of vehicle-mounted cloud clustering method cutting algorithm based on contextual feature and figure |
CN110060338B (en) * | 2019-04-25 | 2020-11-10 | 重庆大学 | Prefabricated part point cloud identification method based on BIM model |
CN110243369A (en) * | 2019-04-30 | 2019-09-17 | 北京云迹科技有限公司 | Drawing method and device are remotely built suitable for robot localization |
CN110249741B (en) * | 2019-06-05 | 2020-07-28 | 中国农业大学 | Potato seed potato dicing method based on point cloud model |
CN110222642B (en) * | 2019-06-06 | 2021-07-16 | 上海黑塞智能科技有限公司 | Plane building component point cloud contour extraction method based on global graph clustering |
CN110349252B (en) * | 2019-06-30 | 2020-12-08 | 华中科技大学 | Method for constructing actual machining curve of small-curvature part based on point cloud boundary |
CN110458083B (en) * | 2019-08-05 | 2022-03-25 | 武汉中海庭数据技术有限公司 | Lane line vectorization method, device and storage medium |
CN110673107B (en) * | 2019-08-09 | 2022-03-08 | 北京智行者科技有限公司 | Road edge detection method and device based on multi-line laser radar |
CN110967024A (en) * | 2019-12-23 | 2020-04-07 | 苏州智加科技有限公司 | Method, device, equipment and storage medium for detecting travelable area |
CN111563457A (en) * | 2019-12-31 | 2020-08-21 | 成都理工大学 | Road scene segmentation method for unmanned automobile |
CN111340935A (en) * | 2020-01-23 | 2020-06-26 | 北京市商汤科技开发有限公司 | Point cloud data processing method, intelligent driving method, related device and electronic equipment |
CN111487646A (en) * | 2020-03-31 | 2020-08-04 | 安徽农业大学 | Online detection method for corn plant morphology |
CN111553946B (en) * | 2020-04-17 | 2023-04-18 | 中联重科股份有限公司 | Method and device for removing ground point cloud and method and device for detecting obstacle |
CN111768421A (en) * | 2020-07-03 | 2020-10-13 | 福州大学 | Edge-aware semi-automatic point cloud target segmentation method |
CN111783721B (en) * | 2020-07-13 | 2021-07-20 | 湖北亿咖通科技有限公司 | Lane line extraction method of laser point cloud and electronic equipment |
CN112561808B (en) * | 2020-11-27 | 2023-07-18 | 中央财经大学 | Road boundary line restoration method based on vehicle-mounted laser point cloud and satellite image |
CN112560747B (en) * | 2020-12-23 | 2024-08-20 | 园测信息科技股份有限公司 | Lane boundary interactive extraction method based on vehicle-mounted point cloud data |
CN112731338A (en) * | 2020-12-30 | 2021-04-30 | 潍柴动力股份有限公司 | Storage logistics AGV trolley obstacle detection method, device, equipment and medium |
CN112733696B (en) * | 2021-01-04 | 2023-08-15 | 长安大学 | Vehicle-mounted LIDAR road edge extraction method based on multi-model fitting |
CN113409332B (en) * | 2021-06-11 | 2022-05-27 | 电子科技大学 | Building plane segmentation method based on three-dimensional point cloud |
CN113378800B (en) * | 2021-07-27 | 2021-11-09 | 武汉市测绘研究院 | Automatic classification and vectorization method for road sign lines based on vehicle-mounted three-dimensional point cloud |
CN114972377B (en) * | 2022-05-24 | 2024-07-19 | 厦门大学 | 3D point cloud segmentation method and device based on mobile least square method and super-voxel |
CN117576144B (en) * | 2024-01-15 | 2024-03-29 | 湖北工业大学 | Laser point cloud power line extraction method and device and electronic equipment |
CN117726775A (en) * | 2024-02-07 | 2024-03-19 | 法奥意威(苏州)机器人系统有限公司 | Point cloud preprocessing method and device based on grid downsampling |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559737A (en) * | 2013-11-12 | 2014-02-05 | 中国科学院自动化研究所 | Object panorama modeling method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2532948B (en) * | 2014-12-02 | 2021-04-14 | Vivo Mobile Communication Co Ltd | Object Recognition in a 3D scene |
- 2016-11-11 CN CN201610996779.6A patent/CN106780524B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103559737A (en) * | 2013-11-12 | 2014-02-05 | 中国科学院自动化研究所 | Object panorama modeling method |
Non-Patent Citations (3)
Title |
---|
3D Road Surface Extraction from Mobile Laser Scanning Point Clouds; Dawei Zai et al.; 2016 IEEE International Geoscience and Remote Sensing Symposium; 2016-07-15; pp. 1595-1598 * |
A parallel point cloud clustering algorithm for subset segmentation and outlier detection; Christian Teutsch et al.; SPIE Optical Metrology; 2011-12-31; Vol. 8085; pp. 1-6 * |
Robust Piecewise-Planar 3D Reconstruction and Completion from Large-Scale Unstructured Point Data; Anne-Laure Chauve et al.; 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 2010-06-18; pp. 1261-1268 * |
Also Published As
Publication number | Publication date |
---|---|
CN106780524A (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106780524B (en) | Automatic extraction method for three-dimensional point cloud road boundary | |
Vosselman et al. | Recognising structure in laser scanner point clouds | |
Zolanvari et al. | Three-dimensional building façade segmentation and opening area detection from point clouds | |
Lee et al. | Fusion of lidar and imagery for reliable building extraction | |
Cheng et al. | Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs | |
Matei et al. | Building segmentation for densely built urban regions using aerial lidar data | |
Kedzierski et al. | Methods of laser scanning point clouds integration in precise 3D building modelling | |
CN109961440A (en) | A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map | |
CN115564926B (en) | Three-dimensional patch model construction method based on image building structure learning | |
CN111652241B (en) | Building contour extraction method integrating image features and densely matched point cloud features | |
CN107481274B (en) | Robust reconstruction method of three-dimensional crop point cloud | |
Chen et al. | Building reconstruction from LIDAR data and aerial imagery | |
CN113409332B (en) | Building plane segmentation method based on three-dimensional point cloud | |
CN116449384A (en) | Radar inertial tight coupling positioning mapping method based on solid-state laser radar | |
KR101549155B1 (en) | Method of automatic extraction of building boundary from lidar data | |
CN112669333A (en) | Single tree information extraction method | |
JP2023530449A (en) | Systems and methods for air and ground alignment | |
Zeybek et al. | Road surface and inventory extraction from mobile LiDAR point cloud using iterative piecewise linear model | |
Sun et al. | Automated segmentation of LiDAR point clouds for building rooftop extraction | |
Omidalizarandi et al. | Segmentation and classification of point clouds from dense aerial image matching | |
CN112330604A (en) | Method for generating vectorized road model from point cloud data | |
Yazdanpanah et al. | A new statistical method to segment photogrammetry data in order to obtain geological information | |
JP2003141567A (en) | Three-dimensional city model generating device and method of generating three-dimensional city model | |
CN117253205A (en) | Road surface point cloud rapid extraction method based on mobile measurement system | |
Novacheva | Building roof reconstruction from LiDAR data and aerial images through plane extraction and colour edge detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-12-14
Address after: No. 422 Siming South Road, Xiamen, Fujian Province, 361000
Patentee after: XIAMEN University
Patentee after: National University of Defense Technology
Address before: No. 422 Siming South Road, Xiamen, Fujian Province, 361000
Patentee before: XIAMEN University |