CN114119866A - Point cloud data-based urban road intersection visual evaluation method - Google Patents

Point cloud data-based urban road intersection visual evaluation method

Info

Publication number
CN114119866A
CN114119866A (application CN202111340586.2A)
Authority
CN
China
Prior art keywords
points
point cloud
cloud data
dimensional
point
Prior art date
Legal status
Pending
Application number
CN202111340586.2A
Other languages
Chinese (zh)
Inventor
张家钰
程建川
刘佳玲
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202111340586.2A
Publication of CN114119866A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G06T 3/06
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/60 Rotation of a whole image or part thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration by the use of histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual evaluation method for urban road intersections based on point cloud data. The method collects point cloud data of a road intersection and performs road surface identification and segmentation on the data, distinguishing the road surface from other discrete plane points to obtain complete road surface point cloud data. A plane grid lattice is then established: taking the identified road surface points as the interpolation reference, the height of each grid point is obtained with a nearest-neighbour interpolation algorithm, and a digital elevation model is constructed. Finally, a coordinate system is built with the starting point of the driver's sight line as the origin; the three-dimensional coordinate system is converted into a two-dimensional one, the two-dimensional rectangular coordinates are converted into polar coordinates, and the positions of obstacles blocking the driver's sight line are determined.

Description

Point cloud data-based urban road intersection visual evaluation method
Technical Field
The invention relates to the technical field of road three-dimensional sight distance detection, in particular to a point cloud data-based urban road intersection visual evaluation method.
Background
Urban road intersections are road sections where traffic safety accidents frequently occur. In the road design stage, the sight-distance triangle of an intersection is used as an important visibility evaluation index to ensure good intersection visibility. However, the real three-dimensional, real-time intersection environment is more complex than the road considered at the design stage, and the design stage often fails to consider, or insufficiently considers, the variety of potential sight-line obstacles, such as vegetation growth, newly added facilities and debris accumulation. Current means of acquiring road infrastructure data, such as surveying-grade laser radar, unmanned aerial vehicles and communication satellites, offer high data accuracy and large data volume, and have obvious advantages for macro-scale traffic road data acquisition. However, they have clear limitations for micro-scale traffic infrastructure such as intersections, and suffer from drawbacks such as interference with traffic and high equipment cost. Small laser radar is low in cost and simple to operate, but its small 38.4-degree field of view also limits its application in engineering practice. Three-dimensional sight-distance calculation is the basis of visual evaluation of urban road intersections. Domestic road three-dimensional sight-distance calculation methods simplify the three-dimensional road environment to different degrees, and are greatly limited when applied to road intersections with complex traffic flow composition and many roadside obstacles. Many foreign road three-dimensional sight-line methods can only be called 2.5D because of modelling limitations, and cannot handle objects located at the same plan position but at different heights.
The few calculation methods based on full 3D models also suffer from complicated algorithms and long computation times, leaving room for improvement. Existing point cloud data acquisition modes and intersection visibility evaluation procedures have notable shortcomings in economy and efficiency, and their evaluation does not comprehensively consider actual driver conditions and road environment conditions.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a point cloud data-based urban road intersection visual evaluation method that considers parameters such as the height, range and viewing-angle position of the driver's sight line.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, an embodiment of the invention provides a point cloud data-based urban road intersection visual evaluation method, in which the road intersection three-dimensional sight-distance calculation comprises the following steps:
s1, collecting point cloud data of the road intersection;
s2, carrying out pavement identification and segmentation processing on the point cloud data, and distinguishing a pavement part from other discrete plane points to obtain complete pavement point cloud data;
s3, establishing a plane grid lattice, taking the identified road surface points as the interpolation reference, acquiring the height of each grid point with a nearest-neighbour interpolation algorithm, and constructing a digital elevation model;
s4, quantifying the vision concept into a fan shape formed by innumerable rays emitted from the sight origin of the driver, wherein the angle of the fan shape is related to the visual angle range of the driver; converting the spatial three-dimensional coordinate points into two-dimensional coordinate points through a cylindrical perspective projection model, and converting the three-dimensional problem into a two-dimensional problem;
constructing a coordinate system by taking a starting point of the driver sight as an origin, converting a three-dimensional coordinate system into a two-dimensional coordinate system, converting a two-dimensional rectangular coordinate system into a polar coordinate system, and determining the position of an obstacle blocking the driver sight;
and S5, analyzing variable factors which may influence the visible area of the urban road intersection.
Further, in step S1, the process of collecting the point cloud data of the intersection includes the following steps:
s11, fixing the small laser radar on a rotating pan-tilt head placed at the centre of the urban road intersection, and keeping the radar mounting support fixed; acquiring data several times by rotating the head to obtain sub-files of the global intersection point cloud data, where the overlap ratio between adjacent point cloud files is 1/3;
and S12, carrying out data splicing on the obtained point cloud file through corresponding characteristic points to obtain complete point cloud data of the urban road intersection.
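The splicing of sub-scans through corresponding feature points in step S12 amounts to estimating a rigid transform from matched point pairs and applying it to the whole sub-scan. A minimal sketch follows; the patent does not name a registration algorithm, so the Kabsch (SVD) method is used here as an assumed stand-in, and the function names are hypothetical.

```python
import numpy as np

def align_by_feature_points(src_pts, dst_pts):
    """Estimate rotation R and translation t mapping src_pts onto dst_pts
    (Kabsch algorithm), given matched feature points as (n, 3) arrays."""
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def stitch(cloud, R, t):
    """Apply the estimated transform to a whole sub-scan."""
    return cloud @ R.T + t
```

Each rotated acquisition would be aligned to a reference scan with the transform estimated from its shared feature points, then the aligned clouds concatenated into the complete intersection data set.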
Further, in step S2, the process of performing the road surface identification and segmentation processing on the point cloud data includes the following steps:
s21: rasterizing the original three-dimensional point cloud data, so that each grid point of the rasterized data has a corresponding linear index value; data points at the same position of the same matrix cell have the same linear index value, and the linear index value ζ of the data points steps whenever a cell boundary is crossed;
s22: using a height threshold h_t to screen out the points with lower elevations in each columnar unit, and analysing the remaining data points with principal component analysis to obtain the principal direction of the columnar unit;
s23: sorting and detecting road points with a height histogram method, and extracting features from the resulting plane point cloud data; the feature variables comprise the number of points per unit area, and the standard deviation and median of the differences between adjacent items along the x and y directions, which describe the distribution of the point cloud on the x-y plane, yielding an array of observations whose number matches the number of columnar units; dividing the columnar units into three categories of uniform plane points, non-uniform plane points and pseudo plane points with the unsupervised K-Means clustering algorithm; standardizing all variables to eliminate scale differences, so that their intervals fall into [0, 1]; eliminating the non-uniform plane points and pseudo plane points, and merging the uniform plane points into the columnar units after the preliminary filtering to obtain optimized plane point cloud data;
s24: distinguishing the road surface part from other discrete plane points using a supervoxel clustering method to obtain complete road surface point cloud data.
Further, in step S21, the rasterizing process of the original three-dimensional point cloud data includes the following steps:
s211, assume the original three-dimensional point cloud space coordinates are (X, Y, Z)^T and create a temporary array (x, y, z)^T:

x = [X/ε_x], y = [Y/ε_y], z = [Z/ε_z]

where [.] denotes rounding and ε_x, ε_y, ε_z are the grid spacings along the three axes;
s212, take (min x, max y)^T and (max x, min y)^T as the starting point and end point of the linear index calculation respectively; after rasterizing the point cloud data, compute the horizontal distance D_x and the vertical distance D_y between the starting point and the end point, both positive integers:

D_x = max x - min x, D_y = max y - min y
s213, establish a zero-value matrix Ψ and an empty-set cell matrix Ω, both of size (1+D_y)×(1+D_x); compute the horizontal distance d_x and the vertical distance d_y from any grid point to the starting point of the data index; the elements of Ψ and Ω are addressed by row number 1+d_y and column number 1+d_x; the row and column numbers are converted to a linear index ζ by the formula ζ = d_x·(1+D_y) + d_y + 1;
when partitioning the point cloud data, the data are arranged in ascending order of the linear index ζ, and whether a point is a step point is judged from the difference Δζ between adjacent data points: if Δζ is smaller than 1 the point is not a step point, otherwise it is; the point cloud data between a step point ζ_j and the previous step point ζ_{j-1} are stored into the ζ_{j-1}-th cell of Ω, and the ζ_{j-1}-th element of Ψ is assigned the value 1.
Further, in step S23, the height histogram sorting method comprises the following steps:
s231, set the height δ_h of a single bin of the height histogram to 1 m to 4 m, build a frequency histogram along the Z direction, and create an empty cell array Φ;
s232, sort the bins in descending order of frequency, and let F = {F_1, F_2, ..., F_i, ..., F_n | 1 ≤ i ≤ n; i, n ∈ N+} be the height values corresponding to the sorted bins;
s233, initialize the loop with i = 2 and k = 1, and store the point cloud data corresponding to F_{i-1} into Φ{k};
s234, if F_i - F_{i-1} ≤ δ_h, merge the point cloud data corresponding to F_i and F_{i-1} into Φ{k}; otherwise set k = k + 1 and store the point cloud data corresponding to F_i into Φ{k};
s235, set i = i + 1; if i ≤ N_h, return to step S234, otherwise end the loop.
Further, in step S24, the supervoxel clustering method proceeds as follows:
using the index position information, the three-dimensional point cloud data in the corresponding cell matrix are retrieved and spliced to obtain the complete road surface point cloud data in the final road-surface supervoxel clustering result, distinguishing the road surface part from other discrete plane points.
Further, in step S4, the processing procedure of converting the three-dimensional problem into the two-dimensional problem by converting the spatial three-dimensional coordinate points into the two-dimensional coordinate points through the cylindrical perspective projection model is as follows:
the sight-line starting point is set as the origin with coordinates (x_m, y_m, z_m)^T; the advancing direction of the vehicle is set as the Y' axis, the X' axis is parallel to the XOY plane and perpendicular to the Y' axis, and the Z' axis is perpendicular to the X'OY' plane, constructing a three-dimensional space coordinate system;
a corresponding two-dimensional coordinate system u-υ is constructed, whose origin lies on the Y' axis at distance R from the sight-line origin; the υ axis is parallel to the Z' axis, and the u axis is a clockwise arc segment perpendicular to both the υ axis and Y';
in the two spatial coordinate systems, the coordinates of the two-dimensional points on the cylindrical surface are calculated as follows:

(x', y', z')^T = λ·θ·((x, y, z)^T - (x_m, y_m, z_m)^T)

θ = θ_z(α_m)·θ_x(γ_m)

u = R·arctan(x'/y'), υ = R·z'/√(x'^2 + y'^2)
wherein:
(x, y, z)^T represents the coordinates of points around the driver in the geodetic space;
(x_m, y_m, z_m)^T represents the coordinates of the sight-line origin;
(x', y', z')^T represents the coordinates transformed into the X'Y'Z' coordinate system;
λ represents the scale factor between the two reference coordinate systems, taken as 1.0;
θ represents the rotation matrix of the sight-line rigid transformation;
θ_x, θ_y, θ_z represent the rotation matrices around the X, Y and Z axes;
α_m represents the angle of rotation about the Z axis;
γ_m represents the angle of rotation about the X axis;
R represents the radius of the cylindrical surface, taken as 3.0;
(u, υ)^T represents the coordinates on the cylindrical surface;
α_m and γ_m are respectively the azimuth and vertical angle of the sight-line origin; the direction of the Y' axis coincides with the advancing direction of the vehicle.
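The rigid transformation and cylindrical projection of step S4 can be sketched numerically as follows. This is a minimal sketch under stated assumptions: the original equation images are not preserved in this text, so the rotation composition (about Z by α_m, then about X by γ_m) and the projection u = R·arctan(x'/y'), υ = R·z'/√(x'² + y'²) are one plausible realization consistent with the variable list above, and the function name is hypothetical.

```python
import numpy as np

def to_cylinder(p, origin, alpha_m, gamma_m, R=3.0, lam=1.0):
    """Transform a geodetic point p into the eye-local X'Y'Z' frame
    (rotate by alpha_m about Z, then gamma_m about X; scale lam = 1.0)
    and project it onto the cylinder of radius R around the origin."""
    ca, sa = np.cos(alpha_m), np.sin(alpha_m)
    cg, sg = np.cos(gamma_m), np.sin(gamma_m)
    Rz = np.array([[ca, sa, 0], [-sa, ca, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cg, sg], [0, -sg, cg]])
    xyz = lam * Rx @ Rz @ (np.asarray(p, float) - np.asarray(origin, float))
    x_, y_, z_ = xyz
    u = R * np.arctan2(x_, y_)        # arc length along the cylinder
    v = R * z_ / np.hypot(x_, y_)     # height scaled onto the cylinder
    return u, v
```

A point straight ahead on the Y' axis projects to u = 0, and its υ value grows with its elevation angle, which matches the intent of flattening the 3D scene onto a viewing cylinder.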
in the locally constructed three-dimensional space, the coordinates of all sight-line end points are calculated as follows:

[θ] = {θ_1, θ_2, ..., θ_j, ..., θ_η | θ_j = (j-1)·θ_res, 1 ≤ j ≤ η}

η = θ/θ_res + 1

[ρ] = {ρ_1, ρ_2, ..., ρ_j, ..., ρ_{η-1}, ρ_η | ρ_j = D (1 ≤ j ≤ η), η = θ/θ_res + 1}

[y'_e, x'_e] = PolarToCartesian([θ, ρ])

[z'_e] = {z_1, z_2, ..., z_j, ..., z_{η-1}, z_η | z_j = D·tan θ_v (1 ≤ j ≤ η), η = θ/θ_res + 1}
wherein:
[x'_e, y'_e, z'_e]^T represents the three-dimensional coordinates of the sight-line end points in the local three-dimensional coordinate system;
θ represents the driver's view angle;
θ_res represents the angular spacing between sight lines;
j represents the sight-line index, with 1 ≤ j ≤ η, where η is the number of sight lines;
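The generation of the η = θ/θ_res + 1 sight-line end points can be sketched as below. A minimal sketch with assumptions: the fan of rays is centred on the Y' (driving) direction, which the text implies but does not state explicitly, and the default numbers (90° view angle, 1° spacing, D = 50 m) are illustrative, not from the patent.

```python
import numpy as np

def sight_ray_endpoints(theta=np.pi / 2, theta_res=np.pi / 180,
                        D=50.0, theta_v=0.0):
    """Generate the eta = theta/theta_res + 1 sight-line end points in the
    local frame: rays fan over the view angle theta at angular spacing
    theta_res, all at polar radius rho = D, with height D*tan(theta_v)."""
    eta = int(round(theta / theta_res)) + 1
    ang = -theta / 2 + np.arange(eta) * theta_res   # fan centred on Y' axis
    rho = np.full(eta, D)
    x_e = rho * np.sin(ang)                         # polar -> cartesian
    y_e = rho * np.cos(ang)
    z_e = np.full(eta, D * np.tan(theta_v))
    return np.column_stack([x_e, y_e, z_e])
```

Each row is one ray end point [x'_e, y'_e, z'_e]; projecting these through the cylindrical model gives the projection points used in the neighbourhood search that follows.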
the area where the projection-point coordinates lie is regarded as a square region of side length W_i with the projection point located at the centre of the region;
a set is created storing the two-dimensional coordinates of all sight-line end points; the coordinates of the target point are set as Ob_i(x_i, y_i, z_i)^T; using the KD-tree algorithm with the Chebyshev distance and parameter W_i/2, the points within the neighbourhood are searched;
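The square-neighbourhood search and the κ_i distance cut of this step can be sketched as below. A minimal sketch: a brute-force Chebyshev filter stands in for the KD-tree query (in practice a KD-tree with an infinity-norm metric, e.g. SciPy's cKDTree, would scale better), and all argument names are hypothetical.

```python
import numpy as np

def occluder_candidates(end_uv, points_uv, points_xyz, eye, target, w):
    """For one sight-line end point projected at end_uv, collect points
    whose cylinder coordinates fall in the square of side w centred there
    (Chebyshev distance <= w/2), then drop any point farther from the eye
    than the target, i.e. beyond the 3D distance kappa_i."""
    cheb = np.abs(points_uv - end_uv).max(axis=1)
    in_square = cheb <= w / 2.0
    kappa = np.linalg.norm(np.asarray(target, float) - np.asarray(eye, float))
    dist = np.linalg.norm(points_xyz - np.asarray(eye, float), axis=1)
    return points_xyz[in_square & (dist <= kappa)]
```

Any point surviving both filters lies on the sight line's projection corridor and between the eye and the target, and is therefore a candidate obstruction.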
a set Ψ_i is created to store the two-dimensional coordinates of all points in the square region where the projection point lies; every two-dimensional point in Ψ_i has a corresponding three-dimensional coordinate. Let κ_i denote the three-dimensional spatial distance between the sight-line origin and the target point; any point whose three-dimensional distance from the sight-line origin exceeds κ_i is excluded from the set Ψ_i.
The invention has the following beneficial effects:
1. The invention fixes a small laser radar on a pan-tilt head, collects point cloud data through multiple rotations and splices the data, solving the problems of the small laser radar's narrow field of view and of incomplete, inaccurate data collection.
2. The point cloud data sub-unit division method based on the linear index improves the efficiency of data partitioning and indexing.
3. The road surface points in the point cloud data are segmented and extracted quickly and efficiently by a preliminary filtering method, a height histogram sorting method and a supervoxel clustering method.
4. Based on the digital elevation model, the spatial coordinate system is converted to a local plane coordinate system according to the cylindrical perspective projection model, a visual-field model is constructed, the positions of obstacles affecting driver visibility are analysed and determined, and the visibility of the intersection is evaluated.
5. The three-dimensional sight-distance calculation method improves the processing and calculation efficiency of point cloud data and addresses the problem of handling massive point cloud data.
6. The cylindrical perspective projection model converts spatial three-dimensional coordinate points into two-dimensional coordinate points, turning a three-dimensional problem into a two-dimensional one.
Drawings
Fig. 1 is a schematic structural diagram of the urban road intersection visual evaluation method based on point cloud data.
FIG. 2 is a schematic structural diagram of a method for constructing a digital elevation model according to the present invention.
FIG. 3 is a diagram illustrating the effect of the preliminary filtering method according to the present invention.
FIG. 4 is a schematic structural diagram of the height-histogram method of the present invention.
FIG. 5 is a schematic diagram of a cylindrical perspective projection calculation according to the present invention.
FIG. 6 is a view field blind area structure diagram according to the present invention.
FIG. 7 is a plot of visible area versus sight distance according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
It should be noted that terms such as "upper", "lower", "left", "right", "front" and "back" used in the present invention are for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relationships, without substantial change of the technical content, shall also be regarded as within the scope of the invention.
Example one
FIG. 1 is a schematic structural diagram of the urban road intersection visual evaluation method based on point cloud data. FIG. 2 is a schematic structural diagram of a method for constructing a digital elevation model according to the present invention. The embodiment is suitable for the urban road intersection three-dimensional sight distance calculation method based on point cloud data, and the method for calculating the urban road intersection three-dimensional sight distance comprises the following steps:
and S1, collecting point cloud data of the road intersection.
And S2, performing pavement identification and segmentation processing on the point cloud data, and distinguishing the pavement part from other discrete plane points to obtain complete pavement point cloud data.
And S3, establishing a plane grid lattice, taking the identified road surface points as the interpolation reference, acquiring the height of each grid point with a nearest-neighbour interpolation algorithm, and constructing a digital elevation model.
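Step S3 can be sketched as follows. A minimal sketch: nearest-neighbour interpolation (done brute force here; a KD-tree would scale better) is assumed as the "neighbouring point" interpolation, and the 0.5 m cell size is taken from the embodiment below; the function name is hypothetical.

```python
import numpy as np

def build_dem(road_pts, cell=0.5):
    """Build a digital elevation model: a regular x-y grid lattice whose
    height at each node is taken from the nearest road-surface point.
    road_pts is an (n, 3) array of identified road-surface points."""
    xmin, ymin = road_pts[:, :2].min(axis=0)
    xmax, ymax = road_pts[:, :2].max(axis=0)
    xs = np.arange(xmin, xmax + cell, cell)
    ys = np.arange(ymin, ymax + cell, cell)
    gx, gy = np.meshgrid(xs, ys)
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    # nearest-neighbour interpolation, brute force over all road points
    d2 = ((nodes[:, None, :] - road_pts[None, :, :2]) ** 2).sum(axis=2)
    z = road_pts[d2.argmin(axis=1), 2]
    return nodes, z.reshape(gy.shape)
```

The returned height grid is the digital elevation model over which the sight lines are later traced.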
S4, quantifying the vision concept into a fan shape formed by innumerable rays emitted from the sight origin of the driver, wherein the angle of the fan shape is related to the visual angle range of the driver; and converting the spatial three-dimensional coordinate points into two-dimensional coordinate points through a cylindrical perspective projection model, and converting the three-dimensional problem into a two-dimensional problem.
And (3) constructing a coordinate system by taking the starting point of the driver sight as an origin, converting the three-dimensional coordinate system into a two-dimensional coordinate system, converting the two-dimensional rectangular coordinate system into a polar coordinate system, and determining the position of the obstacle blocking the driver sight.
And S5, analyzing variable factors which may influence the visible area of the urban road intersection.
Firstly, collecting point cloud data of road intersection
In the above step S1, the process of collecting the point cloud data of the intersection includes the following sub-steps:
s11, fixing the small laser radar on a rotating pan-tilt head placed at the centre of the urban road intersection, and keeping the radar mounting support fixed; acquiring data several times by rotating the head to obtain sub-files of the global intersection point cloud data, where the overlap ratio between adjacent point cloud files is 1/3.
And S12, carrying out data splicing on the obtained point cloud file through corresponding characteristic points to obtain complete point cloud data of the urban road intersection.
Secondly, carrying out pavement identification and segmentation processing on point cloud data
In step S2, the process of performing the road surface recognition segmentation processing on the point cloud data includes the following substeps:
s21: rasterization is carried out on the original three-dimensional point cloud data, so that each grid point after rasterization has a corresponding linear index value; data points at the same position of the same matrix cell have the same linear index value, and the linear index ζ of the data points steps whenever a cell boundary is crossed. The specific rasterization steps are as follows:
firstly, the original three-dimensional point cloud data, whose space coordinates are assumed to be (X, Y, Z)^T, are rasterized by creating a temporary array (x, y, z)^T, computed as:

x = [X/ε_x], y = [Y/ε_y], z = [Z/ε_z]

in the formula, [.] denotes the rounding symbol and ε_x, ε_y, ε_z are the grid spacings along the X, Y and Z axes respectively; the smaller the spacing, the smaller the block size, and vice versa. The partitioning process is similar in different dimensions, so the division into two-dimensional columnar cells is taken as the example here. Take (min x, max y)^T and (max x, min y)^T as the starting point and end point of the linear index calculation respectively. After rasterizing the point cloud data, the horizontal distance D_x and the vertical distance D_y between the starting point and the end point are computed by the formula

D_x = max x - min x, D_y = max y - min y

where D_x and D_y are both positive integers. At the same time, a zero-value matrix Ψ and an empty-set cell matrix Ω, both of size (1+D_y)×(1+D_x), are established. The horizontal distance d_x and the vertical distance d_y from any grid point to the starting point of the data index are calculated in the same way. Therefore, in a computer memory system, the elements of the zero-value matrix Ψ and the empty-set cell matrix Ω can be addressed by the row number 1+d_y and the column number 1+d_x (0 < d_x ≤ D_x, 0 < d_y ≤ D_y). For convenient data indexing, the row and column numbers can be converted into a linear index ζ by the formula ζ = d_x·(1+D_y) + d_y + 1. Each grid point after rasterization has a corresponding linear index value; data points at the same position of the same matrix cell share the same linear index value, and ζ steps whenever a cell boundary is crossed; such a point is called a step point, denoted ζ_j. When partitioning the point cloud data, the data can be arranged in ascending order of the linear index ζ, and whether a point is a step point is judged from the difference Δζ of adjacent data points: if Δζ is smaller than 1 the point is not a step point, otherwise it is. All step points ζ_j are treated identically: the point cloud data between ζ_j and the previous step point ζ_{j-1} are stored into the ζ_{j-1}-th cell of Ω, and the ζ_{j-1}-th element of Ψ is assigned the value 1, thereby realizing the blocking of the point cloud data.
S22: using height threshold htAnd screening out points with lower elevations in the columnar unit, and analyzing the residual data points by using a principal component analysis method to obtain the principal direction of the columnar unit.
FIG. 3 is a diagram illustrating the effect of the preliminary filtering method according to the present invention. In this embodiment, the unit grid is divided into 0.5 m by 0.5 m cells. Since the height range of the road surface point cloud within each columnar unit is extremely limited and approximately equal, a height threshold h_t is first used to screen the points with lower elevations within the columnar cells. The remaining data points are then analysed by principal component analysis to obtain their principal directions. Principal component analysis is a common statistical method: a group of possibly correlated variables is converted into linearly uncorrelated variables, the principal components, through an orthogonal matrix transformation. Specifically, in this embodiment, let (x, y, z)^T be the coordinates of any point in the columnar unit and (λ, u, υ)^T its transformed variables. For a point cloud with good plane characteristics, the principal direction υ coincides with the normal vector of the plane formed by the point cloud, i.e. it is perpendicular to the plane, so the mean absolute error in the υ direction is smaller than that of non-characteristic points. The mean absolute error is simply the average of the absolute deviations of all individual data from their overall arithmetic mean: for a set of m data x_1, ..., x_m with mean x̄, it is calculated as

MAE = (1/m)·Σ_{i=1}^{m} |x_i - x̄|

Since the elevation of ground points lies low within all the point cloud data, the height threshold is set to 0.2 m in this embodiment, i.e. each columnar cell has size 0.5 m × 0.5 m × 0.2 m.
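The screen-then-PCA test of step S22 can be sketched as below. A minimal sketch with assumptions: the least-variance eigenvector of the covariance matrix is taken as the plane normal, and the MAE tolerance `mae_tol` is an illustrative parameter the patent does not specify; the function name is hypothetical.

```python
import numpy as np

def plane_check(cell_pts, h_t=0.2, mae_tol=0.05):
    """Screen low-elevation points of one columnar cell with height
    threshold h_t, run PCA, and test planarity via the mean absolute
    error along the least-variance (normal) direction."""
    low = cell_pts[cell_pts[:, 2] <= cell_pts[:, 2].min() + h_t]
    if len(low) < 3:
        return False
    centered = low - low.mean(axis=0)
    # principal directions = eigenvectors of the covariance matrix
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    v = vecs[:, 0]                      # least-variance axis ~ plane normal
    ups = centered @ v                  # coordinates along that axis
    mae = np.abs(ups - ups.mean()).mean()
    return mae <= mae_tol
```

A flat patch of road surface yields a near-zero MAE along the normal, while vegetation or clutter in the same cell spreads along all axes and fails the test.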
S23: sequencing and detecting road points by using a height histogram method, and extracting the characteristics of the obtained plane point cloud data; the characteristic variables comprise the number of point clouds in a unit area, and standard deviation and median of difference values of adjacent items along the x direction and the y direction, which are used for describing distribution characteristics of the point clouds on an x-y plane, so that an observation value array with the number consistent with that of the columnar units is obtained; dividing the cylindrical units into three categories of uniform plane points, non-uniform plane points and pseudo plane points by using an unsupervised classification K-Means clustering algorithm; all variables are standardized to eliminate the difference, so that the intervals of the variables fall into [0,1 ]; and eliminating uneven plane points and pseudo plane points, and merging the even plane points into the cylindrical unit after the primary filtering treatment to obtain optimized plane point cloud data.
FIG. 4 is a schematic structural diagram of the height histogram method of the present invention. In step S23 of this embodiment, the height histogram sorting method proceeds as follows:
S231: set the height δ_h of a single stripe in the height histogram to 1 m to 4 m, establish a frequency histogram along the Z direction, and create an empty cell array Φ.
S232: sort the stripes in descending order of frequency, and let F = {F_1, F_2, ..., F_i, ..., F_n | 1 ≤ i ≤ n; i, n ∈ N+} denote the height values corresponding to the sorted stripes.
S233: initialize the loop with i = 2 and k = 1, and store the point cloud data corresponding to F_{i-1} into Φ{k}.
S234: if F_i − F_{i-1} ≤ δ_h, merge the point cloud data corresponding to F_i and F_{i-1} and store it into Φ{k}; otherwise, set k = k + 1 and store the point cloud data corresponding to F_i into Φ{k}.
S235: set i = i + 1; if i ≤ N_h, return to step S234; otherwise, end the loop.
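The bin-merge loop of steps S231 to S235 can be sketched as follows. The bin width `delta_h` and the synthetic elevations in the usage example are assumed values for illustration only.

```python
import numpy as np

def height_histogram_sort(z, delta_h):
    """Sketch of steps S231-S235: bin elevations along Z, sort the bins by
    descending frequency, then merge bins whose height values differ by at
    most delta_h. Returns phi, a list of point-index groups (the cell
    array Phi in the text)."""
    edges = np.arange(z.min(), z.max() + delta_h, delta_h)   # S231: stripes
    bin_ids = np.clip(np.digitize(z, edges) - 1, 0, len(edges) - 2)
    counts = np.bincount(bin_ids, minlength=len(edges) - 1)
    order = [b for b in np.argsort(-counts) if counts[b] > 0]  # S232
    F = [edges[b] for b in order]            # height value of each sorted stripe
    phi = [list(np.flatnonzero(bin_ids == order[0]))]          # S233
    k = 0
    for i in range(1, len(F)):               # S234-S235
        idx = list(np.flatnonzero(bin_ids == order[i]))
        if abs(F[i] - F[i - 1]) <= delta_h:  # merge adjacent-height stripes
            phi[k].extend(idx)
        else:
            k += 1
            phi.append(idx)
    return phi

# 100 ground returns near 0 m and 20 canopy returns near 5 m separate cleanly
phi = height_histogram_sort(np.array([0.0] * 100 + [5.0] * 20), 0.5)
```

The dominant (most frequent) stripe seeds the first group, which is why pavement, usually the densest elevation band, ends up in Φ{1}.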
Since the algorithm runs in the individual cylindrical units are similar and mutually independent, this embodiment exploits the parallel computing capability of a multithreaded processor to improve the efficiency of automatic plane-point detection in each cylindrical unit. Considering the differences among data points in different cylindrical units, this embodiment also extracts features from the planar point cloud data obtained by the algorithm. The extracted feature variables mainly include the standard deviation and median of adjacent-item differences along the x and y directions; the coordinate values are first sorted in ascending order before these statistics are computed. These variables mainly describe the distribution of the point cloud on the x-y plane, because road points are distributed continuously and regularly rather than unevenly. Further, since the road surface points within some cylindrical cells are not perfectly uniformly distributed, the point cloud density (the number of points per unit area) is also taken as a feature variable. It is computed by partitioning and averaging: each columnar unit is divided equally into 10 by 10 = 100 sub-regions, and the density equals the ratio of the total number of points to the area of the sub-regions that contain points.
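A sketch of the feature extraction just described: sorted adjacent-item differences along x and y, plus the block-averaged density. The machine translation of the density definition is ambiguous, so interpreting it as points per unit area of occupied sub-regions is an assumption, as are the function name and parameters.

```python
import numpy as np

def cell_features(points, cell_size=0.5, n_sub=10):
    """Feature vector for one cylindrical cell: std and median of
    ascending-sorted adjacent differences along x and y, plus point
    density over the n_sub x n_sub sub-regions that contain points."""
    feats = []
    for d in (0, 1):                      # x direction, then y direction
        v = np.sort(points[:, d])         # arrange in ascending order first
        diffs = np.diff(v)                # adjacent-item differences
        feats += [diffs.std(), np.median(diffs)]
    # density: total points / total area of occupied sub-regions (assumption)
    sub = cell_size / n_sub
    ix = np.clip(((points[:, 0] - points[:, 0].min()) // sub).astype(int), 0, n_sub - 1)
    iy = np.clip(((points[:, 1] - points[:, 1].min()) // sub).astype(int), 0, n_sub - 1)
    occupied = len(set(zip(ix.tolist(), iy.tolist())))
    feats.append(len(points) / (occupied * sub * sub))
    return np.array(feats)

pts = np.random.default_rng(1).uniform(0, 0.5, size=(300, 3))
f = cell_features(pts)
```

For continuous, regular road points the adjacent differences are small and even, so their std and median stay low, which is what separates them from scattered non-road cells.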
After the feature vectors are extracted, an array of observations whose number matches that of the cylindrical units is obtained, and the cylindrical units can then be divided into three categories of uniform plane points, non-uniform plane points and pseudo plane points using the unsupervised K-Means clustering algorithm. Uniform plane points are distributed regularly within a unit, while non-uniform plane points are distributed in disorder; pseudo plane points are points mistakenly classified as planar because their height interval in the Z direction is small, although they actually exhibit a linear structure. All variables need to be normalized to eliminate their scale differences so that each variable falls within [0, 1]. The non-uniform plane points and pseudo plane points are then eliminated, and the uniform plane points are merged into the cylindrical units after the preliminary filtering to obtain the optimized planar point cloud data.
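The normalization and three-way split can be illustrated with a minimal K-Means; this self-contained stand-in only mirrors the procedure (min-max scaling to [0, 1], then k = 3 clustering) and is not the specific clustering implementation used by the authors.

```python
import numpy as np

def kmeans_three_classes(X, iters=50, seed=0):
    """Min-max normalize each feature into [0, 1], then run a minimal
    K-Means with k = 3 to split cells into the uniform-plane,
    non-uniform-plane and pseudo-plane groups."""
    rng = np.random.default_rng(seed)
    span = X.max(axis=0) - X.min(axis=0)
    Xn = (X - X.min(axis=0)) / np.where(span == 0, 1, span)   # scale to [0, 1]
    centers = Xn[rng.choice(len(Xn), 3, replace=False)]        # random init
    for _ in range(iters):
        # assign each observation to its nearest center
        labels = np.argmin(((Xn[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(3):                                     # update centers
            if np.any(labels == k):
                centers[k] = Xn[labels == k].mean(axis=0)
    return labels

# three well-separated groups of identical feature vectors
X = np.vstack([np.tile([0.0, 0.0], (10, 1)),
               np.tile([5.0, 5.0], (10, 1)),
               np.tile([10.0, 10.0], (10, 1))])
labels = kmeans_three_classes(X)
```

In practice which numeric label maps to "uniform plane points" is decided afterwards, e.g. by inspecting the cluster centers.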
S24: distinguish the road surface part from other discrete plane points using a super-voxel clustering method to obtain complete road-surface point cloud data; retrieve the three-dimensional point cloud data in the corresponding cell matrix using the index position information, and splice them to obtain the final road-surface super-voxel clustering result. The super-voxel-based clustering effectively separates the road surface from other discrete plane points.
Thirdly, constructing a digital elevation model
In this embodiment, the digital elevation model construction includes the following sub-steps:
on the plane, with (min x, max y)^T and (max x, min y)^T as corner points, establishing a planar grid lattice at intervals of 0.2 m;
and taking the identified road surface points as the interpolation reference, obtaining the height information of the planar grid points by a nearest-point interpolation algorithm. When interpolating with the digital elevation model, the neighboring grid points can be located from the planar position of an observation point and the height at that position obtained quickly, which facilitates the subsequent visibility analysis.
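A brute-force sketch of the DEM construction under the stated 0.2 m grid spacing: each grid node takes the elevation of its nearest identified road point in the x-y plane. The pairwise distance matrix keeps the sketch dependency-free; a KD-tree would replace it at production scale.

```python
import numpy as np

def build_dem(road_points, grid_step=0.2):
    """Construct the DEM grid between corner points (min x, max y) and
    (max x, min y) at grid_step intervals; each node takes the elevation
    of the nearest road-surface point (nearest-point interpolation)."""
    xs = np.arange(road_points[:, 0].min(), road_points[:, 0].max() + grid_step, grid_step)
    ys = np.arange(road_points[:, 1].min(), road_points[:, 1].max() + grid_step, grid_step)
    gx, gy = np.meshgrid(xs, ys)
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    # nearest identified road-surface point in the x-y plane (brute force)
    d2 = ((nodes[:, None, :] - road_points[None, :, :2]) ** 2).sum(-1)
    gz = road_points[d2.argmin(axis=1), 2].reshape(gx.shape)
    return gx, gy, gz

# a tilted 1 m x 1 m patch: z = 1.0 on the left edge, 1.5 on the right
rp = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.5],
               [0.0, 1.0, 1.0], [1.0, 1.0, 1.5]])
gx, gy, gz = build_dem(rp, 0.2)
```

Looking up an observation point then reduces to rounding its plane coordinates to the nearest grid node, which is what makes the subsequent visibility analysis fast.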
Fourthly, converting the three-dimensional problem into the two-dimensional problem
FIG. 5 is a schematic diagram of the cylindrical perspective projection calculation according to the present invention. FIG. 6 is a schematic diagram of the field-of-view blind area according to the present invention. In step S4, spatial three-dimensional coordinate points are converted into two-dimensional coordinate points through the cylindrical perspective projection model, and the three-dimensional problem is converted into a two-dimensional one as follows:
In a two-dimensional plane, the field of view is conceptually quantified as a fan formed by innumerable rays emitted from the origin of the driver's gaze. The distance between the start and end points of a sight line is the fan radius D, and the driver's view angle is θ; both D and θ are treated as variable parameters for a more comprehensive evaluation. Within this fan-shaped area, an obstacle is detected when a sight line is obstructed; in this embodiment θ is set to 120°.
Through the cylindrical perspective projection model, spatial three-dimensional coordinate points are converted into two-dimensional coordinate points, turning the three-dimensional problem into a two-dimensional one. First, two local coordinate systems are established to obtain the coordinates of points on the cylindrical surface. The sight-line starting point is set as the origin, with coordinates (x_m, y_m, z_m)^T; the vehicle advance direction is set as the Y' axis, the X' axis is parallel to the XOY plane and perpendicular to the Y' axis, and the Z' axis is perpendicular to the X'OY' plane. On this three-dimensional basis, a two-dimensional coordinate system u-υ is established: its origin lies on the Y' axis at a distance R from the sight-line origin, the υ axis is parallel to the Z' axis, and the u axis is a clockwise arc segment perpendicular to both the υ axis and the Y' axis. In these two coordinate systems, the coordinates of two-dimensional points on the cylindrical surface can be calculated by the following formulas:
(x', y', z')^T = λ · Θ · [(x, y, z)^T − (x_m, y_m, z_m)^T],  Θ = Θ_z(α_m) · Θ_x(γ_m)

u = R · arctan(x'/y'),  υ = R · z' / √(x'^2 + y'^2)
wherein: (x, y, z)^T denotes the coordinates of a point around the driver in the geodetic space; (x_m, y_m, z_m)^T denotes the coordinates of the sight-line origin; (x', y', z')^T denotes the coordinates transformed into the X'Y'Z' coordinate system; λ denotes the scale factor between the two reference coordinate systems and is taken as 1.0; Θ denotes the rotation matrix of the rigid sight-line transformation; Θ_x, Θ_y, Θ_z denote the rotation matrices about the X, Y and Z axes; α_m denotes the rotation angle about the Z axis; γ_m denotes the rotation angle about the X axis; R denotes the radius of the cylindrical surface and is taken as 3.0; (u, υ)^T denotes the coordinates on the cylindrical surface. α_m and γ_m respectively characterize the azimuth angle and the vertical angle of the sight-line origin, and the direction of the Y' axis coincides with the advance direction of the vehicle. Perspective projection therefore works even when the road alignment contains a horizontal or vertical curve. Subsequently, in the locally constructed three-dimensional space, the coordinates of all sight-line end points are calculated using the following formulas.
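Under the parameter list above (rotations about the X and Z axes, λ = 1.0, R = 3.0), the projection might be sketched as follows. The exact composition order of the rotation matrices and the arctan/scaling form of the cylinder mapping are reconstructions, not taken verbatim from the patent.

```python
import numpy as np

def cylinder_project(p, origin, alpha_m, gamma_m, R=3.0):
    """Sketch of the cylindrical perspective projection: rotate a geodetic
    point into the local X'Y'Z' frame (gamma_m about X, alpha_m about Z,
    scale lambda = 1.0), then map it onto the cylinder of radius R as
    (u, v) coordinates. Composition order is an assumption."""
    ca, sa = np.cos(alpha_m), np.sin(alpha_m)
    cg, sg = np.cos(gamma_m), np.sin(gamma_m)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])
    xp, yp, zp = Rx @ Rz @ (np.asarray(p, float) - np.asarray(origin, float))
    u = R * np.arctan2(xp, yp)       # arc length along the clockwise u axis
    v = R * zp / np.hypot(xp, yp)    # height scaled onto the cylinder
    return u, v

# a point straight ahead on the Y' axis projects to the cylinder origin
u, v = cylinder_project((0.0, 10.0, 0.0), (0.0, 0.0, 0.0), 0.0, 0.0)
```

Because u is an arc length, points that share an azimuth relative to the driver land on the same vertical line of the unrolled cylinder, which is what makes occlusion testing a 2D comparison.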
[θ] = {θ_1, θ_2, ..., θ_j, ..., θ_{η-1}, θ_η | θ_j = −θ/2 + (j − 1)·θ_res (1 ≤ j ≤ η), η = θ/θ_res + 1}

η = θ/θ_res + 1

[ρ] = {ρ_1, ρ_2, ..., ρ_j, ..., ρ_{η-1}, ρ_η | ρ_j = D (1 ≤ j ≤ η), η = θ/θ_res + 1}

[y'_e, x'_e] = PolarToCartesian([θ], [ρ])

[z'_e] = {z_1, z_2, ..., z_j, ..., z_{η-1}, z_η | z_j = D·tan θ_v (1 ≤ j ≤ η), η = θ/θ_res + 1}
Wherein the parameters have the following meanings:
[x'_e, y'_e, z'_e]^T denotes the three-dimensional coordinates of a sight-line end point in the local three-dimensional coordinate system; θ denotes the driver's view angle; θ_res denotes the angular spacing between adjacent sight lines; θ_v denotes the vertical angle of the sight lines; j denotes the sight-line index, with 1 ≤ j ≤ η, where η is the total number of sight lines.
Through the above calculation, two-dimensional coordinates of points representing the road environment and all sight-line end points can be obtained.
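Generating the fan of sight-line end points per the formulas above can be sketched as follows; centering the fan on the Y' axis (the vehicle advance direction) and the default parameter values are assumptions for illustration.

```python
import numpy as np

def sight_line_endpoints(D=50.0, theta=np.radians(120),
                         theta_res=np.radians(1), theta_v=0.0):
    """Build the eta = theta/theta_res + 1 sight-line end points in the
    local frame: polar angles spanning the driver's view fan, constant
    radius D, converted polar-to-Cartesian, with z = D * tan(theta_v)."""
    eta = int(round(theta / theta_res)) + 1
    # fan assumed centred on the Y' axis (vehicle advance direction)
    angles = np.linspace(-theta / 2, theta / 2, eta)
    rho = np.full(eta, D)                 # rho_j = D for every sight line
    x_e = rho * np.sin(angles)            # polar-to-Cartesian in the X'Y' plane
    y_e = rho * np.cos(angles)
    z_e = np.full(eta, D * np.tan(theta_v))
    return np.column_stack([x_e, y_e, z_e])

pts = sight_line_endpoints()  # 120 deg fan at 1 deg resolution: 121 rays
```

Shrinking θ_res densifies the fan, trading computation for a finer obstacle-detection resolution.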
The next step is to obtain the projected point shown in FIG. 5, i.e., the coordinates of the projection of the target point on the cylindrical surface. The area around the projected point can be regarded as a square of side length W_i with the projected point at its center; denote its coordinates by (u_i, υ_i)^T. A set is created to store the two-dimensional coordinates of all sight-line end points, and the coordinates of the target point are set as Ob_i = (x_i, y_i, z_i)^T. A KD-tree algorithm with a Chebyshev search of radius W_i/2 then retrieves the points within the square neighborhood of (u_i, υ_i)^T. A set Ψ_i stores the two-dimensional coordinates of all points in the square area where the projected point lies, and every two-dimensional point in Ψ_i has a corresponding three-dimensional coordinate. Let κ_i denote the three-dimensional spatial distance between the sight-line origin and the target point; any point whose three-dimensional distance from the sight-line origin exceeds κ_i is excluded from the set Ψ_i, which greatly improves the efficiency of searching along the line of sight.
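The Chebyshev-window search and the κ_i pruning can be sketched with a brute-force L-infinity test; a KD-tree with an L-infinity metric would replace the loop-free scan at scale. Function and parameter names are illustrative.

```python
import numpy as np

def square_neighborhood(points_uv, center_uv, w_i, dist3d=None, kappa_i=None):
    """Indices of points whose Chebyshev (L-infinity) distance to the
    projected point is at most w_i / 2, i.e. the square window of side
    w_i around it. Optionally drop points whose 3-D distance from the
    sight-line origin exceeds kappa_i, mirroring the pruning above."""
    cheb = np.max(np.abs(points_uv - center_uv), axis=1)   # L-inf distance
    mask = cheb <= w_i / 2
    if dist3d is not None and kappa_i is not None:
        mask &= dist3d <= kappa_i                          # kappa_i pruning
    return np.flatnonzero(mask)

uv = np.array([[0.0, 0.0], [0.2, 0.2], [1.0, 1.0]])
idx = square_neighborhood(uv, np.array([0.0, 0.0]), 0.5)
```

The Chebyshev metric is the natural choice here because the projected-point window is a square, not a disc.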
In this embodiment, the process of visual-domain modeling mainly comprises: constructing a coordinate system with the driver's sight-line origin as the origin, converting the three-dimensional coordinate system into a two-dimensional one, and converting the two-dimensional rectangular coordinates into polar coordinates. Through these two coordinate conversions the position of an obstacle blocking the driver's sight is determined; since the conversions are based on specific matrix equations, the result can be mapped back into the real three-dimensional space coordinate system, realizing the construction of a three-dimensional visual-domain model.
FIG. 7 shows the correspondence between the visible-area diagram and the sight-distance curve according to the present invention. From the visual-domain model and the sight-distance curve obtained in this embodiment, the position of an obstacle can be quickly located and the sight distance quantitatively calculated, yielding the visibility evaluation of the urban road intersection.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (7)

1. A point cloud data-based urban road intersection visual evaluation method, characterized in that the three-dimensional sight-distance calculation for the road intersection comprises the following steps:
s1, collecting point cloud data of the road intersection;
s2, carrying out pavement identification and segmentation processing on the point cloud data, and distinguishing a pavement part from other discrete plane points to obtain complete pavement point cloud data;
s3, establishing a plane grid dot matrix, taking the identified road surface points as a reference interpolation reference, acquiring height information of the plane grid dots by utilizing a neighboring point interpolation algorithm, and constructing a digital elevation model;
s4, quantifying the vision concept into a fan shape formed by innumerable rays emitted from the sight origin of the driver, wherein the angle of the fan shape is related to the visual angle range of the driver; converting the spatial three-dimensional coordinate points into two-dimensional coordinate points through a cylindrical perspective projection model, and converting the three-dimensional problem into a two-dimensional problem;
constructing a coordinate system by taking a starting point of the driver sight as an origin, converting a three-dimensional coordinate system into a two-dimensional coordinate system, converting a two-dimensional rectangular coordinate system into a polar coordinate system, and determining the position of an obstacle blocking the driver sight;
and S5, analyzing variable factors which may influence the visible area of the urban road intersection.
2. The method for visually evaluating the urban road intersection based on the point cloud data according to claim 1, wherein in the step S1, the process of collecting the point cloud data of the road intersection comprises the following steps:
s11, fixing a small lidar on a rotating pan-tilt head, placing it at the center of the urban road intersection, and keeping the lidar mounting support fixed; acquiring data multiple times by rotating the pan-tilt head to obtain sub-files of the global point cloud data of the intersection, wherein the overlap between every two adjacent point cloud files is 1/3;
and S12, carrying out data splicing on the obtained point cloud file through corresponding characteristic points to obtain complete point cloud data of the urban road intersection.
3. The method for visually evaluating the urban road intersection based on the point cloud data as claimed in claim 1, wherein in step S2, the process of performing the road surface identification and segmentation processing on the point cloud data comprises the following steps:
s21: rasterizing the original three-dimensional point cloud data, wherein each grid point of the rasterized point cloud data has a corresponding linear index value, data points at the same position of the same matrix share the same linear index value, and the linear index value ζ of the data points steps when the data points cross from one matrix to the next;
s22: using a height threshold h_t to screen out the points with lower elevation in each cylindrical unit, and analyzing the remaining data points by a principal component analysis method to obtain their principal direction;
s23: sorting and detecting road points using a height histogram method, and performing feature extraction on the obtained planar point cloud data; the feature variables comprise the number of points per unit area and the standard deviation and median of adjacent-item differences along the x and y directions, which describe the distribution of the point cloud on the x-y plane, thereby obtaining an array of observations whose number is consistent with that of the cylindrical units; dividing the cylindrical units into three categories of uniform plane points, non-uniform plane points and pseudo plane points using the unsupervised K-Means clustering algorithm; standardizing all variables to eliminate scale differences so that each variable falls within [0, 1]; eliminating the non-uniform plane points and pseudo plane points, and merging the uniform plane points into the cylindrical units after the preliminary filtering to obtain the optimized planar point cloud data;
s24: and distinguishing the road surface part from other discrete plane points by using a hyper-voxel clustering method to obtain complete road surface point cloud data.
4. The method for visual evaluation of urban road intersections based on point cloud data according to claim 3, wherein in step S21, the process of rasterizing the original three-dimensional point cloud data comprises the following steps:
s211, letting the spatial coordinates of the original three-dimensional point cloud data be (X, Y, Z)^T, letting ε_x, ε_y, ε_z denote the grid spacing along the X, Y and Z axes respectively, and creating a temporary array (x, y, z)^T:

(x, y, z)^T = (ε_x · round(X/ε_x), ε_y · round(Y/ε_y), ε_z · round(Z/ε_z))^T
S212, letting (min x, max y)^T and (max x, min y)^T be the start point and end point of the linear index calculation respectively; after rasterizing the point cloud data, calculating the horizontal distance D_x and vertical distance D_y between them, where D_x and D_y are both positive integers:

D_x = (max x − min x)/ε_x,  D_y = (max y − min y)/ε_y
s213, establishing a zero-value matrix Ψ of size (1 + D_y) × (1 + D_x) and an empty-set cell matrix Ω; calculating, for any grid point, its horizontal distance d_x and vertical distance d_y to the start point of the data index; using row number 1 + d_y and column number 1 + d_x to address the elements of the zero-value matrix Ψ and the empty-set cell matrix Ω; converting the row and column numbers to the linear index ζ by the formula ζ = d_x · (1 + D_y) + d_y + 1;
when partitioning the point cloud data, arranging it in ascending order of the linear index ζ, and judging whether a point is a step point by calculating the difference Δζ between adjacent data points: if Δζ is smaller than 1 the point is not a step point, otherwise it is a step point; storing the point cloud data between a step point ζ_j and the previous step point ζ_{j-1} into the ζ_{j-1}-th cell of Ω, and simultaneously assigning the value 1 to the ζ_{j-1}-th element of the matrix Ψ.
5. The method for visually evaluating the urban road intersection based on the point cloud data according to claim 2, wherein in step S23, the processing procedure of the height histogram sorting method comprises the following steps:
s231, setting the height δ_h of a single stripe in the height histogram to 1 m to 4 m, establishing a frequency histogram along the Z direction, and creating an empty cell array Φ;
s232, sorting the stripes in descending order of frequency, and letting F = {F_1, F_2, ..., F_i, ..., F_n | 1 ≤ i ≤ n; i, n ∈ N+} denote the height values corresponding to the sorted stripes;
s233, initializing the loop with i = 2 and k = 1, and storing the point cloud data corresponding to F_{i-1} into Φ{k};
s234, if F_i − F_{i-1} ≤ δ_h, merging the point cloud data corresponding to F_i and F_{i-1} and storing it into Φ{k}; otherwise, setting k = k + 1 and storing the point cloud data corresponding to F_i into Φ{k};
s235, setting i = i + 1; if i ≤ N_h, returning to step S234; otherwise, ending the loop.
6. The method for visual evaluation of urban road intersections based on point cloud data according to claim 2, wherein in step S24, the processing procedure of the super-voxel clustering method is as follows:
retrieving the three-dimensional point cloud data in the corresponding cell matrix by using the index position information, splicing them to obtain the complete road-surface point cloud data as the final road-surface super-voxel clustering result, and distinguishing the road surface part from other discrete plane points.
7. The method according to claim 1, wherein in step S4, the processing procedure for converting the three-dimensional problem into the two-dimensional problem by converting the spatial three-dimensional coordinate points into the two-dimensional coordinate points through the cylindrical perspective projection model is as follows:
setting the sight-line starting point as the origin with coordinates (x_m, y_m, z_m)^T, setting the vehicle advance direction as the Y' axis, with the X' axis parallel to the XOY plane and perpendicular to the Y' axis and the Z' axis perpendicular to the X'OY' plane, to construct the three-dimensional spatial coordinate system;
constructing a corresponding two-dimensional coordinate system u-υ, wherein the origin of the two-dimensional coordinate system lies on the Y' axis at a distance R from the sight-line origin, the υ axis is parallel to the Z' axis, and the u axis is a clockwise arc segment perpendicular to both the υ axis and the Y' axis;
in two spatial coordinate systems, the coordinate calculation formula of two-dimensional points on the cylindrical surface is as follows:
(x', y', z')^T = λ · Θ · [(x, y, z)^T − (x_m, y_m, z_m)^T]

Θ = Θ_z(α_m) · Θ_x(γ_m)

u = R · arctan(x'/y'),  υ = R · z' / √(x'^2 + y'^2)
wherein: (x, y, z)^T denotes the coordinates of a point around the driver in the geodetic space; (x_m, y_m, z_m)^T denotes the coordinates of the sight-line origin; (x', y', z')^T denotes the coordinates transformed into the X'Y'Z' coordinate system; λ denotes the scale factor between the two reference coordinate systems and is taken as 1.0; Θ denotes the rotation matrix of the rigid sight-line transformation; Θ_x, Θ_y, Θ_z denote the rotation matrices about the X, Y and Z axes; α_m denotes the rotation angle about the Z axis; γ_m denotes the rotation angle about the X axis; R denotes the radius of the cylindrical surface and is taken as 3.0; (u, υ)^T denotes the coordinates on the cylindrical surface; α_m and γ_m respectively characterize the azimuth angle and the vertical angle of the sight-line origin, and the direction of the Y' axis is consistent with the advance direction of the vehicle;
in the local three-dimensional space which is locally constructed, the coordinate calculation formula of all sight line end points is as follows:
[θ] = {θ_1, θ_2, ..., θ_j, ..., θ_{η-1}, θ_η | θ_j = −θ/2 + (j − 1)·θ_res (1 ≤ j ≤ η), η = θ/θ_res + 1}

η = θ/θ_res + 1

[ρ] = {ρ_1, ρ_2, ..., ρ_j, ..., ρ_{η-1}, ρ_η | ρ_j = D (1 ≤ j ≤ η), η = θ/θ_res + 1}

[y'_e, x'_e] = PolarToCartesian([θ], [ρ])

[z'_e] = {z_1, z_2, ..., z_j, ..., z_{η-1}, z_η | z_j = D·tan θ_v (1 ≤ j ≤ η), η = θ/θ_res + 1}
wherein: [x'_e, y'_e, z'_e]^T denotes the three-dimensional coordinates of a sight-line end point in the local three-dimensional coordinate system; θ denotes the driver's view angle; θ_res denotes the angular spacing between adjacent sight lines; θ_v denotes the vertical angle of the sight lines; ρ denotes the distance parameter in the polar coordinates after conversion; D denotes the fan radius, i.e., the driver's sight distance; j denotes the sight-line index, with 1 ≤ j ≤ η, where η is the total number of sight lines;
regarding the area where the projected-point coordinates lie as a square of side length W_i with the projected point at its center, and denoting its coordinates by (u_i, υ_i)^T;
creating a set to store the two-dimensional coordinates of all sight-line end points, and setting the coordinates of the target point as Ob_i = (x_i, y_i, z_i)^T; searching, by a KD-tree algorithm with a Chebyshev search of radius W_i/2, the points within the square neighborhood of (u_i, υ_i)^T;
setting a set Ψ_i to store the two-dimensional coordinates of all points in the square area where the projected point lies, every two-dimensional point in Ψ_i having a corresponding three-dimensional coordinate; letting κ_i denote the three-dimensional spatial distance between the sight-line origin and the target point, any point whose three-dimensional distance from the sight-line origin exceeds κ_i being excluded from the set Ψ_i.
CN202111340586.2A 2021-11-12 2021-11-12 Point cloud data-based urban road intersection visual evaluation method Pending CN114119866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111340586.2A CN114119866A (en) 2021-11-12 2021-11-12 Point cloud data-based urban road intersection visual evaluation method

Publications (1)

Publication Number Publication Date
CN114119866A true CN114119866A (en) 2022-03-01

Family

ID=80379289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111340586.2A Pending CN114119866A (en) 2021-11-12 2021-11-12 Point cloud data-based urban road intersection visual evaluation method

Country Status (1)

Country Link
CN (1) CN114119866A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114677424A (en) * 2022-05-26 2022-06-28 浙江天新智能研究院有限公司 Point cloud data processing method for unattended screw ship unloader
CN115690773A (en) * 2022-12-26 2023-02-03 武汉天际航信息科技股份有限公司 DEM partitioning and rebuilding method, computing device and storage medium
CN115712298A (en) * 2022-10-25 2023-02-24 太原理工大学 Autonomous navigation method for robot running in channel based on single-line laser radar
CN117611759A (en) * 2023-11-30 2024-02-27 博雅达勘测规划设计集团有限公司 Three-dimensional model-based scoring map generation method, device, terminal and storage medium


Similar Documents

Publication Publication Date Title
CN114119866A (en) Point cloud data-based urban road intersection visual evaluation method
Yu et al. Semiautomated extraction of street light poles from mobile LiDAR point-clouds
EP2710556B1 (en) Method and system for processing image data
CN111325138B (en) Road boundary real-time detection method based on point cloud local concave-convex characteristics
CN106780524A (en) A kind of three-dimensional point cloud road boundary extraction method
CN111598952B (en) Multi-scale cooperative target design and online detection identification method and system
EP4120123A1 (en) Scan line-based road point cloud extraction method
JP7232946B2 (en) Information processing device, information processing method and program
CN112184736A (en) Multi-plane extraction method based on European clustering
CN113221648B (en) Fusion point cloud sequence image guideboard detection method based on mobile measurement system
Ye et al. Robust lane extraction from MLS point clouds towards HD maps especially in curve road
Ma et al. A convolutional neural network method to improve efficiency and visualization in modeling driver’s visual field on roads using MLS data
Schlichting et al. Vehicle localization by lidar point correlation improved by change detection
CN113345094A (en) Electric power corridor safety distance analysis method and system based on three-dimensional point cloud
CN114332134B (en) Building facade extraction method and device based on dense point cloud
Wen et al. Research on 3D point cloud de-distortion algorithm and its application on Euclidean clustering
CN112581511B (en) Three-dimensional reconstruction method and system based on near vertical scanning point cloud rapid registration
CN111861946B (en) Adaptive multi-scale vehicle-mounted laser radar dense point cloud data filtering method
Sun et al. Automated segmentation of LiDAR point clouds for building rooftop extraction
Guo et al. Occupancy grid based urban localization using weighted point cloud
CN106709473B (en) Voxel-based airborne LIDAR road extraction method
Qin et al. A voxel-based filtering algorithm for mobile LiDAR data
CN116052023A (en) Three-dimensional point cloud-based electric power inspection ground object classification method and storage medium
CN115690138A (en) Road boundary extraction and vectorization method fusing vehicle-mounted image and point cloud
CN112989946B (en) Lane line determining method, device, equipment and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination