CN114612795A - Laser radar point cloud-based road surface scene target identification method - Google Patents

Laser radar point cloud-based road surface scene target identification method

Info

Publication number
CN114612795A
CN114612795A
Authority
CN
China
Prior art keywords
point
point cloud
target
points
road surface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210203259.0A
Other languages
Chinese (zh)
Inventor
李振华
鲍方明
来建成
王春勇
严伟
纪运景
吴志祥
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202210203259.0A
Publication of CN114612795A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a road surface scene target identification method based on laser radar point cloud. The method acquires road surface scene point cloud data with a laser radar sensor; fits the data with a second-order polynomial surface model and removes the ground point cloud by random sample consensus; applies multi-scale joint denoising to correct the positions of target surface points and eliminate isolated outliers; segments the point cloud into independent target point cloud sets by mean shift clustering; extracts structural features from each target point cloud set to build a structural feature vector; and trains and predicts with a support vector machine classifier whose parameters are selected adaptively by combined global and local search, thereby classifying targets in road surface scenes. Compared with traditional laser-radar-based target classification methods, the method achieves higher identification accuracy and is significant for research on the perception capability of automatic driving.

Description

Laser radar point cloud-based road surface scene target identification method
Technical Field
The invention relates to the field of target classification, in particular to a road surface scene target identification method based on laser radar point cloud.
Background
Laser radar offers strong anti-interference capability, high spatial resolution and strong environmental adaptability, and is widely used in intelligent transportation fields such as traffic monitoring, vehicle-road cooperation and automatic driving. Applications such as pedestrian and vehicle safety early warning, pedestrian and traffic flow statistics, comprehensive traffic monitoring and command, and automatic driving all depend on accurate imaging and identification of point cloud targets. At present, however, laser radar point cloud target identification algorithms for road surface scenes either cannot balance efficiency and accuracy, or cannot segment targets accurately.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a road surface scene target identification method based on laser radar point cloud.
The technical scheme for realizing the purpose of the invention is as follows: a road surface scene target identification method based on laser radar point cloud comprises the following steps:
Step 1: collect road surface scene point cloud data with a laser radar;
Step 2: determine the ground point cloud using a quadratic polynomial model and random sample consensus, and delete the ground point cloud from the point cloud data;
Step 3: filter the point cloud remaining after step 2, removing isolated outliers with statistical filtering and correcting the positions of target surface points with bilateral filtering based on point cloud normal vectors and Euclidean distance;
Step 4: segment the three-dimensional point cloud obtained in step 3 with Meanshift clustering to obtain target point cloud sets;
Step 5: describe the features of the target point cloud sets obtained in step 4 and extract structural feature information;
Step 6: input the extracted structural feature information into a trained support vector machine model and classify the target point clouds.
Preferably, the road surface scene point cloud data includes spatial coordinate information and reflectivity information of the target.
Preferably, the specific method for determining the ground point cloud from the quadratic polynomial model and random sample consensus is:
Step 2.1: set an initial ground height threshold d and take z = 0 as the ground-plane reference; randomly extract 6 data points with z ∈ [-d, d] and fit an initial ground model from the quadratic polynomial road surface expression, taking it as the current optimal model;
Step 2.2: randomly extract 6 different data points with z ∈ [-d, d] and fit a new ground model as in step 2.1;
Step 2.3: compare the number of interior points of the current optimal model and the new ground model, and promote the model with more interior points to current optimal model;
Step 2.4: repeat steps 2.2 to 2.3 until the maximum number of iterations is reached, then end the process and record the interior points of the current optimal model as the ground point cloud.
Preferably, an interior point of the model is determined as follows:
Traverse the data points in the point cloud and judge each one: from the quadratic polynomial road surface expression, let z_g be the z value of the point (x, y) on the currently fitted surface, and let z_t be the z coordinate of the point's true position; when |z_t - z_g| ≤ e the point is determined to be an interior point, where e is the distance error threshold.
Preferably, the maximum number of iterations is:
t = log(1 - P) / log(1 - u^N)
where u is the proportion of ground points in the whole point cloud, N is the number of points used to fit the model, and P is the confidence of the model.
Preferably, the specific method for filtering the point cloud remaining after step 2 is:
Step 3.1: statistical filtering: for each point in the point cloud, take the k surrounding points as its neighborhood, compute the distances from the point to its neighbors, and compute their mean d_a. Modelling the distribution of d_a over all points as a Gaussian (d_a on the abscissa, the number of points with that mean distance on the ordinate), the Gaussian parameters, mean μ and standard deviation σ, can be obtained from these data. Take the 90% confidence interval of the Gaussian distribution and set the largest distance in the interval as d_max; if a point's mean distance satisfies d_a > d_max, classify it as noise and remove it, otherwise keep it, yielding the statistically filtered three-dimensional point cloud;
Step 3.2: normal vector correction of the statistically filtered points: set an angle threshold T and obtain the k neighborhood points of the point p_i to be corrected by radius search, then compute a constraint factor for each neighbor:
w_ij = 1 if angle(n_i, n_ij) ≤ T, otherwise 0
and determine the corrected normal vector of point p_i as:
n_i' = Σ_j w_ij n_ij / || Σ_j w_ij n_ij ||
where n_i is the normal vector of the point p_i to be corrected and n_ij is the normal vector of its j-th neighborhood point.
Traverse the points in the point cloud, correcting each point's normal vector;
Step 3.3: bilateral filtering of each corrected point:
For the target point p_i and its neighborhood points p_ij, compute the filter parameters u1, u2, where u1 = ||p_i - p_ij|| is the distance from the target point p_i to the neighborhood point, and u2 = <n_i, p_i - p_ij> is the inner product of the directed distance to the neighborhood point and the normal vector of the point;
With smoothing parameters σ_c and σ_s, compute the Gaussian weight functions
w_c(u1) = exp(-u1^2 / (2 σ_c^2))
and
w_s(u2) = exp(-u2^2 / (2 σ_s^2))
and substitute them into
α = Σ_j w_c(u1) w_s(u2) u2 / Σ_j w_c(u1) w_s(u2)
to obtain the weight factor α;
Update the target point: p_i := p_i + α n_i, completing the filtering of the current point.
Preferably, the specific method for segmenting the three-dimensional point cloud obtained in step 3 with Meanshift clustering to obtain the target point cloud sets is:
Step 4.1: denote a point in the point cloud by p_i and its neighborhood points by p_ij; randomly select an unmarked point p_i as the starting point;
Step 4.2: find the set of neighborhood points centered on the starting point by radius search, regard them as the same category, mark the points, and increase their occurrence count in that category by 1;
Step 4.3: compute the mean shift vector centered on the starting point and update the starting point iteratively via p_(iter+1) = p_iter + M_h(p_iter), where p_iter is the starting position obtained at the iter-th iteration and M_h is the mean shift vector;
step 4.4: repeating the step 4.2 and the step 4.3 until the modulus of the mean shift vector is smaller than a set threshold, stopping iteration, and recording the obtained starting point as a central point;
step 4.5: calculating the distance between the central point obtained by the iteration and the existing central point, if the distance is smaller than a distance threshold value, combining the two types, otherwise, recording as a new type;
step 4.6: and repeating the step 4.2 to the step 4.5, traversing the points in the point cloud set until all the points are visited, counting the number of times that each point is visited by each category, taking the category with the largest number of visits as the category to which the point belongs, and finishing the Meanshift clustering operation.
Compared with the prior art, the method of the invention has the following significant advantages:
(1) The method fits the ground with a second-order polynomial model, which better matches the crowned profile of a road (higher in the middle, lower at the two sides), so the ground point cloud can be filtered out effectively;
(2) The method denoises the point cloud jointly with statistical filtering and with bilateral filtering based on normal vectors and Euclidean distance, effectively removing multi-scale noise;
(3) The hyperparameters used in the training stage of the support vector machine, namely the kernel parameter γ and the penalty coefficient C, are not set manually but obtained by training on the dataset, so the trained classification model is more accurate.
Drawings
FIG. 1 is a flow chart of the road surface scene target identification based on laser radar point cloud.
FIG. 2 shows road surface scene point cloud data collected by the laser radar.
FIG. 3 is a schematic diagram of point cloud comparison before and after removing ground point cloud data.
FIG. 4 is a schematic diagram of point cloud comparison before and after multi-scale joint denoising.
Fig. 5 is a point cloud image obtained by segmenting point cloud data using the Meanshift three-dimensional point cloud clustering algorithm.
Fig. 6 is an example of a partial data set file resulting from the processing.
FIG. 7 is a support vector machine classification flow for adaptive parameter selection.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
In one embodiment, with reference to fig. 1, a method for identifying a target in a road surface scene based on a laser radar point cloud is provided, where the method includes the following steps:
Step 1: collect road surface scene point cloud data with a laser radar, including the spatial coordinates and reflectivity of targets. The visualized point cloud space is shown in fig. 2: the laser radar acquires information of the road surface scene and converts it into a point cloud of three-dimensional coordinates.
Step 2: filter the ground point cloud from the collected point cloud: determine the ground points using a quadratic polynomial model and random sample consensus, and remove them from the point cloud data to reduce its scale, as shown in fig. 3. The specific flow is:
Step 2.1: set an initial ground height threshold d according to the current scene and coordinate system. Taking z = 0 as the ground-plane reference, randomly extract 6 data points with z ∈ [-d, d] and fit an initial ground model from the quadratic polynomial road surface expression, taking it as the current optimal model; in a specific embodiment d = 0.2 m;
The selected quadratic polynomial road surface expression is: z = a_0 + a_1 x + a_2 y + a_3 xy + a_4 x^2 + a_5 y^2.
Step 2.2: randomly extracting 6 different data points from z E < -d, d again, and fitting a new ground model according to the step 2.1;
step 2.3: comparing the number of the interior points of the current optimal model and the new model, and updating the model with more interior points into the current optimal model;
specifically, the method for determining the interior points of the model comprises the following steps: traversing the data points in the point cloud and judging whether the data points are interior points, wherein the specific judgment method comprises the following steps: determining the value of the corresponding z value of the point (x, y) on the fitted surface at the current stage as z according to the pavement expression fitted by the quadratic polynomialgThe z-axis coordinate value corresponding to the true coordinates of the point (x, y) is ztWhen | zt-zgAnd when | ≦ e, regarding the point as an inner point, otherwise, regarding the point as an outer point, and regarding e as a distance error threshold value. For the distance error threshold e, it is determined in the present invention as
Figure BDA0003528189090000051
I.e. 0.05 m.
Step 2.4: repeating the steps 2.2-2.3 until the maximum iteration number is reached
Figure BDA0003528189090000052
And then, ending the process, and recording the inner point of the current optimal model as the ground point.
Wherein u is the proportion of the point cloud of the ground points to the whole point cloud, N represents the number of points of the model, and P is 1- (1-u)N)tRepresenting the confidence of the model. In the calculation of the maximum iteration number, the confidence P is set to 95%, in the embodiment, the ground point cloud ratio is 19.9%, and the number of the ground point clouds is 59800.
Step 3: filter the point cloud remaining after step 2: remove isolated outliers with statistical filtering, and correct the positions of target surface points with bilateral filtering based on point cloud normal vectors and Euclidean distance, removing the noise point cloud, as shown in fig. 4. The specific flow is:
Step 3.1: first perform statistical filtering: for each point in the point cloud, take the k surrounding points as its neighborhood, compute the distances from the point to its neighbors, and compute their mean d_a.
Modelling the distribution of d_a over all points as a Gaussian (d_a on the abscissa, the number of points with that mean distance on the ordinate), the Gaussian parameters, mean μ and standard deviation σ, can be obtained from these data. Take the 90% confidence interval of the Gaussian distribution and set the largest distance in the interval as d_max; if a point's mean distance satisfies d_a > d_max, classify it as noise and remove it, otherwise keep it, yielding the statistically filtered three-dimensional point cloud. In a specific embodiment the neighborhood size is k = 50 and the distance threshold is d_max = 0.5 m.
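A compact sketch of the statistical filter of step 3.1 follows. The brute-force neighbor search, the function name and the 1.645σ cutoff (the upper bound of a two-sided 90% Gaussian interval) are this sketch's assumptions; a KD-tree would replace the pairwise distance matrix at realistic point counts.

```python
import numpy as np

def statistical_filter(points, k=50, z90=1.645):
    """Drop points whose mean k-NN distance exceeds the 90% Gaussian bound."""
    # Pairwise distances (brute force; fine for small clouds).
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dist.sort(axis=1)
    d_a = dist[:, 1:k + 1].mean(axis=1)  # skip column 0 (distance to self)
    mu, sigma = d_a.mean(), d_a.std()
    d_max = mu + z90 * sigma             # upper bound of the 90% interval
    return points[d_a <= d_max]
```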
Step 3.2: and (3) performing normal vector correction: step 3.1, counting the filtered points as points to be corrected, setting an angle threshold T, and obtaining a correction point p by using radius searchiK neighborhood points, calculating the constraint factor of each point:
Figure BDA0003528189090000061
determining a point piThe modified normal vector is represented as:
Figure BDA0003528189090000062
wherein n isiTo the point p to be correctediNormal vector of (1), nijRepresents piThe normal vectors of the j neighborhood points. And traversing the points in the point cloud, and correcting the normal vector of each point.
In a specific embodiment, the number of neighborhood points of the point cloud is set to
Figure BDA0003528189090000063
T=50°。
Step 3.3: and (3) bilateral filtering: for the target point piAnd its neighborhood point pijCalculating the filter parameter u1,u2Wherein u is1=||pi-pijI represents the target point piDistance to a neighborhood point; u. of2=<ni,pi-pij>Representing the target point piThe inner product of the directed distance to the neighborhood point and the normal vector of this point; according to
Figure BDA0003528189090000064
And
Figure BDA0003528189090000065
calculating a function
Figure BDA0003528189090000066
And
Figure BDA0003528189090000067
is brought into
Figure BDA0003528189090000068
Obtaining a weight factor alpha; updating target point information: p is a radical ofi:=pi+αniFinishing the filtering of the current point; and when all the points in the point set are operated, ending the process.
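The bilateral update of step 3.3 for a single point can be sketched as below. The Gaussian weights and the σ_c, σ_s names are common bilateral-filter choices rather than quotes from the patent, and the sign of the projection u2 is chosen here so that the update moves the point toward its weighted neighborhood along the normal.

```python
import numpy as np

def bilateral_update(p, neighbors, n, sigma_c=0.1, sigma_s=0.05):
    """One bilateral-filter step: return the corrected position of point p."""
    n = n / np.linalg.norm(n)                   # unit normal of p
    u1 = np.linalg.norm(neighbors - p, axis=1)  # Euclidean distances to neighbors
    u2 = (neighbors - p) @ n                    # signed offsets along the normal
    w_c = np.exp(-u1**2 / (2 * sigma_c**2))     # spatial (distance) weight
    w_s = np.exp(-u2**2 / (2 * sigma_s**2))     # feature (normal-offset) weight
    alpha = np.sum(w_c * w_s * u2) / np.sum(w_c * w_s)
    return p + alpha * n                        # p := p + alpha * n
```

For a point hovering above a locally flat neighborhood, alpha is the weighted mean offset, so the update pulls the point back onto the underlying surface while the feature weight w_s preserves sharp edges.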
Step 4: segment the three-dimensional point cloud obtained in step 3 with Meanshift clustering, as shown in fig. 5. The specific flow is:
Step 4.1: denote a point in the point cloud by p_i and its neighborhood points by p_ij; randomly select an unmarked point p_i as the starting point;
Step 4.2: find the set of neighborhood points centered on the starting point by radius search, regard them as the same category, mark the points, and increase their occurrence count in that category by 1;
Step 4.3: compute the mean shift vector centered on the starting point and update the starting point iteratively via p_(iter+1) = p_iter + M_h(p_iter), where p_iter is the starting position obtained at the iter-th iteration.
In Meanshift a Gaussian kernel is used; with the kernel introduced, the mean shift vector is computed as:
M_h(p) = Σ_(j=1)^k p_ij exp(-||p_ij - p||^2 / h^2) / Σ_(j=1)^k exp(-||p_ij - p||^2 / h^2) - p
where h is the kernel bandwidth and the sum runs over the k neighborhood points. In a specific embodiment h = 0.2 m and k = 80.
Step 4.4: repeating the step 4.2 and the step 4.3 until the modulus of the mean shift vector is smaller than a set threshold, stopping iteration, and recording the obtained starting point as a central point;
step 4.5: calculating the distance between the central point obtained by the iteration and the existing central point, if the distance is smaller than a distance threshold value, combining the two types, otherwise, recording as a new type;
step 4.6: and repeating the step 4.2 to the step 4.5, traversing the points in the point cloud set until all the points are visited, counting the number of times that each point is visited by each category, taking the category with the largest number of visits as the category to which the point belongs, and finally finishing the Meanshift clustering operation.
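The mode-seeking loop of steps 4.3 to 4.4 can be sketched as follows. For simplicity this sketch weights all points with the Gaussian kernel instead of restricting to a radius neighborhood; the function name and tolerance are its own assumptions.

```python
import numpy as np

def mean_shift_mode(points, start, h=0.2, tol=1e-4, max_iter=100):
    """Follow the mean shift vector from `start` until its norm drops below tol."""
    p = np.asarray(start, float)
    for _ in range(max_iter):
        w = np.exp(-np.sum((points - p)**2, axis=1) / h**2)      # Gaussian kernel weights
        shift = (w[:, None] * points).sum(axis=0) / w.sum() - p  # M_h(p)
        p = p + shift                                            # p_(iter+1) = p_iter + M_h(p_iter)
        if np.linalg.norm(shift) < tol:                          # step 4.4 stopping rule
            break
    return p
```

Starting points drawn from the same physical target converge to (nearly) the same mode, which is exactly what step 4.5's center-merging test exploits.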
Step 5: describe each target point cloud obtained in step 4 with 20 feature dimensions; the 20 feature descriptors form the feature information of the target and make up the dataset, part of which is shown in fig. 6.
Table 1. Eigenvalues and their calculation formulas
Step 6: the dataset from step 5 is collated as shown in Table 2 below:
Table 2. Example of collected data
Category                Total
Automobile              880
Human                   807
Person riding bicycle   225
Others                  52
The dataset is divided into a training set and a test set in a set proportion.
Then the two hyperparameters of the support vector machine, the kernel parameter and the penalty coefficient, are trained: the parameter range is first determined by a global search and then refined locally; once the hyperparameters are determined, the classification model is trained on the training set. In the prediction stage the trained model is used to determine the class of each target in the test set, as shown in fig. 7. The specific process is:
Step 6.1: randomly initialize parameters in the search space;
Step 6.2: during parameter determination, with iteration number t_n and maximum number of iterations T_max, set:
A(t_n) = 2 a r - a, with a = 2 (1 - t_n / T_max)
C(t_n) = 2 r, where r ∈ [0, 1] is a random number. A(t_n) determines the learning rate of the process: when -1 < A < 1 the algorithm performs a local search, otherwise a global search;
In the global search phase, after a group of particles is initialized, a search particle X_r is selected; when the search particle updates its data, the other particles perform the global search operation and move away from it, so that the limited number of particles covers a larger search area. In this process the distance of another particle X from the search particle is:
D = |C X_r - X|
and its position is updated as:
X(t_n + 1) = X_r - A D
In the local search phase, the best particle of each iteration is regarded as the solution closest to the target; all other particles move toward the best particle and update their positions, travelling along a spiral trajectory whose radius shrinks continuously toward the optimal solution. With X* the optimal target position, the distance between a particle and the target is:
D' = |X* - X|
and the particle position is updated iteratively by the spiral equation:
X(t_n + 1) = D' e^(b l) cos(2π l) + X*
where b is a constant shape parameter and l is a random number in [-1, 1];
Throughout the iteration the search mode switches between local and global search according to the parameter A, and the parameters are updated continuously; iteration stops when the termination condition is met, and the solution at that point is taken as the final parameters.
Here, the classification rate is used as the fitness for calculation.
Step 6.3: and setting the iteration times when the fitness tends to be stable as the maximum iteration times, stopping iteration when the maximum iteration times is reached, and obtaining the optimal parameters. In this embodiment, the maximum number of iterations is 28, the adaptively determined kernel function parameter γ has a value of 5.5, and the penalty coefficient C is 5.9.
Step 6.4: and establishing a model by taking the calculated optimal parameters as initial parameters of the SVM multi-classifier. In the embodiment, 4 categories are classified, 6 binary classifiers are required according to calculation, in each binary classifier, a sample to be detected calculates characteristic values of two different categories and outputs a result, and finally a classification result is generated through a 'voting' mechanism for the categories.
Step 6.5: and testing the data in the test set by using the trained model to obtain a target classification prediction result.
The classification results are summarized in a table rendered as images in the original publication.
the method is based on point cloud data processing performed by the Livox Mid-40 laser radar, and meanwhile, only automobiles, pedestrians, people riding bicycles and other four types of targets are classified, so that detection accuracy and detection efficiency are improved relative to other point cloud target detection methods. Meanwhile, the invention is not limited by the embodiment, and has universality for other types of laser radars or more classification standards.

Claims (10)

1. A road surface scene target identification method based on laser radar point cloud, characterized by comprising the following steps:
Step 1: collect road surface scene point cloud data with a laser radar;
Step 2: determine the ground point cloud using a quadratic polynomial model and random sample consensus, and delete the ground point cloud from the point cloud data;
Step 3: filter the point cloud remaining after step 2, removing isolated outliers with statistical filtering and correcting the positions of target surface points with bilateral filtering based on point cloud normal vectors and Euclidean distance;
Step 4: segment the three-dimensional point cloud obtained in step 3 with Meanshift clustering to obtain target point cloud sets;
Step 5: describe the features of the target point cloud sets obtained in step 4 and extract structural feature information;
Step 6: input the extracted structural feature information into a trained support vector machine model and classify the target point clouds.
2. The lidar point cloud based road surface scene target identification method of claim 1, wherein the road surface scene point cloud data comprises spatial coordinate information and reflectivity information of a target.
3. The method for identifying the target of the laser radar point cloud-based road surface scene according to claim 1, wherein the specific method for determining the ground point cloud according to the quadratic polynomial model and the random sampling consistency comprises the following steps:
Step 2.1: set an initial ground height threshold d and take z = 0 as the ground-plane reference; randomly extract 6 data points with z ∈ [-d, d] and fit an initial ground model from the quadratic polynomial road surface expression, taking it as the current optimal model;
Step 2.2: randomly extract 6 different data points with z ∈ [-d, d] and fit a new ground model as in step 2.1;
Step 2.3: compare the number of interior points of the current optimal model and the new ground model, and promote the model with more interior points to current optimal model;
Step 2.4: repeat steps 2.2 to 2.3 until the maximum number of iterations is reached, then end the process and record the interior points of the current optimal model as the ground point cloud.
4. The method for identifying the road surface scene target based on the laser radar point cloud as claimed in claim 3, wherein the determination method of the interior point of the model is as follows:
traversing the data points in the point cloud and judging whether the data points are interior points, wherein the specific judgment method comprises the following steps: determining the corresponding z value of a point (x, y) on the fitted surface at the current stage according to the quadratic polynomial fitting road surface expression to be represented as zgThe z-axis coordinate value corresponding to the true coordinates of the point (x, y) is ztWhen z ist-zgAnd when | ≦ e, determining the point as an inner point, wherein e is a distance error threshold.
5. The method for identifying the road surface scene target based on the laser radar point cloud as claimed in claim 3, wherein the maximum number of iterations is:
t = log(1 - P) / log(1 - u^N)
where u is the proportion of ground points in the whole point cloud, N is the number of points used to fit the model, and P is the confidence of the model.
6. The laser radar point cloud-based road surface scene target identification method according to claim 1, wherein the specific method for filtering the point cloud remaining in step 2 is as follows:
step 3.1: statistical filtering: taking the k points around each point in the point cloud as its neighborhood points, calculating the distances from the point to its neighborhood points and taking their average d_a; treating d_a as the abscissa of the peak of a Gaussian function and the number of points whose distance from the point is d_a as the ordinate of the peak, the mean μ and standard deviation σ of the Gaussian function are obtained from these two values according to the Gaussian function formula; taking the 90% confidence interval of the Gaussian distribution function, the maximum distance within this interval is denoted d_max; if the average distance of a point satisfies d_a > d_max, the point is classified as noise and removed, otherwise it is kept, yielding the statistically filtered three-dimensional point cloud;
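Step 3.1 follows the familiar statistical-outlier-removal pattern. A brute-force sketch, with the per-point Gaussian construction of the claim approximated by thresholding at mean + 1.64·std of the d_a distribution (roughly the one-sided 90% bound); the value of k and the 1.64 ratio are assumptions:

```python
import numpy as np

def statistical_filter(points, k=8, std_ratio=1.64):
    """Remove points whose mean k-neighbour distance d_a exceeds a threshold
    d_max derived from a Gaussian model of the d_a distribution."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)              # pairwise distances
    # mean distance to the k nearest neighbours (excluding the point itself)
    d_a = np.sort(dist, axis=1)[:, 1:k + 1].mean(axis=1)
    d_max = d_a.mean() + std_ratio * d_a.std()       # one-sided 90% bound
    return points[d_a <= d_max]
```

A production version would use a k-d tree for the neighbour search instead of the O(n²) distance matrix.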
step 3.2: performing normal vector correction on the statistically filtered points: setting an angle threshold T, obtaining the k neighborhood points of the point p_i to be corrected by radius search, and calculating the constraint factor of each point:

[formula image FDA0003528189080000022, not reproduced in this text extraction]

the corrected normal vector of the point p_i is then expressed as:

[formula image FDA0003528189080000023, not reproduced in this text extraction]

wherein n_i is the normal vector of the point p_i to be corrected, and n_ij denotes the normal vector of the j-th neighborhood point of p_i;
traversing the points in the point cloud and correcting the normal vector of each point;
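The constraint factor and the corrected-normal expression exist only as formula images in this extraction, so their exact form is unknown. As one plausible reading, the sketch below uses a binary constraint factor (1 when a neighbour normal lies within the angle threshold T of n_i, else 0) and renormalises the weighted sum; both of these choices, and the data layout, are assumptions:

```python
import numpy as np

def correct_normals(normals, neighbors_idx, T=np.pi / 6):
    """Sketch of step 3.2: for each point, average the neighbourhood normals
    whose angle to the current normal is within threshold T, then renormalise.
    The binary constraint factor is an assumption, not the patent's formula."""
    out = np.empty_like(normals)
    for i, idx in enumerate(neighbors_idx):
        n_i = normals[i]
        nbrs = normals[idx]
        cos = nbrs @ n_i                      # unit normals: cosine of angle
        w = (cos >= np.cos(T)).astype(float)  # assumed constraint factor
        s = n_i + (w[:, None] * nbrs).sum(axis=0)
        out[i] = s / np.linalg.norm(s)        # corrected unit normal
    return out
```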
step 3.3: performing bilateral filtering on each corrected point:
for the target point p_i and its neighborhood points p_ij, calculating the filter parameters u_1 and u_2, where u_1 = ||p_i - p_ij|| represents the distance from the target point p_i to a neighborhood point, and u_2 = ⟨n_i, p_i - p_ij⟩ represents the inner product of the directed offset from the target point p_i to the neighborhood point and the normal vector of this point;
according to the Gaussian weight functions

w_c(x) = exp(-x² / (2σ_c²))

and

w_s(x) = exp(-x² / (2σ_s²)),

calculating the weights w_c(u_1) and w_s(u_2) and substituting them into

α = Σ_j w_c(u_1) w_s(u_2) u_2 / Σ_j w_c(u_1) w_s(u_2)

to obtain the weight factor α;
updating the target point information: p_i := p_i + α n_i, which completes the filtering of the current point.
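A per-point sketch of the bilateral update in step 3.3, assuming the standard point-cloud bilateral filter: a spatial Gaussian weight on the distance u1 and a range Gaussian weight on the normal-direction offset u2. Here the offset is projected as ⟨n_i, p_ij - p_i⟩ so that α moves the point toward its neighbourhood surface (a sign convention chosen for this sketch), and σ_c, σ_s are illustrative values:

```python
import numpy as np

def bilateral_filter_point(p_i, n_i, neighbors, sigma_c=0.5, sigma_s=0.5):
    """One bilateral step: alpha is the weighted mean of the normal-direction
    offsets u2, with Gaussian weights w_c(u1) (spatial) and w_s(u2) (range);
    the point is then moved along its normal: p_i := p_i + alpha * n_i."""
    diff = neighbors - p_i                   # p_ij - p_i (sign assumption)
    u1 = np.linalg.norm(diff, axis=1)        # distances to neighbours
    u2 = diff @ n_i                          # offsets along the normal
    w = np.exp(-u1 ** 2 / (2 * sigma_c ** 2)) * np.exp(-u2 ** 2 / (2 * sigma_s ** 2))
    alpha = (w * u2).sum() / w.sum()         # weight factor alpha
    return p_i + alpha * n_i                 # filtered point
```

With four coplanar neighbours, a point lifted off the plane is pulled exactly back onto it, which is the intended smoothing behaviour.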
7. The method for identifying the target of the road surface scene based on the laser radar point cloud as claimed in claim 1, wherein the specific steps of segmenting the three-dimensional point cloud obtained in step 3 by Meanshift clustering are as follows:
step 4.1: defining a point p_i in the point cloud and its neighborhood points p_ij, and randomly selecting a point p_i in the unmarked point cloud data as a starting point;
step 4.2: finding the set of neighborhood points centered on the starting point by radius search, regarding them as the same category, marking and recording these points, and adding 1 to their occurrence counts in this category;
step 4.3: calculating the mean shift vector centered on the starting point and iteratively updating the starting point by p_{iter+1} = p_iter + M_h(p_iter), where p_{iter+1} is the new starting position obtained at iteration iter + 1 and M_h is the mean shift vector;
step 4.4: repeating step 4.2 and step 4.3 until the modulus of the mean shift vector is smaller than a set threshold, then stopping the iteration and recording the resulting starting point as a center point;
step 4.5: calculating the distance between the center point obtained in this iteration and the existing center points; if the distance is smaller than a distance threshold, the two categories are merged, otherwise a new category is recorded;
step 4.6: repeating step 4.2 to step 4.5, traversing the points in the point cloud set until all points have been visited; counting the number of times each point is visited by each category, and taking the category with the most visits as the category to which the point belongs, completing the Meanshift clustering operation.
8. The method for identifying the road surface scene target based on the laser radar point cloud as claimed in claim 7, wherein the calculation formula of the mean shift vector is:

M_h(p) = ( Σ_{j=1}^{k} G((p_ij - p)/h) (p_ij - p) ) / ( Σ_{j=1}^{k} G((p_ij - p)/h) )

wherein h is the search radius of the neighborhood of the target point, k is the number of points within the search radius, and the Gaussian kernel function G used is specifically defined as:

G(x) = exp(-||x||² / 2)
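The iteration of claims 7-8 can be sketched with a small mean-shift mode seeker; this is illustrative only, and the kernel G(x) = exp(-||x||²/2), the radius, and the stopping threshold are assumed values:

```python
import numpy as np

def mean_shift_vector(p, points, h):
    """Claim 8: Gaussian-kernel mean shift vector M_h(p) over the
    neighbourhood of p within search radius h."""
    nbrs = points[np.linalg.norm(points - p, axis=1) <= h]
    g = np.exp(-np.sum(((nbrs - p) / h) ** 2, axis=1) / 2)   # kernel weights
    return (g[:, None] * (nbrs - p)).sum(axis=0) / g.sum()

def mean_shift_mode(start, points, h, tol=1e-4, max_iter=100):
    """Steps 4.3-4.4: iterate p <- p + M_h(p) until ||M_h(p)|| < tol,
    returning the converged centre point (mode)."""
    p = start.astype(float)
    for _ in range(max_iter):
        m = mean_shift_vector(p, points, h)
        p = p + m
        if np.linalg.norm(m) < tol:
            break
    return p
```

Starting the iteration from any point of a cluster drives it to that cluster's density mode, which is then merged with nearby modes as in step 4.5.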
9. The method for identifying the target of the road surface scene based on the laser radar point cloud as claimed in claim 1, wherein the characterization is performed from 20 dimensions, including:

[table images FDA0003528189080000043 and FDA0003528189080000051, listing the 20 feature dimensions, not reproduced in this text extraction]
10. The method for identifying the road surface scene target based on the laser radar point cloud as claimed in claim 1, wherein the training method of the support vector machine model is as follows:
step 6.1: initializing randomly in the search space to obtain initial parameters;
step 6.2: in the parameter determination process, when the iteration number is t_n and the maximum number of iterations is T_max, setting:

A(t_n) = 2a·r - a, where a = 2 - 2t_n/T_max

and C(t_n) = 2r, where r ∈ [0, 1] is a random number; A(t_n) determines the learning rate of the process: when -1 < A < 1, the algorithm performs a local search, otherwise a global search;
in the global search phase, after a group of particles is initialized, a search particle denoted X_r is selected; when the search particle updates its data, the other particles execute the global search operation and move away from the search particle, so that the limited particles cover a larger search area; in this process, the distance of another particle from the search particle and its position update are expressed as:

D = |C(t_n)·X_r - X(t_n)|

X(t_n + 1) = X_r - A(t_n)·D

in the local search phase, the optimal particle of each iteration is regarded as the solution closest to the target; in this phase, all other particles move toward the optimal particle and update their position information, and in the particle movement mechanism the optimal solution is approached along a spirally ascending trajectory whose movement radius is continuously reduced; let X* be the optimal target position, then the distance between the search particle and the target position is:

D* = |X* - X(t_n)|

and the position of the search particle is iteratively updated by the spiral equation:

X(t_n + 1) = D*·e^(bl)·cos(2πl) + X*

wherein b is a shape parameter which is a constant, and l is a random number belonging to [-1, 1];
throughout the iteration, the search mode switches between local search and global search according to the change of the parameter A; the parameters are continuously updated, the iteration stops when the termination condition is met, and the solution at that moment is taken as the finally determined parameters;
step 6.3: establishing the model with the calculated optimal parameters as the initial parameters of the SVM multi-classifier; the multi-classifier consists of one-versus-one support vector machine classifiers.
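The optimizer described in step 6.2 follows the shape of the whale optimization algorithm (WOA): A = 2a·r - a with a decaying from 2 to 0, a global phase that moves particles relative to a random search particle X_r, and a local spiral D·e^(bl)·cos(2πl) toward the best particle X*. A compact sketch on a stand-in objective (a sphere function in place of the SVM cross-validation error; every name, constant, and branching probability here is illustrative):

```python
import numpy as np

def woa_minimize(f, dim, n_particles=20, t_max=100, lo=-5.0, hi=5.0, seed=0):
    """WOA-style search: the coefficient a decays from 2 to 0, A = 2*a*r - a
    switches between exploration (|A| >= 1: move relative to a random
    particle) and exploitation (|A| < 1, or the spiral move toward X*)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_particles, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    b = 1.0                                    # spiral shape constant
    for t in range(t_max):
        a = 2 - 2 * t / t_max                  # decays from 2 to 0
        for i in range(n_particles):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):      # local search around X*
                    D = np.abs(C * best - X[i])
                    X[i] = best - A * D
                else:                          # global search: random particle
                    x_r = X[rng.integers(n_particles)]
                    D = np.abs(C * x_r - X[i])
                    X[i] = x_r - A * D
            else:                              # spiral update toward X*
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lo, hi)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

# stand-in objective: sphere function instead of SVM cross-validation error
best = woa_minimize(lambda x: np.sum(x ** 2), dim=2)
```

In the claimed method, the objective would be the classification error of the one-versus-one SVM classifiers, with the search variables being the SVM's initial parameters.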
CN202210203259.0A 2022-03-02 2022-03-02 Laser radar point cloud-based road surface scene target identification method Pending CN114612795A (en)


Publications (1)

Publication Number Publication Date
CN114612795A true CN114612795A (en) 2022-06-10


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327568A (en) * 2022-07-19 2022-11-11 哈尔滨工程大学 Unmanned aerial vehicle cluster real-time target identification method and system based on PointNet network and map construction method
CN115542340A (en) * 2022-12-01 2022-12-30 广东工业大学 Roadside double-solid-state laser radar point cloud fusion method based on distance known reference object
CN115713787A (en) * 2023-01-10 2023-02-24 深圳市鸿逸达科技有限公司 Pedestrian detection method, computer equipment and storage medium
CN116740101A (en) * 2023-05-16 2023-09-12 中国信息通信研究院 Plane segmentation algorithm for point cloud objects
CN116953661A (en) * 2023-09-20 2023-10-27 中国地质大学(武汉) Photon counting laser radar self-adaptive nuclear density estimation filtering method and device
CN117237902A (en) * 2023-11-15 2023-12-15 山东飞宏工程机械有限公司 Robot character recognition system based on deep learning
WO2023240805A1 (en) * 2022-06-13 2023-12-21 之江实验室 Connected vehicle overspeed early warning method and system based on filtering correction


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination