CN115861968A - Dynamic obstacle removing method based on real-time point cloud data - Google Patents

Dynamic obstacle removing method based on real-time point cloud data

Info

Publication number
CN115861968A
Authority
CN
China
Prior art keywords
points
point cloud
cluster
dynamic
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211606482.6A
Other languages
Chinese (zh)
Inventor
杨继海
刘向超
张可飞
惠文龙
陈秋航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Construction Machinery Branch of XCMG
Original Assignee
Construction Machinery Branch of XCMG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Construction Machinery Branch of XCMG filed Critical Construction Machinery Branch of XCMG
Priority to CN202211606482.6A priority Critical patent/CN115861968A/en
Publication of CN115861968A publication Critical patent/CN115861968A/en
Pending legal-status Critical Current

Abstract

The invention discloses a dynamic obstacle removal method based on real-time point cloud data, which comprises the following steps: acquiring object information to form real-time three-dimensional point cloud data; dividing the target point cloud into a sparse point cloud and a dense point cloud through preprocessing; performing point cloud clustering on the sparse point cloud with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, dividing the point cloud data into a plurality of clusters, and fitting a bounding box to each cluster to obtain the geometric feature information of each individual cluster; performing multi-target matching and tracking on reasonable clusters with the Hungarian and Kalman filtering algorithms, and distinguishing dynamic and static points; performing point cloud matching on the dense point cloud through a KD-Tree; and determining the dynamic or static attribute of each clustered target according to the proportion of dynamic and static points in the cluster, and filtering out dynamic targets. The invention improves the precision and accuracy of the automatic driving map and solves the problems that existing dynamic obstacle detection algorithms have low generalization capability and cannot adapt to complicated and variable actual working conditions.

Description

Dynamic obstacle removing method based on real-time point cloud data
Technical Field
The invention belongs to the technical field of unmanned driving, and relates to a dynamic obstacle removing method based on real-time point cloud data.
Background
Unmanned driving means that a vehicle achieves automatic control without real-time human operation. Unmanned driving technology integrates many leading-edge disciplines such as sensors, computers, artificial intelligence, communication, navigation and positioning, pattern recognition, machine vision and intelligent control. With the continuous advance of automation technology, artificial intelligence is making constant breakthroughs in the vehicle field and is being applied ever more widely in society. Practical unmanned vehicles are gradually being developed, and the popularization of 5G and the application of the Internet of Vehicles, combined with unmanned vehicles, can produce a more intelligent traffic system.
The driving environment of an unmanned vehicle on open road sections is complex and changeable. Open roads contain a large number of dynamic targets, which interfere with the accuracy of map registration, reduce the accuracy of map generation, and leave ghost traces in the static map. For real-time updating of an unmanned-driving map in a dynamic environment, the current mainstream solution is to detect dynamic targets by means of deep learning, accomplishing three-dimensional perception tasks such as pedestrian detection and vehicle detection by fusing rich image or point cloud perception information with information such as the ground plane. However, this approach requires prior model information, has low generalization capability, struggles to cope with the complex and variable environments of actual unmanned-driving scenarios, and places high demands on the hardware of the computing platform.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a dynamic obstacle removal method based on real-time point cloud data, which improves the precision and accuracy of the automatic driving map and solves the problems that existing dynamic obstacle detection algorithms have low generalization capability and cannot adapt to complicated and variable actual working conditions.
In order to achieve the purpose, the invention adopts the following technical scheme:
a dynamic barrier removing method based on real-time point cloud data comprises the following steps:
acquiring object information to form real-time three-dimensional point cloud data;
dividing the target point cloud into a sparse point cloud and a dense point cloud through preprocessing;
performing point cloud clustering on the sparse point cloud with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, dividing the point cloud data into a plurality of clusters, and fitting a bounding box to each cluster to obtain the geometric feature information of each individual cluster;
performing multi-target matching and tracking on reasonable clusters according to the Hungarian and Kalman filtering algorithms, and distinguishing dynamic and static points;
performing point cloud matching on the dense point cloud through KD-Tree;
and determining the dynamic or static attribute of each clustered target according to the proportion of dynamic and static points in the cluster, and filtering out dynamic targets.
Optionally, the pre-processing comprises:
self-motion distortion correction is carried out on the point cloud through IMU data, pose information of an unmanned vehicle is provided through odometer data, local point cloud data are registered in a global coordinate system to be associated, and ROI extraction and down-sampling filtering operation of single-frame point cloud are completed in sequence.
Optionally, the DBSCAN point cloud clustering includes:
counting the number of neighbor points within a radius Eps around each point, wherein points whose number of neighbor points is greater than the minimum point threshold MinPts are core points, points that fall within the Eps neighborhood of a core point but whose number of neighbor points is less than MinPts are boundary points, and all points other than core points and boundary points are noise points;
randomly taking any core point as a new cluster, expanding from the core point outward, and classifying all density-reachable points into the same cluster until all points other than noise points are contained in a cluster; density-reachable means that for a chain of points {X_1, X_2, ..., X_n}, for any pair of points X_i and X_{i+1}, X_{i+1} falls within the Eps neighborhood of X_i and X_i is a core point; X_n is then said to be density-reachable from X_1.
Optionally, before point cloud clustering, searching for neighboring points is performed through the KD-Tree, the number of the neighboring points is counted, and all the points are classified.
Optionally, obtaining effective geometric size, posture and centroid feature information of the clustered object through a bounding box fitting algorithm of the L-shape feature; the bounding box fitting algorithm includes:
projecting the three-dimensional coordinate points onto the XY plane, taking the extreme points of the cluster along the X and Y axes as candidate points, calculating the distance between every two candidate points, and finding the two corner points with the greatest distance in the cluster;
searching a third corner point which is farthest from the connecting line of the two corner points in the cluster;
and calculating the centroid of the cluster, the size and the posture of the bounding box according to the three corner points to obtain the information of the complete bounding box.
Optionally, a Kalman filter state model X = {x, y, z, w, l, h, x', y', z', w', l', h'}^T is established;
wherein x, y and z are the centroid coordinates of the cluster, w, l and h are the length, width and height of the cluster bounding box, x', y' and z' represent the velocities in the three dimensions, and w', l' and h' are the rates of change of the size;
the Kalman filter obtains the predicted value X_{t|t-Δt} by prediction from the historical information at time t-Δt, and updates the optimal estimate X_{t|t} of the current state according to the state value X_t observed at the current moment.
Optionally, a cost matrix Cost_{n×m} is constructed: the previous frame contains a total of n Tracks predicted by Kalman filtering, and the observation at the current moment yields a total of m Detections; the element Cost_{ij} is the distance between the centroid of the i-th Track of the previous frame and the centroid of the j-th Detection of the current frame;
and the cost matrix is solved by using the Hungarian algorithm.
Optionally, for a newly-appeared Detection in each frame, a Track ID is allocated to the target only when the target is detected in three consecutive frames and associated to the same Tracks tracking chain; for a Detection that exists in the previous frame but disappears in the current frame, the prediction of the Kalman filter is used as the estimated state output of the current frame, and if no matching Detection is found after three further frames, the Tracks tracking chain and its Track ID are destroyed completely; the obtained cluster tracking result is expressed as T_i^t = {C_i^{t-k}, C_i^{t-k+1}, ..., C_i^t}, which indicates that, up to the current frame t, a tracking chain spanning k frames already exists, containing the cluster C_i^{t-k} from k frames ago through the i-th cluster C_i^t in the current frame.
Optionally, each point k of a cluster in the sparse point cloud is matched against the dense point cloud from a specified interval of δ frames earlier to obtain a nearest neighbor point subject to a point normal vector constraint;
according to the nearest neighbor, the distance d_k between the point of the current frame and the corresponding point from δ frames earlier is calculated, and the velocity of each point is calculated from the time difference between the two frames, expressed as v_k = d_k / Δt;
setting l_NN as a velocity threshold, points with velocity v_k < l_NN are static points, and points with v_k ≥ l_NN are dynamic points;
and the numbers of dynamic and static points in the current cluster are counted, and the dynamic or static attribute of the cluster is defined according to the proportion of dynamic and static points.
Optionally, if the number of dynamic points in a cluster exceeds the absolute number threshold and the proportion of dynamic points exceeds the relative proportion threshold, the cluster is a dynamic target;
if the number of dynamic points in the cluster does not exceed the absolute number threshold and the proportion of dynamic points does not exceed the relative proportion threshold, the cluster is a static target.
Compared with the prior art, the invention has the following beneficial effects:
The point cloud data is clustered; multi-target matching and tracking is carried out according to the feature information obtained from the clustering result; the static or dynamic state of each individual target is distinguished using the matching result between the preceding and following frames; the point clouds belonging to dynamic objects are removed from the original map point cloud, and the dynamic points in the original point cloud map are screened out to obtain a reliable static environment map. The method is faster and more reliable, generalizes well, and does not impose high hardware requirements.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of a DBSCAN clustering algorithm of the present invention;
FIG. 3 is a schematic diagram of a bounding box fitting algorithm of the present invention;
FIG. 4 is a flow chart of the multi-target tracking algorithm of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
As shown in fig. 1 to 4, a method for removing dynamic obstacles based on real-time point cloud data includes the following steps:
s1, acquiring real-time three-dimensional point cloud data through a laser radar;
s2, dividing the target point cloud into a sparse point cloud and a dense point cloud through pretreatment; inputting single-frame original point cloud and odometer information obtained by a positioning module; in the preprocessing operation, IMU data is used for performing self-motion distortion correction on point clouds, odometer data provides pose information of an unmanned vehicle, local point cloud data can be registered into a global coordinate system for association, and meanwhile filtering operations such as ROI extraction and down sampling of single-frame point clouds are sequentially completed;
s3, carrying out point cloud clustering on the sparse point cloud after the preprocessing is finished through a noisy density-based spatial clustering algorithm DBSCAN, dividing point cloud data into a plurality of clusters, fitting a boundary frame for each cluster to obtain geometric characteristic information of each independent cluster, and meanwhile sending the dense point cloud which is not subjected to downsampling into a matching thread for use in subsequent dynamic point matching;
the DBSCAN clustering algorithm firstly counts the number of neighbor points in a certain radius Eps near each point, the radius data is adjusted according to the situation, and all the points are traversed and divided into three types: neighbor pointThe current point with the number larger than the minimum point threshold value MinPts is a core point, the point which falls in the Eps neighborhood of the core point but has the number smaller than the MinPts is a boundary point, and all points except the core point and the boundary point are noise points; randomly finding any core point and putting the core point into a new cluster, expanding the core point to the periphery, and putting all the points with accessible density into the same cluster; wherein the density can be up to: for a chain of points { X 1 ,X 2 ,...,X n Any set of point pairs X i And X i+1 ,X i+1 All fall in X i And X in Eps neighborhood of i Is a core point, then called X 1 To X n The density can be reached. Repeating the process of creating the new clusters until all the points except the noise point are contained in a certain cluster, and ending the algorithm;
as can be known from the dbss principle, the first step of the conventional dbss algorithm is to traverse and calculate the number of data points in the neighborhood of each point Eps in a data sample, which involves a large number of cyclic traversal and search operations, the time complexity is O (n 2), a large amount of calculation overhead is inevitably generated, and the time consumption is often long; aiming at the problem, the point cloud data are divided by using a KD-Tree structure, so that the searching time of adjacent points can be effectively reduced. Constructing KD-Tree for the original point cloud data before clustering starts, searching for adjacent points based on the KD-Tree, and counting the number of the adjacent points to realize classification of all the points; besides using KD-Tree to realize the acceleration of clustering algorithm, the Tree structure is also used in the following text to accelerate the matching of corresponding point pairs;
Because occlusion occurs in the actual environment, the point cloud data obtained by the LiDAR sensor may not accurately represent the characteristics of an object, and a bounding box fitting algorithm can, to a certain extent, obtain effective feature information such as the geometric size, attitude and centroid of a clustered object. The invention adopts a bounding box fitting algorithm based on the L-shape feature. In an actual road scene, when, owing to the change of the LiDAR observation angle, the point cloud of a vehicle does not present an L-shape, the bounding box output by the algorithm may be slightly distorted;
the bounding box fitting algorithm comprises the following steps:
(1) Projecting the three-dimensional coordinate points onto the XY plane, taking the extreme points of the cluster along the X and Y axes as candidate points, traversing and calculating the distance between every two candidate points, and finding the two corner points with the greatest distance in the cluster, such as points a and b in the figure;
(2) Traversing all the points in the cluster, and finding a third corner point m which is farthest away from a connecting line ab of the two corner points;
(3) Calculating the centroid of the cluster and the size and the posture of the bounding box according to the three angular points to obtain complete bounding box information;
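A minimal NumPy sketch of this three-corner fitting procedure is shown below; the variable names and the yaw convention are assumptions made for illustration rather than the patent's own notation.

```python
import numpy as np

def fit_lshape_box(cluster_xyz):
    """Fit an oriented bounding box from the L-shape corners of one cluster."""
    xy = cluster_xyz[:, :2]
    # (1) candidate points: extremes along the X and Y axes
    cand = xy[[xy[:, 0].argmin(), xy[:, 0].argmax(),
               xy[:, 1].argmin(), xy[:, 1].argmax()]]
    # farthest pair among the candidates -> diagonal corners a, b
    d = np.linalg.norm(cand[:, None, :] - cand[None, :, :], axis=-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    a, b = cand[i], cand[j]
    # (2) third corner m: point farthest from the line a-b
    ab = b - a
    dist_to_ab = np.abs(ab[0] * (xy[:, 1] - a[1]) - ab[1] * (xy[:, 0] - a[0]))
    dist_to_ab /= (np.linalg.norm(ab) + 1e-9)
    m = xy[dist_to_ab.argmax()]
    # (3) the box edges run from m to a and from m to b
    e1, e2 = a - m, b - m
    length, width = np.linalg.norm(e1), np.linalg.norm(e2)
    yaw = np.arctan2(e1[1], e1[0])           # heading along edge m->a (no long/short check here)
    center_xy = (a + b) / 2.0                # midpoint of the diagonal
    z_min, z_max = cluster_xyz[:, 2].min(), cluster_xyz[:, 2].max()
    center = np.array([center_xy[0], center_xy[1], (z_min + z_max) / 2.0])
    size = np.array([length, width, z_max - z_min])
    return center, size, yaw
```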
in an actual automatic driving environment, large-size clusters are generally building targets such as houses and walls, and small-size clusters are possibly influenced by noise; dynamic objects on outdoor roads are mostly vehicles and pedestrians, and the size value of the surrounding frame of the dynamic objects should be within a reasonable threshold value range. Therefore, the clustering result can be filtered according to the geometric dimension information of the surrounding frame, and the cluster with reasonable dimension is reserved and input into a subsequent multi-target matching and tracking module;
s4, performing multi-target matching and tracking on reasonable clusters according to Hungary and Kalman filtering algorithms, and dividing branch quiescent points; the multi-target tracking mainly comprises three modules, namely Kalman filtering, hungary algorithm and tracking management; the Kalman filtering is a linear model obeying Gaussian distribution and is used for fusing a predicted value obtained by a mathematical model and an observed value obtained by real measurement to finally obtain optimal estimation; the algorithm is divided into two parts, namely state prediction and updating;
the state model of the kalman filter is established here as follows:
X={x,y,z,w,l,h,x',y',z',w',l',h'} T
wherein x, y and z are coordinates of the mass center of the cluster, w, l and h are length, width and height information of the bounding box of the cluster, x ', y' and z 'represent speeds in three dimensions, namely derivatives of positions, and w', l 'and h' are transformation quantities of sizes, namely derivatives of length, width and height information;
kalman filtering is performed according to historical information at t-delta t momentMeasured predicted value
Figure BDA0003995193520000061
(i.e., tracks) and combines the observation of the current time to obtain the state value X t (i.e., detects) updates the optimal estimate of the current state @>
Figure BDA0003995193520000062
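Below is a minimal constant-velocity predict/update sketch for the 12-dimensional state above; the noise covariances and time step are placeholder assumptions, not parameters disclosed in the patent.

```python
import numpy as np

DIM = 12  # [x, y, z, w, l, h, x', y', z', w', l', h']

def make_model(dt=0.1, q=1e-2, r=1e-1):
    """Constant-velocity transition F, observation H (positions + sizes), noise Q, R."""
    F = np.eye(DIM)
    F[:6, 6:] = dt * np.eye(6)                     # each of the first 6 states integrates its derivative
    H = np.hstack([np.eye(6), np.zeros((6, 6))])   # a Detection observes x, y, z, w, l, h
    Q = q * np.eye(DIM)
    R = r * np.eye(6)
    return F, H, Q, R

def predict(x, P, F, Q):
    """State prediction X_{t|t-dt} (the Track)."""
    return F @ x, F @ P @ F.T + Q

def update(x_pred, P_pred, z, H, R):
    """Fuse the predicted Track with the observed Detection to get the estimate X_{t|t}."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(DIM) - K @ H) @ P_pred
    return x_new, P_new
```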
The purpose of the Hungarian algorithm is to solve the problem of matching targets between the previous and current frames. First a cost matrix must be constructed, and then the cost matrix is solved with the Hungarian algorithm. The cost matrix is constructed as follows:
assuming the previous frame contains a total of n Tracks predicted by Kalman filtering and the observation at the current moment yields a total of m Detections, a cost matrix Cost_{n×m} is constructed in which the element Cost_{ij} is the distance between the centroid of the i-th Track of the previous frame and the centroid of the j-th Detection of the current frame;
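A short sketch of this association step using SciPy's Hungarian solver follows; the gating distance max_dist is an assumed illustrative value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_centroids, det_centroids, max_dist=2.0):
    """Match n predicted Tracks to m Detections by centroid distance (Hungarian algorithm)."""
    if len(track_centroids) == 0 or len(det_centroids) == 0:
        return [], list(range(len(track_centroids))), list(range(len(det_centroids)))
    cost = np.linalg.norm(track_centroids[:, None, :] - det_centroids[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)        # minimizes total centroid distance
    matches, unmatched_tracks, unmatched_dets = [], [], []
    for i, j in zip(rows, cols):
        if cost[i, j] <= max_dist:
            matches.append((i, j))
        else:                                        # assigned pair too far apart -> reject
            unmatched_tracks.append(i)
            unmatched_dets.append(j)
    unmatched_tracks += [i for i in range(len(track_centroids)) if i not in rows]
    unmatched_dets += [j for j in range(len(det_centroids)) if j not in cols]
    return matches, unmatched_tracks, unmatched_dets
```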
when a new target appears, a new Track needs to be created and a corresponding Track ID is allocated to the new target, and when the old target disappears, the corresponding Track needs to be deleted and the Track ID is destroyed. If a certain target which is being tracked and observed in the current frame is lost and is detected again in the next frame, if the target is directly marked as a new Track, the target Track ID in the system is frequently switched, and the actual tracking effect stability is low;
for a new Detection in each frame, only three consecutive frames detect this target and are associated to the same Tracks trace chain, will they be assigned a Track ID. And for the Detection of the previous frame which exists and disappears, using prediction data of Kalman filtering as the estimation state output of the current frame, and if the Detection which can be matched with the current frame is not detected after three frames, completely destroying the Tracks tracking chain and the Track ID thereof. The finally obtained cluster tracking result can be expressed as:
Figure BDA0003995193520000071
wherein->
Figure BDA0003995193520000072
Indicates that a certain tracking chain with k intervals already exists in the current frame t, and a certain cluster before k frames is included in the tracking chain>
Figure BDA0003995193520000073
Certain cluster before k-1 frame->
Figure BDA0003995193520000074
Trace back until the ith cluster in the current frame->
Figure BDA0003995193520000075
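The tracking-management policy described above (confirm a track after three consecutive hits, coast on the Kalman prediction when a Detection is missing, delete after three consecutive misses) could be sketched as follows; the class and field names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from itertools import count

_id_gen = count(1)

@dataclass
class Track:
    state: object                       # Kalman state of the cluster
    hits: int = 1                       # consecutive frames with a matched Detection
    misses: int = 0                     # consecutive frames without a match
    track_id: int = None                # assigned only after confirmation
    history: list = field(default_factory=list)   # tracking chain {C^(t-k) ... C^t}

def step(tracks, matches, unmatched_tracks, unmatched_dets, detections):
    """Update the track list for one frame according to the 3-hit / 3-miss policy."""
    for ti, dj in matches:
        t = tracks[ti]
        t.state = detections[dj]        # in practice: Kalman update with the Detection
        t.hits, t.misses = t.hits + 1, 0
        t.history.append(detections[dj])
        if t.hits >= 3 and t.track_id is None:
            t.track_id = next(_id_gen)  # confirmed: allocate a Track ID
    for ti in unmatched_tracks:
        t = tracks[ti]
        t.misses += 1                   # this frame outputs the Kalman prediction instead
        t.hits = 0
    tracks = [t for t in tracks if t.misses < 3]   # destroy chain and ID after 3 misses
    tracks += [Track(state=detections[dj]) for dj in unmatched_dets]  # tentative new tracks
    return tracks
```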
S5, determining the dynamic and static attributes of the clustered targets according to the dynamic and static proportions of the clustering points, and filtering the dynamic targets; the method comprises the following steps:
(1) Each point k of a cluster in the current sparse point cloud is matched against the dense point cloud from a specified interval of δ frames earlier, and a nearest neighbor point subject to a point normal vector constraint can be obtained. As shown in FIG. 4, the dense point cloud that was not downsampled in the preprocessing module is sent to the matching thread, and a KD-Tree with normal vector information is built for it; clustering and multi-target tracking are performed on the preprocessed sparse point cloud to obtain the cluster tracking result T_i^t;
(2) According to the result of nearest neighbor matching, the distance d_k between a point of the current frame and the corresponding point from δ frames earlier is calculated, and the velocity of each point is calculated from the time difference between the two frames, which can be expressed as v_k = d_k / Δt; setting l_NN as a velocity threshold, points with velocity v_k < l_NN are static points, and points with v_k ≥ l_NN are dynamic points (see the sketch after this list);
(3) The numbers of dynamic and static points in the current cluster are counted, and the dynamic or static attribute of the cluster is distinguished according to a voting strategy. A relative proportion threshold and an absolute number threshold for dynamic points are defined: if the number of dynamic points in the cluster exceeds the absolute number threshold and the proportion of dynamic points exceeds the relative proportion threshold, the cluster is considered a dynamic target; if neither condition is met, it is considered a static target; if only one of the two conditions is met, the clustering result is marked as an Unknown state;
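A sketch of the point-wise matching and velocity test in steps (1)-(2), using a KD-Tree on the earlier dense cloud with a simple normal-vector consistency check, is given below; the thresholds normal_min_dot and l_nn and the frame interval are assumed values, not those of the invention.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_points(cluster_pts, cluster_normals,
                 dense_pts_prev, dense_normals_prev,
                 dt, l_nn=0.5, normal_min_dot=0.8):
    """Return a boolean array: True = dynamic point, False = static point."""
    tree = cKDTree(dense_pts_prev)                # dense cloud from delta frames earlier
    dist, idx = tree.query(cluster_pts, k=1)      # nearest neighbor for every cluster point
    # normal vector constraint: reject correspondences whose normals disagree
    dots = np.abs(np.sum(cluster_normals * dense_normals_prev[idx], axis=1))
    valid = dots >= normal_min_dot
    velocity = dist / dt                          # v_k = d_k / delta_t
    # rejected correspondences are conservatively treated as static in this sketch
    dynamic = np.where(valid, velocity >= l_nn, False)
    return dynamic
```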
as with the multi-target tracking algorithm, the dynamic and static voting module also needs to make corresponding strategies to manage the output of the final dynamic and static targets; the motion of a dynamic target in a default environment conforms to a general rule, so that the speed transformation amplitude of the target is limited, and the change of the speed between any two adjacent frames and the change of the speed in the direction cannot be overlarge; counting the times of voting clusters belonging to the same tracking chain and in different time frames into a dynamic target in a window with a specified frame number, wherein the proportion of the dynamic attributes of the clusters is required to exceed a preset dynamic cluster relative proportion threshold value
Figure BDA0003995193520000081
It can only be considered as a dynamic target at this time; the target point cloud which is considered as a dynamic object can be directly removed from the original point cloud without participating in the links of map building or map updating, so that the reliability of the map is improved, and the ghost image left by the dynamic object is eliminated.
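The cluster-level decision plus the temporal voting window might be sketched as follows; the threshold names and values are assumptions for illustration, not the patent's.

```python
from collections import defaultdict, deque

ABS_NUM_THRESH = 20       # absolute number threshold for dynamic points in a cluster
REL_PROP_THRESH = 0.5     # relative proportion threshold for dynamic points
WINDOW = 10               # voting window length in frames
WINDOW_PROP_THRESH = 0.6  # proportion of "dynamic" votes required inside the window

votes = defaultdict(lambda: deque(maxlen=WINDOW))   # track_id -> recent per-frame votes

def cluster_vote(n_dynamic, n_total):
    """Per-frame vote for one cluster: 'dynamic', 'static' or 'unknown'."""
    ratio = n_dynamic / max(n_total, 1)
    over_num, over_ratio = n_dynamic > ABS_NUM_THRESH, ratio > REL_PROP_THRESH
    if over_num and over_ratio:
        return "dynamic"
    if not over_num and not over_ratio:
        return "static"
    return "unknown"                                 # only one condition met

def is_dynamic_target(track_id, n_dynamic, n_total):
    """Final decision over the voting window for the tracking chain of track_id."""
    votes[track_id].append(cluster_vote(n_dynamic, n_total))
    v = votes[track_id]
    return v.count("dynamic") / len(v) > WINDOW_PROP_THRESH
```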
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A dynamic obstacle removal method based on real-time point cloud data, characterized by comprising the following steps:
acquiring object information to form real-time three-dimensional point cloud data;
dividing the target point cloud into a sparse point cloud and a dense point cloud through preprocessing;
performing point cloud clustering on the sparse point cloud with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, dividing the point cloud data into a plurality of clusters, and fitting a bounding box to each cluster to obtain the geometric feature information of each individual cluster;
according to the Hungarian and Kalman filtering algorithms, performing multi-target matching and tracking on the clusters that meet the filtering threshold, and distinguishing dynamic and static points;
performing point cloud matching on the dense point cloud through KD-Tree;
and determining the dynamic or static attribute of each clustered target according to the proportion of dynamic and static points in the cluster, and filtering out dynamic targets.
2. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 1, wherein the preprocessing comprises:
self-motion distortion correction is carried out on the point cloud through IMU data, pose information of an unmanned vehicle is provided through odometer data, local point cloud data are registered into a global coordinate system to be associated, and ROI extraction and down-sampling filtering operation of single-frame point cloud are completed in sequence.
3. The method according to claim 1, wherein DBSCAN point cloud clustering comprises:
counting the number of neighbor points in the radius Eps near each point, wherein the points with the number of the neighbor points larger than the minimum point threshold value MinPts are core points, the points which fall in the Eps neighborhood of the core points but with the number of the neighbor points smaller than the MinPts are boundary points, and all the points except the core points and the boundary points are noise points;
taking any core point as a new cluster, expanding from the core point outward, and classifying all density-reachable points into the same cluster until all points other than noise points are contained in a cluster; density-reachable means that for a chain of points {X_1, X_2, ..., X_n}, for any pair of points X_i and X_{i+1}, X_{i+1} falls within the Eps neighborhood of X_i and X_i is a core point; X_n is then said to be density-reachable from X_1.
4. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 3, wherein: before point cloud clustering, searching adjacent points through KD-Tree, counting the number of the adjacent points, and classifying all the points.
5. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 1, wherein the method comprises the following steps: obtaining effective geometric dimension, posture and centroid characteristic information of the clustered objects through a bounding box fitting algorithm of the L-shape characteristic; the bounding box fitting algorithm includes:
projecting the three-dimensional coordinate points onto the XY plane, taking the extreme points of the cluster along the X and Y axes as candidate points, calculating the distance between every two candidate points, and finding the two corner points with the greatest distance in the cluster;
searching a third corner point which is farthest from the connecting line of the two corner points in the cluster;
and calculating the centroid of the cluster and the size and the posture of the bounding box according to the three angular points to obtain complete bounding box information.
6. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 1, wherein the method comprises the following steps:
establishing a Kalman filter state model X = {x, y, z, w, l, h, x', y', z', w', l', h'}^T;
wherein x, y and z are the centroid coordinates of the cluster, w, l and h are the length, width and height of the cluster bounding box, x', y' and z' represent the velocities in the three dimensions, and w', l' and h' are the rates of change of the size;
the Kalman filter obtains the predicted value X_{t|t-Δt} by prediction from the historical information at time t-Δt, and updates the optimal estimate X_{t|t} of the current state according to the state value X_t observed at the current moment.
7. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 6, wherein:
constructing a cost matrix Cost_{n×m}: the previous frame contains a total of n Tracks predicted by Kalman filtering, and the observation at the current moment yields a total of m Detections; the element Cost_{ij} is the distance between the centroid of the i-th Track of the previous frame and the centroid of the j-th Detection of the current frame;
and solving the cost matrix by using the Hungarian algorithm.
8. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 7, wherein:
for a newly-appeared Detection in each frame, a Track ID is allocated to the target only when the target is detected in three consecutive frames and associated to the same Tracks tracking chain; for a Detection that exists in the previous frame but disappears in the current frame, the prediction of the Kalman filter is used as the estimated state output of the current frame, and if no matching Detection is found after three further frames, the Tracks tracking chain and its Track ID are destroyed completely; the obtained cluster tracking result is expressed as T_i^t = {C_i^{t-k}, C_i^{t-k+1}, ..., C_i^t}, which indicates that, up to the current frame t, a tracking chain spanning k frames already exists, containing the cluster C_i^{t-k} from k frames ago through the i-th cluster C_i^t in the current frame.
9. The method for dynamic obstacle elimination based on real-time point cloud data as claimed in claim 1, characterized in that:
each point k of a cluster in the sparse point cloud is matched against the dense point cloud from a specified interval of δ frames earlier to obtain a nearest neighbor point subject to a point normal vector constraint;
according to the nearest neighbor, the distance d_k between the point of the current frame and the corresponding point from δ frames earlier is calculated, and the velocity of each point is calculated from the time difference between the two frames, expressed as v_k = d_k / Δt;
setting l_NN as a velocity threshold, points with velocity v_k < l_NN are static points, and points with v_k ≥ l_NN are dynamic points;
and the numbers of dynamic and static points in the current cluster are counted, and the dynamic or static attribute of the cluster is defined according to the proportion of dynamic and static points.
10. The method for removing dynamic obstacles based on real-time point cloud data as claimed in claim 9, wherein:
if the number of dynamic points in the cluster exceeds the absolute number threshold and the proportion of dynamic points exceeds the relative proportion threshold, the cluster is a dynamic target;
if the number of dynamic points in the cluster does not exceed the absolute number threshold and the proportion of dynamic points does not exceed the relative proportion threshold, the cluster is a static target.
CN202211606482.6A 2022-12-13 2022-12-13 Dynamic obstacle removing method based on real-time point cloud data Pending CN115861968A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211606482.6A CN115861968A (en) 2022-12-13 2022-12-13 Dynamic obstacle removing method based on real-time point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211606482.6A CN115861968A (en) 2022-12-13 2022-12-13 Dynamic obstacle removing method based on real-time point cloud data

Publications (1)

Publication Number Publication Date
CN115861968A true CN115861968A (en) 2023-03-28

Family

ID=85672834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211606482.6A Pending CN115861968A (en) 2022-12-13 2022-12-13 Dynamic obstacle removing method based on real-time point cloud data

Country Status (1)

Country Link
CN (1) CN115861968A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116493392A (en) * 2023-06-09 2023-07-28 北京中超伟业信息安全技术股份有限公司 Paper medium carbonization method and system
CN116493392B (en) * 2023-06-09 2023-12-15 北京中超伟业信息安全技术股份有限公司 Paper medium carbonization method and system
CN117115494B (en) * 2023-10-23 2024-02-06 卡松科技股份有限公司 Lubricating oil impurity pollution detection method and device based on artificial intelligence
CN117252863A (en) * 2023-11-13 2023-12-19 山东省地质测绘院 Quick detection and analysis method for geographic information abnormal data
CN117252863B (en) * 2023-11-13 2024-02-09 山东省地质测绘院 Quick detection and analysis method for geographic information abnormal data

Similar Documents

Publication Publication Date Title
CN111337941B (en) Dynamic obstacle tracking method based on sparse laser radar data
CN112014857A (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN111583369B (en) Laser SLAM method based on facial line angular point feature extraction
WO2022188663A1 (en) Target detection method and apparatus
CN108171131B (en) Improved MeanShift-based method for extracting Lidar point cloud data road marking line
CN115861968A (en) Dynamic obstacle removing method based on real-time point cloud data
CN110780305A (en) Track cone bucket detection and target point tracking method based on multi-line laser radar
CN113506318B (en) Three-dimensional target perception method under vehicle-mounted edge scene
CN114842438A (en) Terrain detection method, system and readable storage medium for autonomous driving vehicle
CN113345008B (en) Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation
CN112380312B (en) Laser map updating method based on grid detection, terminal and computer equipment
CN110197173B (en) Road edge detection method based on binocular vision
CN116576857A (en) Multi-obstacle prediction navigation obstacle avoidance method based on single-line laser radar
CN114325634A (en) Method for extracting passable area in high-robustness field environment based on laser radar
CN114120075A (en) Three-dimensional target detection method integrating monocular camera and laser radar
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN116109601A (en) Real-time target detection method based on three-dimensional laser radar point cloud
He et al. Real-time track obstacle detection from 3D LIDAR point cloud
CN114545435A (en) Dynamic target sensing system and method fusing camera and laser radar
Gökçe et al. Recognition of dynamic objects from UGVs using Interconnected Neuralnetwork-based Computer Vision system
CN115482282A (en) Dynamic SLAM method with multi-target tracking capability in automatic driving scene
CN112348950B (en) Topological map node generation method based on laser point cloud distribution characteristics
CN115187955A (en) Target positioning method based on post-fusion of visual image and sparse point cloud
Wang et al. A new grid map construction method for autonomous vehicles
Zhao et al. Self-localization using point cloud matching at the object level in outdoor environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination