CN113344112B - Dynamic object identification method based on clustering algorithm - Google Patents

Dynamic object identification method based on clustering algorithm

Info

Publication number
CN113344112B
Authority
CN
China
Prior art keywords
points
point cloud
cluster
dynamic object
transformation matrix
Prior art date
Legal status
Active
Application number
CN202110715305.0A
Other languages
Chinese (zh)
Other versions
CN113344112A
Inventor
Zhang Tao (张涛)
Zhang Chen (张晨)
Zhang Chi (张驰)
Wei Hongyu (魏宏宇)
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202110715305.0A priority Critical patent/CN113344112B/en
Publication of CN113344112A publication Critical patent/CN113344112A/en
Application granted granted Critical
Publication of CN113344112B publication Critical patent/CN113344112B/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/66 - Analysis of geometric attributes of image moments or centre of gravity
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/10044 - Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A dynamic object identification method based on a clustering algorithm: 1) order the unordered point cloud collected by the lidar, projecting the three-dimensional point cloud onto a two-dimensional plane; 2) fit a straight line to the discrete points in each column from step 1, compute the corresponding slope k and intercept b, and segment the points into ground points and other points; 3) cluster the other points from step 2 with the DBSCAN algorithm, dividing them into different clusters; 4) compute the centroid of each point cloud cluster segmented in step 3, and estimate the transformation matrix corresponding to each cluster with an inter-frame matching algorithm; 5) combining the robot pose from the previous frame, compute a candidate new pose from each cluster's estimated transformation matrix, cluster the new poses, and take the labels corresponding to the clustering noise points as dynamic objects. The invention identifies dynamic objects by processing the point cloud directly, thereby reducing the estimation error of the transformation matrix.

Description

Dynamic object identification method based on clustering algorithm
Technical Field
The invention relates to the field of dynamic object identification with clustering algorithms, in particular to a dynamic object identification method based on a clustering algorithm that can be applied in three-dimensional laser SLAM navigation to eliminate interference from dynamic objects and improve mapping and localization accuracy.
Background
In recent years, robotics has developed rapidly worldwide: both service robots and industrial robots are on the rise, and intelligent mobile robots have gradually entered public view, becoming an indispensable part of daily production and life. When executing tasks, an intelligent mobile robot must perceive its environment through sensors so that it can handle harsh conditions and perform precise operations. For robots to complete complex operations, many technical hurdles must be overcome; simultaneous localization and mapping (SLAM) is one of the most active research topics in robotics. When the front end of a laser SLAM system registers two point cloud frames to estimate a transformation matrix, dynamic objects introduce errors, which ultimately accumulate into a large localization deviation. To solve this problem, most current schemes acquire an image corresponding to the point cloud frame with a camera, identify dynamic objects through image processing, and compute and remove the corresponding points in the point cloud. However, removing dynamic objects from the lidar point cloud by introducing camera images increases processing time, is complex, and struggles to satisfy a SLAM system running at high speed.
Disclosure of Invention
To solve these problems, the invention provides a dynamic object identification method based on a clustering algorithm. In a three-dimensional laser SLAM system, dynamic objects introduce errors that cause a large offset in the estimated robot pose. The method divides the point cloud into clusters with a clustering algorithm and judges whether each cluster belongs to a dynamic object, thereby removing points belonging to dynamic objects from the point cloud and eliminating their interference with transformation matrix estimation.
The invention provides a dynamic object identification method based on a clustering algorithm, comprising the following specific steps:
Step 1: order the unordered point cloud collected by the lidar, projecting the three-dimensional point cloud onto a two-dimensional plane;
the ordering uses the scanning time sequence and the different laser beams to reduce the dimensionality of the data, and the unordered point cloud being processed is a three-dimensional point cloud after distortion correction;
after the dimensionality reduction, the distortion-corrected three-dimensional point cloud is still retained for subsequent clustering;
Step 2: fit a straight line to the discrete points in each column from step 1, compute the corresponding slope k and intercept b, and segment the points into ground points and other points;
all points other than ground points are collectively called other points, and no secondary segmentation is performed on them; because an elevated plane may exist in the point cloud, points with a large intercept and a small slope are also treated as other points;
Step 3: cluster the other points from step 2 with the DBSCAN clustering algorithm, dividing them into different clusters;
step 3 only performs semantic segmentation on the other points from step 2, without recognition; when an object is so close to its surroundings that segmentation is difficult, the object and its surroundings are treated as one semantic whole; if any part of the semantic whole is a dynamic object, all of its points are eventually screened out;
Step 4: compute the centroid of each point cloud cluster segmented in step 3, and estimate the transformation matrix corresponding to each cluster with an inter-frame matching algorithm;
Step 5: combining the robot pose from the previous frame, compute a candidate new pose from each cluster's estimated transformation matrix, cluster the new poses, and take the labels corresponding to the clustering noise points as dynamic objects.
As a further improvement of the invention, step 4 uses the centroid of each cluster from step 3, $c_t^i = (x_t^i, y_t^i, z_t^i)$, in place of the whole cluster, where $x_t^i$, $y_t^i$ and $z_t^i$ are the x, y and z coordinates of the centroid of the cluster labeled i at time t, thereby greatly compressing the point cloud. Using the corresponding centroid positions $c_{t-1}^i$ and $c_t^i$ in each semantic cluster of two consecutive frames, the transformation matrix is estimated with the following three formulas, finally yielding the transformation matrix set $\{R_i \mid i = 1, \dots, N_l\}$:

$$R_i = \cos\theta_i \cdot I + (1 - \cos\theta_i)\, n_i n_i^T + \sin\theta_i \cdot n_i^{\wedge}$$

$$n_i = \frac{c_{t-1}^i \times c_t^i}{\lVert c_{t-1}^i \times c_t^i \rVert}$$

$$\theta_i = \arccos\frac{c_{t-1}^i \cdot c_t^i}{\lVert c_{t-1}^i \rVert \, \lVert c_t^i \rVert}$$

where $I$ is the identity matrix, $n_i$ and $\theta_i$ are the direction vector and rotation angle of the equivalent rotation vector computed for the i-th semantic cluster, and $R_i$ is the transformation matrix computed for the i-th semantic cluster.
As a further improvement of the invention, because the set of transformation matrices computed in step 5, one per cluster, has a large dimension, the new carrier poses $X_t^i$ are solved from the pose change relation, completing the dimensionality reduction; a clustering algorithm is then applied to the new poses to judge outliers, the transformation matrices corresponding to the outliers are obtained, and the identification of dynamic objects is completed.
Compared with the prior art, the invention has the following advantages: most current dynamic object identification schemes acquire an image corresponding to the point cloud frame with a camera, identify dynamic objects through image processing, and compute and remove the corresponding points in the point cloud, which is overly complex and has poor real-time performance. The invention clusters the laser point cloud with the DBSCAN algorithm, which does not require the number of clusters in the point cloud to be known in advance, is simple, and adapts to various scenes. Because the pose estimation error of a dynamic object is much larger than that of static objects, the error introduced by replacing a cluster's point cloud with its centroid is negligible, which confirms the validity of the scheme; the centroid replacement also greatly reduces the amount of computation and satisfies the robot's real-time requirements.
Drawings
FIG. 1 is a three-dimensional point cloud acquired by the lidar;
FIG. 2 is the three-dimensional point cloud with ground points removed;
FIG. 3 is a top view of the three-dimensional point cloud with ground points removed, projected onto a two-dimensional plane;
FIG. 4 is a schematic diagram of the clustered point cloud.
Detailed Description
The invention is described in further detail below with reference to the following detailed description and accompanying drawings:
the invention provides a dynamic object identification method based on a clustering algorithm. The method divides the point cloud into clusters by means of a clustering algorithm and judges whether each cluster belongs to a dynamic object, thereby removing points belonging to the dynamic object in the point cloud and eliminating the interference of the dynamic object on an estimation transformation matrix.
As a specific embodiment of the invention, the technical solution is as follows: let the point cloud data set acquired by one full scan of the three-dimensional lidar at time t be $P_t = \{p_i \mid i = 1, \dots, N_t\}$, $p_i = (x_i\ y_i\ z_i)$, where $N_t$ is the number of points collected by the lidar at time t and $p_i$ is a one-dimensional vector of Euclidean coordinates in the lidar coordinate system; the three-dimensional point cloud acquired by the lidar is shown in FIG. 1.

Step 1: first, order the unordered point cloud acquired by the lidar, i.e., project the unordered point cloud data onto a two-dimensional plane; the three-dimensional point cloud with ground points removed is shown in FIG. 2, and a top view of that point cloud projected onto a two-dimensional plane is shown in FIG. 3. The rows of the two-dimensional plane correspond to the beams of the multi-line lidar, so the number of lidar beams equals the number of rows of the two-dimensional array; the columns of the two-dimensional plane indicate the order in which points are collected as the lidar rotates through one full circle. The row number of each point is determined by computing its angle with respect to the XOY plane, $\omega_i = \arctan\big(z_i / \sqrt{x_i^2 + y_i^2}\big)$, and its column number by computing its deflection angle in the XOY plane relative to the Y axis, $\alpha_i = \arctan(x_i / y_i)$, thereby completing the projection of the three-dimensional point cloud onto the two-dimensional plane.
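A minimal Python sketch of this projection is given below; the beam count, azimuth resolution, vertical field of view, and all function and variable names are illustrative assumptions for a 16-beam lidar, not values taken from the patent.

```python
import numpy as np

def project_to_range_image(points, n_beams=16, n_cols=1800,
                           fov_up_deg=15.0, fov_down_deg=-15.0):
    """Order an unordered (N, 3) point cloud as a 2-D grid: one row per
    lidar beam (vertical angle), one column per azimuth step."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Angle of each point with respect to the XOY plane -> row index.
    omega = np.arctan2(z, np.sqrt(x ** 2 + y ** 2))
    # Deflection angle in the XOY plane relative to the Y axis -> column index.
    alpha = np.arctan2(x, y)

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    row = (omega - fov_down) / (fov_up - fov_down) * (n_beams - 1)
    col = (alpha + np.pi) / (2 * np.pi) * (n_cols - 1)
    row = np.clip(np.round(row).astype(int), 0, n_beams - 1)
    col = np.clip(np.round(col).astype(int), 0, n_cols - 1)

    grid = np.full((n_beams, n_cols), -1, dtype=int)  # -1 marks an empty cell
    grid[row, col] = np.arange(points.shape[0])       # store point indices
    return grid
```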
Step 2: fit a straight line to the discrete points in each column from step 1 and compute the corresponding slope k and intercept b; a point can be regarded as a ground point when the following conditions are met:
Condition 1: considering that ground points do not form a vertical structure and that the ground may be a slope, the slope of the fitted line should be smaller than a certain threshold.
Condition 2: considering that a line fitted to points on the top plane of an object can also have a small slope, the intercept b must not exceed a certain threshold when condition 1 is satisfied.
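A sketch of this ground segmentation follows, reusing the grid from the projection sketch above; the threshold values k_max and b_max and the helper names are illustrative assumptions, since the patent only requires such thresholds to exist.

```python
import numpy as np

def split_ground(points, grid, k_max=0.15, b_max=0.5):
    """Fit a line z = k*d + b to each column of the range image and mark
    the column's points as ground when both conditions hold: small slope
    (condition 1) and small intercept (condition 2)."""
    ground_mask = np.zeros(points.shape[0], dtype=bool)
    for c in range(grid.shape[1]):
        idx = grid[:, c]
        idx = idx[idx >= 0]                          # points falling in this column
        if idx.size < 2:
            continue                                 # need two points to fit a line
        d = np.linalg.norm(points[idx, :2], axis=1)  # horizontal distance in XOY
        z = points[idx, 2]
        k, b = np.polyfit(d, z, 1)                   # slope k and intercept b
        if abs(k) < k_max and abs(b) < b_max:
            ground_mask[idx] = True
    return points[ground_mask], points[~ground_mask]  # ground points, other points
```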
Step 3: let the point cloud after ground point removal be $P'_t$, and complete the clustering and partitioning of the point cloud with the DBSCAN algorithm. The specific steps are as follows:

First, select a point $p_m$ from the processed point cloud and find, in the remaining point cloud, all points satisfying

$$\lVert p_m - p_i \rVert < \varepsilon$$

These points are tentatively assigned to one cluster, where $\varepsilon$ is a given parameter representing the distance threshold between points. If the number of points in the cluster is greater than or equal to the given parameter minpts, the point is marked as a core point and the cluster is formally assigned a new label; if it is less than minpts, the point is marked as a noise point.

Second, visit the neighbors of the other points in the cluster of each core point; if a neighbor lies within $\varepsilon$ and has not been assigned to another cluster, add it to the core point's cluster, and continue until no further points lie within $\varepsilon$.

Third, select an unmarked point in the point cloud and repeat the same process until all points are marked.

The final point cloud $P'_t$ is divided into $P'_t = \{P'_{t,i} \mid i = 1, \dots, N_l\}$, where $P'_{t,i}$ is the set of points in $P'_t$ with label i and $N_l$ is the total number of labels after clustering.
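In practice this step maps directly onto an off-the-shelf DBSCAN implementation; the sketch below uses scikit-learn, with illustrative values for ε and minpts, and relies on scikit-learn's convention of labeling DBSCAN noise points -1.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_other_points(other_points, eps=0.5, min_pts=10):
    """Cluster the non-ground points; eps is the distance threshold ε and
    min_pts the minimum cluster size minpts from the text."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(other_points)
    # One point set per label i, excluding the noise label -1.
    clusters = {i: other_points[labels == i] for i in np.unique(labels) if i != -1}
    return labels, clusters
```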
Step 4: the centroid of each point cloud cluster is selected to represent the whole cluster, greatly reducing the amount of computation. Compute the centroid $c_t^i = (x_t^i, y_t^i, z_t^i)$ of each cluster of the point cloud $P'_t$, where $x_t^i$, $y_t^i$ and $z_t^i$ are the x, y and z coordinates of the centroid of the cluster labeled i at time t.

The point cloud segmented at the previous time (time t-1) is denoted $P'_{t-1}$. Together with the point cloud $P'_t$ segmented at the current time t, each cluster is registered through an inter-frame registration algorithm to obtain the correspondence between the point cloud clusters; a schematic diagram of the clustered point cloud is shown in FIG. 4. From the corresponding centroid positions $c_{t-1}^i$ and $c_t^i$ within each pair of clusters, the transformation relation $R_i$ within each cluster between the two consecutive frames is estimated with the Rodrigues formula:

$$R_i = \cos\theta_i \cdot I + (1 - \cos\theta_i)\, n_i n_i^T + \sin\theta_i \cdot n_i^{\wedge}$$

$$n_i = \frac{c_{t-1}^i \times c_t^i}{\lVert c_{t-1}^i \times c_t^i \rVert}$$

$$\theta_i = \arccos\frac{c_{t-1}^i \cdot c_t^i}{\lVert c_{t-1}^i \rVert \, \lVert c_t^i \rVert}$$

finally obtaining the transformation matrix set $\{R_i \mid i = 1, \dots, N_l\}$.
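A sketch of the per-cluster rotation estimate follows. The Rodrigues formula matches the patent text; taking the axis n_i from the cross product and the angle θ_i from the dot product of the two centroid vectors is this sketch's reading of the two image-only formulas, so it should be treated as an assumption.

```python
import numpy as np

def rotation_from_centroids(c_prev, c_curr):
    """Estimate R_i for one cluster from its centroids c_{t-1}^i and c_t^i."""
    cross = np.cross(c_prev, c_curr)
    n = cross / (np.linalg.norm(cross) + 1e-12)      # axis n_i (assumed formula)
    cos_t = np.dot(c_prev, c_curr) / (
        np.linalg.norm(c_prev) * np.linalg.norm(c_curr) + 1e-12)
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))     # angle θ_i (assumed formula)
    n_hat = np.array([[0.0, -n[2], n[1]],            # skew-symmetric matrix n_i^
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
    # Rodrigues formula: R_i = cosθ·I + (1 - cosθ)·n nᵀ + sinθ·n^
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * n_hat)
```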
Step 5: combining the estimated robot pose $X_{t-1}$ at the previous time with the transformation matrix set, estimate the new poses, obtaining the new pose set $X_t = \{X_t^i \mid i = 1, \dots, N_l\}$. Because a dynamic object introduces a deviation when the motion relation between the two frames is estimated, updating the pose with its computed transformation matrix inevitably introduces a large error; therefore the pose set $X_t$ is clustered, and the labels corresponding to the clustering noise points are the dynamic objects.
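The outlier judgment of step 5 can again reuse DBSCAN, this time over the candidate poses; the pose representation (one flattened pose vector per cluster label) and the parameter values below are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def find_dynamic_labels(new_poses, eps=0.05, min_pts=3):
    """Cluster the candidate new poses X_t^i, one row per cluster label i.

    Static clusters produce mutually consistent poses that form one dense
    group; a pose computed from a dynamic object's transformation matrix
    falls outside it and is marked as DBSCAN noise (-1)."""
    pose_labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(new_poses)
    return np.where(pose_labels == -1)[0]  # cluster labels i judged dynamic
```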
The above is only one preferred embodiment of the present invention and is not intended to limit the invention in any way; any modification or equivalent variation made in accordance with the technical spirit of the invention falls within the scope of the invention as claimed.

Claims (3)

1. A dynamic object identification method based on a clustering algorithm, comprising the following specific steps:
Step 1: order the unordered point cloud collected by the lidar, projecting the three-dimensional point cloud onto a two-dimensional plane;
the ordering uses the scanning time sequence and the different laser beams to reduce the dimensionality of the data, and the unordered point cloud being processed is a three-dimensional point cloud after distortion correction;
after the dimensionality reduction, the distortion-corrected three-dimensional point cloud is still retained for subsequent clustering;
Step 2: fit a straight line to the discrete points in each column from step 1, compute the corresponding slope k and intercept b, and segment the points into ground points and other points;
all points other than ground points are collectively called other points, and no secondary segmentation is performed on them; because an elevated plane may exist in the point cloud, points with a large intercept and a small slope are also treated as other points;
Step 3: cluster the other points from step 2 with the DBSCAN clustering algorithm, dividing them into different clusters;
step 3 only performs semantic segmentation on the other points from step 2, without recognition; when an object is so close to its surroundings that segmentation is difficult, the object and its surroundings are treated as one semantic whole; if any part of the semantic whole is a dynamic object, all of its points are eventually screened out;
Step 4: compute the centroid of each point cloud cluster segmented in step 3, and estimate the transformation matrix corresponding to each cluster with an inter-frame matching algorithm;
Step 5: combining the robot pose from the previous frame, compute a candidate new pose from each cluster's estimated transformation matrix, cluster the new poses, and take the labels corresponding to the clustering noise points as dynamic objects.
2. The dynamic object identification method based on a clustering algorithm according to claim 1, wherein: in step 4, the centroid of each cluster from step 3, $c_t^i = (x_t^i, y_t^i, z_t^i)$, is used in place of the whole cluster, where $x_t^i$, $y_t^i$ and $z_t^i$ are the x, y and z coordinates of the centroid of the cluster labeled i at time t, thereby greatly compressing the point cloud; using the corresponding centroid positions $c_{t-1}^i$ and $c_t^i$ in each semantic cluster of two consecutive frames, the estimation of the transformation matrix is completed with the following three formulas, finally yielding the transformation matrix set $\{R_i \mid i = 1, \dots, N_l\}$:

$$R_i = \cos\theta_i \cdot I + (1 - \cos\theta_i)\, n_i n_i^T + \sin\theta_i \cdot n_i^{\wedge}$$

$$n_i = \frac{c_{t-1}^i \times c_t^i}{\lVert c_{t-1}^i \times c_t^i \rVert}$$

$$\theta_i = \arccos\frac{c_{t-1}^i \cdot c_t^i}{\lVert c_{t-1}^i \rVert \, \lVert c_t^i \rVert}$$

where $I$ is the identity matrix, $n_i$ and $\theta_i$ are the direction vector and rotation angle of the equivalent rotation vector computed for the i-th semantic cluster, and $R_i$ is the transformation matrix computed for the i-th semantic cluster.
3. The dynamic object identification method based on a clustering algorithm according to claim 1, wherein: because the set of transformation matrices computed in step 5, one per cluster, has a large dimension, the new carrier poses $X_t^i$ are solved from the pose change relation, completing the dimensionality reduction; a clustering algorithm is then applied to the new poses to judge outliers, the transformation matrices corresponding to the outliers are obtained, and the identification of dynamic objects is completed.
CN202110715305.0A 2021-06-27 2021-06-27 Dynamic object identification method based on clustering algorithm Active CN113344112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110715305.0A CN113344112B (en) 2021-06-27 2021-06-27 Dynamic object identification method based on clustering algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110715305.0A CN113344112B (en) 2021-06-27 2021-06-27 Dynamic object identification method based on clustering algorithm

Publications (2)

Publication Number Publication Date
CN113344112A 2021-09-03
CN113344112B 2022-11-15

Family

ID=77478971

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110715305.0A Active CN113344112B (en) 2021-06-27 2021-06-27 Dynamic object identification method based on clustering algorithm

Country Status (1)

Country Link
CN (1) CN113344112B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066773B (en) * 2021-11-26 2023-10-27 哈尔滨理工大学 Dynamic object removal based on point cloud characteristics and Monte Carlo expansion method
CN115372972A (en) * 2022-08-05 2022-11-22 西安电子科技大学 Clustering method based on traffic millimeter wave radar point cloud data
CN115937320B (en) * 2023-02-21 2023-05-05 深圳市华亿明投资发展有限公司 Visual positioning method for polishing mobile phone shell
CN117649495B (en) * 2024-01-30 2024-05-28 山东大学 Indoor three-dimensional point cloud map generation method and system based on point cloud descriptor matching

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091062A (en) * 2019-11-21 2020-05-01 东南大学 Robot out-of-order target sorting method based on 3D visual clustering and matching

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091062A (en) * 2019-11-21 2020-05-01 东南大学 Robot out-of-order target sorting method based on 3D visual clustering and matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on robot visual recognition and localization based on multimodal information; Wei Yufeng et al.; Opto-Electronic Engineering; 2018-02-15 (No. 02); full text *
Normal vector estimation method for three-dimensional attitude points in lidar scenes; Zhang Nan et al.; Infrared and Laser Engineering; 2020-01-25 (No. 01); full text *

Also Published As

Publication number Publication date
CN113344112A (en) 2021-09-03

Similar Documents

Publication Publication Date Title
CN113344112B (en) Dynamic object identification method based on clustering algorithm
CN107563446B (en) Target detection method for micro-operation system
CN111443359B (en) Positioning method, device and equipment
CN112396595B (en) Semantic SLAM method based on point-line characteristics in dynamic environment
US20150253864A1 (en) Image Processor Comprising Gesture Recognition System with Finger Detection and Tracking Functionality
CN111612841B (en) Target positioning method and device, mobile robot and readable storage medium
CN110910451B (en) Object pose estimation method and system based on deformation convolution network
CN110930411B (en) Human body segmentation method and system based on depth camera
CN112328715B (en) Visual positioning method, training method of related model, related device and equipment
CN113327298B (en) Grabbing gesture estimation method based on image instance segmentation and point cloud PCA algorithm
CN114743259A (en) Pose estimation method, pose estimation system, terminal, storage medium and application
CN109685827B (en) Target detection and tracking method based on DSP
CN114491399A (en) Data processing method and device, terminal equipment and computer readable storage medium
Huang et al. Progressive spatio-temporal alignment for efficient event-based motion estimation
CN117456327A (en) Industrial part pose tracking method based on self-adaptive fusion of geometric edge and color statistics characteristics
CN114119749A (en) Monocular 3D vehicle detection method based on dense association
CN116468632A (en) Grid denoising method and device based on self-adaptive feature preservation
CN113838072B (en) High-dynamic star map image segmentation method
CN113702941A (en) Point cloud speed measurement method based on improved ICP (inductively coupled plasma)
CN110895684B (en) Gesture motion recognition method based on Kinect
CN114387308A (en) Machine vision characteristic tracking system
CN112529943A (en) Object detection method, object detection device and intelligent equipment
CN113160332A (en) Multi-target identification and positioning method based on binocular vision
CN112597913B (en) Face labeling method and device
CN112489134B (en) Motion estimation-oriented frame-crossing ultra-high-speed camera design method and motion estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant