CN117830341A - Method for removing dynamic traces of a point cloud map online

Info

Publication number
CN117830341A
Authority
CN
China
Prior art keywords
ground
voxels
dynamic
voxel
map
Prior art date
Legal status: Pending
Application number
CN202410005624.6A
Other languages
Chinese (zh)
Inventor
方正
吴容光
庞成林
王纪波
申朝辉
兰正阳
吴选康
Original Assignee
东北大学 (Northeastern University)
Priority date: 2024-01-03
Filing date: 2024-01-03
Publication date: 2024-04-05
Application filed by 东北大学 (Northeastern University)
Priority to CN202410005624.6A
Publication of CN117830341A

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 - Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a method for online removal of dynamic traces from a point cloud map, and relates to the technical field of point cloud map dynamic trace removal. The method is realized in four steps: ground segmentation, map updating, dynamic voxel removal based on downward and upward retrieval, and static restoration. Taking the laser radar point cloud and the pose of each frame as input, the method removes dynamic traces by comparing the observation time difference between the point cloud and the ground, and does so while the map is being constructed, so that a clean and reliable static map is obtained directly. The method effectively avoids the technical defects of the existing offline and online removal methods in the field of dynamic trace removal.

Description

Method for removing dynamic traces of a point cloud map online
Technical Field
The invention relates to the technical field of point cloud map dynamic trace removal, and in particular to a method for removing dynamic traces of a point cloud map online.
Background
A clean and reliable map plays an important role in three-dimensional reconstruction, autonomous robot navigation, automatic driving, environment monitoring and the like. The map drawn by a LiDAR sensor is called a 3D point cloud map, and is usually obtained by accumulating the point cloud data acquired by the LiDAR frame by frame. Unfortunately, urban environments often contain a large number of dynamic targets, which are also scanned by the lidar into the point cloud data. Moreover, as these targets move, a large number of points accumulate in the map along their motion trajectories and form long traces. These points are often referred to as dynamic points, and the traces they form are referred to as dynamic traces. Such dynamic traces become obstacles in autonomous navigation and city modeling and affect the use of the subsequent map. Therefore, dynamic trace removal is critical to building a clean point cloud map.
Dynamic trace removal generally includes offline removal and online removal. Offline removal first obtains a prior map containing all dynamic traces, then processes the map to remove the traces, and finally obtains a map containing only the static point cloud. Online removal judges and removes dynamic traces in the map while the map is being constructed, so as to obtain the static map directly.
Most offline dynamic trace removal methods are based on one basic idea: judging dynamic traces by comparing the difference between a single-frame point cloud and the prior map. Common offline removal methods include ray projection, visual visibility and traversability. These methods can achieve excellent dynamic removal results, but offline removal requires processing after the map has been constructed, which increases the time cost of obtaining the map.
Online dynamic trace removal is mainly realized by moving object segmentation algorithms, and moving object segmentation is usually implemented with deep learning. However, deep learning methods depend heavily on the data set, and they often fail when the actual scene differs too much from the training scenes. Therefore, designing and realizing a method that removes dynamic traces online while constructing the map has important innovative and practical application value.
Chinese patent CN114926369A discloses a technical method that combines a map model to remove dynamic obstacles from laser point clouds, so as to improve vehicle positioning accuracy. The method first separates the ground in the point cloud of the current frame, and then compares the difference between the remaining point cloud and a high-precision map to remove the dynamic point cloud. However, the method removes the dynamic point cloud only in the current frame, and its purpose is to remove the influence of dynamic obstacles on vehicle positioning, so a clean and reliable map cannot be obtained.
Chinese patent CN112270694B discloses a technical method for detecting dynamic targets in urban environments based on laser radar scan maps. It projects laser radar point clouds into panoramic depth images and establishes an index relationship between the panoramic depth images and the point clouds. After the initial dynamic regions are detected and the background static regions are removed, an optical flow of correlated changes is constructed to remove pseudo dynamic detection regions; finally, point clustering and region filling are performed to complete the detection of dynamic targets in the urban environment. However, the initial dynamic region detection uses only the information change between two adjacent frames to detect dynamic targets, without using the whole map, so not all dynamic targets can be detected.
Disclosure of Invention
The technical problem to be solved by the invention is, in view of the defects of the prior art, to provide a method for online removal of dynamic traces from a point cloud map, which can remove dynamic traces during the construction of the point cloud map. The method takes the laser radar point cloud and the pose of each frame as input and removes dynamic traces by comparing the observation time difference between the point cloud and the ground; the dynamic traces contained in the map can be removed while the map is being constructed, so that a clean and reliable static map is obtained directly and the time cost of conventional offline removal methods is reduced.
In order to solve the above technical problems, the invention adopts the following technical scheme: a method for removing dynamic traces of a point cloud map online, comprising the following steps:
step 1: adopting two-stage ground segmentation (coarse segmentation and fine segmentation) to obtain a ground point set and a non-ground point set;
the two-stage ground segmentation process is as follows: in the preprocessing stage, the laser radar point cloud and the pose of each frame are taken as input, the point cloud is roughly segmented by a ground segmentation method based on a depth map, and most non-ground points are removed to obtain a candidate ground point set; the candidate ground point set is then refined by Principal Component Analysis (PCA) to obtain the final ground point set, and points not belonging to the ground point set are placed into the non-ground point set;
step 2: constructing a voxel map, and completing voxel map updating through radar frame indexes;
a voxel map is constructed with voxels as the basic unit of dynamic trace removal to manage the point cloud map;
the voxel map is divided into three parts: the ground subgraph, the non-ground subgraph and the dynamic subgraph, which share the same coordinate system and are stored separately; voxels in the ground subgraph, the non-ground subgraph and the dynamic subgraph are called ground voxels, non-ground voxels and dynamic voxels, respectively; meanwhile, the radar frame index of each point is stored, where the minimum frame index represents the time at which the voxel was first observed and the maximum frame index represents the time at which the voxel was last observed;
the voxel coordinates of the ground point set and the non-ground point set obtained by ground segmentation are calculated, the voxels are added into the ground subgraph and the non-ground subgraph respectively, and the map is updated;
step 3: removing "suddenly appearing" and "suddenly disappearing" dynamic voxels by adopting a dynamic voxel removal method based on downward retrieval and upward retrieval;
the downward search and the upward search compare the observation time differences between the ground subgraph and the non-ground subgraph to remove "suddenly appearing" and "suddenly disappearing" dynamic voxels, respectively;
the downward search is: calculating the voxels of the non-ground point set in the non-ground subgraph, searching downwards in the ground subgraph with these voxels as starting points to find the ground voxels below them, and comparing the observation time difference between the two voxels to determine dynamic voxels;
the upward search is: calculating the voxels of the ground point set in the ground subgraph, searching upwards in the non-ground subgraph with these voxels as starting points to find all non-ground voxels above them, and comparing the observation time differences between the ground voxel and all non-ground voxels to determine dynamic voxels;
the dynamic voxels are: if a voxel contains a dynamic point, it is considered a dynamic voxel;
the "suddenly appearing" dynamic voxels are: at time t_0, only the ground can be observed from observation position P and the non-ground target voxel cannot be observed; but starting from time t_1 (t_1 > t_0), the ground and the non-ground target voxel can be observed simultaneously; the non-ground target voxel is then defined as a "suddenly appearing" dynamic voxel;
the "suddenly disappearing" dynamic voxels are: at time t_0, the ground and the non-ground target voxel can be observed simultaneously from observation position P, but starting from time t_1 (t_1 > t_0), only the ground can be observed and the non-ground target voxel cannot be observed; the non-ground target voxel is then defined as a "suddenly disappearing" dynamic voxel;
the method for judging dynamic voxels based on the observation time difference is: whether a voxel is dynamic is judged by comparing the observation time difference between a target voxel and the ground as seen from the radar observation position, and dynamic voxels are divided into "suddenly appearing" and "suddenly disappearing" dynamic voxels;
step 4: recovering the mistakenly deleted static voxels by adopting a static restoration method to obtain a complete static voxel map;
the voxels of the non-ground point set in the dynamic subgraph are calculated; with these voxels as starting points, the ground subgraph is searched downwards to find the ground voxels below them;
the total numbers of observations in the two voxels, i.e. between the dynamic subgraph and the ground subgraph, are compared to judge whether the voxel is a static voxel that was erroneously removed; static voxels misidentified as dynamic are restored into the non-ground subgraph to reduce the overall false-positive rate.
The beneficial effects of the above technical scheme are as follows: in the method for online removal of dynamic traces from a point cloud map, dynamic traces can be removed without any prior map by comparing the observation time difference between the ground subgraph and the non-ground subgraph, and a clean static map can be obtained directly.
The invention only needs to compare the observation time differences between non-ground and ground voxels in each iteration, and the number of voxels involved is far smaller than the number of points in each frame, so the time spent processing each frame of point cloud is very short and the speed requirement of online processing can be met.
Drawings
FIG. 1 is a flowchart of a method for online removal of dynamic traces from a point cloud map according to an embodiment of the present invention;
FIG. 2 is a schematic view of a static voxel observation provided by an embodiment of the present invention;
FIG. 3 is a schematic view of dynamic voxel observations provided by an embodiment of the present invention;
FIG. 4 is a schematic diagram of a downward search provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of upward retrieval provided by an embodiment of the present invention;
FIG. 6 is a graph showing the comparison of the effects of the dynamic trace removal according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments will be clearly and completely described below with reference to the drawings in the specific embodiments. It will be apparent that the specific embodiments described are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The purpose of dynamic trace removal is to remove all dynamic points produced by dynamic objects, and it is not practical for an online pipeline to process each point in the map separately. Thus, the present invention uses voxels as the minimum unit of dynamic trace removal. If a voxel contains a dynamic point, it is considered a dynamic voxel, and all points within the voxel are identified as dynamic points and deleted; otherwise, the voxel is considered a static voxel. Furthermore, if a point in a voxel is observed at time t_k, the voxel is considered observable at time t_k; otherwise, the voxel is considered unobservable at time t_k.
Basically, if a voxel in a map is static, that voxel should be observed simultaneously with the ground on which it is located and should disappear from the field of view simultaneously with that ground. As shown in FIG. 2, the box represents a voxel containing a static object, and the triangle represents the particular location in the map corresponding to the box. The gray rectangle represents the ground and the gray sector represents the detection range of the lidar, with the lidar located at the center point of the sector. The arrow above the radar represents the direction of lidar motion. At the starting moment, the laser radar is far away from the voxel, and neither the voxel nor the ground at its position can be observed. As the lidar moves, the voxel and the ground at its position enter the detection range of the lidar at the same time and are observed. As the laser radar continues to move in the same direction, the voxel and the ground at its position disappear from the field of view of the lidar at the same time. That is: a static voxel and its ground must appear and disappear simultaneously; if either condition is not met, the voxel is considered a dynamic voxel.
As shown in FIG. 3 (a), at the starting moment the target voxel is located outside the box, and the radar can only observe the ground at the box position but cannot observe the target voxel. After a period of time, the target voxel moves into the box; at this point, the radar can observe the target voxel and the ground at the box position at the same time. That is, within the radar detection range the target voxel appears later than the ground, so it is regarded as a "suddenly appearing" dynamic voxel. Similarly, as shown in FIG. 3 (b), at the starting moment the radar can observe the ground and the target voxel at the box position at the same time. After a period of time, the target voxel has moved out of the box position and is no longer observed by the radar, while the ground at the box position can still be observed; the target voxel thus disappears earlier than the ground at the box position. That is, within the radar detection range the target voxel disappears earlier than the ground, so it is regarded as a "suddenly disappearing" dynamic voxel.
According to the definitions of "suddenly appearing" and "suddenly disappearing" voxels, a voxel is considered a "suddenly appearing" dynamic voxel if it is first observed later than the ground below it is first observed. Conversely, a voxel is considered a "suddenly disappearing" dynamic voxel if it is last observed earlier than the ground below it is last observed. We refer to this criterion for determining dynamic voxels as the "observation time difference".
In this embodiment, the method for online removal of dynamic traces from a point cloud map is implemented with three modules: ground segmentation, map management and dynamic removal, as shown in FIG. 1. The ground segmentation module uses the input point cloud and pose to transform the point cloud of the current frame into the world coordinate system, divides the current frame into a ground point set and a non-ground point set, and then sends them to the map management module. With voxels as the minimum unit of dynamic removal, the map management module constructs a voxel map and divides the whole map into three parts: a ground subgraph, a non-ground subgraph and a dynamic subgraph. The map management module also realizes the interconversion among the non-ground subgraph, the ground subgraph and the dynamic subgraph according to the results of the dynamic removal module. The dynamic removal module comprises three processing methods: downward retrieval, upward retrieval and static restoration. The downward retrieval and the upward retrieval judge dynamic voxels by comparing the observation time difference between the non-ground subgraph and the ground subgraph, while static restoration is responsible for finding static voxels that were erroneously removed into the dynamic subgraph and restoring them into the static map. The method specifically comprises the following steps:
step 1: dividing the ground;
since the dynamic removal method requires a constant comparison of the observed time differences between ground and non-ground voxels, the accuracy of ground segmentation will affect the accuracy of dynamic removal. In particular, since there are no dynamic points in the ground subgraph, the effect of dynamic removal may also be affected if too many dynamic points are mistakenly classified into the ground point set. Thus, a two-stage ground segmentation process is employed, where the candidate ground point set is first extracted by preprocessing and then refined to obtain the final ground point set.
In preprocessing, the input point cloud and pose are first used to transform the point cloud P of the current frame into the world coordinate system, denoted P^W. P^W is then projected into an image I, where the number of rows of I corresponds to the number of laser beams of the sensor and the number of columns corresponds to the number of points obtained by each laser beam per frame, so that each pixel of image I corresponds to one point in P^W. Let p_{m,n} = (x_{m,n}, y_{m,n}, z_{m,n})^T denote the world-frame coordinates of the radar point corresponding to the pixel in row m and column n of image I, and p_{m+1,n} = (x_{m+1,n}, y_{m+1,n}, z_{m+1,n})^T the point in row m+1 and column n. The elevation angle θ of the vector from p_{m,n} to p_{m+1,n} is calculated as:

θ = arctan( |z_{m+1,n} - z_{m,n}| / sqrt( (x_{m+1,n} - x_{m,n})^2 + (y_{m+1,n} - y_{m,n})^2 ) )   (1)
If θ is less than the threshold τ_θ, the point p_{m,n} is added to the candidate ground point set ^cG. After all points are processed, the final candidate ground point set ^cG is obtained. Because a relatively loose threshold τ_θ is set during preprocessing, more ground points are classified into the candidate ground point set; at the same time, some non-ground points are also misclassified into it. Therefore, further refinement is required to solve this problem.
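As an illustrative sketch only (not part of the original disclosure), the coarse segmentation above can be prototyped as follows; the depth-image layout, the handling of invalid returns and the threshold of 10 degrees are assumptions:

```python
import numpy as np

def coarse_ground_segmentation(range_image_xyz, tau_theta_deg=10.0):
    """Coarse ground extraction on a depth-image-organized point cloud.

    range_image_xyz: (rows, cols, 3) array of world-frame points, where row m
    holds the points of the m-th laser beam and column n the n-th measurement.
    Returns a boolean mask of candidate ground pixels (the set ^cG).
    """
    p_low = range_image_xyz[:-1, :, :]    # p_{m,n}
    p_up = range_image_xyz[1:, :, :]      # p_{m+1,n}
    delta = p_up - p_low
    horiz = np.linalg.norm(delta[..., :2], axis=-1)                # sqrt(dx^2 + dy^2)
    theta = np.degrees(np.arctan2(np.abs(delta[..., 2]), horiz))   # elevation angle, Eq. (1)

    mask = np.zeros(range_image_xyz.shape[:2], dtype=bool)
    mask[:-1, :] = theta < tau_theta_deg   # p_{m,n} is a candidate ground point
    return mask
```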
After the candidate ground point set ^cG is obtained, PCA is introduced for further refinement. Starting from an initial ground point set obtained from the point heights, the final ground point set is obtained by iterating PCA. In general, the vehicle radar is mounted at a fixed position on the vehicle body, so the height h of the radar relative to the ground can be obtained in advance.
The initial ground point set is defined as:

G_0 = { p_i | p_i ∈ ^cG, h - τ_seed < z_i < h + τ_seed }   (2)

where τ_seed is the preset height threshold for the initial ground points, G_0 is the initial ground point set, p_i is a point in the candidate ground point set ^cG, and z_i is the z-axis coordinate of point p_i.
The iteration then starts. For the ground point set G_κ of the κ-th iteration, the coordinate mean p̄_κ and the covariance matrix Σ_κ of all its points are calculated as:

p̄_κ = (1/n) ∑_{p_i ∈ G_κ} p_i,   Σ_κ = (1/n) ∑_{p_i ∈ G_κ} (p_i - p̄_κ)(p_i - p̄_κ)^T   (3)

where n is the total number of points in G_κ. SVD decomposition is then performed on the covariance matrix Σ_κ, and the singular vector of Σ_κ corresponding to the smallest singular value is taken as the plane normal vector n_κ. The point-normal plane equation is constructed as:

n_κ^T (p - p̄_κ) = 0   (4)

After the plane equation is obtained, the distance d_i from each point p_i in ^cG to the plane is computed in turn. If d_i is smaller than the preset threshold τ_h, p_i is added to the ground point set of the next round, G_{κ+1}, namely:

G_{κ+1} = { p_i | p_i ∈ ^cG, d_i < τ_h }   (5)

The (κ+1)-th iteration then starts. The iteration stops when the maximum number of iterations is reached, yielding the final ground point cloud G^W, while the remaining points form the non-ground point cloud U^W.
Since most non-ground points have already been removed from the candidate ground point set during preprocessing, the PCA performed on this basis is more accurate, which allows a stricter threshold τ_h to be set so that the dynamic points in the set are removed as far as possible.
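The following sketch (an illustration under assumptions, not the original implementation) shows how the refinement of Eqs. (2)-(5) could be coded; the values of τ_seed, τ_h, the iteration count and the frame convention for the height h are all assumed:

```python
import numpy as np

def refine_ground_pca(candidate_pts, radar_height_h, tau_seed=0.4, tau_h=0.2, max_iters=3):
    """Iterative PCA plane refinement of the candidate ground set ^cG (Eqs. (2)-(5)).

    candidate_pts: (N, 3) candidate ground points; the z convention follows Eq. (2),
    i.e. ground points are assumed to lie within tau_seed of the known height h.
    Returns a boolean mask selecting the final ground points within the candidate set.
    """
    z = candidate_pts[:, 2]
    mask = (z > radar_height_h - tau_seed) & (z < radar_height_h + tau_seed)   # G_0, Eq. (2)
    for _ in range(max_iters):
        ground = candidate_pts[mask]
        if len(ground) < 3:
            break                                        # too few points to fit a plane
        mean = ground.mean(axis=0)                       # mean, Eq. (3)
        centered = ground - mean
        cov = centered.T @ centered / len(ground)        # covariance, Eq. (3)
        _, _, vt = np.linalg.svd(cov)
        normal = vt[-1]                                  # smallest-singular-value direction
        dist = np.abs((candidate_pts - mean) @ normal)   # point-to-plane distance d_i
        mask = dist < tau_h                              # G_{k+1}, Eq. (5)
    return mask
```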
Step 2: constructing a voxel map, and finishing voxel map updating through radar frame indexes;
and constructing a voxel map by taking the voxels as basic units of dynamic trace removal to manage the point cloud image. The voxel map is divided into three parts: ground subgraphNon-ground subgraph->And dynamic subgraph->The three sub-graphs share the same coordinate system and are stored separately. Voxels located in the ground subgraph, non-ground subgraph, and dynamic subgraph are referred to as ground voxels, non-ground voxels, and dynamic voxels, respectively.
Meanwhile, the radar frame index of each point is stored, and all frame indexes falling in a voxel are recorded in a set. The minimum frame index γ_min in the set represents the time at which the voxel was first observed; the maximum frame index γ_max represents the time at which the voxel was last observed.
The ground point set and the non-ground point set obtained at time t_{k+1} are then taken; the voxel coordinates of each of their points are calculated, and the points are added to the ground subgraph and the non-ground subgraph respectively.
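A minimal sketch of the voxel-map bookkeeping described above is given below (illustrative only; the 0.5 m voxel size, the class names and the use of Python dictionaries are assumptions, not the original data structure):

```python
import math
from collections import defaultdict

VOXEL_SIZE = 0.5   # assumed voxel edge length in metres

def voxel_key(point, voxel_size=VOXEL_SIZE):
    """Integer voxel coordinates of a world-frame point (x, y, z)."""
    return tuple(math.floor(c / voxel_size) for c in point[:3])

class Voxel:
    """Points plus the radar frame index of every observation."""
    def __init__(self):
        self.points = []
        self.frame_ids = []

    @property
    def gamma_min(self):   # first time the voxel was observed
        return min(self.frame_ids)

    @property
    def gamma_max(self):   # last time the voxel was observed
        return max(self.frame_ids)

    @property
    def gamma_sum(self):   # total number of observations
        return len(self.frame_ids)

class VoxelMap:
    """Ground, non-ground and dynamic subgraphs sharing one voxel grid."""
    def __init__(self):
        self.ground = defaultdict(Voxel)
        self.non_ground = defaultdict(Voxel)
        self.dynamic = defaultdict(Voxel)

    def insert(self, subgraph, points, frame_id):
        for p in points:
            v = subgraph[voxel_key(p)]
            v.points.append(p)
            v.frame_ids.append(frame_id)
```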
Step 3: downward search, removing the "suddenly appearing" dynamic voxels;
the purpose of the downward search is to remove t k Time of day non-ground subgraphThe "pop-up" dynamic voxels in (1). Sequentially take out->Voxel coordinates V i Finding V in non-ground subgraphs i Non-ground voxels in position n V i Statistical voxels n V i And the smallest frame index is recorded as n γ min Representing the moment when the non-ground voxel was first observed.
Then, taking the voxel coordinate V_i as the starting point, a ground voxel ^gV_i within 3 meters below along the z-axis is searched for in the ground subgraph. If no such voxel is found, ^nV_i is a high-altitude voxel in the air where no dynamic target should exist. If one is found, the minimum frame index in ^gV_i is recorded as ^gγ_min, representing the time at which the ground voxel was first observed.
Finally, the time at which the ground voxel was first observed is compared with that of the non-ground voxel. If the non-ground voxel was first observed much later than the ground voxel, that is:

^nγ_min - ^gγ_min > τ_sea   (6)

then ^nV_i is proved to be a "suddenly appearing" dynamic voxel, and all points and all frame indexes in ^nV_i are moved into the dynamic subgraph. Here τ_sea is a preset time difference threshold.
As shown in FIG. 4, the left image represents the i-th radar frame and the right image represents the j-th radar frame (j > i). The points observed by the radar are indicated by dots. The target is located at position E in the i-th lidar frame and has moved to position C in the j-th lidar frame. Thus, when the non-ground voxel (C, 4) is first observed at the j-th frame, the smallest radar frame index in (C, 4) is j. Then, starting from (C, 4), the ground voxel (C, 2) below it along the z-axis is found. Since (C, 2) was already observed at the i-th frame, the smallest radar frame index in (C, 2) is i. According to formula (6), if j - i > τ_sea, the non-ground voxel (C, 4) is considered a dynamic voxel.
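Continuing the VoxelMap sketch above (again an illustration under assumed names and thresholds, not the original code), the downward search could look like this:

```python
TAU_SEA = 10          # assumed threshold on the frame-index difference in Eq. (6)
SEARCH_RANGE_M = 3.0  # search 3 m along the z-axis, as in the description

def downward_search(vmap, non_ground_points, voxel_size=VOXEL_SIZE):
    """Remove "suddenly appearing" dynamic voxels among the voxels touched by this frame."""
    steps = int(SEARCH_RANGE_M / voxel_size)
    for key in {voxel_key(p, voxel_size) for p in non_ground_points}:
        nv = vmap.non_ground.get(key)
        if nv is None:
            continue
        # look for a ground voxel up to 3 m straight below along the z-axis
        gv = None
        for dz in range(1, steps + 1):
            gv = vmap.ground.get((key[0], key[1], key[2] - dz))
            if gv is not None:
                break
        if gv is None:
            continue   # high-altitude voxel: no dynamic target expected there
        if nv.gamma_min - gv.gamma_min > TAU_SEA:
            # the object appeared much later than the ground below it, Eq. (6)
            dv = vmap.dynamic[key]
            dv.points.extend(nv.points)
            dv.frame_ids.extend(nv.frame_ids)
            del vmap.non_ground[key]
```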
Step 4: searching upwards to remove dynamic voxels which 'suddenly disappear';
the purpose of the upward search is to remove t k Time of day non-ground subgraphThe "snap-off" dynamic voxels in (a). Sequentially take out->Corresponding voxels in the ground subgraph are g V i Statistics of g V i And the maximum frame index is recorded as g γ max Representing the moment when the ground voxel was last observed. Searching all voxels within 3 meters upwards in the non-ground subgraph, and checking each non-ground voxel in turn n V i Statistics of n V i And the largest frame index is recorded as n γ max Representing the time at which the non-ground voxel was last observed.
The time at which the ground voxel was last observed is compared with that of the non-ground voxel. If the ground voxel was last observed much later than the non-ground voxel, namely:

^gγ_max - ^nγ_max > τ_sea

then the non-ground voxel ^nV_i is proved to be a "suddenly disappearing" dynamic target, and all points and all frame indexes in ^nV_i are moved into the dynamic subgraph.
As shown in FIG. 5, the ground voxel (E, 2) is last observed at the j-th frame, so the largest radar frame index in (E, 2) is j. Starting from (E, 2), the non-ground voxel (E, 4) above it along the z-axis is then found. Since (E, 4) was last observed at the i-th frame, the largest radar frame index in (E, 4) is i. By the same criterion, if j - i > τ_sea, the non-ground voxel (E, 4) is considered a dynamic voxel.
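An analogous sketch for the upward search, under the same assumptions as the downward-search example above:

```python
def upward_search(vmap, ground_points, voxel_size=VOXEL_SIZE):
    """Remove "suddenly disappearing" dynamic voxels above the ground voxels of this frame."""
    steps = int(SEARCH_RANGE_M / voxel_size)
    for gkey in {voxel_key(p, voxel_size) for p in ground_points}:
        gv = vmap.ground.get(gkey)
        if gv is None:
            continue
        # check every non-ground voxel up to 3 m straight above along the z-axis
        for dz in range(1, steps + 1):
            nkey = (gkey[0], gkey[1], gkey[2] + dz)
            nv = vmap.non_ground.get(nkey)
            if nv is None:
                continue
            if gv.gamma_max - nv.gamma_max > TAU_SEA:
                # the ground kept being observed long after the object vanished
                dv = vmap.dynamic[nkey]
                dv.points.extend(nv.points)
                dv.frame_ids.extend(nv.frame_ids)
                del vmap.non_ground[nkey]
```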
Step 5: static restoration, recovering static voxels deleted by mistake, to obtain a complete static voxel map;
the down-search and up-search remove "pop-up" and "pop-up" dynamic voxels, respectively. Some static voxels may be erroneously identified as dynamic voxels due to the effects of ground grade and radar measurement noise. Thus, a static restoration module is introduced to put these misclassified static voxels back into the non-ground subgraph. The static voxels are judged by utilizing the characteristic that the static voxels always appear and disappear simultaneously with the ground and are similar to the total number of the observations of the ground. The specific method comprises the following steps:
sequentially take outVoxel coordinates V i Finding V in dynamic subgraphs i Dynamic voxels in position d V i Statistical voxels d V i The total number of mid-frame indexes is recorded as d γ sum Representing the total number of times the dynamic voxel is observed.
Then, taking the voxel coordinate V_i as the starting point, a ground voxel ^gV_i within 3 meters below along the z-axis is searched for in the ground subgraph. If one is found, the total number of frame indexes in ^gV_i is recorded as ^gγ_sum, representing the total number of times the ground voxel has been observed.
If the difference between ^gγ_sum and ^dγ_sum is smaller than a predefined threshold τ_res, ^dV_i is proved to be a misidentified static voxel, and all points and all frame indexes in ^dV_i are moved back into the non-ground subgraph.
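A sketch of the static restoration step in the same illustrative style (τ_res and the other names are assumptions):

```python
TAU_RES = 5   # assumed threshold on the observation-count difference

def static_restoration(vmap, non_ground_points, voxel_size=VOXEL_SIZE):
    """Move misclassified voxels from the dynamic subgraph back into the non-ground subgraph."""
    steps = int(SEARCH_RANGE_M / voxel_size)
    for key in {voxel_key(p, voxel_size) for p in non_ground_points}:
        dv = vmap.dynamic.get(key)
        if dv is None:
            continue
        for dz in range(1, steps + 1):
            gv = vmap.ground.get((key[0], key[1], key[2] - dz))
            if gv is None:
                continue
            if abs(gv.gamma_sum - dv.gamma_sum) < TAU_RES:
                # observed about as often as its ground: restore it as static
                nv = vmap.non_ground[key]
                nv.points.extend(dv.points)
                nv.frame_ids.extend(dv.frame_ids)
                del vmap.dynamic[key]
            break   # only the nearest ground voxel below is compared
```

Under the same assumptions, the three routines would be chained per frame roughly as follows (the frames iterable and the segmentation outputs are hypothetical):

```python
vmap = VoxelMap()
for frame_id, (ground_pts, non_ground_pts) in enumerate(frames):
    vmap.insert(vmap.ground, ground_pts, frame_id)        # map update
    vmap.insert(vmap.non_ground, non_ground_pts, frame_id)
    downward_search(vmap, non_ground_pts)                 # "suddenly appearing"
    upward_search(vmap, ground_pts)                       # "suddenly disappearing"
    static_restoration(vmap, non_ground_pts)              # undo false removals
```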
In this embodiment, the schematic diagrams before and after the dynamic trace removal are shown in fig. 6: the upper two graphs in fig. 6 are the original map and a partially enlarged view containing dynamic voxels. The lower two graphs in fig. 6 are a static map constructed by the method of the present invention and a partial enlarged view of the original region containing dynamic voxels. The dark grey covered portion of the circle is the trace left by the dynamic voxel.
In this embodiment, experimental verification is performed on the SemanticKITTI data set, the UrbanLoco data set and a self-collected campus data set. The SemanticKITTI data set is a classic data set in the fields of robotics and autonomous driving, and contains various dynamic targets such as pedestrians, cars and bicycles in urban, suburban, highway and other scenes. The average dynamic removal rate of this embodiment on the SemanticKITTI data set reaches 98.229%, and the average static retention rate reaches 98.064%.
In order to demonstrate the effect of dynamic trace removal in more difficult scenarios, this embodiment is also verified experimentally on the UrbanLoco data set and the self-collected campus data set. The UrbanLoco data set is collected mainly in urban areas and contains a large number of moving cars, with challenging scenes such as traffic jams and close-range following. The campus data set was acquired at times coinciding with students' class schedules, so it contains a large number of pedestrians and bicycles.
The average dynamic removal rate of this embodiment on the UrbanLoco data set reaches 96.489%, and the average static retention rate reaches 98.452%. The average dynamic removal rate on the campus data set reaches 97.0131%, and the average static retention rate reaches 98.686%. Compared with the SemanticKITTI data set, the UrbanLoco data set and the campus data set contain more dynamic targets, so the dynamic removal rate is lower, but it remains above 95%, demonstrating the dynamic trace removal effect of the invention in more difficult scenes.
In this embodiment, only the observation time differences between non-ground and ground voxels are compared in each iteration, and the number of voxels involved is far smaller than the number of points in each frame, so the time consumed to process each frame of point cloud is only 25.9 ms, which fully meets the requirement of online processing.
The above embodiments are only exemplary embodiments of the present application and are not intended to limit the present application, the scope of which is defined by the claims. Various modifications and equivalent arrangements may be made to the present application by those skilled in the art, which modifications and equivalents are also considered to be within the scope of the present application.

Claims (8)

1. A method for removing dynamic traces of a point cloud map online, characterized by comprising the following steps:
step 1: adopting two-stage ground segmentation (coarse segmentation and fine segmentation) to obtain a ground point set and a non-ground point set;
the two-stage ground segmentation process is as follows: in the preprocessing stage, the laser radar point cloud and the pose of each frame are taken as input, the point cloud is roughly segmented by a ground segmentation method based on a depth map, and most non-ground points are removed to obtain a candidate ground point set;
the candidate ground point set is refined by a principal component analysis method to obtain the final ground point set, and points not belonging to the ground point set are placed into the non-ground point set;
step 2: constructing a voxel map, and completing voxel map updating through radar frame indexes;
step 3: removing "suddenly appearing" and "suddenly disappearing" dynamic voxels by adopting a dynamic voxel removal method based on downward retrieval and upward retrieval;
step 4: recovering the mistakenly deleted static voxels by adopting a static restoration method to obtain a complete static voxel map;
the voxels of the non-ground point set in the dynamic subgraph are calculated; with these voxels as starting points, the ground subgraph is searched downwards to find the ground voxels below them; the total numbers of observations of the two voxels, i.e. between the dynamic subgraph and the ground subgraph, are compared to judge whether the voxel is a static voxel that was erroneously removed, and static voxels misidentified as dynamic are restored into the non-ground subgraph.
2. The method for online removal of dynamic traces from a point cloud map of claim 1, wherein: in step 2, a voxel map is constructed with voxels as the basic unit of dynamic trace removal to manage the point cloud map.
3. The method for online removal of dynamic traces from a point cloud map of claim 2, wherein: the voxel map is divided into a ground subgraph, a non-ground subgraph and a dynamic subgraph, which share the same coordinate system and are stored separately;
voxels located in the ground subgraph, the non-ground subgraph and the dynamic subgraph are called ground voxels, non-ground voxels and dynamic voxels, respectively.
4. The method for online removal of dynamic traces from a point cloud map of claim 3, wherein step 2 completes the voxel map updating through radar frame indexes, specifically:
the radar frame index of each point is stored, wherein the minimum frame index represents the time at which the voxel was first observed and the maximum frame index represents the time at which the voxel was last observed;
the voxel coordinates of the ground point set and the non-ground point set are obtained, and the voxels are added into the ground subgraph and the non-ground subgraph respectively to complete the map updating.
5. The method for online removal of dynamic traces from a point cloud map of claim 4, wherein the downward search in step 3 is: calculating the voxels of the non-ground point set in the non-ground subgraph, searching downwards in the ground subgraph with these voxels as starting points, finding the ground voxels below them, and comparing the observation time difference between the two voxels to judge dynamic voxels.
6. The method for online removal of dynamic traces from a point cloud map of claim 5, wherein the upward search in step 3 is: calculating the voxels of the ground point set in the ground subgraph, searching upwards in the non-ground subgraph with these voxels as starting points, finding all non-ground voxels above them, and comparing the observation time differences between the ground voxel and all non-ground voxels to judge dynamic voxels.
7. The method for online removal of dynamic traces from a point cloud map of claim 6, wherein the "suddenly appearing" dynamic voxels in step 3 are: at time t_0, only the ground can be observed from observation position P and the non-ground target voxel cannot be observed; but starting from time t_1 (t_1 > t_0), the ground and the non-ground target voxel can be observed simultaneously; the non-ground target voxel is then defined as a "suddenly appearing" dynamic voxel;
the "suddenly disappearing" dynamic voxels in step 3 are: at time t_0, the ground and the non-ground target voxel can be observed simultaneously from observation position P, but starting from time t_1 (t_1 > t_0), only the ground can be observed and the non-ground target voxel cannot be observed; the non-ground target voxel is then defined as a "suddenly disappearing" dynamic voxel.
8. The method for online removal of dynamic traces from a point cloud map of claim 6, wherein the dynamic voxels in step 3 refer to: if a voxel contains a dynamic point, it is considered a dynamic voxel.
CN202410005624.6A (priority date 2024-01-03, filing date 2024-01-03): Method for removing dynamic traces of a point cloud map online. Status: Pending. Publication: CN117830341A.

Priority Applications (1)

Application Number CN202410005624.6A, Priority Date 2024-01-03, Filing Date 2024-01-03, Title: Method for removing dynamic traces of a point cloud map online

Publications (1)

Publication Number CN117830341A, Publication Date 2024-04-05

Family ID: 90507460

Family Applications (1)

CN202410005624.6A (pending): Method for removing dynamic traces of a point cloud map online

Country: CN


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination