CN111402161B - Denoising method, device, equipment and storage medium for point cloud obstacle
Denoising method, device, equipment and storage medium for point cloud obstacle
- Publication number
- CN111402161B (application CN202010177539.XA)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- obstacle
- point
- noise
- determining
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiment of the application discloses a denoising method, device, equipment and storage medium for a point cloud obstacle, and relates to the field of automatic driving. The specific implementation scheme is as follows: detecting point cloud obstacles in the point cloud data, and determining the category of each point cloud obstacle; determining noise points in the point cloud obstacle by adopting a denoising rule associated with the category; and denoising the point cloud obstacle according to the noise points. In the embodiment of the application, the noise filtering step is moved after obstacle detection, whole-frame filtering on a per-point basis is replaced by filtering on a per-obstacle basis, and different categories of obstacles correspond to different denoising rules. This improves the precision and efficiency of point cloud obstacle denoising while avoiding missed obstacle detection, and further improves the accuracy and real-time performance of the obstacle perception process.
Description
Technical Field
The embodiment of the application relates to the field of data processing, in particular to the technical field of automatic driving, and specifically relates to a denoising method, device and equipment for point cloud obstacles and a storage medium.
Background
Lidar is an important component of autonomous driving technology, and the point cloud data it obtains by scanning is used for obstacle perception. Owing to the limitations of lidar hardware, noise points exist in the point cloud data. In the prior art, obstacle perception first removes the noise from the point cloud data in a whole-frame denoising mode and then detects the obstacles. Common whole-frame denoising modes apply processing such as a Fourier transform or a Hough transform to the whole frame of point cloud data, or directly perform iterative denoising on the whole frame of point cloud data by mean filtering, Gaussian filtering, and the like.
However, the whole-frame denoising method in the prior art tends to cause point cloud distortion or the loss of small objects, which leads to problems such as missed detections or inaccurate boundaries in the subsequent obstacle detection. Moreover, whole-frame denoising involves a large amount of computation, so the obstacle perception process suffers from high latency.
Disclosure of Invention
The embodiment of the application discloses a denoising method, device, equipment and storage medium for a point cloud obstacle, which can improve the precision and efficiency of point cloud obstacle denoising while avoiding missed obstacle detection, and further improve the real-time performance of the obstacle perception process.
In a first aspect, an embodiment of the present application discloses a denoising method for a point cloud obstacle, including:
detecting point cloud obstacles in the point cloud data, and determining the category of the point cloud obstacles;
determining noise points in the point cloud obstacle by adopting a denoising rule associated with the category;
and denoising the point cloud obstacle according to the noise points.
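As a purely illustrative aid, the following minimal Python sketch shows how these three steps can fit together; the helper names detect_obstacles and noise_rules are hypothetical and not part of the disclosed method.

```python
def denoise_obstacles(frame_points, detect_obstacles, noise_rules):
    """Per-obstacle denoising: detect obstacles first, then filter noise inside each one.

    frame_points     : (N, 3) array of raw lidar points (one frame, not pre-filtered)
    detect_obstacles : callable returning a list of (points, category) pairs
    noise_rules      : dict mapping category -> callable(points) -> boolean noise mask
    """
    cleaned = []
    for obstacle_points, category in detect_obstacles(frame_points):
        rule = noise_rules.get(category)
        if rule is None:                    # no rule for this category: keep the points as-is
            cleaned.append(obstacle_points)
            continue
        noise_mask = rule(obstacle_points)  # True where a point is judged to be noise
        cleaned.append(obstacle_points[~noise_mask])
    return cleaned
```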
One embodiment of the above application has the following advantages or benefits: the point cloud obstacles contained in the acquired point cloud data and their categories are detected, and the noise points of each point cloud obstacle are then determined with the associated denoising rule for denoising. In this scheme, the noise removal step for the point cloud data is moved after obstacle detection, whole-frame filtering on a per-point basis is replaced by filtering on a per-obstacle basis, and different obstacles correspond to different denoising rules, so that the precision and efficiency of point cloud obstacle denoising are greatly improved while missed obstacle detection is avoided, and the accuracy and real-time performance of the obstacle perception process are further improved. This provides a new idea for denoising point cloud obstacles.
In addition, the denoising method for the point cloud obstacle according to the embodiment of the application may further have the following additional technical features:
Optionally, determining the noise point in the point cloud obstacle by adopting the denoising rule associated with the category includes:
if the category of the point cloud obstacle is a pedestrian category, determining noise points in the point cloud obstacle by adopting a mean shift algorithm associated with the pedestrian category;
and if the class of the point cloud obstacle is a vehicle class, determining noise points in the point cloud obstacle by adopting an edge fitting algorithm associated with the vehicle class.
One embodiment of the above application has the following advantages or benefits: for a point cloud obstacle of the pedestrian category, the noise points are determined with a mean shift algorithm; for a point cloud obstacle of the vehicle category, the noise points are determined with an edge fitting algorithm. Determining noise points in entirely different ways according to the characteristics of pedestrian-category and vehicle-category point cloud obstacles improves the accuracy of noise point determination.
Optionally, determining the noise point in the point cloud obstacle by adopting a mean shift algorithm associated with the pedestrian category includes:
iterating the offset of the point cloud obstacle according to the initial point cloud center and the preset radius of the point cloud obstacle of the pedestrian category, and determining the target point cloud center of the point cloud obstacle;
determining a target area of the point cloud obstacle according to the target point cloud center and the preset radius;
and taking the point clouds in the point cloud obstacle that are located outside the target area as noise points in the point cloud obstacle.
One embodiment of the above application has the following advantages or benefits: exploiting the characteristic that pedestrians conform to a cylinder assumption in the point cloud distribution space, the offset of a pedestrian-category point cloud obstacle is iterated continuously to determine its target point cloud center, and the noise points outside the target area of the obstacle are then determined, improving the accuracy of noise point determination for pedestrian-category point cloud obstacles.
Optionally, determining the noise point in the point cloud obstacle by adopting an edge fitting algorithm associated with the vehicle category includes:
determining a visible edge line of a point cloud obstacle of a vehicle category, wherein the visible edge line is in a right-angle state;
obtaining, from the point cloud obstacle, the point clouds whose distance from the visible edge line is smaller than a first preset distance, and performing straight-line fitting to obtain a target edge line;
and taking the point clouds in the point cloud obstacle that are not in the right-angle area of the target edge line and are more than a second preset distance away from the target edge line as noise points in the point cloud obstacle.
Optionally, determining a visible edge line of the point cloud obstacle of the vehicle category includes:
determining a bounding box of the point cloud obstacle of the vehicle class;
and taking a visible bounding edge of the bounding box that is in a right-angle state within the radar scanning view angle as a visible edge line of the point cloud obstacle.
One embodiment of the above application has the following advantages or benefits: a target edge line that accurately represents the vehicle edge is fitted from the point cloud data near the right-angled visible edge line of the vehicle-category point cloud obstacle, and the points that lie outside the right-angle area of the target edge line and far away from it are taken as noise points, improving the accuracy of noise point determination for vehicle-category point cloud obstacles.
Optionally, denoising the point cloud obstacle according to the noise point includes:
calculating the noise point proportion of the point cloud obstacle according to the number of the noise points and the total number of point clouds of the point cloud obstacle;
and if the noise point proportion is smaller than a preset proportion value, denoising the point cloud obstacle according to the noise point.
One embodiment of the above application has the following advantages or benefits: the denoising operation on the point cloud obstacle is executed only when the proportion of noise points to the total number of point clouds of the obstacle is smaller than a preset proportion value, which avoids information loss caused by misjudged noise points and improves the accuracy of point cloud data denoising.
Optionally, detecting a point cloud obstacle in the point cloud data and determining a class of the point cloud obstacle includes:
and determining the point cloud obstacles in the point cloud data and the categories of the point cloud obstacles by adopting a deep learning model.
One embodiment of the above application has the following advantages or benefits: the deep learning model is adopted to determine the point cloud obstacle and the category to which the point cloud obstacle belongs, so that the accuracy of extracting the point cloud obstacle from the point cloud data and identifying the category of the point cloud obstacle is improved.
In a second aspect, an embodiment of the present application discloses a denoising device for a point cloud obstacle, including:
the obstacle detection module is used for detecting point cloud obstacles in the point cloud data and determining the category of the point cloud obstacles;
the noise point determining module is used for determining noise points in the point cloud obstacle by adopting the denoising rule associated with the category;
and the denoising processing module is used for denoising the point cloud obstacle according to the noise points.
In a third aspect, an embodiment of the present application discloses an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for denoising a point cloud obstacle according to any embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for denoising a point cloud obstacle according to any embodiment of the present application.
One embodiment of the above application has the following advantages or benefits: the point cloud obstacles contained in the acquired point cloud data and their categories are detected, and the noise points of each point cloud obstacle are then determined with the associated denoising rule for denoising. In this scheme, the noise removal step for the point cloud data is moved after obstacle detection, whole-frame filtering on a per-point basis is replaced by filtering on a per-obstacle basis, and different obstacles correspond to different denoising rules, so that the precision and efficiency of point cloud obstacle denoising are greatly improved while missed obstacle detection is avoided, and the accuracy and real-time performance of the obstacle perception process are further improved. This provides a new idea for denoising point cloud obstacles.
Other effects of the above alternative will be described below in connection with specific embodiments.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
fig. 1 is a flowchart of a method of denoising a point cloud obstacle according to a first embodiment of the present application;
fig. 2 is a flowchart of a method of denoising a point cloud obstacle according to a second embodiment of the present application;
figs. 3A-3B are point cloud data schematic diagrams of a vehicle-category point cloud obstacle according to a second embodiment of the present application;
fig. 4 is a schematic structural view of a denoising apparatus of a point cloud obstacle according to a third embodiment of the present application;
fig. 5 is a block diagram of an electronic device for implementing a method for denoising a point cloud obstacle according to a fourth embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First embodiment
Fig. 1 is a flowchart of a method for denoising a point cloud obstacle according to a first embodiment of the present application, where the present embodiment is applicable to a case of denoising an obstacle in point cloud data, the method may be performed by a denoising device for a point cloud obstacle, and the device is implemented in a software and/or hardware manner, and may be preferably configured in an automatic driving apparatus, such as an automatic driving vehicle, or a mobile robot. As shown in fig. 1, the method specifically includes the following steps:
s101, detecting point cloud barriers in the point cloud data, and determining the categories of the point cloud barriers.
In this application, the point cloud data may be a set of three-dimensional coordinate vectors, recorded in the form of a point cloud, of the scene scanned by the radar, where each three-dimensional coordinate vector may be represented as (x, y, z). In addition, the point cloud data may further include the reflected light intensity value of each point. Due to the inherent nature of radar, the scanned point cloud data contains not only the obstacles in the scene but also a small amount of noise. A point cloud obstacle may be an object in the current scene that may affect the safe driving of the automatic driving device; for example, if the current scene is a road scene, the point cloud obstacles may be the point cloud data corresponding to pedestrians, vehicles, guideboards, and the like on the road. The category of a point cloud obstacle may be classified according to the attributes of the point cloud obstacle, and in this application the categories of point cloud obstacles may include, but are not limited to: the pedestrian category, the vehicle category, the road sign category, and the like.
Optionally, in the present application, a radar is configured on the automatic driving device. During driving, the radar may scan the point cloud data of the current scene in real time and transmit it to a processor in the automatic driving device. After acquiring the point cloud data, the processor does not first denoise the whole frame of point cloud data; instead, it first detects point cloud obstacles on the scanned original point cloud data and determines the category to which each detected point cloud obstacle belongs. Optionally, there are many methods for detecting the point cloud obstacles from the point cloud data and determining their categories, which this embodiment does not limit. For example, a clustering algorithm or a deep learning model may be adopted to detect the point cloud obstacles from the point cloud data, and image feature detection or a deep learning model may then be adopted to determine the category of each detected point cloud obstacle.
Optionally, the application preferably adopts a deep learning model to determine the point cloud obstacles in the point cloud data and the categories of the point cloud obstacles. Specifically, a large amount of point cloud sample data is used in advance to train an initial deep learning model, so as to obtain a deep learning model capable of accurately identifying the obstacles in the point cloud data and determining the categories of the point cloud obstacles. The point cloud sample data may be point cloud data marked with point cloud obstacles and their categories. When this step is executed, the point cloud data scanned by the radar are input into the trained deep learning model, and the deep learning model analyzes the input point cloud data according to the algorithm learned during training to obtain the point cloud obstacles contained in the point cloud data and the category of each detected point cloud obstacle. It should be noted that the deep learning model for detecting the point cloud obstacles from the point cloud data and the deep learning model for determining the categories of the point cloud obstacles may be the same model or two different models, and any deep-learning-based obstacle detection and identification algorithm may be adopted, which this embodiment does not limit.
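As an illustration only, the sketch below shows how such a trained model might be invoked to obtain per-obstacle point sets and categories; the model.predict interface, its label convention, and the function name are assumptions made for this example rather than an API defined by this application.

```python
import numpy as np

def detect_with_model(model, frame_points):
    """Group a frame's points by detected obstacle using a trained detector.

    Assumed (illustrative) interface: model.predict(points) returns (labels, categories),
    where labels[i] is the obstacle index of point i (-1 for background) and
    categories[k] names the class of obstacle k.
    """
    labels, categories = model.predict(frame_points)
    labels = np.asarray(labels)
    return [(frame_points[labels == k], category)
            for k, category in enumerate(categories)]
```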
Optionally, in the embodiment of the present application, when a point cloud obstacle in the point cloud data is detected and its category attribute is determined, other attributes of the obstacle, for example, the orientation of the obstacle, its bounding box, and the like, may also be determined. These are used for the subsequent determination of the noise points of the point cloud obstacle, or for post-processing operations such as collision detection on the point cloud obstacle.
S102, determining noise points in the point cloud obstacle by adopting a denoising rule associated with the category.
The denoising rules in this application may be noise recognition algorithms determined specifically for point cloud obstacles of different categories according to the characteristics of each category. For example, a pedestrian-category point cloud obstacle conforms to a cylinder assumption in the point cloud distribution space, and for this cylindrical characteristic a mean shift algorithm (i.e., the MeanShift algorithm) is associated with the pedestrian category to determine the noise points in pedestrian-category point cloud obstacles. A vehicle-category point cloud obstacle exhibits, in the point cloud distribution space, visible edge lines in a right-angle state, that is, visible edge lines satisfying an L-shape assumption, and for this characteristic an edge fitting algorithm is associated with the vehicle category to determine the noise points of vehicle-category point cloud obstacles.
Optionally, in the present application, after the point cloud obstacles contained in the original point cloud data are detected, because no denoising is performed before the point cloud obstacles are identified, the obstacles identified in S101 avoid the missed detection problem, but each point cloud obstacle inevitably contains some noise points. Therefore, in order to determine these noise points accurately, the present application sets a dedicated denoising rule for each category of obstacle in advance; this step can then accurately determine the noise points contained in a point cloud obstacle by adopting the denoising rule associated with the category to which the obstacle belongs.
Optionally, in the embodiment of the present application, when the noise points in the point cloud obstacle are determined by using the denoising rule associated with the category, if the category of the point cloud obstacle is the pedestrian category, the noise points in the point cloud obstacle are determined by adopting a mean shift algorithm associated with the pedestrian category; if the category of the point cloud obstacle is the vehicle category, the noise points in the point cloud obstacle are determined by adopting an edge fitting algorithm associated with the vehicle category. The specific implementation processes of determining the noise points of pedestrian-category point cloud obstacles with the mean shift algorithm and of vehicle-category point cloud obstacles with the edge fitting algorithm are described in detail in the following embodiments and are not repeated here.
It should be noted that, although the present embodiment only gives the denoising rule associated with the point cloud obstacle of the pedestrian category and the point cloud obstacle of the vehicle category, the present embodiment is not limited to determining only the noise points of the point cloud obstacle of the pedestrian category and the point cloud obstacle of the vehicle category, and for other categories, such as the road sign category, the fence category, and the like, the corresponding denoising rule may be associated to identify the noise points in the point cloud obstacle of the other categories. The embodiments of the present application are not limited thereto.
S103, denoising the point cloud obstacle according to the noise point.
Optionally, in this embodiment of the present application, after the noise points of each point cloud obstacle are determined in S102, the noise points may be filtered out of each point cloud obstacle to obtain accurate, denoised point cloud data for the obstacle. The accurate point cloud data of the denoised point cloud obstacle can be passed to a downstream application processing link, such as a collision detection link, to complete the subsequent processing flow of the point cloud obstacle. Compared with determining and filtering noise points over the whole frame of point cloud data (i.e., hundreds of thousands of points), performing these operations on the point cloud data of each obstacle (i.e., tens of points) greatly reduces the time consumed, reduces the delay introduced by denoising the point cloud obstacles, and improves the real-time performance of point cloud data processing.
Optionally, in the embodiment of the present application, if the category of the point cloud obstacle is determined wrongly, or if the judgment of the point cloud data distribution assumption is wrong, the noise points of the point cloud obstacle determined in S102 will be wrong; for example, actual points of the point cloud obstacle may be misjudged as noise points. To prevent this, when the operation of this step is performed, the noise point proportion of the point cloud obstacle may be calculated according to the number of noise points and the total number of point clouds of the point cloud obstacle; if the noise point proportion is smaller than a preset proportion value, the point cloud obstacle is denoised according to the noise points. Specifically, for a given point cloud obstacle, the proportion of the number of noise points determined in S102 to the total number of points of the obstacle is calculated to obtain the noise point proportion, and the noise point proportion is compared with a preset proportion value (such as 30%). If the noise point proportion is smaller than the preset proportion value, the noise points are considered to have been determined accurately and are removed as the noise points of the point cloud obstacle. If the noise point proportion is greater than or equal to the preset proportion value, the number of noise points is excessive and their determination is likely wrong; in that case the denoising operation of this step is not executed, or the point cloud obstacle is denoised after its noise points are determined again in another way. This guarantees the accuracy of the point cloud obstacle denoising in this application.
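A small sketch of this guard is given below, assuming the noise points are supplied as a boolean mask over the obstacle's points and using 30% only as an example threshold.

```python
import numpy as np

def denoise_if_plausible(obstacle_points, noise_mask, max_noise_ratio=0.3):
    """Remove flagged noise points only when their fraction is below the preset ratio.

    A large noise fraction suggests the category or the distribution assumption was
    judged wrongly, so the obstacle is returned unchanged (to be re-examined) instead.
    """
    noise_mask = np.asarray(noise_mask, dtype=bool)
    if noise_mask.sum() / len(obstacle_points) < max_noise_ratio:
        return obstacle_points[~noise_mask]
    return obstacle_points
```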
According to the technical scheme of this embodiment, the point cloud obstacles contained in the collected point cloud data and their categories are first detected, and the noise points of each point cloud obstacle are then determined with the associated denoising rule for denoising. In this scheme, the noise removal step for the point cloud data is moved after obstacle detection, whole-frame filtering on a per-point basis is replaced by filtering on a per-obstacle basis, and different obstacles correspond to different denoising rules, so that the precision and efficiency of point cloud obstacle denoising are greatly improved while missed obstacle detection is avoided, and the accuracy and real-time performance of the obstacle perception process are further improved. This provides a new idea for denoising point cloud obstacles.
Second embodiment
Fig. 2 is a flowchart of a denoising method of a point cloud obstacle according to a second embodiment of the present application, and figs. 3A-3B are point cloud data schematic diagrams of a vehicle-category point cloud obstacle according to the second embodiment of the present application. This embodiment further optimizes the first embodiment and provides a specific implementation process of determining noise points in the point cloud obstacle by adopting the denoising rules associated with the vehicle category and the pedestrian category. As shown in fig. 2, the method may specifically include:
S201, detecting point cloud obstacles in the point cloud data, and determining the categories of the point cloud obstacles.
S202, judging the category of the point cloud obstacle, if the category is the pedestrian category, executing S203, and if the category is the vehicle category, executing S206.
Optionally, in the embodiment of the present application, if the category of the point cloud obstacle determined in S201 is a pedestrian category, the operations in S203 to S205 are executed, and a mean shift algorithm associated with the pedestrian category is adopted to determine a noise point in the point cloud obstacle. If the class of the point cloud obstacle determined in S201 is a vehicle class, the operations in S206-S208 are executed, and a noise point in the point cloud obstacle is determined by adopting an edge fitting algorithm associated with the vehicle class.
S203, iterating the offset of the point cloud obstacle according to the initial point cloud center and the preset radius of the point cloud obstacle of the pedestrian category, and determining the target point cloud center of the point cloud obstacle.
In this application, the initial point cloud center of a pedestrian-category point cloud obstacle may be randomly selected from the point cloud data of the obstacle. Optionally, in order to improve the accuracy of noise point determination, a point as close as possible to the center of the point cloud obstacle may be selected as the initial point cloud center; for example, the mean coordinate position of the point cloud data of the pedestrian-category point cloud obstacle may be calculated and used as the initial point cloud center of the obstacle. The preset radius of a pedestrian-category point cloud obstacle can be set in advance according to the size of the actual area a pedestrian occupies in the point cloud data. The target point cloud center is the more precise point cloud center of the pedestrian-category point cloud obstacle, relative to the initial point cloud center, obtained after calculation by the mean shift algorithm.
Optionally, this step iterates, based on the mean shift algorithm, the offset of the point cloud obstacle according to the initial point cloud center and the preset radius of the pedestrian-category point cloud obstacle; the specific process of determining the target point cloud center may include the following steps: 1) The number of clusters (seeds) is set to a default value, for example 1, an initial area is determined by taking the initial point cloud center as the center point center and the preset radius as the radius, and the points in the initial area form a cluster. 2) The vectors from center to each point in the cluster are calculated and added to obtain a drift vector shift. 3) The current center is moved along the direction of shift by a distance equal to the magnitude of shift to obtain a new center. 4) Steps 2) and 3) are repeated to iterate the offset until shift tends to zero, that is, until the iteration converges, at which point center is the target point cloud center of the point cloud obstacle.
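The Python sketch below illustrates these steps. It works in the horizontal plane to match the cylinder assumption and moves the centre by the mean of the offset vectors (the usual normalised form of the mean shift update) rather than by their raw sum as literally described above; the radius, iteration limit, and tolerance are illustrative parameters.

```python
import numpy as np

def pedestrian_noise_mask(points, radius, max_iter=100, tol=1e-4):
    """Mean-shift based noise mask for a pedestrian-category obstacle (cylinder assumption).

    Starts from the mean coordinate as the initial centre, iterates the shift of the
    points inside the preset radius until convergence, then flags every point outside
    the converged centre + radius as noise.
    """
    xy = points[:, :2]                       # cylinder assumption: work in the horizontal plane
    center = xy.mean(axis=0)                 # initial point cloud centre
    for _ in range(max_iter):
        in_window = np.linalg.norm(xy - center, axis=1) <= radius
        if not in_window.any():
            break
        shift = xy[in_window].mean(axis=0) - center   # mean offset of the points in the window
        center = center + shift
        if np.linalg.norm(shift) < tol:      # shift tends to zero: converged on the target centre
            break
    # points outside the target area (target centre + preset radius) are noise
    return np.linalg.norm(xy - center, axis=1) > radius
```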
S204, determining a target area of the point cloud obstacle according to the target point cloud center and the preset radius.
Optionally, in the embodiment of the present application, S203 has determined the target point cloud center and the preset radius of the pedestrian-category point cloud obstacle, and the area enclosed by the target point cloud center and the preset radius may be taken as the target area of the point cloud obstacle. The target area is the actual area of the pedestrian point cloud obstacle determined with the mean shift algorithm.
S205, using the point cloud outside the target area in the point cloud obstacle as the noise point in the point cloud obstacle.
Optionally, based on the target area of the point cloud obstacle determined in S204, it is determined whether the point cloud of the point cloud obstacle detected in S201 falls within the target area, and if not, the point cloud is used as a noise point in the point cloud obstacle.
S206, determining visible edge lines of the point cloud barriers of the vehicle category.
In this application, the visible edge lines of a vehicle-category point cloud obstacle may be the vehicle edge lines that are visible in the point cloud distribution space and are in a right-angle state (i.e., satisfy the L-shape assumption). For example, as shown in fig. 3A, the point cloud in box 1 is a vehicle-category point cloud obstacle detected in S201; as can be seen from fig. 3A, the right edge line and the lower edge line of this point cloud obstacle are the two visible edge lines in a right-angle state in the current point cloud space.
Alternatively, in this embodiment, there are many ways to determine the visible edge lines of a vehicle-category point cloud obstacle. For example, straight lines may be detected in the point cloud data of the vehicle-category point cloud obstacle, and the detected straight lines in a right-angle state may be used as the visible edge lines of the point cloud obstacle. Another way is to determine the bounding box of the vehicle-category point cloud obstacle and take the visible bounding edges of the bounding box that are in a right-angle state within the radar scanning view angle as the visible edge lines of the point cloud obstacle. The bounding box of a point cloud obstacle may be the optimal bounding space of the discrete point cloud set of the obstacle. As shown in fig. 3A, box 1 is the bounding box of the point cloud obstacle in the figure. Specifically, the bounding box may be detected by the deep learning model together with the category information when the attribute information of the point cloud obstacle is detected, or may be obtained by performing image processing on the point cloud data, which this embodiment does not limit. After the bounding box of the point cloud obstacle is determined, because the radar cannot scan all surfaces of a vehicle at the same time, usually only vehicle edge lines satisfying the L-shape assumption can be seen within the radar scanning view angle; since the bounding box is the optimal bounding space of the point cloud obstacle, the visible bounding edges of the bounding box that are in a right-angle state (i.e., satisfy the L-shape assumption) within the radar scanning view angle can be taken as the visible edge lines of the point cloud obstacle. For example, as shown in fig. 3A, point A is the position of the radar, and the area enclosed by the two dotted lines emitted from point A is the scanning view angle of the radar; within this view angle, the right edge line and the lower edge line of box 1 are the visible edges, and the other edges are invisible, so the right edge line and the lower edge line of box 1 can be used as the visible edge lines of the point cloud obstacle. Alternatively, the radar scanning view angle may depend on the radar mounting location and the radar parameters.
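One possible geometric realisation of this choice is sketched below: it keeps the bounding-box edges whose outward direction faces the radar position. This particular visibility test is an assumption of the example, not a construction prescribed by the application.

```python
import numpy as np

def visible_edge_lines(box_corners_xy, radar_xy):
    """Pick the bounding-box edges that face the radar (the visible, L-shaped edges).

    box_corners_xy : (4, 2) corners of the obstacle bounding box, in order around the box
    radar_xy       : (2,) radar position in the same coordinate frame
    An edge is treated as visible when its outward direction points toward the radar.
    """
    box_center = box_corners_xy.mean(axis=0)
    visible = []
    for i in range(4):
        a, b = box_corners_xy[i], box_corners_xy[(i + 1) % 4]
        mid = (a + b) / 2.0
        outward = mid - box_center                # for a rectangle, this is the outward normal direction
        if np.dot(outward, radar_xy - mid) > 0:   # the edge faces the sensor
            visible.append((a, b))
    return visible
```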
S207, obtaining, from the point cloud obstacle, the point clouds whose distance from the visible edge line is smaller than the first preset distance, and performing straight-line fitting to obtain a target edge line.
Optionally, in this application, after the visible edge lines of the vehicle-category point cloud obstacle are determined, the points whose distance from the visible edge lines is smaller than the first preset distance may be selected from the vehicle-category point cloud obstacle detected in S201 to form a point cloud set, and a straight-line fitting algorithm is then adopted to fit two target edge lines in a right-angle state, that is, satisfying the L-shape assumption. For example, as shown in fig. 3B, the L-shaped straight line 3 in the figure is a fitted target edge line in a right-angle state.
S208, taking the point clouds in the point cloud obstacle that are not in the right-angle area of the target edge line and are more than the second preset distance away from the target edge line as noise points in the point cloud obstacle.
Optionally, in the present application, the target edge lines determined in S207 are the lines closest to the actual edges of the vehicle point cloud obstacle and are in a right-angle state, so the point cloud data within that right-angle range should belong to the vehicle point cloud obstacle. This step may therefore determine, for each point of the vehicle-category point cloud obstacle detected in S201, whether it lies in the right-angle area enclosed by the target edge lines. If it does, the point belongs to the actual point cloud data of the obstacle; if it does not, it is further determined whether the distance between that point and the target edge line is greater than the second preset distance, and if so, the point is taken as a noise point of the point cloud obstacle; otherwise, the point is still treated as actual point cloud data of the obstacle. For example, as shown in fig. 3B, the points in box 2 are the determined noise points of the vehicle point cloud obstacle. It should be noted that, in the embodiment of the present application, the first preset distance and the second preset distance may be the same or different and may be set according to the actual situation.
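The sketch below combines S207 and S208: it fits a target edge line to the points near each visible edge and flags the points on the outer side of the fitted L-shape that are farther away than the second preset distance. The SVD-based line fit, the signed-distance test, and the interior reference point are illustrative choices rather than details fixed by the application.

```python
import numpy as np

def fit_edge_line(points_xy):
    """Least-squares line fit, returned as (unit direction, point on the line)."""
    centroid = points_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(points_xy - centroid)   # principal direction of the centred points
    return vt[0], centroid

def vehicle_noise_mask(points_xy, visible_edges, d_near, d_far):
    """Flag points that lie on the outer side of the fitted L-shaped target edge lines.

    visible_edges : list of (a, b) endpoints of the visible bounding edges
    d_near        : first preset distance (points this close to a visible edge are used for fitting)
    d_far         : second preset distance (points farther than this, outside the fitted line, are noise)
    """
    # interior reference point: the average of the visible-edge midpoints lies inside the box
    interior = np.mean([np.mean(edge, axis=0) for edge in visible_edges], axis=0)
    noise = np.zeros(len(points_xy), dtype=bool)
    for a, b in visible_edges:
        # distance of every point to the visible bounding edge, treated as an infinite line
        direction = (b - a) / np.linalg.norm(b - a)
        offsets = points_xy - a
        dist = np.abs(offsets[:, 0] * direction[1] - offsets[:, 1] * direction[0])
        near = dist < d_near
        if near.sum() < 2:
            continue
        line_dir, line_pt = fit_edge_line(points_xy[near])          # target edge line
        rel = points_xy - line_pt
        signed = rel[:, 0] * line_dir[1] - rel[:, 1] * line_dir[0]  # signed distance to the fitted line
        ref = interior - line_pt
        interior_sign = np.sign(ref[0] * line_dir[1] - ref[1] * line_dir[0])
        outside = signed * interior_sign < 0                        # not in the right-angle area
        noise |= outside & (np.abs(signed) > d_far)
    return noise
```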
S209, denoising the point cloud obstacle according to the noise points.
Optionally, in order to further verify the accuracy of the noise points determined in S205 or S208, this embodiment may further calculate the noise point proportion of the point cloud obstacle according to the number of noise points determined in S205 or S208 and the total number of point clouds of the point cloud obstacle; if the noise ratio is smaller than the preset ratio value, denoising the point cloud obstacle according to the noise determined in S205 or S208.
Optionally, in this embodiment of the present application, after the denoising process is performed on a point cloud obstacle, the range of its bounding box is adjusted accordingly. For example, as shown in fig. 3B, compared with fig. 3A, the right edge and the lower edge of box 1 are adjusted to the target edge line positions, where the precision of the bounding box is higher. After the noise of the point cloud obstacle is removed, the bounding box of the point cloud obstacle changes, but the orientation of the point cloud obstacle does not change.
According to the technical scheme of this embodiment, the point cloud obstacles and their categories are first detected in the collected point cloud data. For a pedestrian-category point cloud obstacle, based on the characteristic that it conforms to the cylinder assumption in the point cloud distribution space, the offset of the obstacle is iterated continuously to determine its target point cloud center, and the noise points outside the target area of the obstacle are then determined. For a vehicle-category point cloud obstacle, a target edge line accurately representing the vehicle edge is fitted from the point cloud data near the right-angled visible edge line of the obstacle, and the points that lie outside the right-angle area of the target edge line and far away from it are taken as noise points. By introducing the semantic prior of the obstacle category and determining noise points in entirely different ways according to the characteristics of pedestrian-category and vehicle-category point cloud obstacles, this scheme greatly improves the accuracy of noise point determination and filtering.
Third embodiment
Fig. 4 is a schematic structural view of a denoising apparatus for a point cloud obstacle according to a third embodiment of the present application; this embodiment can be applied to the situation of denoising the obstacles in point cloud data. The apparatus can implement the denoising method for a point cloud obstacle of the first and second embodiments of the application. The apparatus may be configured in an automatic driving device, and the apparatus 400 specifically includes the following:
an obstacle detection module 401, configured to detect a point cloud obstacle in the point cloud data, and determine a class of the point cloud obstacle;
a noise determination module 402, configured to determine a noise point in the point cloud obstacle by using a denoising rule associated with the category;
and the denoising processing module 403 is configured to denoise the point cloud obstacle according to the noise point.
According to the technical scheme of this embodiment, the point cloud obstacles contained in the collected point cloud data and their categories are first detected, and the noise points of each point cloud obstacle are then determined with the associated denoising rule for denoising. In this scheme, the noise removal step for the point cloud data is moved after obstacle detection, whole-frame filtering on a per-point basis is replaced by filtering on a per-obstacle basis, and different obstacles correspond to different denoising rules, so that the precision and efficiency of point cloud obstacle denoising are greatly improved while missed obstacle detection is avoided, and the accuracy and real-time performance of the obstacle perception process are further improved. This provides a new idea for denoising point cloud obstacles.
Further, the noise determination module 402 includes:
the pedestrian noise point determining unit is used for determining noise points in the point cloud obstacle by adopting a mean shift algorithm associated with the pedestrian category if the category of the point cloud obstacle is the pedestrian category;
and the vehicle noise point determining unit is used for determining noise points in the point cloud obstacle by adopting an edge fitting algorithm associated with the vehicle category if the category of the point cloud obstacle is the vehicle category.
Further, the pedestrian noise determination unit is specifically configured to:
iterating the offset of the point cloud obstacle according to the initial point cloud center and the preset radius of the point cloud obstacle of the pedestrian category, and determining the target point cloud center of the point cloud obstacle;
determining a target area of the point cloud obstacle according to the target point cloud center and the preset radius;
and taking the point clouds in the point cloud obstacle that are located outside the target area as noise points in the point cloud obstacle.
Further, the vehicle noise determination unit is specifically configured to:
determining a visible edge line of a point cloud obstacle of a vehicle category, wherein the visible edge line is in a right-angle state;
obtaining, from the point cloud obstacle, the point clouds whose distance from the visible edge line is smaller than a first preset distance, and performing straight-line fitting to obtain a target edge line;
and taking the point clouds in the point cloud obstacle that are not in the right-angle area of the target edge line and are more than a second preset distance away from the target edge line as noise points in the point cloud obstacle.
Further, the vehicle noise determination unit is specifically configured to, when performing determination of a visible edge line of a point cloud obstacle of a vehicle category:
determining a bounding box of the point cloud obstacle of the vehicle class;
and taking a visible bounding edge of the bounding box that is in a right-angle state within the radar scanning view angle as a visible edge line of the point cloud obstacle.
Further, the denoising processing module 403 is specifically configured to:
calculating the noise point proportion of the point cloud obstacle according to the number of the noise points and the total number of point clouds of the point cloud obstacle;
and if the noise point proportion is smaller than a preset proportion value, denoising the point cloud obstacle according to the noise point.
Further, the obstacle detection module 401 is specifically configured to:
and determining the point cloud obstacles in the point cloud data and the categories of the point cloud obstacles by adopting a deep learning model.
Fourth embodiment
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
As shown in fig. 5, a block diagram of an electronic device of a denoising method of a point cloud obstacle according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 5, the electronic device includes: one or more processors 501, memory 502, and interfaces for connecting components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of a graphical user interface (Graphical User Interface, GUI) on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations, e.g., as a server array, a set of blade servers, or a multiprocessor system. One processor 501 is illustrated in fig. 5.
Memory 502 is a non-transitory computer readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method for denoising a point cloud obstacle provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of denoising a point cloud obstacle provided by the present application.
The memory 502 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules corresponding to the denoising method of a point cloud obstacle in the embodiment of the present application, for example, the obstacle detection module 401, the noise determination module 402, and the denoising processing module 403 shown in fig. 4. The processor 501 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 502, that is, implements the method for denoising a point cloud obstacle in the above-described method embodiment.
The memory 502 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application program required for functionality, and the data storage area may store data created according to the use of the electronic device of the denoising method of the point cloud obstacle, and the like. In addition, the memory 502 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 502 may optionally include memory remotely located with respect to the processor 501, and such remote memory may be connected to the electronic device of the denoising method of the point cloud obstacle through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the denoising method of the point cloud obstacle may further include: an input device 503 and an output device 504. The processor 501, memory 502, input devices 503 and output devices 504 may be connected by a bus or otherwise, for example in fig. 5.
The input device 503 may receive input numeric or character information, and generate key signal inputs related to user settings and function control of the electronic device of the denoising method of the point cloud obstacle; examples of the input device include a touch screen, a keypad, a mouse, a trackpad, a touch pad, a pointing stick, one or more mouse buttons, a trackball, a joystick, and the like. The output device 504 may include a display device, an auxiliary lighting device (such as a light emitting diode, LED), a tactile feedback device (such as a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (Liquid Crystal Display, LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs, also referred to as programs, software applications, or code, include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device for providing machine instructions and/or data to a programmable processor, e.g., magnetic discs, optical disks, memory, programmable logic devices (Programmable Logic Device, PLD), including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device for displaying information to a user, for example, a Cathode Ray Tube (CRT) or an LCD monitor; and a keyboard and pointing device, such as a mouse or trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here, or that includes any combination of such background, middleware, or front-end components. The components of the system may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include: local area network (Local Area Network, LAN), wide area network (Wide Area Network, WAN), the internet and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the point cloud obstacles contained in the collected point cloud data, together with their categories, are detected first, and noise points of each point cloud obstacle are then determined and removed using the denoising rule associated with that obstacle's category. By moving the noise removal step after obstacle detection, the solution replaces whole-frame, per-point filtering with per-obstacle filtering in which different obstacle categories use different denoising rules. This greatly improves the precision and efficiency of removing noise from point cloud obstacles while avoiding missed obstacles, and thereby improves the accuracy and real-time performance of obstacle perception, providing a new approach to denoising point cloud obstacles.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present application can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.
Claims (14)
1. A method for denoising a point cloud obstacle, the method comprising:
detecting a point cloud obstacle in point cloud data, and determining a category of the point cloud obstacle;
determining noise points in the point cloud obstacle using a denoising rule associated with the category;
denoising the point cloud obstacle according to the noise points;
wherein determining the noise points in the point cloud obstacle using the denoising rule associated with the category comprises:
if the category of the point cloud obstacle is a pedestrian category, determining the noise points in the point cloud obstacle using a mean shift algorithm associated with the pedestrian category; and
if the category of the point cloud obstacle is a vehicle category, determining the noise points in the point cloud obstacle using an edge fitting algorithm associated with the vehicle category.
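The category dispatch recited in claim 1 can be pictured as a small rule registry keyed by obstacle class. The Python sketch below is purely illustrative and not part of the claims; the registry, decorator, and category labels are assumptions, and the per-category rules themselves are sketched after claims 2 and 3.

```python
import numpy as np
from typing import Callable, Dict

# Hypothetical registry: obstacle category -> denoising rule. A rule takes the
# (N, 3) points of one obstacle and returns a boolean mask, True for noise.
DENOISE_RULES: Dict[str, Callable[[np.ndarray], np.ndarray]] = {}

def register_rule(category: str):
    """Decorator for attaching a rule to a category, e.g. @register_rule("pedestrian")."""
    def wrap(fn: Callable[[np.ndarray], np.ndarray]):
        DENOISE_RULES[category] = fn
        return fn
    return wrap

def denoise_obstacle(points: np.ndarray, category: str) -> np.ndarray:
    """Apply the rule associated with the category; leave the obstacle untouched otherwise."""
    rule = DENOISE_RULES.get(category)
    if rule is None:
        return points
    return points[~rule(points)]
```

Leaving an unregistered category untouched mirrors the per-obstacle design: an obstacle is never dropped just because no denoising rule applies to it.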
2. The method of claim 1, wherein determining the noise points in the point cloud obstacle using the mean shift algorithm associated with the pedestrian category comprises:
iteratively shifting the center of the point cloud obstacle, starting from an initial point cloud center and using a preset radius for the pedestrian category, to determine a target point cloud center of the point cloud obstacle;
determining a target area of the point cloud obstacle according to the target point cloud center and the preset radius; and
taking points of the point cloud obstacle that lie outside the target area as noise points in the point cloud obstacle.
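A minimal sketch of the pedestrian rule in claim 2, assuming the obstacle centroid serves as the initial point cloud center and a single fixed preset radius; the radius and convergence parameters are illustrative placeholders, not values from the patent.

```python
import numpy as np

def pedestrian_noise_mask(points: np.ndarray, radius: float = 0.4,
                          max_iter: int = 50, tol: float = 1e-3) -> np.ndarray:
    """Mean-shift style noise mask for a pedestrian obstacle.

    The window center is repeatedly shifted to the mean of the points inside
    the preset radius; once the shift is small enough, the sphere around the
    final (target) center is the target area and everything outside it is noise.
    """
    center = points.mean(axis=0)                       # initial point cloud center
    for _ in range(max_iter):
        inside = np.linalg.norm(points - center, axis=1) <= radius
        if not inside.any():
            break
        new_center = points[inside].mean(axis=0)       # shift toward the local density peak
        if np.linalg.norm(new_center - center) < tol:  # offset converged
            center = new_center
            break
        center = new_center
    return np.linalg.norm(points - center, axis=1) > radius
```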
3. The method of claim 1, wherein determining the noise points in the point cloud obstacle using the edge fitting algorithm associated with the vehicle category comprises:
determining visible edge lines of the vehicle-category point cloud obstacle, wherein the visible edge lines form a right angle;
selecting, from the point cloud obstacle, points whose distance to the visible edge lines is less than a first preset distance, and fitting straight lines to them to obtain target edge lines; and
taking points of the point cloud obstacle that are not within the right-angle region of the target edge lines and whose distance to the target edge lines exceeds a second preset distance as noise points in the point cloud obstacle.
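A bird's-eye-view sketch of the vehicle rule in claim 3, assuming the visible edge lines are supplied as (point, direction) pairs in the x-y plane (for example from the bounding-box step of claim 4); the two distance thresholds are illustrative placeholders, not values from the patent.

```python
import numpy as np

def fit_line_2d(pts: np.ndarray):
    """Least-squares line fit in the x-y plane; returns (point_on_line, unit_direction)."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # first right singular vector = line direction
    return centroid, vt[0]

def vehicle_noise_mask(points: np.ndarray, visible_edges,
                       d_fit: float = 0.2, d_noise: float = 0.3) -> np.ndarray:
    """Edge-fitting noise mask for a vehicle obstacle, seen from above.

    visible_edges: two (point, direction) pairs describing the visible,
    right-angled edge lines. Points within d_fit of a visible edge are used to
    re-fit that edge (the target edge line); points lying outside the
    right-angle region of a target edge line by more than d_noise are noise.
    """
    xy = points[:, :2]
    interior = xy.mean(axis=0)                 # rough reference point inside the vehicle
    noise = np.zeros(len(points), dtype=bool)
    for p0, d in visible_edges:
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        normal = np.array([-d[1], d[0]])
        if np.dot(interior - np.asarray(p0), normal) < 0:
            normal = -normal                   # orient the normal toward the interior
        near = np.abs((xy - p0) @ normal) < d_fit
        if near.sum() >= 2:
            p0, d = fit_line_2d(xy[near])      # target edge line from the nearby points
            normal = np.array([-d[1], d[0]])
            if np.dot(interior - p0, normal) < 0:
                normal = -normal
        signed = (xy - p0) @ normal            # > 0 on the interior side of this edge
        noise |= signed < -d_noise             # outside the edge by more than d_noise
    return noise
```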
4. The method of claim 3, wherein determining the visible edge lines of the vehicle-category point cloud obstacle comprises:
determining a bounding box of the vehicle-category point cloud obstacle; and
taking the visible edges of the bounding box that form a right angle within the radar scanning field of view as the visible edge lines of the point cloud obstacle.
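One possible way to pick the visible, right-angled pair of bounding-box edges in claim 4, assuming a bird's-eye-view box given by its four corners and a radar at a known x-y position; the visibility test via the outward normal is an illustrative heuristic, not the patent's prescribed construction.

```python
import numpy as np

def visible_box_edges(corners: np.ndarray, sensor_xy=np.zeros(2)):
    """Return the bounding-box edges visible from the radar as (point, direction) pairs.

    corners: the four bird's-eye-view corners of the box, in order around the
    rectangle. An edge is treated as visible when its outward normal points
    toward the sensor; a rectangle seen from outside exposes one or two edges,
    and two visible edges necessarily meet at a right angle.
    """
    center = corners.mean(axis=0)
    edges = []
    for i in range(4):
        a, b = corners[i], corners[(i + 1) % 4]
        mid = (a + b) / 2.0
        outward = mid - center
        outward /= np.linalg.norm(outward)
        if np.dot(sensor_xy - mid, outward) > 0:          # edge faces the sensor
            edges.append((a, (b - a) / np.linalg.norm(b - a)))
    return edges
```

The returned pairs can be fed directly to the vehicle edge-fitting sketch shown after claim 3.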
5. The method of claim 1, wherein denoising the point cloud obstacle according to the noise points comprises:
calculating a noise point ratio of the point cloud obstacle from the number of noise points and the total number of points in the point cloud obstacle; and
if the noise point ratio is smaller than a preset ratio value, denoising the point cloud obstacle according to the noise points.
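The guard in claim 5 is simple arithmetic: compute the share of flagged points and only delete them when that share stays below a preset value. A sketch with an illustrative threshold (the 0.5 is an assumption, not a value from the patent):

```python
import numpy as np

def apply_denoising(points: np.ndarray, noise_mask: np.ndarray,
                    max_noise_ratio: float = 0.5) -> np.ndarray:
    """Remove flagged points only when they are a small fraction of the obstacle.

    A high noise ratio suggests the flags (or the detection) are unreliable,
    so the obstacle is returned unchanged.
    """
    ratio = noise_mask.sum() / max(len(points), 1)   # noise points / total points
    if ratio < max_noise_ratio:
        return points[~noise_mask]
    return points
```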
6. The method of claim 1, wherein detecting a point cloud obstacle in the point cloud data and determining the category of the point cloud obstacle comprises:
determining the point cloud obstacle in the point cloud data and the category of the point cloud obstacle using a deep learning model.
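Claim 6 leaves the detector itself open, so a sketch can only show the interface assumed by the later steps: a deep learning model (a stand-in callable here) yields, per obstacle, its point indices and a category label that selects the denoising rule. Everything below is a hypothetical wrapper, not the patent's detector.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple
import numpy as np

@dataclass
class DetectedObstacle:
    point_indices: np.ndarray   # indices into the full frame belonging to this obstacle
    category: str               # e.g. "pedestrian" or "vehicle"

def detect_obstacles(point_cloud: np.ndarray,
                     model: Callable[[np.ndarray], Sequence[Tuple[Sequence[int], str]]]
                     ) -> List[DetectedObstacle]:
    """Normalize the detector output so each obstacle carries its points and category."""
    return [DetectedObstacle(np.asarray(idx, dtype=int), cat)
            for idx, cat in model(point_cloud)]
```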
7. A device for denoising a point cloud obstacle, the device comprising:
an obstacle detection module, configured to detect a point cloud obstacle in point cloud data and determine a category of the point cloud obstacle;
a noise point determining module, configured to determine noise points in the point cloud obstacle using a denoising rule associated with the category; and
a denoising processing module, configured to denoise the point cloud obstacle according to the noise points;
wherein the noise point determining module comprises:
a pedestrian noise point determining unit, configured to determine the noise points in the point cloud obstacle using a mean shift algorithm associated with a pedestrian category if the category of the point cloud obstacle is the pedestrian category; and
a vehicle noise point determining unit, configured to determine the noise points in the point cloud obstacle using an edge fitting algorithm associated with a vehicle category if the category of the point cloud obstacle is the vehicle category.
8. The device of claim 7, wherein the pedestrian noise point determining unit is specifically configured to:
iteratively shift the center of the point cloud obstacle, starting from an initial point cloud center and using a preset radius for the pedestrian category, to determine a target point cloud center of the point cloud obstacle;
determine a target area of the point cloud obstacle according to the target point cloud center and the preset radius; and
take points of the point cloud obstacle that lie outside the target area as noise points in the point cloud obstacle.
9. The device of claim 7, wherein the vehicle noise point determining unit is specifically configured to:
determine visible edge lines of the vehicle-category point cloud obstacle, wherein the visible edge lines form a right angle;
select, from the point cloud obstacle, points whose distance to the visible edge lines is less than a first preset distance, and fit straight lines to them to obtain target edge lines; and
take points of the point cloud obstacle that are not within the right-angle region of the target edge lines and whose distance to the target edge lines exceeds a second preset distance as noise points in the point cloud obstacle.
10. The device of claim 9, wherein, in determining the visible edge lines of the vehicle-category point cloud obstacle, the vehicle noise point determining unit is specifically configured to:
determine a bounding box of the vehicle-category point cloud obstacle; and
take the visible edges of the bounding box that form a right angle within the radar scanning field of view as the visible edge lines of the point cloud obstacle.
11. The device of claim 7, wherein the denoising processing module is specifically configured to:
calculate a noise point ratio of the point cloud obstacle from the number of noise points and the total number of points in the point cloud obstacle; and
if the noise point ratio is smaller than a preset ratio value, denoise the point cloud obstacle according to the noise points.
12. The device of claim 7, wherein the obstacle detection module is specifically configured to:
determine the point cloud obstacle in the point cloud data and the category of the point cloud obstacle using a deep learning model.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of denoising a point cloud obstacle according to any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of denoising a point cloud obstacle according to any one of claims 1-6.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010177539.XA (CN111402161B) | 2020-03-13 | 2020-03-13 | Denoising method, device, equipment and storage medium for point cloud obstacle |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN111402161A | 2020-07-10 |
| CN111402161B | 2023-07-21 |
Family

ID=71413461

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010177539.XA (CN111402161B, Active) | Denoising method, device, equipment and storage medium for point cloud obstacle | 2020-03-13 | 2020-03-13 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN111402161B |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112560258B (en) * | 2020-12-10 | 2023-02-21 | 中国第一汽车股份有限公司 | Test method, device, equipment and storage medium |
CN113763262A (en) * | 2021-07-27 | 2021-12-07 | 华能伊敏煤电有限责任公司 | Application method of vehicle body filtering technology in point cloud data of automatic driving mine truck |
CN115249349B (en) * | 2021-11-18 | 2023-06-27 | 上海仙途智能科技有限公司 | Point cloud denoising method, electronic equipment and storage medium |
CN114509785A (en) * | 2022-02-16 | 2022-05-17 | 中国第一汽车股份有限公司 | Three-dimensional object detection method, device, storage medium, processor and system |
CN116051427B (en) * | 2023-03-31 | 2023-07-04 | 季华实验室 | Point cloud denoising model acquisition method, point cloud fusion method and related equipment thereof |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105157608A (en) * | 2015-08-31 | 2015-12-16 | 浙江大华技术股份有限公司 | Detection method, apparatus, and system of oversized vehicle |
CN106772435A (en) * | 2016-12-12 | 2017-05-31 | 浙江华飞智能科技有限公司 | A kind of unmanned plane barrier-avoiding method and device |
CN108984741A (en) * | 2018-07-16 | 2018-12-11 | 北京三快在线科技有限公司 | A kind of ground drawing generating method and device, robot and computer readable storage medium |
CN109949375A (en) * | 2019-02-02 | 2019-06-28 | 浙江工业大学 | A kind of mobile robot method for tracking target based on depth map area-of-interest |
CN110109142A (en) * | 2019-04-04 | 2019-08-09 | 深圳市速腾聚创科技有限公司 | Point cloud filtering method, device, computer equipment and storage medium |
CN110362077A (en) * | 2019-07-03 | 2019-10-22 | 上海交通大学 | Automatic driving vehicle urgent danger prevention decision system, method and medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10545029B2 (en) * | 2016-12-30 | 2020-01-28 | DeepMap Inc. | Lane network construction using high definition maps for autonomous vehicles |
WO2019113749A1 (en) * | 2017-12-11 | 2019-06-20 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for identifying and positioning objects around a vehicle |
Non-Patent Citations (3)

| Title |
|---|
| Vision Based Obstacle Detection Using Rover Stereo Images; Y. Wang et al.; Remote Sensing and Spatial Information Sciences; 2019-06-14; 1471-1477 * |
| Bilateral filtering point cloud denoising algorithm based on noise classification; Yuan Hua et al.; Journal of Computer Applications; 2015-08-10; Vol. 35, No. 8; 2305-2310 * |
| Noise classification and denoising algorithm for point cloud models; Li Pengfei; Computer Engineering and Applications; 2016-12-31; Vol. 52, No. 20; 188-192 * |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |