CN115249349A - Point cloud denoising method, electronic device and storage medium - Google Patents


Info

Publication number
CN115249349A
CN115249349A
Authority
CN
China
Prior art keywords
point cloud
dimensional
noise
points
point
Prior art date
Legal status
Granted
Application number
CN202111367938.3A
Other languages
Chinese (zh)
Other versions
CN115249349B (en)
Inventor
黄超
孟泽楠
Current Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Original Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xiantu Intelligent Technology Co Ltd filed Critical Shanghai Xiantu Intelligent Technology Co Ltd
Priority to CN202111367938.3A priority Critical patent/CN115249349B/en
Priority to PCT/CN2022/071296 priority patent/WO2023087526A1/en
Publication of CN115249349A publication Critical patent/CN115249349A/en
Application granted granted Critical
Publication of CN115249349B publication Critical patent/CN115249349B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G06V10/40 Extraction of image or video features
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • Y02T90/00 Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation


Abstract

The application provides a point cloud denoising method, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a three-dimensional point cloud to be denoised; performing down-sampling processing on first point cloud features extracted from the three-dimensional point cloud; performing shape feature statistics according to the down-sampled first point cloud features, and determining the first point cloud features belonging to the same obstacle; performing up-sampling processing on the first point cloud features belonging to the same obstacle to obtain second point cloud features, whose dimensions are the same as the dimensions of the first point cloud features; fusing the first point cloud features and the second point cloud features, and performing noise point identification using the fused point cloud features; and removing the noise points in the three-dimensional point cloud according to the noise point identification result. The embodiments of the application thereby achieve accurate identification and removal of noise points.

Description

Point cloud denoising method, electronic device and storage medium
Technical Field
The present disclosure relates to the field of point cloud processing, and in particular, to a point cloud denoising method, an electronic device, and a storage medium.
Background
With the wide application of devices such as laser radars, depth cameras, millimeter wave radars and the like, three-dimensional point clouds are widely applied to different fields such as the automatic driving field, the security inspection and protection field, the routing inspection field, the disaster relief field and the like.
Illustratively, the three-dimensional point cloud is, alongside image data, another data form widely used in the field of automatic driving. Obstacle detection may be performed on a three-dimensional point cloud acquired by a point cloud acquisition device, such as a laser radar, so as to assist the vehicle in path planning or driving control based on the obstacle detection result.
However, in some scenarios, noise points may exist in the three-dimensional point cloud acquired by the point cloud acquisition device, such as a laser radar, resulting in inaccurate obstacle detection results. For example, in a scene where a sanitation vehicle performs a watering operation, the three-dimensional points related to water mist in the three-dimensional point cloud are noise points to be removed. Likewise, in hazy or dusty weather, the three-dimensional points related to fine particles such as haze and dust in the three-dimensional point cloud are noise points to be removed.
Therefore, in order to improve the accuracy of obstacle detection, it is necessary to effectively remove noise points in the three-dimensional point cloud.
Disclosure of Invention
In view of the above, the present application provides a point cloud denoising method, an electronic device and a storage medium.
Specifically, the method is realized through the following technical scheme:
in a first aspect, an embodiment of the present application provides a point cloud denoising method, where the method includes:
acquiring a three-dimensional point cloud to be denoised;
performing down-sampling processing on first point cloud characteristics extracted from the three-dimensional point cloud;
performing shape feature statistics according to the down-sampled first point cloud features, and determining first point cloud features belonging to the same obstacle;
performing up-sampling processing on the first point cloud features belonging to the same obstacle to obtain second point cloud features; the dimensions of the second point cloud features are the same as the dimensions of the first point cloud features;
fusing the first point cloud features and the second point cloud features, and performing noise point identification by using the fused point cloud features;
and removing the noise points in the three-dimensional point cloud according to the noise point identification result.
In a second aspect, embodiments of the present application provide an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the method according to the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing executable instructions that, when executed by a processor, implement the method according to the first aspect.
According to the point cloud denoising method, the electronic device and the storage medium provided by the embodiments of the application, during noise point identification the first point cloud features extracted from the three-dimensional point cloud are first down-sampled before shape feature statistics, which saves the computing resources of the electronic device, reduces the amount of calculation in the statistics process and improves statistical efficiency. Further, to account for the fine-grained information lost during down-sampling, the first point cloud features and the up-sampled second point cloud features are fused, so that the first point cloud features compensate for the information lost in down-sampling. Noise point identification is then performed on the fused point cloud features, which improves the accuracy of noise point identification and allows the noise points in the three-dimensional point cloud to be removed effectively.
Drawings
Fig. 1 is a schematic flow chart of a point cloud denoising method according to an exemplary embodiment of the present application.
Fig. 2 is an architecture diagram of a noise point identification model according to an exemplary embodiment of the present application.
Fig. 3 is a schematic diagram of the motion trajectory exhibited by noise points framed during manual labeling according to an exemplary embodiment of the present application.
Fig. 4 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
In order to solve the problems in the related art, the embodiment of the application provides a point cloud denoising method, which is used for effectively identifying noise points of a three-dimensional point cloud to be denoised and removing the noise points.
The point cloud denoising method provided by the embodiment of the application can be applied to electronic equipment. For example, the electronic device may include a program for executing the point cloud denoising method. Illustratively, the electronic device includes at least a memory storing executable instructions of the point cloud denoising method and a processor configured to execute the executable instructions.
In an exemplary application scenario, three-dimensional point clouds are widely applied in the field of vehicle driving: obstacle detection may be performed on the three-dimensional point cloud acquired by a point cloud acquisition device, such as a laser radar, so as to assist a vehicle in path planning or driving control based on the obstacle detection result. Noise points in the acquired three-dimensional point cloud, including but not limited to three-dimensional points corresponding to water mist, gravel, dust or other fine particulate matter, cause inaccurate obstacle detection results. The electronic device may therefore be a vehicle-mounted terminal which, after receiving the three-dimensional point cloud acquired by the point cloud acquisition device, removes the noise points in the three-dimensional point cloud by using the point cloud denoising method provided by the embodiments of the application, thereby improving the accuracy of subsequent obstacle detection.
In another exemplary application scenario, three-dimensional point clouds are also widely used in the inspection field. For example, in a warehouse or a large factory, an inspection robot may be equipped with a point cloud acquisition device, and obstacle detection is performed on the three-dimensional point cloud acquired by that device, such as a laser radar, so as to assist the inspection robot in inspection path planning or obstacle avoidance based on the obstacle detection result. Noise points in the acquired three-dimensional point cloud, including but not limited to three-dimensional points corresponding to fine particulate matter such as dust or dirt, cause inaccurate obstacle detection results. The electronic device may therefore be the inspection robot, or a terminal installed on it, which, after receiving the three-dimensional point cloud acquired by the point cloud acquisition device, removes the noise points by using the point cloud denoising method provided by the embodiments of the present application, thereby improving the accuracy of subsequent obstacle detection.
Of course, the electronic device may also be another type of device, which is not limited in this embodiment. For example, the electronic device may be a server that receives the three-dimensional point cloud to be denoised from another device (such as an autonomous vehicle or an inspection robot), removes the noise points using the point cloud denoising method provided in the embodiments of the present application, and returns the denoised three-dimensional point cloud to that device.
Referring to fig. 1, fig. 1 is a schematic flow chart of a point cloud denoising method, which can be applied to an electronic device, and the method includes:
in step S101, a three-dimensional point cloud to be denoised is obtained.
In step S102, a down-sampling process is performed on the first point cloud feature extracted from the three-dimensional point cloud.
In step S103, shape feature statistics are performed based on the down-sampled first point cloud features, and the first point cloud features belonging to the same obstacle are determined.
In step S104, up-sampling processing is performed on the first point cloud features belonging to the same obstacle to obtain second point cloud features; the dimensions of the second point cloud features are the same as the dimensions of the first point cloud features.
In step S105, the first point cloud features and the second point cloud features are fused, and noise point identification is performed using the fused point cloud features.
In step S106, the noise points in the three-dimensional point cloud are removed according to the noise point identification result.
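The steps S101 to S106 above can be sketched end to end in a few lines. This is a minimal illustration only: `identify_noise` is a hypothetical stand-in for the feature extraction, shape statistics and fusion stages, and none of the names below come from the patent.

```python
import numpy as np

def denoise_point_cloud(points, identify_noise):
    """Sketch of S101-S106: obtain a cloud, identify noise points, drop them.

    `points` is an (N, 3) array; `identify_noise` returns a boolean mask
    (True = noise), standing in for the full identification pipeline.
    """
    mask = identify_noise(points)   # noise point identification result (S105)
    return points[~mask]            # remove identified noise points (S106)

# Toy usage: treat points above z = 2.0 as "water mist" noise.
cloud = np.array([[0.0, 0.0, 0.5], [1.0, 1.0, 0.8], [2.0, 2.0, 3.0]])
clean = denoise_point_cloud(cloud, lambda p: p[:, 2] > 2.0)
```

The threshold rule here is purely a placeholder for the learned classifier described later in the embodiments.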
In this embodiment, during noise point identification the first point cloud features extracted from the three-dimensional point cloud are first down-sampled before shape feature statistics, which saves the computing resources of the electronic device, reduces the amount of calculation in the statistics process and improves statistical efficiency. Further, to account for the fine-grained information lost during down-sampling, the first point cloud features and the up-sampled second point cloud features are fused, so that the first point cloud features compensate for the lost information. Noise point identification is then performed on the fused point cloud features, which helps improve the accuracy of noise point identification and allows the noise points in the three-dimensional point cloud to be removed effectively.
In some embodiments, the three-dimensional point cloud to be denoised can be collected by a point cloud acquisition device. Mainstream point cloud acquisition devices currently fall into two categories: active and passive. Active sensors can be divided into TOF (Time of Flight) systems, which determine the true distance from the sensor to the object surface by measuring the time interval between the transmitted signal reaching the object surface and returning to the receiver, and triangulation systems, which calculate the spatial position of a point by measuring the geometric relationship between two sensors at different locations and the same point on the object. Passive sensors rely on image pairs or image sequences and recover three-dimensional data from two-dimensional image data according to camera parameters. Typical active point cloud acquisition devices include, but are not limited to, laser radars, depth cameras (e.g., RGB-D cameras), millimeter wave radars, and binocular vision sensors; typical passive point cloud acquisition devices include, but are not limited to, stereo cameras, SFM (structure from motion) systems, and SFS (shape from shading) systems.
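The TOF principle described above reduces to one formula: the one-way distance is the speed of light times half the measured round-trip time. A small sketch (the function name is illustrative, not from the patent):

```python
# Time-of-flight ranging: the sensor emits a pulse, measures the
# round-trip time t, and the one-way distance is c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_seconds):
    """One-way sensor-to-surface distance from a round-trip pulse time."""
    return C * round_trip_seconds / 2.0

# A pulse returning after ~66.7 ns corresponds to roughly 10 m.
d = tof_distance(66.7e-9)
```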
Illustratively, the electronic device is a vehicle-mounted terminal installed on a vehicle, the vehicle-mounted terminal is in communication connection with a point cloud collecting device (such as a laser radar installed on the vehicle), the point cloud collecting device installed on the vehicle collects a three-dimensional point cloud in the vehicle driving process, and the vehicle-mounted terminal can acquire the three-dimensional point cloud to be denoised collected by the point cloud collecting device and remove noise points by using the method of steps S101 to S106. Illustratively, the electronic device is an inspection robot, the inspection robot is provided with a point cloud acquisition device, and the inspection robot can acquire a three-dimensional point cloud to be denoised, which is acquired by the point cloud acquisition device, and remove noise points by using the method of steps S101 to S106.
The noise points in the three-dimensional point cloud include, but are not limited to, three-dimensional points corresponding to at least one kind of fine particulate matter: water mist, gravel, dust, dirt, and the like. It is to be understood that the present application does not limit the type of fine particulate matter in any way.
In some embodiments, after acquiring the three-dimensional point cloud to be denoised, first point cloud features may be extracted from the three-dimensional point cloud to be denoised for subsequent processing.
As a possible implementation manner, the first point cloud feature includes features of a plurality of three-dimensional points in the three-dimensional point cloud, and the features of the three-dimensional points include, but are not limited to, coordinates of the three-dimensional points, reflection intensity, identification of a light pulse sequence corresponding to the three-dimensional points (such as a radar beam ID), depth information (such as a distance between the three-dimensional points and the point cloud acquisition device), height information, or angle information (such as a deflection angle of a connecting line between the three-dimensional points and the point cloud acquisition device).
As another possible implementation manner, considering that first point cloud features obtained directly from the features of individual three-dimensional points have a large data volume and consume more computing resources, the three-dimensional point cloud to be denoised may be gridded in order to improve noise point recognition efficiency: the three-dimensional point cloud is segmented according to a preset distance to obtain a three-dimensional gridded point cloud. The preset distance may be set according to the actual application scenario, which this embodiment does not limit; for example, it includes, but is not limited to, 5 cm, 20 cm, or 1 m. After the three-dimensional gridded point cloud is obtained, feature extraction may be performed on each three-dimensional grid, for example by determining the feature of each grid from a statistic of the features of the three-dimensional points it contains, so as to obtain the first point cloud features from the features of all three-dimensional grids in the point cloud. The features of a three-dimensional point include at least one of: the coordinates of the point, its reflection intensity, the identification of the corresponding light pulse sequence, depth information, height information, and angle information.
The statistic includes, but is not limited to, an average value, a median, a maximum value, a minimum value, and the like. For example, the feature of a three-dimensional grid may be determined from the average of the features of the three-dimensional points within it; the specific calculation may follow the actual application scenario, which this embodiment does not limit. In this embodiment, performing three-dimensional gridding on the point cloud to be denoised and extracting the features of the three-dimensional grids as the first point cloud features reduces the data volume involved in the computation and improves noise point recognition efficiency.
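The gridding-plus-statistic step can be illustrated briefly. This is a hedged sketch rather than the patent's implementation: the voxel size, function name and use of the mean as the statistic are assumptions drawn from the examples above.

```python
import numpy as np

def voxel_features(points, voxel_size=0.2):
    """Partition a cloud at a preset distance (here 0.2 m, one of the
    example scales) and use the mean of each grid cell's point features
    as that cell's feature. Names are illustrative."""
    # Integer grid index for every point, from its x, y, z coordinates.
    idx = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    cells = {}
    for key, p in zip(map(tuple, idx), points):
        cells.setdefault(key, []).append(p)
    # Statistic per grid cell: the average of the contained point features.
    return {k: np.mean(v, axis=0) for k, v in cells.items()}

pts = np.array([[0.01, 0.01, 0.01], [0.05, 0.05, 0.05], [1.0, 1.0, 1.0]])
vox = voxel_features(pts)  # two occupied cells, first one averaged
```

Replacing many raw points with one averaged feature per cell is what reduces the data volume mentioned in the embodiment.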
In some embodiments, after the first point cloud features are obtained, they may be down-sampled, both to save the computing resources of the subsequent shape feature statistics process and to accommodate scenarios in which the electronic device has limited computing power. For example, if the first point cloud features have 1024 × 1024 dimensions, the down-sampled first point cloud features may have 256 × 256 dimensions, and performing shape feature statistics on the down-sampled features saves computing resources. The shape feature statistics process detects the relative relationship between three-dimensional points (or three-dimensional grids) to determine whether first point cloud features belong to the same obstacle; performing shape feature partition statistics on the down-sampled first point cloud features can determine the first point cloud features (or the three-dimensional points or grids) belonging to each obstacle. Up-sampling is then performed to obtain second point cloud features whose dimensions are the same as those of the first point cloud features, for example also 1024 × 1024 dimensions. Finally, the first point cloud features and the second point cloud features are fused, and performing noise point identification on the fused features helps improve identification accuracy.
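The dimension bookkeeping described above (down from the original size, then back, then fusion) can be mimicked on a toy feature map. Average pooling, nearest-neighbour upsampling and concatenation are stand-ins chosen for illustration; the patent does not fix these operators here.

```python
import numpy as np

def downsample(f, factor):
    """Average-pool a square feature map by `factor`
    (e.g. 1024 x 1024 -> 256 x 256 in the embodiment's example)."""
    n = f.shape[0] // factor
    return f.reshape(n, factor, n, factor).mean(axis=(1, 3))

def upsample(f, factor):
    """Nearest-neighbour upsampling back to the pre-downsampling size."""
    return f.repeat(factor, axis=0).repeat(factor, axis=1)

# Toy stand-in for the first point cloud feature map.
first = np.arange(16.0).reshape(4, 4)
second = upsample(downsample(first, 2), 2)   # same dimensions as `first`
fused = np.stack([first, second], axis=-1)   # fusion by concatenation (assumed)
```

Because `second` has been pooled and re-expanded, it carries the coarse per-obstacle statistics while `first` retains the fine detail, which is exactly why the embodiment fuses the two.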
In some embodiments, a noise point identification model may be pre-constructed for identifying noise points in the three-dimensional point cloud to be denoised, for example, after the three-dimensional point cloud to be denoised is obtained, the three-dimensional point cloud may be input into the pre-established noise point identification model, and the noise point identification result is obtained after the three-dimensional point cloud is processed by the noise point identification model.
Referring to fig. 2, fig. 2 shows an architecture diagram of the noise point identification model. The model includes at least a feature extraction layer 22, a shape identification network 23 and a noise point identification network 24, and may further include an input layer 21 and an output layer 25.
The input layer 21 is used for acquiring an input three-dimensional point cloud to be denoised.
The feature extraction layer 22 is configured to perform feature extraction on the three-dimensional point cloud. For example, the three-dimensional point cloud may be segmented according to a preset distance to obtain a three-dimensional gridded point cloud, and feature extraction may then be performed on each three-dimensional grid to obtain the first point cloud features, where the feature of each three-dimensional grid is a statistic (such as an average value) of the features of the three-dimensional points in the grid. Extracting grid features in this embodiment reduces the data volume involved in subsequent computation and improves computational efficiency.
The shape recognition network 23 comprises a coefficient convolution layer 231, a first multi-layer perceptron network 232 and an up-sampling layer 233. The coefficient convolution layer 231 is configured to down-sample the first point cloud features; the first multi-layer perceptron network 232 is configured to perform shape feature statistics on the down-sampled first point cloud features, for example statistical analysis of the positional relationship between three-dimensional grids, and to determine the first point cloud features belonging to the same obstacle, that is, to partition and count the first point cloud features by obstacle; the up-sampling layer 233 is configured to up-sample the first point cloud features belonging to the same obstacle to obtain the second point cloud features. Down-sampling the first point cloud features within the shape recognition network 23 reduces the amount of calculation of the first multi-layer perceptron network 232 during shape feature statistics and improves statistical efficiency.
The noise point identification network 24 comprises a feature fusion layer 241 and a second multi-layer perceptron network 242. The feature fusion layer 241 is used for fusing the first point cloud features and the second point cloud features; the second multi-layer perceptron network 242 is configured to perform noise point identification on the fused point cloud features to obtain the noise point identification result. In the noise point identification network 24, in consideration of the fine-grained information lost during down-sampling, the first point cloud features and the up-sampled second point cloud features are fused so that the first point cloud features compensate for the lost information, and noise point identification is then performed on the fused point cloud features, which improves the accuracy of noise point identification.
The output layer 25 is configured to output the noise point identification result. In this embodiment, when the noise point identification model performs shape feature statistics, the first point cloud features are down-sampled by the coefficient convolution layer 231 to further reduce the amount of computation and save computing resources, so the model can run on devices with limited computing resources and is therefore widely applicable. In addition, since down-sampling loses fine-grained information, the noise point identification model is designed with an up-sampling structure in the latter half of the shape recognition network 23 to restore the point cloud features to their size before down-sampling, and the up-sampled second point cloud features are fused with the first point cloud features to compensate for that loss. The second multi-layer perceptron network 242 then identifies noise points from the fused point cloud features, improving noise point identification accuracy and achieving a balance between computing resources and precision.
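The data flow of Fig. 2 can be sketched structurally. Every layer below is a simplified numeric stand-in (mean-pooling for the convolutional down-sampling, `tanh` for the perceptron networks, a sign test for the classifier), so this shows only the shape of the architecture, not the patent's actual network or weights.

```python
import numpy as np

class NoisePointModel:
    """Structural sketch of Fig. 2: feature extraction is assumed done,
    then shape network (downsample -> statistics -> upsample), fusion
    with the first point cloud features, and noise classification."""

    def __init__(self, factor=2):
        self.factor = factor

    def shape_network(self, first):
        # Down-sample rows by averaging (stand-in for layer 231).
        n = first.shape[0] // self.factor
        down = first.reshape(n, self.factor, -1).mean(axis=1)
        stats = np.tanh(down)                     # stand-in for MLP 232
        return stats.repeat(self.factor, axis=0)  # up-sampling layer 233

    def __call__(self, first):
        second = self.shape_network(first)            # second point cloud features
        fused = np.concatenate([first, second], axis=1)  # fusion layer 241
        score = fused.sum(axis=1)                     # stand-in for MLP 242 logits
        return score > 0                              # per-row noise mask

model = NoisePointModel()
mask = model(np.ones((4, 3)))   # 4 "grid features" of dimension 3
```

Note how `second` is restored to the row count of `first` before fusion, mirroring the dimension-matching requirement stated in the claims.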
Next, the training process of the noise point identification model is described by way of example, taking supervised learning as the training manner: the model is trained on a plurality of three-dimensional point cloud samples labeled with noise points.
First, for the acquisition of training samples, a plurality of single-frame three-dimensional point cloud samples may be fused into a first dense point cloud. The single-frame three-dimensional point cloud samples are acquired by a point cloud acquisition device mounted on a movable platform while the platform moves; the movable platform may be a vehicle, a mobile robot, or the like.
During fusion, the pose of the point cloud acquisition device at the time it acquired each frame of the three-dimensional point cloud samples may be determined by point cloud registration; the single-frame three-dimensional point cloud samples are then transformed into the same three-dimensional coordinate system based on those poses, so that the fusion of the single-frame samples is carried out in that common coordinate system.
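The fusion step above — transforming each frame into a common coordinate system using the per-frame pose — can be sketched as follows. This is a minimal illustration with assumed names; in practice the poses would come from a registration algorithm such as ICP, which is not specified by the text.

```python
import numpy as np

def fuse_frames(frames, poses):
    """Fuse single-frame point clouds into one dense cloud.

    frames: list of (N_i, 3) arrays of points in each frame's sensor coordinates.
    poses:  list of (4, 4) homogeneous transforms from sensor to world
            coordinates (e.g. estimated by point cloud registration).
    """
    fused = []
    for pts, T in zip(frames, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4) homogeneous points
        fused.append((homo @ T.T)[:, :3])                    # transform into world frame
    return np.vstack(fused)                                  # the "first dense point cloud"
```

With this representation, points from static obstacles accumulate at the same world coordinates across frames, while moving objects leave a trail, which is exactly the property the labeling procedure below exploits.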
When a plurality of single-frame three-dimensional point cloud samples are fused into a first dense point cloud, static obstacles (such as trees and parked vehicles) appear denser in the first dense point cloud, because three-dimensional points from multiple frames accumulate at the same location, while dynamic obstacles (such as pedestrians and moving vehicles) leave a motion track in the first dense point cloud corresponding to the direction of their motion. Fine particles such as water mist, gravel, or dust are very small in size and weight, so they exhibit different motion states depending on factors such as the driving state of the movable platform (such as a vehicle), wind direction, and wind force; in the first dense point cloud, such fine particles therefore also leave a motion track corresponding to the direction of their motion.
In one possible implementation, after the first dense point cloud is obtained, three-dimensional points belonging to the ground and three-dimensional points belonging to the movable platform (such as a vehicle) may be removed from it to obtain a second dense point cloud. The three-dimensional points belonging to the ground are those with a height of 0, and the three-dimensional points belonging to the movable platform can be determined because the mounting position of the point cloud acquisition device on the platform is known. After these points are removed, the three-dimensional points with motion tracks around the movable platform in the second dense point cloud can be determined to correspond to fine particles such as water mist, gravel, or dust; these points can be labeled as noise points, and the first dense point cloud can then be split according to the labeled noise points to obtain a plurality of single-frame three-dimensional point cloud samples labeled with noise points. This embodiment realizes an automatic labeling process: a plurality of single-frame samples are fused into the first dense point cloud and labeled in one pass, which improves labeling efficiency.
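The automatic labeling steps above can be sketched as follows. This is an illustrative outline only: the helper names, the ground-height threshold, and the way motion tracks are flagged (`track_mask`) are assumptions, not the patent's actual implementation.

```python
import numpy as np

def label_noise(points, vehicle_box, track_mask, ground_eps=0.05):
    """Sketch of the automatic noise-labeling step.

    points:      (N, 3) fused dense point cloud.
    vehicle_box: (min_xyz, max_xyz) bounds of the movable platform, known
                 from the mounting position of the acquisition device.
    track_mask:  (N,) boolean flag marking points that form a motion track.
    Returns a boolean noise label per point.
    """
    keep = points[:, 2] > ground_eps          # drop ground points (height ~ 0)
    lo, hi = vehicle_box
    in_box = np.all((points >= lo) & (points <= hi), axis=1)
    keep &= ~in_box                           # drop points on the platform itself
    return keep & track_mask                  # remaining track points -> noise labels
```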
Of course, in another possible implementation, manual labeling by a user may also be performed to improve labeling accuracy. For example, the second dense point cloud is projected into a two-dimensional space to obtain a labeling image, and the user frames the motion track of fine particles such as water mist, gravel, or dust in the labeling image according to experience; the track can be framed simply by clicking with a mouse or another tool such as a stylus. As shown in fig. 3, the black frame in fig. 3 shows the track of noise points framed during manual labeling. With a fixed height value set, the electronic device can back-project the framed region of the labeling image into three-dimensional space to obtain labeled noise points, and finally split the first dense point cloud according to the labeled noise points to obtain a plurality of single-frame three-dimensional point cloud samples labeled with noise points. In this embodiment, manual labeling improves the accuracy of noise point labeling; since the single-frame samples are fused into the first dense point cloud and labeled together, only a simple frame selection is needed rather than labeling every point, which also improves manual labeling efficiency.
Further, in another possible implementation, after the first dense point cloud is obtained, in addition to removing the three-dimensional points belonging to the ground and to the movable platform (such as a vehicle), obstacle recognition may be performed on the first dense point cloud. For example, the first dense point cloud may be input into an existing obstacle detection model, which outputs an obstacle recognition result; the three-dimensional points belonging to obstacles are then removed from the first dense point cloud according to that result to obtain the second dense point cloud. In this embodiment, besides the ground points and the points of the host vehicle, the three-dimensional points belonging to other obstacles (such as other vehicles, pedestrians, motorcycles, or trees) are also removed, further eliminating interference. Once all obstacles on the moving path (such as a road) of the movable platform have been removed, the remaining three-dimensional points with motion tracks on the path can be determined to correspond to fine particles such as water mist, gravel, or dust, so the three-dimensional points with motion tracks on the moving path in the second dense point cloud can be labeled as noise points. Finally, the first dense point cloud is split according to the labeled noise points to obtain a plurality of single-frame three-dimensional point cloud samples labeled with noise points. Removing these irrelevant factors eliminates interference from obstacles that also leave motion tracks, further improving the accuracy of noise point labeling.
In some embodiments, considering that the number of samples also affects the accuracy of model training, the number of training samples may be increased. In addition to labeling noise points in the manner above, the training sample set may be enriched by at least one of the following data enhancement modes:
In the first data enhancement mode, the noise points in a single-frame three-dimensional point cloud sample labeled with noise points can be superimposed onto other three-dimensional point cloud samples to obtain additional labeled samples. For example, in a vehicle driving scene, since fine particles such as water mist, gravel, or dust appear at similar distances from the vehicle body, the labeled noise points can be superimposed into the three-dimensional point clouds of different vehicles driving in different scenes, expanding the training data set and enhancing its diversity.
In the second data enhancement mode, the positions of the noise points in a labeled single-frame sample can be moved to obtain additional labeled samples. For example, in a vehicle driving scene, since fine particles such as water mist, gravel, or dust appear at similar distances from the vehicle body, the labeled noise points can be moved to different positions relative to the vehicle once the three-dimensional points belonging to the vehicle are determined; for instance, by mirroring the noise points about a coordinate axis or translating them, they can be placed to the left of the vehicle, in front of it, or alongside the vehicle body, covering the different situations in which water mist, gravel, or dust may appear.
In the third data enhancement mode, all three-dimensional points in a labeled single-frame sample can be rotated by a preset angle to obtain additional labeled samples. For example, in a vehicle driving scene, to prevent a directional bias in model training caused by data collected while driving in a fixed direction, all three-dimensional points in the labeled sample can be rotated clockwise or counterclockwise by a preset angle, expanding the training data set and enhancing its diversity.
In the fourth data enhancement mode, a three-dimensional point set representing an obstacle can be added near a noise point in a labeled single-frame sample to obtain additional labeled samples. For example, in a vehicle driving scene, a point set representing an obstacle such as a motorcycle or pedestrian can be added near a noise point, which helps verify the model's ability to identify noise points that intersect with other objects.
In the fifth data enhancement mode, all three-dimensional points in a labeled single-frame sample can be randomly offset in any direction, expanding the training data set and enhancing its diversity; the added training samples also help improve the robustness of the model.
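Several of the enhancement modes above (mirroring the noise points to a new position relative to the vehicle, rotating the whole frame, and applying a random offset) can be sketched in a few lines. The function signature and parameter names are assumptions for illustration only.

```python
import numpy as np

def augment(points, noise_mask, angle=None, shift=None, mirror_axis=None):
    """Sketch of data enhancement modes 2, 3 and 5 described above.

    points:      (N, 3) single-frame sample; noise_mask marks labeled noise points.
    angle:       rotate all points about the z axis by this angle (mode 3).
    shift:       offset applied to all points, e.g. drawn at random (mode 5).
    mirror_axis: mirror only the labeled noise points about this axis (mode 2).
    """
    pts = points.copy()
    if mirror_axis is not None:
        pts[noise_mask, mirror_axis] *= -1.0   # move noise to the other side of the vehicle
    if angle is not None:
        c, s = np.cos(angle), np.sin(angle)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        pts = pts @ R.T                        # rotate the whole frame about z
    if shift is not None:
        pts = pts + shift                      # global offset of every point
    return pts
```

Since the noise labels are per-point masks, they remain valid after each transformation, so every augmented frame is itself a labeled training sample.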
In some embodiments, after obtaining a plurality of three-dimensional point cloud samples labeled with noise points, model training may be performed based on the plurality of three-dimensional point cloud samples labeled with noise points to obtain the noise point identification model. For example, referring to the model structure shown in fig. 2, after the electronic device obtains a plurality of three-dimensional point cloud samples marked with noise points, the electronic device may input the plurality of three-dimensional point cloud samples marked with noise points into a preset model shown in fig. 2:
Feature extraction is then performed by the feature extraction layer of the preset model: for example, the three-dimensional point cloud sample is subjected to three-dimensional gridding, the point cloud space is divided into three-dimensional grids at a preset distance, and the features of the three-dimensional grids are extracted to obtain the first point cloud features; for example, the feature of a three-dimensional grid may be the average of the features of the three-dimensional points within it.
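The grid-feature extraction described here — dividing the space at a preset distance and averaging the per-point features within each grid — can be sketched as follows. The voxel size and feature layout are assumptions; only the averaging scheme comes from the text.

```python
import numpy as np

def voxel_features(points, feats, voxel_size=0.5):
    """Grid the cloud at a preset distance and average point features per grid.

    points: (N, 3) coordinates; feats: (N, F) per-point features
    (e.g. coordinates, reflection intensity, height information).
    Returns the occupied grid indices and the mean feature of each grid,
    i.e. a sketch of the 'first point cloud features'.
    """
    idx = np.floor(points / voxel_size).astype(np.int64)       # (N, 3) grid coordinates
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)                              # normalize shape across numpy versions
    sums = np.zeros((len(keys), feats.shape[1]))
    np.add.at(sums, inverse, feats)                            # sum features per occupied grid
    counts = np.bincount(inverse).reshape(-1, 1)
    return keys, sums / counts                                 # mean feature per grid
```

Working on per-grid means rather than raw points is what reduces the computation downstream: the number of occupied grids is typically far smaller than the number of points.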
The first point cloud features are then processed by the shape recognition network of the preset model: the sparse convolution layer down-samples the first point cloud features, the first multi-layer perceptron network performs shape feature statistics on the down-sampled features to determine the first point cloud features belonging to the same obstacle, and the up-sampling layer up-samples the first point cloud features belonging to the same obstacle to obtain the second point cloud features.
Next, processing is performed by the noise point identification network of the preset model: the feature fusion layer fuses the first point cloud features and the second point cloud features, and the second multi-layer perceptron network performs noise point identification on the fused point cloud features to obtain a noise point prediction result.
Finally, the difference between the noise point prediction result and the noise points labeled in the three-dimensional point cloud sample is determined, and the parameters of the preset model are adjusted by back-propagation according to that difference to obtain the trained noise point identification model. For example, a loss value corresponding to the difference between the prediction and the labels may be computed with a preset loss function, and the model parameters adjusted according to that loss value.
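The loss computation can be sketched, for example, with a binary cross-entropy between per-point noise probabilities and the labels. The text does not specify which loss function is used, so this particular choice is an assumption.

```python
import numpy as np

def bce_loss(pred_prob, labels, eps=1e-7):
    """Binary cross-entropy between per-point noise probabilities and labels.

    pred_prob: (N,) predicted probability that each point is a noise point.
    labels:    (N,) ground-truth noise labels (1 = noise, 0 = not noise).
    The clipping avoids log(0) for saturated predictions.
    """
    p = np.clip(pred_prob, eps, 1.0 - eps)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))
```

During training, this scalar would be minimized by back-propagation through the preset model; a smaller loss means the prediction agrees more closely with the labeled noise points.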
In this embodiment, the first point cloud features used are grid features extracted after gridding the point cloud, which helps reduce the amount of computation. To further reduce computation and save computing resources during shape feature statistics, the first point cloud features are down-sampled by the sparse convolution layer, so the noise point identification model can run on devices with limited computing resources and is therefore widely applicable. Since the down-sampling process causes a loss of low-precision information, the model includes an up-sampling structure in the latter half of the shape recognition network to restore the point cloud features to their size before down-sampling, and fuses the up-sampled second point cloud features with the first point cloud features to compensate for the low-precision information lost during down-sampling; the second multi-layer perceptron network then performs noise point identification on the fused features, which improves the accuracy of noise point identification and achieves a balance between computing resources and precision.
It is to be understood that the training process and the application process of the noise point identification model may be performed by the same electronic device, or may be performed by different electronic devices, which is not limited in this embodiment.
Corresponding to the above point cloud denoising method, referring to fig. 4, an embodiment of the present application further provides an electronic device 30, which includes a memory 32, a processor 31, and a computer program 33 stored on the memory 32 and executable on the processor 31, where the processor 31 is configured to execute the above method when executing the program.
The processor 31 executes the executable instructions contained in the memory 32. The processor 31 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 32 stores the executable instructions of the point cloud denoising method. The memory 32 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Programmable Read Only Memory (PROM), magnetic memory, a magnetic disk, an optical disk, and the like. The device may also cooperate, through a network connection, with a network storage device that performs the storage function of the memory. The memory 32 may be an internal storage unit of the device 30, such as a hard disk or memory of the device 30, or an external storage device of the device 30, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the device 30. Further, the memory 32 may include both internal and external storage units of the device 30. The memory 32 is used for storing the computer program 33 and other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
Illustratively, the electronic device comprises a vehicle-mounted terminal mounted on a vehicle that is also provided with a point cloud acquisition device (such as a lidar mounted on the vehicle); the vehicle-mounted terminal is communicatively connected to the point cloud acquisition device. The point cloud acquisition device acquires three-dimensional point clouds while the vehicle is driving, and the vehicle-mounted terminal can obtain the three-dimensional point cloud to be denoised from the acquisition device and denoise it based on the point cloud denoising method provided by the embodiments of the present application.
Illustratively, the processor, when executing the program, is configured to perform the steps of:
acquiring a three-dimensional point cloud to be denoised;
performing down-sampling processing on first point cloud characteristics extracted from the three-dimensional point cloud;
carrying out shape feature statistics according to the first point cloud features after down sampling, and determining first point cloud features belonging to the same obstacle;
performing up-sampling processing on the first point cloud features belonging to the same obstacle to obtain second point cloud features; the dimensions of the second point cloud features are the same as the dimensions of the first point cloud features;
fusing the first point cloud feature and the second point cloud feature, and performing noise point identification by using the fused point cloud feature;
and removing the noise points in the three-dimensional point cloud according to the noise point identification result.
Optionally, after the acquiring the three-dimensional point cloud to be denoised, the processor is further configured to: dividing the three-dimensional point cloud according to a preset distance to obtain a three-dimensional gridded three-dimensional point cloud; and extracting the characteristics of each three-dimensional grid in the three-dimensional gridded three-dimensional point cloud to obtain the first point cloud characteristics.
Optionally, the first point cloud features comprise features of all three-dimensional meshes in the three-dimensional point cloud. The characteristics of each mesh are determined by statistics of the characteristics of the three-dimensional points within the three-dimensional mesh. The features of the three-dimensional points include at least one of: coordinates of the three-dimensional points, reflection intensity, identification of the light pulse sequence corresponding to the three-dimensional points, depth information, height information, or angle information.
Optionally, the noise point identification result is obtained by inputting the three-dimensional point cloud into a noise point identification model established in advance and processing the three-dimensional point cloud through the noise point identification model. The noise point identification model comprises a feature extraction layer, a shape identification network and a noise point identification network. The first point cloud feature is obtained by performing feature extraction on the three-dimensional point cloud through the feature extraction layer. The shape recognition network comprises a sparse convolution layer used for down-sampling the first point cloud features, a first multi-layer perceptron network used for performing shape feature statistics on the down-sampled first point cloud features, and an up-sampling layer used for up-sampling the first point cloud features belonging to the same obstacle. The noise point identification network comprises a feature fusion layer for fusing the first point cloud feature and the second point cloud feature, and a second multi-layer perceptron network for performing noise point identification on the fused point cloud feature.
Optionally, the noise point identification model is obtained by training based on a plurality of three-dimensional point cloud samples marked with noise points.
Optionally, the processor is further configured to: fusing a plurality of single-frame three-dimensional point cloud samples into a first dense point cloud; the single-frame three-dimensional point cloud samples are acquired by a point cloud acquisition device mounted on a vehicle in the vehicle driving process; removing three-dimensional points belonging to the ground and three-dimensional points belonging to the vehicle from the first dense point cloud to obtain a second dense point cloud; labeling three-dimensional points with motion trajectories around the vehicle in the second dense point cloud as noise points; and splitting the first dense point cloud according to the labeled noise points to obtain a plurality of single-frame three-dimensional point cloud samples labeled with the noise points.
Optionally, the processor is further configured to: performing obstacle identification according to the first dense point cloud to obtain an obstacle identification result; removing three-dimensional points belonging to obstacles in the first dense point cloud according to the obstacle identification result to obtain a second dense point cloud; and marking the three-dimensional points with the motion trail on the road in the second dense point cloud as noise points.
Optionally, the processor is further configured to: after obtaining a plurality of single-frame three-dimensional point cloud samples marked with noise points, obtaining the three-dimensional point cloud samples marked with the noise points by at least one of the following modes: superposing the noise points in the single-frame three-dimensional point cloud sample marked with the noise points to other three-dimensional point cloud samples; or, moving the position of the noise point in the single-frame three-dimensional point cloud sample marked with the noise point; or, rotating all three-dimensional points in the single-frame three-dimensional point cloud sample marked with the noise points by a preset angle; or adding a three-dimensional point set representing an obstacle to the vicinity of a noise point in the single-frame three-dimensional point cloud sample marked with the noise point; or randomly offsetting all three-dimensional points in the single-frame three-dimensional point cloud sample marked with the noise points.
Optionally, the three-dimensional point cloud to be denoised is acquired by a point cloud acquisition device mounted on a vehicle while the vehicle is driving; the noise points comprise three-dimensional points corresponding to at least one of the following fine particles: water mist, gravel, or dust.
It will be appreciated that the device may include more or fewer components than those shown in fig. 4, or combine certain components, or use different components; for example, the device may also include input-output devices, network access devices, buses, and the like.
Accordingly, in an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an apparatus to perform the above method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium, instructions in the storage medium, when executed by a processor of a terminal, enable the terminal to perform the above-described method.
Correspondingly, the embodiment of the present application further provides a computer program product, including a computer program of any one of the methods described above.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer does not necessarily have such devices. Further, the computer may be embedded in another device, such as an autonomous vehicle or a mobile robot, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (12)

1. A point cloud denoising method, comprising:
acquiring a three-dimensional point cloud to be denoised;
performing down-sampling processing on first point cloud characteristics extracted from the three-dimensional point cloud;
carrying out shape feature statistics according to the first point cloud features after down sampling, and determining first point cloud features belonging to the same obstacle;
performing up-sampling processing on the first point cloud features belonging to the same obstacle to obtain second point cloud features; the dimensions of the second point cloud features are the same as the dimensions of the first point cloud features;
fusing the first point cloud features and the second point cloud features, and performing noise point identification using the fused point cloud features;
and removing the noise points in the three-dimensional point cloud according to the noise point identification result.
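Purely as an illustrative sketch of the flow of claim 1 (the function names, the mean-pooling down-sampling, the per-obstacle statistic, and the norm threshold are all assumptions for illustration, not the patented implementation):

```python
import numpy as np

def downsample(features, factor=2):
    """Down-sample point cloud features by mean-pooling groups of `factor` rows."""
    n = (features.shape[0] // factor) * factor
    return features[:n].reshape(-1, factor, features.shape[1]).mean(axis=1)

def upsample(features, factor=2):
    """Up-sample by repeating each row, restoring the original row count."""
    return np.repeat(features, factor, axis=0)

def denoise(first_features, noise_threshold=0.5):
    down = downsample(first_features)               # down-sampled first features
    # stand-in for "shape feature statistics": center on the obstacle mean
    # (all rows are treated as one obstacle here for simplicity)
    shape_stats = down - down.mean(axis=0, keepdims=True)
    second_features = upsample(shape_stats)         # same dimensions as input
    fused = np.concatenate([first_features, second_features], axis=1)
    scores = np.linalg.norm(fused, axis=1)          # stand-in noise score
    keep = scores >= noise_threshold                # False = treated as noise
    return first_features[keep]

cloud_features = np.random.default_rng(0).normal(size=(8, 4))
cleaned = denoise(cloud_features)
```

Note that, consistent with the claim, the up-sampled second features have the same dimensions as the first features, so channel-wise concatenation is well defined.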
2. The method of claim 1, further comprising, after said obtaining the three-dimensional point cloud to be denoised:
dividing the three-dimensional point cloud according to a preset distance to obtain a three-dimensionally gridded point cloud;
and extracting features from each three-dimensional grid in the gridded point cloud to obtain the first point cloud features.
3. The method of claim 2, wherein the first point cloud features comprise features of all three-dimensional meshes in the three-dimensional point cloud;
the characteristics of the three-dimensional grid are determined by the statistics of the characteristics of the three-dimensional points in the three-dimensional grid;
the features of the three-dimensional points include at least one of: coordinates of the three-dimensional points, reflection intensity, identification of the light pulse sequence corresponding to the three-dimensional points, depth information, height information or angle information.
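A minimal sketch of the gridding and per-grid statistics of claims 2-3 (the 0.2 m grid size and the particular statistics chosen — per-voxel means and point count — are illustrative assumptions):

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """points: (N, 4) array of x, y, z, reflection intensity.

    Returns a dict mapping a grid index (ix, iy, iz) to a feature vector
    [mean_x, mean_y, mean_z, mean_intensity, point_count].
    """
    idx = np.floor(points[:, :3] / voxel_size).astype(int)
    features = {}
    for key in map(tuple, np.unique(idx, axis=0)):
        mask = np.all(idx == key, axis=1)
        group = points[mask]
        features[key] = np.concatenate([group.mean(axis=0), [mask.sum()]])
    return features

pts = np.array([[0.05, 0.05, 0.0, 10.0],
                [0.10, 0.15, 0.0, 20.0],   # falls in the same grid as above
                [1.00, 1.00, 1.00, 5.0]])  # falls in a different grid
feats = voxelize(pts)
```

Other per-point features named in claim 3 (pulse-sequence identification, depth, height, angle) would simply be extra columns aggregated the same way.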
4. The method according to claim 1, wherein the noise point identification result is obtained by inputting the three-dimensional point cloud into a pre-established noise point identification model and processing the three-dimensional point cloud through the noise point identification model;
the noise point identification model comprises a feature extraction layer, a shape identification network and a noise point identification network;
the first point cloud feature is obtained by performing feature extraction on the three-dimensional point cloud through the feature extraction layer;
the shape recognition network comprises a sparse convolution layer for down-sampling the first point cloud features, a first multi-layer perceptron network for performing shape feature statistics on the down-sampled first point cloud features, and an up-sampling layer for up-sampling the first point cloud features belonging to the same obstacle;
the noise point identification network comprises a feature fusion layer for fusing the first point cloud features and the second point cloud features, and a second multi-layer perceptron network for performing noise point identification on the fused point cloud features.
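The noise point identification network of claim 4 can be sketched as follows, with the fusion layer modeled as channel-wise concatenation and the second multi-layer perceptron as a tiny two-layer network with random, untrained weights (the layer sizes and activations are assumptions; the patent does not fix them):

```python
import numpy as np

rng = np.random.default_rng(42)

def fuse(first, second):
    """Feature fusion layer: channel-wise concatenation of same-shape features."""
    assert first.shape == second.shape
    return np.concatenate([first, second], axis=1)

def mlp_noise_scores(fused, hidden=16):
    """Second multi-layer perceptron: fused features -> per-point noise probability."""
    w1 = rng.normal(size=(fused.shape[1], hidden))
    w2 = rng.normal(size=(hidden, 1))
    h = np.maximum(fused @ w1, 0.0)           # ReLU hidden layer
    logits = h @ w2
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid noise probability

first = rng.normal(size=(5, 8))
second = rng.normal(size=(5, 8))
probs = mlp_noise_scores(fuse(first, second))
```

In the claimed model the weights would of course come from training on labeled samples (claim 5), not from a random generator.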
5. The method of claim 4, wherein the noise point identification model is trained based on a plurality of three-dimensional point cloud samples labeled with noise points.
6. The method of claim 5, further comprising:
fusing a plurality of single-frame three-dimensional point cloud samples into a first dense point cloud; the single-frame three-dimensional point cloud samples are acquired by a point cloud acquisition device mounted on a vehicle in the vehicle driving process;
removing three-dimensional points belonging to the ground and three-dimensional points belonging to the vehicle from the first dense point cloud to obtain a second dense point cloud;
labeling three-dimensional points with motion trajectories around the vehicle in the second dense point cloud as noise points;
and splitting the first dense point cloud according to the labeled noise points to obtain a plurality of single-frame three-dimensional point cloud samples labeled with the noise points.
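The labeling pipeline of claim 6 can be sketched with frame fusion modeled as concatenation in a shared coordinate frame and ground/ego removal as simple geometric filters (the height threshold and ego radius are illustrative assumptions; real pipelines use fitted ground planes and the vehicle's actual footprint):

```python
import numpy as np

def fuse_frames(frames):
    """Fuse single-frame clouds (already in one frame) into a first dense cloud."""
    return np.vstack(frames)

def remove_ground_and_ego(cloud, ground_z=-1.5, ego_radius=2.0):
    """Drop ground points and points on the vehicle, yielding the second dense cloud."""
    above_ground = cloud[:, 2] > ground_z
    outside_ego = np.linalg.norm(cloud[:, :2], axis=1) > ego_radius
    return cloud[above_ground & outside_ego]

frame_a = np.array([[5.0, 0.0, 0.5],    # far point: should survive
                    [0.5, 0.5, 0.0]])   # point on the ego vehicle
frame_b = np.array([[6.0, 1.0, -2.0]])  # ground point
dense = fuse_frames([frame_a, frame_b])
second = remove_ground_and_ego(dense)
```

The remaining steps — labeling moving points as noise and splitting the dense cloud back into frames — depend on per-point frame indices and tracked trajectories, which are omitted here.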
7. The method of claim 6, further comprising:
performing obstacle identification according to the first dense point cloud to obtain an obstacle identification result;
removing three-dimensional points belonging to obstacles in the first dense point cloud according to the obstacle identification result to obtain a second dense point cloud;
the labeling, as noise points, three-dimensional points having motion trajectories around the vehicle in the second dense point cloud, comprising:
and marking three-dimensional points having motion trajectories on the road in the second dense point cloud as noise points.
8. The method of claim 6, wherein after obtaining the plurality of single-frame three-dimensional point cloud samples labeled with noise points, further comprising:
acquiring a three-dimensional point cloud sample marked with a noise point by at least one of the following modes:
superposing the noise points in the single-frame three-dimensional point cloud sample marked with the noise points to other three-dimensional point cloud samples;
or, moving the position of the noise point in the single-frame three-dimensional point cloud sample marked with the noise point;
or, rotating all three-dimensional points in the single-frame three-dimensional point cloud sample marked with the noise points by a preset angle;
or adding a three-dimensional point set representing an obstacle to the vicinity of a noise point in the single-frame three-dimensional point cloud sample marked with the noise point;
or randomly offsetting all three-dimensional points in the single-frame three-dimensional point cloud sample marked with the noise points.
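Two of the augmentations listed in claim 8 — rotating all points by a preset angle and randomly offsetting all points — can be sketched as below (the yaw-only rotation, the angle, and the offset scale are assumptions for illustration):

```python
import numpy as np

def rotate_z(points, angle_rad):
    """Rotate all (N, 3) points about the z axis by a preset angle."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T

def random_offset(points, scale=0.05, seed=0):
    """Apply a small random offset to every point in the sample."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=scale, size=points.shape)

sample = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 2.0]])
rotated = rotate_z(sample, np.pi / 2)   # 90-degree preset angle
jittered = random_offset(sample)
```

The other listed augmentations (superposing labeled noise points onto other samples, moving noise points, inserting an obstacle point set near noise points) are set operations on the labeled subsets and follow the same pattern.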
9. The method according to any one of claims 1 to 8, characterized in that the three-dimensional point cloud to be denoised is acquired by a point cloud acquisition device mounted on a vehicle during the driving of the vehicle;
the noise points comprise three-dimensional points corresponding to at least one of the following fine particles: water mist, gravel, or dust.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 9 when executing the program.
11. The electronic device of claim 10, wherein the electronic device comprises a vehicle-mounted terminal.
12. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, implement the method of any one of claims 1 to 9.
CN202111367938.3A 2021-11-18 2021-11-18 Point cloud denoising method, electronic equipment and storage medium Active CN115249349B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111367938.3A CN115249349B (en) 2021-11-18 2021-11-18 Point cloud denoising method, electronic equipment and storage medium
PCT/CN2022/071296 WO2023087526A1 (en) 2021-11-18 2022-01-11 Point cloud denoising method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111367938.3A CN115249349B (en) 2021-11-18 2021-11-18 Point cloud denoising method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115249349A (en) 2022-10-28
CN115249349B (en) 2023-06-27

Family

ID=83698116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111367938.3A Active CN115249349B (en) 2021-11-18 2021-11-18 Point cloud denoising method, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115249349B (en)
WO (1) WO2023087526A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051427A (en) * 2023-03-31 2023-05-02 季华实验室 Point cloud denoising model acquisition method, point cloud fusion method and related equipment thereof
CN116129472A (en) * 2023-04-07 2023-05-16 阿里巴巴(中国)有限公司 Grid point generation method, storage medium and system

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN116453063B (en) * 2023-06-12 2023-09-05 中广核贝谷科技有限公司 Target detection and recognition method and system based on fusion of DR image and projection image
CN117269940B (en) * 2023-11-17 2024-03-15 北京易控智驾科技有限公司 Point cloud data generation method and perception capability verification method of laser radar
CN117975202B (en) * 2024-04-01 2024-07-26 之江实验室 Model training method, service execution method, device, medium and equipment
CN118628398A (en) * 2024-08-12 2024-09-10 浙江托普云农科技股份有限公司 Dense point cloud denoising method, system and device based on visual shell

Citations (5)

Publication number Priority date Publication date Assignee Title
CN111402161A (en) * 2020-03-13 2020-07-10 北京百度网讯科技有限公司 Method, device and equipment for denoising point cloud obstacle and storage medium
CN111709343A (en) * 2020-06-09 2020-09-25 广州文远知行科技有限公司 Point cloud detection method and device, computer equipment and storage medium
US20210082181A1 (en) * 2019-06-17 2021-03-18 Sensetime Group Limited Method and apparatus for object detection, intelligent driving method and device, and storage medium
CN112733885A (en) * 2020-12-23 2021-04-30 西人马帝言(北京)科技有限公司 Point cloud identification model determining method and point cloud identification method and device
CN113516663A (en) * 2021-06-30 2021-10-19 同济大学 Point cloud semantic segmentation method and device, electronic equipment and storage medium

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN103065354A (en) * 2012-12-24 2013-04-24 中国科学院深圳先进技术研究院 Device and method for point cloud optimization
CN106846272A (en) * 2017-01-18 2017-06-13 西安工程大学 A kind of denoising compressing method of point cloud model
US11592820B2 (en) * 2019-09-13 2023-02-28 The Boeing Company Obstacle detection and vehicle navigation using resolution-adaptive fusion of point clouds
CN111862171B (en) * 2020-08-04 2021-04-13 万申(北京)科技有限公司 CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
CN112508803B (en) * 2020-11-03 2023-10-03 中山大学 Denoising method and device for three-dimensional point cloud data and storage medium


Non-Patent Citations (1)

Title
YU TING; YANG JUN: "Point Cloud Model Recognition and Classification Based on K-Nearest-Neighbor Convolutional Neural Network", Laser & Optoelectronics Progress *


Also Published As

Publication number Publication date
WO2023087526A1 (en) 2023-05-25
CN115249349B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN115249349B (en) Point cloud denoising method, electronic equipment and storage medium
US10970871B2 (en) Estimating two-dimensional object bounding box information based on bird's-eye view point cloud
Luo et al. Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net
CN106908775B (en) A kind of unmanned vehicle real-time location method based on laser reflection intensity
CN109521756B (en) Obstacle motion information generation method and apparatus for unmanned vehicle
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
Weon et al. Object Recognition based interpolation with 3d lidar and vision for autonomous driving of an intelligent vehicle
Pantilie et al. Real-time obstacle detection in complex scenarios using dense stereo vision and optical flow
CN115049700A (en) Target detection method and device
CN111814602B (en) Intelligent vehicle environment dynamic target detection method based on vision
CN111699410A (en) Point cloud processing method, device and computer readable storage medium
CN114419098A (en) Moving target trajectory prediction method and device based on visual transformation
Kellner et al. Road curb detection based on different elevation mapping techniques
Baig et al. A robust motion detection technique for dynamic environment monitoring: A framework for grid-based monitoring of the dynamic environment
EP3555854B1 (en) A method of tracking objects in a scene
CN116299500A (en) Laser SLAM positioning method and device integrating target detection and tracking
Poostchi et al. Spatial pyramid context-aware moving vehicle detection and tracking in urban aerial imagery
Dimitrievski et al. Semantically aware multilateral filter for depth upsampling in automotive lidar point clouds
CN117516560A (en) Unstructured environment map construction method and system based on semantic information
Du et al. Particle filter based object tracking of 3D sparse point clouds for autopilot
CN115683109B (en) Visual dynamic obstacle detection method based on CUDA and three-dimensional grid map
Eraqi et al. Static free space detection with laser scanner using occupancy grid maps
CN114648639B (en) Target vehicle detection method, system and device
CN115965847A (en) Three-dimensional target detection method and system based on multi-modal feature fusion under cross view angle
CN115861481A (en) SLAM system based on real-time dynamic object of laser inertia is got rid of

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant