CN116500590A - Laser radar denoising method and device, vehicle and storage medium - Google Patents

Publication number: CN116500590A
Application number: CN202310479284.6A
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Xu Hao (许皓), He Wu (贺武)
Assignee (current and original): Shenzhen Desai Xiwei Automobile Electronics Co., Ltd.
Application filed by Shenzhen Desai Xiwei Automobile Electronics Co., Ltd.; priority to CN202310479284.6A

Links

Classifications

    • G06T 5/70 (G — Physics; G06 — Computing; G06T — Image data processing or generation): Image enhancement or restoration — Denoising; Smoothing
    • G01S 17/931 (G01S — Systems using reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar): Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S 7/495: Counter-measures or counter-counter-measures using electronic or electro-optical means
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G06T 2207/20036: Morphological image processing
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a lidar denoising method and apparatus, a vehicle, and a storage medium. The method comprises: acquiring point cloud data of a current frame; determining the current image moment of the contour corresponding to the point cloud data; and filtering out, based on the current image moment and the adjacent image moments of adjacent frames, the point cloud data of a target object from the point cloud data corresponding to each contour, to obtain target point cloud data, wherein the adjacent frames comprise the frame before and the frame after the current frame, and the deformation of the target object between different frames is larger than a set deformation threshold. By filtering out the point cloud data of objects whose deformation between frames exceeds the set threshold, the method prevents lidar false detections caused by dust and water mist.

Description

Laser radar denoising method and device, vehicle and storage medium
Technical Field
The embodiments of the invention relate to the technical field of lidar, and in particular to a lidar denoising method and apparatus, a vehicle, and a storage medium.
Background
Interference from dust and water mist with lidar point cloud data is a problem currently facing the industry. Noise caused by dust or water mist greatly degrades the lidar's ability to detect obstacles and frequently produces false obstacle detections. The prior art cannot eliminate the influence of dust and water mist on obstacle detection in general scenes. A solution is therefore urgently needed for the lidar false detections caused by dust or water mist raised by the wheels or operating modules while the vehicle is running.
Disclosure of Invention
The invention provides a lidar denoising method and apparatus, a vehicle, and a storage medium, which solve the problem in the prior art of lidar false detections caused by dust and water mist.
According to an aspect of the present invention, there is provided a laser radar denoising method, the method comprising:
acquiring point cloud data of a current frame;
determining the current image moment of the outline corresponding to the point cloud data;
and filtering out, based on the current image moment and the adjacent image moments of adjacent frames, the point cloud data of a target object from the point cloud data corresponding to each contour, to obtain target point cloud data, wherein the adjacent frames comprise the frame before and the frame after the current frame, and the deformation of the target object between different frames is larger than a set deformation threshold.
According to another aspect of the present invention, there is provided a laser radar denoising apparatus, the apparatus comprising:
the acquisition module is used for acquiring the point cloud data of the current frame;
the determining module is used for determining the current image moment of the outline corresponding to the point cloud data;
the filtering module is used for filtering out, based on the current image moment and the adjacent image moments of adjacent frames, the point cloud data of a target object from the point cloud data corresponding to each contour, to obtain target point cloud data, wherein the adjacent frames comprise the frame before and the frame after the current frame, and the deformation of the target object between different frames is larger than a set deformation threshold.
According to another aspect of the present invention, there is provided a vehicle including:
at least one lidar;
at least one processor; and
a memory communicatively coupled to the at least one processor, the at least one lidar being communicatively coupled to the at least one processor and the memory; wherein
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the lidar denoising method according to any of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the laser radar denoising method according to any one of the embodiments of the present invention when executed.
The embodiments of the invention disclose a lidar denoising method and apparatus, a vehicle, and a storage medium. The method comprises: acquiring point cloud data of a current frame; determining the current image moment of the contour corresponding to the point cloud data; and filtering out, based on the current image moment and the adjacent image moments of adjacent frames, the point cloud data of a target object from the point cloud data corresponding to each contour, to obtain target point cloud data, wherein the adjacent frames comprise the frame before and the frame after the current frame, and the deformation of the target object between different frames is larger than a set deformation threshold. By filtering out the point cloud data of objects whose deformation between frames exceeds the set threshold, the method prevents lidar false detections caused by dust and water mist.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a laser radar denoising method according to a first embodiment of the present invention;
fig. 2 is a schematic flow chart of another denoising method of a lidar according to the first embodiment of the present invention;
fig. 3 is a schematic flow chart of a laser radar denoising method according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of a laser radar denoising apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a vehicle according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some embodiments of the present invention, not all of them; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort shall fall within the scope of the present invention. It should be understood that the various steps recited in the method embodiments may be performed in a different order and/or in parallel, and that method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that references to "a", "an", and "one" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be construed as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the devices in the embodiments of the present invention are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Example 1
Fig. 1 is a schematic flow chart of a laser radar denoising method according to an embodiment of the present invention, which is applicable to the case of detecting an obstacle, and the method may be performed by a laser radar denoising apparatus, where the apparatus may be implemented by software and/or hardware and is generally integrated on a vehicle, and in this embodiment, the vehicle includes, but is not limited to: common transportation vehicles, special-purpose vehicles, and the like.
As shown in fig. 1, a method for denoising a laser radar according to a first embodiment of the present invention includes the following steps:
s110, acquiring point cloud data of the current frame.
The current frame may be a frame that needs to be processed currently. The point cloud data may refer to a set of vectors in a three-dimensional coordinate system.
In this embodiment, the point cloud data in the current frame may be acquired by a laser radar.
S120, determining the current image moment of the outline corresponding to the point cloud data.
The contour may be the shape of an object in the current frame; the contour of the same object may differ between frames as the state of the object changes. An object in the current frame may be an obstacle, or dust or water mist. The current image moment is the image moment of a contour in the current frame; an image moment is a weighted average (moment) of certain pixel gray levels of the image, or a similar function of them, chosen to capture some attribute of the image. Image moments include the zero-order moment, first-order moments, second-order moments, third-order moments, Hu moments, and so on.
In this embodiment, the contours of all objects in the current frame may be extracted, and the image moments of all contours in the current frame may be determined.
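The image moments referred to above can be sketched as follows. This snippet is illustrative rather than taken from the patent: it computes raw moments of a binary contour mask with NumPy and derives the centroid from the first-order moments, which the second embodiment later uses for cross-frame tracking.

```python
import numpy as np

def raw_moment(mask: np.ndarray, p: int, q: int) -> float:
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    ys, xs = np.nonzero(mask)              # coordinates of foreground pixels
    vals = mask[ys, xs].astype(float)      # pixel intensities (255 for binary)
    return float(np.sum((xs ** p) * (ys ** q) * vals))

def centroid(mask: np.ndarray) -> tuple:
    """Centroid (cx, cy) = (m10/m00, m01/m00), the quantity tracked across frames."""
    m00 = raw_moment(mask, 0, 0)
    return raw_moment(mask, 1, 0) / m00, raw_moment(mask, 0, 1) / m00

# Toy 5x5 binary mask: a filled square occupying rows/cols 1..3.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:4, 1:4] = 255
print(centroid(mask))  # symmetric blob, so the centroid is (2.0, 2.0)
```

In practice a library routine such as OpenCV's `cv2.moments` would compute all orders (including Hu moments) at once; the manual version above only shows what the moment is.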
And S130, filtering the point cloud data of the target object in the point cloud data corresponding to each contour based on the current image moment and the adjacent image moment of the adjacent frame to obtain the target point cloud data.
The adjacent frames comprise a frame before the current frame and a frame after the current frame, and the deformation of the target object among different frames is larger than a set deformation threshold.
An adjacent frame is a frame adjacent to the current frame, for example the frame before or the frame after it. The adjacent image moment may be the image moment of a contour in an adjacent frame; it may be read from a database or computed, which this embodiment does not limit. The target object is an object whose deformation between different frames is greater than the set deformation threshold, for example dust or water mist. The set deformation threshold is a preset deformation value that can be chosen according to the actual situation. The target point cloud data is the point cloud data remaining after the points corresponding to the target object have been filtered out.
In this embodiment, particles such as dust and water mist in the air affect the accuracy of lidar obstacle detection. To identify obstacles more accurately, the image moment of each contour in the current frame may be compared with the image moments of the contours in adjacent frames to determine whether the contour's deformation exceeds the set deformation threshold. If it does, the contour deforms strongly between frames and is likely to be dust or water mist; the point cloud data corresponding to that contour can therefore be deleted, reducing the influence of dust and water mist on obstacle detection.
The first embodiment of the invention provides a lidar denoising method comprising: acquiring point cloud data of a current frame; determining the current image moment of the contour corresponding to the point cloud data; and filtering out, based on the current image moment and the adjacent image moments of adjacent frames, the point cloud data of a target object from the point cloud data corresponding to each contour, to obtain target point cloud data, wherein the adjacent frames comprise the frame before and the frame after the current frame, and the deformation of the target object between different frames is larger than a set deformation threshold. By filtering out the point cloud data of objects whose deformation between frames exceeds the set threshold, the method prevents lidar false detections caused by dust and water mist.
On the basis of the above embodiments, modified embodiments of the above embodiments are proposed, and it is to be noted here that only the differences from the above embodiments are described in the modified embodiments for the sake of brevity of description.
In one embodiment, the determining the image moment of the contour corresponding to the point cloud data includes:
mapping the point cloud data to a two-dimensional grid map, and converting the two-dimensional grid map to obtain a binary image; performing morphological operation on the binary image; extracting contours included in the binary image based on morphological operation; image moments of the contours are determined.
The two-dimensional grid map may be a grid map based on two dimensions. A grid map divides the environment into a series of grid cells, each of which is given a value representing the probability that the cell is occupied. A binary image, also called a black-and-white (B&W) or monochrome image, allows only two possible values or gray-scale states per pixel: the gray value of any pixel is either 0 or 255, representing black and white respectively. Morphological operations are simple operations based on the shape of the image, typically performed on binary images; they include erosion, dilation, opening, closing, and the like.
In this embodiment, the point cloud data may be rasterized. The core idea of laser point cloud rasterization is to process the area scanned by the lidar as a grid, where each grid cell represents a small region of space and contains part of the point cloud data. Rasterization can be two-dimensional or three-dimensional; because processing three-dimensional point cloud data in real time is slow, this method reduces the data to two dimensions by two-dimensional rasterization, which improves the processing speed of the algorithm.
In this embodiment, the point cloud data may be mapped onto a two-dimensional grid map; the pixel value of each cell containing point cloud data is set to 255 and that of the remaining cells to 0, converting the map into a binary image. Morphological operations are then performed on the binary image, the contours contained in the processed image are extracted, and the image moment of each contour is calculated.
According to the embodiment, the point cloud data is mapped to the two-dimensional grid map, the two-dimensional grid map is converted into the binary image, and morphological operation is carried out on the binary image, so that information in the image can be extracted more conveniently.
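The mapping-and-binarization step might look like the following sketch. The grid cell size and map dimensions here are illustrative assumptions, not values from the patent.

```python
import numpy as np

def points_to_binary_image(points_xy: np.ndarray,
                           cell: float = 0.05,
                           size: tuple = (200, 200)) -> np.ndarray:
    """Map 2D points (metres) onto a grid map: occupied cells -> 255, rest -> 0."""
    img = np.zeros(size, dtype=np.uint8)
    cols = np.floor(points_xy[:, 0] / cell).astype(int)
    rows = np.floor(points_xy[:, 1] / cell).astype(int)
    ok = (rows >= 0) & (rows < size[0]) & (cols >= 0) & (cols < size[1])
    img[rows[ok], cols[ok]] = 255          # binary image: 0 or 255 only
    return img

# Two nearby returns and one distant return (coordinates in metres, assumed).
pts = np.array([[0.11, 0.11], [0.12, 0.14], [5.02, 5.03]])
img = points_to_binary_image(pts)
```

With a 5 cm cell, the two nearby points fall into the same cell, which is exactly the dimension reduction the embodiment relies on for speed.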
In one embodiment, the performing morphological operations on the binary image includes:
expanding the binary image to obtain a connected domain; and corroding the connected domain to obtain a binary image after morphological operation.
Dilation and erosion are two fundamental operations in mathematical morphology, commonly applied to binary images. Both act on the white (highlighted) parts of the image, not the black parts. Dilation expands the highlighted regions of a binary image, enlarging the bright areas; erosion shrinks the highlighted regions, enlarging the dark areas. A connected domain (connected component) generally refers to an image region formed by foreground pixels that share the same pixel value and are adjacent in position.
In this embodiment, a larger connected domain may be obtained by dilating the binary image, and the morphologically processed binary image is obtained by eroding that connected domain. This removes discrete points at the edge of the contour, so that a more accurate contour can be extracted.
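The dilate-then-erode sequence (a morphological closing) can be sketched in pure NumPy as below. A real implementation would typically use a library routine such as OpenCV's `morphologyEx`; the 3x3 square kernel is an assumed size.

```python
import numpy as np

def dilate(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary dilation with a k x k square: each pixel -> max of its neighbourhood."""
    pad = k // 2
    p = np.pad(img, pad, mode="constant", constant_values=0)
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def erode(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary erosion: each pixel -> min of its neighbourhood (shrinks bright areas)."""
    pad = k // 2
    p = np.pad(img, pad, mode="constant", constant_values=255)
    out = np.full_like(img, 255)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

# Dilate-then-erode bridges the 1-pixel gap between two nearby returns,
# merging them into one connected contour without growing its outer edge.
img = np.zeros((7, 7), dtype=np.uint8)
img[3, 2] = img[3, 4] = 255
closed = erode(dilate(img))
```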
In one embodiment, after the point cloud data of the current frame is obtained, one or more of the following steps are further included:
deleting the point cloud data within a first preset range of the lidar; performing an up-sampling operation with a preset radius and a preset number of iterations on the point cloud data within a second preset range of the lidar, wherein the target point cloud data comprises the up-sampled point cloud data; and taking the point cloud data within a third preset range as the updated point cloud data of the current frame.
The distance value included in the second preset range is larger than the distance value included in the third preset range, and the distance value included in the third preset range is larger than the distance value included in the first preset range.
A lidar is a radar system that detects characteristic quantities such as the position and velocity of a target by emitting laser beams; it can be mounted at any suitable position on the vehicle. The first preset range may be the range of distances closest to the lidar, the second preset range the range farthest from it, and the third preset range the band between the first and second preset ranges. The values of the preset ranges can be set according to the actual situation, which this embodiment does not limit.
The range value may be the distance from the lidar. The preset radius may be a preset radius value, and the preset iteration number the required number of iterations. Upsampling refers to any technique that raises data to a higher resolution, and may include interpolation, deconvolution, and unpooling. The preset radius and preset iteration number for upsampling can be set according to the actual situation, which this embodiment does not limit.
In this embodiment, the point cloud data to be processed may be divided into three parts, and may be specifically divided according to the distance range from the lidar. Fig. 2 is a schematic flow chart of another laser radar denoising method according to the first embodiment of the present invention, where the distance range may be divided into a first preset range, a second preset range, and a third preset range; wherein the first preset range may be set to 0-20 cm, the second preset range may be set to greater than 200 cm, and the third preset range may be set to 20-200 cm.
The point cloud data in the first preset range can be deleted: these points are generally returns from the vehicle itself and need to be filtered out. The point cloud data in the second preset range can be upsampled with a preset radius and a preset number of iterations to obtain denser point cloud data, ensuring that the point cloud on a real obstacle is not thinned out by the influence of dust or water mist; the preset radius and iteration count can be set according to the available computing power, for example a radius of 3 cm and 5 iterations. For the point cloud data in the third preset range, dust and water mist can then be filtered out.
By partitioning the point cloud data in this way, each part can be processed in a more targeted manner and computing efficiency is improved. It can be appreciated that, if computing efficiency is not a concern, dust and water mist can also be filtered from the point cloud data in the first and second preset ranges.
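A minimal sketch of the three-way range partition, using the example values of 20 cm and 200 cm given above, and reading the "preset radius / preset iterations" upsampling as repeated jittering of the far points within the radius. That jitter scheme is one plausible interpretation; the patent does not fix the upsampling method.

```python
import numpy as np

def partition_and_upsample(points: np.ndarray,
                           near: float = 0.20, far: float = 2.00,
                           radius: float = 0.03, iters: int = 5,
                           rng=None):
    """Drop ego-vehicle returns (< near), jitter-upsample sparse far points
    (> far), and pass the middle band on for dust/mist filtering.
    Distances in metres; radius/iters match the 3 cm / 5 iterations example."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(points, axis=1)
    middle = points[(d >= near) & (d <= far)]   # kept for contour-based denoising
    far_pts = points[d > far]
    dense = [far_pts]
    for _ in range(iters):                      # each pass adds one jittered copy
        dense.append(far_pts + rng.uniform(-radius, radius, far_pts.shape))
    return middle, np.vstack(dense)

pts = np.array([[0.05, 0.0],    # on the ego vehicle -> deleted
                [1.0, 1.0],     # middle band -> denoised later
                [3.0, 0.0]])    # far -> densified
mid, dense_far = partition_and_upsample(pts)
```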
Example two
Fig. 3 is a schematic flow chart of a laser radar denoising method according to a second embodiment of the present invention, where the second embodiment is optimized based on the above embodiments. For details not yet described in detail in this embodiment, refer to embodiment one.
As shown in fig. 3, in the laser radar denoising method according to the second embodiment of the present invention, the current image moment includes a current centroid position and the adjacent image moment includes an adjacent centroid position, and the method specifically includes the following steps:
s210, acquiring point cloud data of a current frame.
S220, determining the current image moment of the outline corresponding to the point cloud data.
S230, for each contour, determining whether an object corresponding to the contour is the same object based on the current centroid position of the contour in the current frame and the adjacent centroid position of the contour in the adjacent frame.
The current centroid position may be a centroid position of the contour in the current frame, and the adjacent centroid position may be a centroid position of the contour in the adjacent frame. The centroid position may be a position where the centroid of the contour is located, the centroid position may be represented in coordinates or other forms, and the centroid position may be obtained by an image moment of the contour. The centroid positions of the same contour in different frames may be different.
In this embodiment, for each contour in the current frame, the distance between the contour's centroid position in the current frame and its centroid position in an adjacent frame may be determined. If that distance is large, the contour in the current frame and the contour in the adjacent frame do not correspond to the same object, and step S250 may be performed next.
And S240, if so, determining whether to take the point cloud data corresponding to the contour as the point cloud data of the target object based on the form change information of the contour, and deleting the point cloud data of the target object.
The shape change information may describe how the form of the contour changes, and may include changes in size, position, pixels, and the like, which this embodiment does not limit.
In this embodiment, whether the contour changes too much between two adjacent frames may be judged from its shape change information. If the shape change of the contour between two adjacent frames exceeds a certain threshold, the object corresponding to the contour is likely to be dust or water mist; the point cloud data corresponding to the contour can then be taken as the point cloud data of the target object and deleted.
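One way the form-change test could be realised is by comparing contour areas (the zeroth moment m00) across the three frames. The metric and the threshold value below are illustrative assumptions, since the patent does not specify which shape statistic is used.

```python
def is_dust(area_prev: float, area_curr: float, area_next: float,
            deform_thresh: float = 0.5) -> bool:
    """Flag a tracked contour as dust/water mist when its area (zeroth moment
    m00) changes by more than `deform_thresh` relative to either neighbour
    frame; rigid obstacles keep a roughly constant area between frames."""
    def rel_change(a: float, b: float) -> float:
        return abs(a - b) / max(a, b)
    return (rel_change(area_prev, area_curr) > deform_thresh or
            rel_change(area_curr, area_next) > deform_thresh)

print(is_dust(100.0, 102.0, 98.0))   # stable contour: kept as a real obstacle
print(is_dust(100.0, 240.0, 60.0))   # strongly deforming contour: filtered
```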
And S250, continuing to traverse the rest contours of the current frame until all contours are traversed.
In this embodiment, after one of the contours in the current frame is processed, the remaining contours in the current frame may be continuously traversed until all contours in the current frame are traversed.
According to the laser radar denoising method provided by the second embodiment of the invention, for each contour, whether the contours correspond to the same object is determined based on the current centroid position of the contour in the current frame and the adjacent centroid position of the contour in the adjacent frame; if so, whether to take the point cloud data corresponding to the contour as the point cloud data of the target object is determined based on the shape change information of the contour, and the point cloud data of the target object is deleted; the remaining contours of the current frame are then traversed until all contours have been processed. By comparing the centroid position and shape change information of each contour in the current frame with the contours in adjacent frames, the contour of the same object can be identified across frames and it can be judged whether the corresponding object is dust or water mist. This embodiment prevents dust and water mist from blocking the laser beam emitted by the lidar, which would otherwise thin out the lidar point cloud on a real obstacle and cause real obstacles to be missed.
In one embodiment, the determining whether the object corresponding to the contour is the same object based on the current centroid position of the contour in the current frame and the adjacent centroid position of the contour in the adjacent frame includes:
acquiring adjacent centroid positions, wherein the adjacent centroid positions comprise a last centroid position of the previous frame and a next centroid position of the next frame;
and if the distance between the current centroid position and the previous centroid position is smaller than a first preset threshold value and the distance between the current centroid position and the next centroid position is smaller than a second preset threshold value, determining that the contours corresponding to the current frame, the previous frame and the next frame belong to the same object.
The last centroid position may be the centroid position of the contour in the previous frame, and the next centroid position may be the centroid position of the contour in the next frame. The distance here is the displacement of the centroid position between two frames. The first preset threshold and the second preset threshold may be preset distance thresholds, and they may be the same or different.
In this embodiment, whether the object corresponding to the contour is the same object may be determined by determining the distance between the current centroid position and the adjacent centroid position. It may be determined whether the distance between the current centroid position and the previous centroid position is less than a first preset threshold and whether the distance between the current centroid position and the subsequent centroid position is less than a second preset threshold.
For example, if the distance between the current centroid position and the previous centroid position is smaller than the first preset threshold but the distance between the current centroid position and the next centroid position is larger than the second preset threshold, the contours in the current frame and the previous frame may be considered to belong to the same object, while the contours in the current frame and the next frame do not. In that case, the centroid positions of the other contours in the next frame may be compared with the centroid position of the contour in the current frame until a contour in the next frame corresponding to the contour in the current frame is found, or vice versa. If no contour belonging to the same object is found in the next frame, the object corresponding to the contour may be considered dust or water mist, and the point cloud data corresponding to the contour in the current frame may be deleted. Only when the distance between the current centroid position and the previous centroid position is smaller than the first preset threshold and the distance between the current centroid position and the next centroid position is smaller than the second preset threshold do the contours in the current frame, the previous frame, and the next frame correspond to the same object.
In this embodiment, by comparing the centroid position of the contour in the current frame with the centroid positions of the contours in the previous frame and the next frame, it can be determined whether the object corresponding to the contour in the current frame and the contour in the adjacent frame is the same object.
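As an illustrative sketch (not part of the patented embodiment itself), the centroid-matching test described above could look like the following, where the centroid coordinates and the two preset thresholds are hypothetical placeholders:

```python
import math

# Hypothetical sketch of the centroid-matching step. The coordinates and the
# two preset thresholds are illustrative placeholders, not values from the patent.
def same_object(curr_centroid, prev_centroid, next_centroid,
                first_threshold=0.5, second_threshold=0.5):
    """True if the contour belongs to the same object in the previous,
    current, and next frames, i.e. both centroid displacements are small."""
    d_prev = math.dist(curr_centroid, prev_centroid)  # displacement vs. previous frame
    d_next = math.dist(curr_centroid, next_centroid)  # displacement vs. next frame
    return d_prev < first_threshold and d_next < second_threshold
```

Only when both displacements fall below their thresholds is the contour treated as the same object in all three frames; otherwise the embodiment searches the other contours of the neighbouring frame for a match.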
In one embodiment, the image moments include Hu moments, and the neighboring image moments include Hu moments of a previous frame and Hu moments of a subsequent frame; correspondingly, the determining whether to use the point cloud data corresponding to the contour as the point cloud data of the target object based on the morphological change information of the contour includes:
acquiring the Hu moment of the previous frame and the Hu moment of the next frame;
if the difference between the Hu moment of the current frame and the Hu moment of the previous frame is larger than a third preset threshold and the difference between the Hu moment of the current frame and the Hu moment of the next frame is larger than a fourth preset threshold, taking the point cloud data corresponding to the contour as the point cloud data of the target object; otherwise, retaining the point cloud data of the contour.
Here the Hu moments are a set of seven values computed from the central moments that are invariant to image transformations: the first six are invariant to translation, scaling, rotation, and reflection, while the seventh changes sign under reflection. The third preset threshold and the fourth preset threshold may be preset values, and they may be the same or different.
In this embodiment, the morphological change information of the contour across adjacent frames may be determined by the Hu moments. If the difference between the Hu moments of the contour in the current frame and those in the previous frame is greater than the third preset threshold, and the difference between the Hu moments of the contour in the current frame and those in the next frame is greater than the fourth preset threshold, the contour deforms strongly between frames, indicating that the corresponding object is likely dust or water mist; in that case the point cloud data corresponding to the contour may be taken as the point cloud data of the target object. Otherwise, the point cloud data of the contour is retained. It will be appreciated that image moments other than the Hu moments may also be used to determine the morphological change of the contour across adjacent frames.
In this embodiment, by determining and comparing how the Hu moments of a contour change between frames, it can be determined whether the object corresponding to the contour is the target object (i.e. dust or water mist).
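A minimal sketch of the Hu-moment comparison described above, assuming the seven Hu moments of the contour in each frame have already been computed (for instance with OpenCV's `cv2.HuMoments`) and log-scaled, and with placeholder threshold values:

```python
import numpy as np

# Sketch of the Hu-moment difference test. It assumes the seven Hu moments of
# the contour in each frame are precomputed (e.g. with cv2.HuMoments) and
# log-scaled; the two thresholds are illustrative placeholders.
def is_dust_like(hu_curr, hu_prev, hu_next,
                 third_threshold=1.0, fourth_threshold=1.0):
    diff_prev = np.abs(np.asarray(hu_curr) - np.asarray(hu_prev)).sum()
    diff_next = np.abs(np.asarray(hu_curr) - np.asarray(hu_next)).sum()
    # A large shape change against BOTH neighbouring frames marks the contour
    # as the target object (dust or water mist); otherwise it is retained.
    return bool(diff_prev > third_threshold and diff_next > fourth_threshold)
```

Using the summed absolute difference of the log-scaled moment vector is one possible distance; any scalar measure of moment difference would serve the same comparison.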
Example III
Fig. 4 is a schematic structural diagram of a laser radar denoising apparatus according to a third embodiment of the present invention, which is applicable to obstacle detection. The apparatus may be implemented in software and/or hardware and is generally integrated in a vehicle.
As shown in fig. 4, the apparatus includes:
an obtaining module 310, configured to obtain point cloud data of a current frame;
a determining module 320, configured to determine a current image moment of a contour corresponding to the point cloud data;
the filtering module 330 is configured to filter point cloud data of a target object in the point cloud data corresponding to each contour based on the current image moment and adjacent image moments of adjacent frames to obtain the target point cloud data, where the adjacent frames include a frame before the current frame and a frame after the current frame, and deformation of the target object between different frames is greater than a set deformation threshold.
The embodiment provides a laser radar denoising device that: acquires point cloud data of a current frame; determines the current image moment of the contour corresponding to the point cloud data; and filters out point cloud data of a target object from the point cloud data corresponding to each contour based on the current image moment and the adjacent image moments of adjacent frames to obtain the target point cloud data, where the adjacent frames include the frame before and the frame after the current frame, and the deformation of the target object between different frames is greater than a set deformation threshold. By filtering out the point cloud data corresponding to target objects whose deformation in the current frame exceeds the set deformation threshold, the device prevents false detections of the laser radar caused by dust and water mist.
Further, the determining module 320 includes:
mapping the point cloud data to a two-dimensional grid map, and converting the two-dimensional grid map to obtain a binary image;
performing morphological operation on the binary image;
extracting contours included in the binary image based on morphological operation;
image moments of the contours are determined.
Further, the performing morphological operations on the binary image includes:
dilating the binary image to obtain a connected domain;
eroding the connected domain to obtain the binary image after the morphological operation.
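The grid-mapping, dilation, and erosion steps performed by the determining module could be sketched as follows. This is a plain-NumPy illustration with an assumed grid resolution and size; a real implementation would more likely use an image library such as OpenCV (`cv2.dilate` / `cv2.erode`):

```python
import numpy as np

def to_binary_grid(points_xy, resolution=0.1, size=64):
    """Map 2-D point coordinates onto an occupancy grid and binarize it.
    Resolution and grid size are illustrative assumptions."""
    grid = np.zeros((size, size), dtype=np.uint8)
    # Round each coordinate to its nearest cell index
    idx = np.rint(np.asarray(points_xy, dtype=float) / resolution).astype(int)
    idx = idx[(idx[:, 0] >= 0) & (idx[:, 0] < size) &
              (idx[:, 1] >= 0) & (idx[:, 1] < size)]
    grid[idx[:, 1], idx[:, 0]] = 1
    return grid

def dilate(grid):
    """3x3 binary dilation: a cell becomes 1 if any neighbour is 1."""
    padded = np.pad(grid, 1)
    out = np.zeros_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + grid.shape[0],
                          1 + dx:1 + dx + grid.shape[1]]
    return out

def erode(grid):
    """3x3 binary erosion: a cell stays 1 only if all neighbours are 1."""
    padded = np.pad(grid, 1)
    out = np.ones_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy:1 + dy + grid.shape[0],
                          1 + dx:1 + dx + grid.shape[1]]
    return out
```

Dilating and then eroding is a morphological closing, which merges nearby occupied cells into one connected domain before the contours are extracted.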
Further, the current image moment includes a current centroid position, the adjacent image moment includes an adjacent centroid position, and the filtering module 330 includes:
a first judging unit, configured to determine, for each contour, whether an object corresponding to the contour is the same object based on a current centroid position of the contour in the current frame and an adjacent centroid position of the contour in the adjacent frame;
a second judging unit, configured to determine, if so, whether to take the point cloud data corresponding to the contour as the point cloud data of the target object based on the morphological change information of the contour, and to delete the point cloud data of the target object;
a traversing unit, configured to continue traversing the remaining contours of the current frame until all contours are traversed.
Further, the first judging unit includes:
acquiring adjacent centroid positions, wherein the adjacent centroid positions comprise a last centroid position of the previous frame and a next centroid position of the next frame;
and if the distance between the current centroid position and the previous centroid position is smaller than a first preset threshold value and the distance between the current centroid position and the next centroid position is smaller than a second preset threshold value, determining that the contours corresponding to the current frame, the previous frame and the next frame belong to the same object.
Further, the image moments include Hu moments, and the adjacent image moments include Hu moments of a previous frame and Hu moments of a later frame; correspondingly, the second judging unit comprises:
acquiring the Hu moment of the previous frame and the Hu moment of the next frame;
if the difference between the Hu moment of the current frame and the Hu moment of the previous frame is larger than a third preset threshold and the difference between the Hu moment of the current frame and the Hu moment of the next frame is larger than a fourth preset threshold, taking the point cloud data corresponding to the contour as the point cloud data of the target object; otherwise, retaining the point cloud data of the contour.
Further, the device further comprises:
a deleting module, configured to delete point cloud data within a first preset range of distance from the laser radar;
an up-sampling module, configured to perform an up-sampling operation with a preset radius and a preset number of iterations on point cloud data within a second preset range of distance from the laser radar, the target point cloud data including the up-sampled point cloud data;
an updating module, configured to take point cloud data within a third preset range as the updated point cloud data of the current frame;
wherein the distance values included in the second preset range are larger than those included in the third preset range, and the distance values included in the third preset range are larger than those included in the first preset range.
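The three distance ranges handled by the deleting, up-sampling, and updating modules could be sketched as a single pre-processing pass; the range bounds below are illustrative assumptions, and the up-sampling operation itself is omitted:

```python
import numpy as np

# Hedged sketch of the range-based pre-processing. The (N, 3) point array and
# the three range bounds (in metres) are assumptions, not values from the patent.
def preprocess(points, first_range=0.5, second_range=30.0, third_range=20.0):
    dist = np.linalg.norm(points, axis=1)
    kept = points[dist >= first_range]      # deleting module: drop very near returns
    dist = np.linalg.norm(kept, axis=1)
    # the up-sampling module would densify this far band (third_range .. second_range)
    upsample_band = kept[(dist > third_range) & (dist <= second_range)]
    # updating module: points within the third range become the updated current frame
    current = kept[dist <= third_range]
    return current, upsample_band
```

The ordering of the bounds mirrors the text: second range > third range > first range, so near noise is deleted, the mid band becomes the updated frame, and the far band is the candidate region for up-sampling.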
The laser radar denoising device can execute the laser radar denoising method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 5 is a schematic structural diagram of a vehicle according to an embodiment of the present invention. As shown in fig. 5, the vehicle includes: at least one lidar 41, at least one processor 42, a memory 43 in communication with the at least one processor, an input device 44 and an output device 45. In fig. 5, one lidar 41 and one processor 42 are taken as examples. The lidar 41, the processor 42, the memory 43, the input device 44 and the output device 45 in the vehicle may be connected by a bus or other means; a bus connection is taken as an example in fig. 5.
The memory 43 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and modules, such as program instructions/modules corresponding to the laser radar denoising method according to the embodiment of the present invention. The processor 42 executes various functional applications of the vehicle and data processing by running software programs, instructions and modules stored in the memory 43, i.e., implements the above-described lidar denoising method.
The memory 43 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 43 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 43 may further include memory remotely located relative to processor 42, which may be connected to the vehicle via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 44 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the vehicle. The output means 45 may comprise a display device such as a display screen.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a vehicle having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or a trackball) by which a user can provide input to the vehicle. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention are achieved, and the present invention is not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for denoising a lidar, the method comprising:
acquiring point cloud data of a current frame;
determining the current image moment of the outline corresponding to the point cloud data;
and filtering point cloud data of a target object in the point cloud data corresponding to each contour based on the current image moment and adjacent image moments of adjacent frames to obtain the target point cloud data, wherein the adjacent frames comprise a frame before the current frame and a frame after the current frame, and deformation of the target object among different frames is larger than a set deformation threshold.
2. The method of claim 1, wherein determining the image moment of the contour corresponding to the point cloud data comprises:
mapping the point cloud data to a two-dimensional grid map, and converting the two-dimensional grid map to obtain a binary image;
performing morphological operation on the binary image;
extracting contours included in the binary image based on morphological operation;
image moments of the contours are determined.
3. The method of claim 2, wherein the morphologically manipulating the binary image comprises:
dilating the binary image to obtain a connected domain;
eroding the connected domain to obtain a binary image after the morphological operation.
4. The method of claim 1, wherein the current image moment includes a current centroid position, the neighboring image moment includes a neighboring centroid position, and the filtering the point cloud data of the target object in the point cloud data corresponding to each contour based on the current image moment and the neighboring image moment of the neighboring frame, respectively, includes:
for each contour, determining whether an object corresponding to the contour is the same object based on a current centroid position of the contour in the current frame and an adjacent centroid position of the contour in the adjacent frame;
if so, determining whether to take the point cloud data corresponding to the contour as the point cloud data of the target object based on the form change information of the contour, and deleting the point cloud data of the target object;
and continuing traversing the rest contours of the current frame until all contours are traversed.
5. The method of claim 4, wherein the determining whether the object to which the contour corresponds is the same object based on a current centroid position of the contour in the current frame and an adjacent centroid position of the contour in the adjacent frame comprises:
Acquiring adjacent centroid positions, wherein the adjacent centroid positions comprise a last centroid position of the previous frame and a next centroid position of the next frame;
and if the distance between the current centroid position and the previous centroid position is smaller than a first preset threshold value and the distance between the current centroid position and the next centroid position is smaller than a second preset threshold value, determining that the contours corresponding to the current frame, the previous frame and the next frame belong to the same object.
6. The method of claim 4, wherein the image moments comprise Hu moments, and the neighboring image moments comprise Hu moments of a previous frame and Hu moments of a subsequent frame; correspondingly, the determining whether to use the point cloud data corresponding to the contour as the point cloud data of the target object based on the morphological change information of the contour includes:
acquiring the Hu moment of the previous frame and the Hu moment of the next frame;
if the difference between the Hu moment of the current frame and the Hu moment of the previous frame is larger than a third preset threshold and the difference between the Hu moment of the current frame and the Hu moment of the next frame is larger than a fourth preset threshold, taking the point cloud data corresponding to the contour as the point cloud data of the target object; otherwise, retaining the point cloud data of the contour.
7. The method of claim 1, further comprising one or more of the following after the obtaining of the point cloud data of the current frame:
deleting point cloud data within a first preset range of distance from the laser radar;
performing an up-sampling operation with a preset radius and a preset number of iterations on point cloud data within a second preset range of distance from the laser radar, wherein the target point cloud data includes the up-sampled point cloud data;
taking point cloud data within a third preset range as the updated point cloud data of the current frame;
wherein the distance values included in the second preset range are larger than those included in the third preset range, and the distance values included in the third preset range are larger than those included in the first preset range.
8. A lidar denoising apparatus, the apparatus comprising:
the acquisition module is used for acquiring the point cloud data of the current frame;
the determining module is used for determining the current image moment of the outline corresponding to the point cloud data;
the filtering module is used for filtering point cloud data of a target object in the point cloud data corresponding to each contour based on the current image moment and adjacent image moments of adjacent frames to obtain the target point cloud data, the adjacent frames comprise a frame before the current frame and a frame after the current frame, and deformation of the target object among different frames is larger than a set deformation threshold value.
9. A vehicle, characterized in that the vehicle comprises:
at least one lidar;
at least one processor; and
a memory communicatively coupled to the at least one processor; the at least one lidar is communicatively coupled to the at least one processor and the memory; wherein,,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the lidar denoising method of any of claims 1-7.
10. A computer readable storage medium storing computer instructions for causing a processor to perform the lidar denoising method of any of claims 1 to 7.
CN202310479284.6A 2023-04-26 2023-04-26 Laser radar denoising method and device, vehicle and storage medium Pending CN116500590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310479284.6A CN116500590A (en) 2023-04-26 2023-04-26 Laser radar denoising method and device, vehicle and storage medium


Publications (1)

Publication Number Publication Date
CN116500590A true CN116500590A (en) 2023-07-28

Family

ID=87324486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310479284.6A Pending CN116500590A (en) 2023-04-26 2023-04-26 Laser radar denoising method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN116500590A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination