CN112883820A - Road target 3D detection method and system based on laser radar point cloud - Google Patents


Info

Publication number: CN112883820A
Application number: CN202110107066.0A
Authority: CN (China)
Original language: Chinese (zh)
Other versions: CN112883820B (granted publication)
Legal status: Active (application granted)
Prior art keywords: point cloud, point cloud data, point, points, target
Inventors: 刘云翔, 任金鹏, 原鑫鑫
Original and current assignee: Shanghai Institute of Technology
Application filed by Shanghai Institute of Technology. Priority to CN202110107066.0A; published as CN112883820A; application granted and published as CN112883820B.

Classifications

    • G06F18/214 Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/2321 Pattern recognition: non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06T5/70 Image enhancement or restoration: denoising; smoothing
    • G06V20/56 Scenes: context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06V20/64 Scenes: three-dimensional objects
    • G06T2207/10028 Image analysis indexing scheme: range image; depth image; 3D point clouds


Abstract

The invention provides a road target 3D detection method and system based on laser radar point cloud, comprising the following steps: receiving point cloud data acquired by a laser radar on a road; unpacking the point cloud data packet into individual frames and storing each frame in time order; screening each frame of point cloud data against a threshold on the number of points in a selected target area, discarding any frame with fewer points than the threshold; performing a statistical filtering operation on the point cloud data; filtering the preprocessed point cloud data to determine the target object point cloud data; clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set; and judging the type of each candidate with a pre-trained deep learning network model and calculating the spatial position information of the target object. By operating directly on the raw three-dimensional points, the method avoids the loss of data precision that occurs when three-dimensional point cloud data is mapped onto a two-dimensional plane.

Description

Road target 3D detection method and system based on laser radar point cloud
Technical Field
The invention relates to the field of computer vision, and in particular to a road target 3D detection method and system based on laser radar point clouds.
Background
Unmanned (driverless) vehicle technology has by now settled into a pipeline of multi-sensor information fusion, high-precision mapping and localization, environment perception, decision and path planning, and low-level vehicle control. Environment perception is the basis and precondition of safe driving: the vehicle's driving strategy, path planning, and low-level control all depend directly on robust, accurate perception.
In early research in the three-dimensional field, computing power was limited, so three-dimensional data was usually converted into two-dimensional images before processing: a point cloud can be projected to obtain a two-dimensional image, and a target can then be detected with standard image processing techniques. Although such two-dimensional methods can detect targets, the resulting target information is inaccurate, because depth and coordinate information are lost in the projection.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a road target 3D detection method and system based on laser radar point clouds that overcome the inability of the existing approach (projecting the three-dimensional point cloud to two dimensions and then applying two-dimensional image processing) to meet the target detection requirements of complex road scenes, improving target detection precision and reducing the false detection rate of the environment perception system.
The road target 3D detection method based on laser radar point clouds provided by the invention comprises the following steps:
step S1: receiving point cloud data acquired by a laser radar on a road, and storing a time-ordered segment of the point cloud data as a point cloud data packet;
step S2: unpacking each frame of point cloud data from the point cloud data packet, and storing the frames separately in time order;
step S3: screening each frame of point cloud data against a threshold on the number of points in the selected target area, and discarding any frame whose point count is below the threshold;
step S4: performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers, generating preprocessed point cloud data;
step S5: filtering the preprocessed point cloud data to remove ground point cloud data and thereby determine the target object point cloud data;
step S6: clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set;
step S7: judging the type of each candidate in the point cloud candidate set with a pre-trained deep learning network model, and, when a candidate is judged to be a preset target object, calculating the spatial position information of that target object.
Preferably, the point cloud data packet comprises multiple frames of three-dimensional point cloud spatial data per second, and the features of each point comprise the X, Y and Z axis spatial coordinates and the reflection intensity.
Preferably, the step S4 includes the steps of:
step S401: calculating the average distance from each point to its K nearest points, obtaining an average-distance result between each point and its K neighbours;
step S402: determining a standard distance range between neighbouring points from the average-distance results and the 3-sigma criterion;
step S403: comparing each point's average distance to its K neighbours against the standard distance range, and removing the points that fall outside it.
Preferably, the step S5 includes the steps of:
step S501: dividing the preprocessed point cloud data into N segments along the road direction;
step S502: for the point cloud data of each segment, first selecting the lowest-height points of the segment as a prior and fitting a plane to them;
step S503: calculating the distance from each point in the segment to the plane, and judging from each point's distance and a set threshold whether the point belongs to the plane, thereby generating a plane point set;
step S504: repeatedly executing step S503 to produce a fitted plane, the fitted plane being the plane that contains the ground point cloud data;
step S505: removing the ground point cloud data of each segment, then splicing the point cloud data of the N segments back together to generate target object point cloud data with the ground removed.
Preferably, the step S6 includes the steps of:
step S601: randomly selecting an unvisited point P in the target object point cloud data, searching for the points within radius R of P, counting the points in that range, and judging whether the count is greater than a preset point-number threshold;
step S602: when the number of points in the range is greater than the point-number threshold, establishing a cluster C with point P as its centre point and marking P as visited; when the number of points in the range is less than the set threshold, marking P as a noise point and likewise marking it as visited;
step S603: traversing all points within radius R and marking them as members of cluster C, then taking all points within R as core points and expanding outward to their neighbouring points, marking each newly absorbed point as a member of C, until no further points can be found;
step S604: repeating steps S601 to S603 until all points have been visited, and finally generating the point cloud target candidate set.
Preferably, in step S7, the deep learning network model is generated by training an improved PointNet network, and can be used to extract local features of the target object.
Preferably, the spatial position information of the target object includes a length, a width, a height, and a center point of the target object.
Preferably, in step S4 a RANSAC algorithm is adopted when performing the statistical filtering operation on the screened point cloud data.
Preferably, a DBSCAN algorithm is used for clustering the target object point cloud data in step S6.
The invention further provides a road target 3D detection system based on laser radar point clouds, comprising the following modules:
the data receiving module, used for receiving point cloud data acquired by a laser radar on a road and storing a time-ordered segment of the point cloud data as a point cloud data packet;
the data decomposition module, used for unpacking the point cloud data packet into individual frames of point cloud data and storing the frames separately in time order;
the data screening module, used for screening each frame of point cloud data against a threshold on the number of points in the selected target area and discarding any frame whose point count is below the threshold;
the data filtering module, used for performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers and generate preprocessed point cloud data;
the ground filtering module, used for filtering the preprocessed point cloud data to remove the ground point cloud data and thereby determine the target object point cloud data;
the data clustering module, used for clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set;
and the data identification module, used for judging the type of each candidate in the point cloud candidate set with a pre-trained deep learning network model and, when a candidate is judged to be a preset target object, calculating the spatial position information of that target object.
Compared with the prior art, the invention has the following beneficial effects:
By determining the spatial position information of the target object through direct processing of the raw point cloud data, the invention avoids the loss of data precision that occurs when three-dimensional point cloud data is mapped onto a two-dimensional plane, and remedies the lack of coordinate position information that traditional image-based target detection suffers in road target detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings used in their description are briefly introduced below. The drawings in the following description show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort. Other features, objects and advantages of the invention will become more apparent upon reading the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart illustrating steps of a method for 3D detection of a road target based on a lidar point cloud according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a laser radar point cloud-based road target 3D detection system according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but do not limit it in any way. It should be noted that persons skilled in the art can make variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a laser radar point cloud-based road target 3D detection method, and aims to solve the problems in the prior art.
The following describes the technical solutions of the present invention and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating steps of a laser radar point cloud-based road target 3D detection method according to an embodiment of the present invention, and as shown in fig. 1, the laser radar point cloud-based road target 3D detection method according to the present invention includes the following steps:
Step S1: receiving point cloud data acquired by a laser radar on a road, and storing a time-ordered segment of the point cloud data as a point cloud data packet;
in an embodiment of the present invention, the point cloud data packet includes 20 frames per second three-dimensional point cloud spatial data, and the feature of each point includes X, Y, Z-axis spatial coordinates and reflection intensity (reflected intensity). The spatial position information of the target object includes a length, a width, a height, and a center point of the target object.
Step S2: unpacking each frame of point cloud data from the point cloud data packet, and storing the frames separately in time order;
Step S3: screening each frame of point cloud data against a threshold on the number of points in the selected target area, and discarding any frame whose point count is below the threshold;
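The screening of step S3 can be sketched as follows. The frame layout (an (N, 4) array of x, y, z, intensity per point), the region bounds, and the threshold values are illustrative assumptions, not details fixed by the patent:

```python
import numpy as np

def screen_frames(frames, region_min, region_max, min_points):
    """Keep only frames whose point count inside the target region
    meets the threshold (step S3). Each frame is an (N, 4) array of
    [x, y, z, intensity] rows; region bounds are (x, y, z) triples."""
    kept = []
    for frame in frames:
        xyz = frame[:, :3]
        in_region = np.all((xyz >= region_min) & (xyz <= region_max), axis=1)
        if np.count_nonzero(in_region) >= min_points:
            kept.append(frame)
    return kept

# Two toy frames: one dense inside the region, one nearly empty.
dense = np.random.rand(100, 4) * [10, 10, 2, 1]
sparse = np.random.rand(5, 4) * [10, 10, 2, 1]
kept = screen_frames([dense, sparse], (0, 0, 0), (10, 10, 2), min_points=50)
```

Only the dense frame survives; the sparse frame is discarded as invalid.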
Step S4: performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers, generating preprocessed point cloud data;
in the embodiment of the present invention, in step S4, a RANSAC algorithm is used to perform a statistical filtering operation on the filtered point cloud data. The outliers are often caused by problems such as occlusion by the acquisition equipment.
In this embodiment of the present invention, the step S4 includes the following steps:
step S401: calculating the average distance from each point to its K nearest points, obtaining an average-distance result between each point and its K neighbours;
step S402: determining a standard distance range between neighbouring points from the average-distance results and the 3-sigma criterion;
step S403: comparing each point's average distance to its K neighbours against the standard distance range, and removing the points that fall outside it.
In the embodiment of the invention, an invalid point cloud is defined as one whose density at a given location is below the threshold; point cloud data at such locations is considered invalid and removed.
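A minimal sketch of the statistical filtering of steps S401 to S403, using a brute-force nearest-neighbour search and the 3-sigma criterion; the value of K and the synthetic data are illustrative assumptions:

```python
import numpy as np

def statistical_filter(points, k=8, sigma_mult=3.0):
    """Statistical outlier removal (steps S401-S403): compute each
    point's mean distance to its K nearest neighbours, then drop
    points whose mean distance falls outside mean +/- 3*sigma."""
    xyz = points[:, :3]
    # Pairwise distances (fine for a sketch; use a KD-tree at scale).
    diff = xyz[:, None, :] - xyz[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    # Mean distance to the K nearest neighbours (column 0 is the point itself).
    knn = np.sort(dist, axis=1)[:, 1:k + 1]
    mean_d = knn.mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    keep = np.abs(mean_d - mu) <= sigma_mult * sigma
    return points[keep]

# A dense cluster plus one far-away outlier.
rng = np.random.default_rng(0)
cloud = rng.normal(0, 0.1, size=(200, 4))
outlier = np.array([[50.0, 50.0, 50.0, 0.0]])
filtered = statistical_filter(np.vstack([cloud, outlier]))
```

The distant point's mean neighbour distance lies far outside the 3-sigma band, so it is removed while the dense cluster is kept.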
Step S5: filtering the preprocessed point cloud data to remove ground point cloud data and thereby determine the target object point cloud data;
in this embodiment of the present invention, the step S5 includes the following steps:
step S501: dividing the preprocessed point cloud data into N segments along the road direction, wherein N is a natural number greater than 2;
step S502: for the point cloud data of each segment, first selecting the lowest-height points of the segment as a prior and fitting a plane to them;
step S503: calculating the distance from each point in the segment to the plane, and judging from each point's distance and a set threshold whether the point belongs to the plane, thereby generating a plane point set;
step S504: repeatedly executing step S503 to produce a fitted plane, the fitted plane being the plane that contains the ground point cloud data;
step S505: removing the ground point cloud data of each segment, then splicing the point cloud data of the N segments back together to generate target object point cloud data with the ground removed.
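The ground removal of steps S501 to S505 can be sketched as follows. The least-squares plane fit, the segment count, the seed fraction, and the distance threshold are illustrative choices; the patent does not fix the fitting method:

```python
import numpy as np

def remove_ground(points, n_segments=4, dist_thresh=0.2, n_iters=3):
    """Ground removal sketch (steps S501-S505): split the cloud into
    N segments along the road (x) axis; in each segment seed a plane
    from the lowest points, iteratively refit it from the points
    within dist_thresh of the current plane, and keep the rest."""
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max() + 1e-9, n_segments + 1)
    non_ground = []
    for i in range(n_segments):
        seg = points[(x >= edges[i]) & (x < edges[i + 1])]
        if len(seg) < 3:
            non_ground.append(seg)
            continue
        # Seed: the lowest-height points act as the prior (step S502).
        seed = seg[np.argsort(seg[:, 2])[:max(3, len(seg) // 10)]]
        plane_pts = seed
        for _ in range(n_iters):
            # Fit z = a*x + b*y + c by least squares.
            A = np.c_[plane_pts[:, :2], np.ones(len(plane_pts))]
            coef, *_ = np.linalg.lstsq(A, plane_pts[:, 2], rcond=None)
            a, b, c = coef
            # Point-to-plane distance (step S503).
            d = np.abs(seg[:, 2] - (a * seg[:, 0] + b * seg[:, 1] + c))
            d /= np.sqrt(a * a + b * b + 1.0)
            plane_pts = seg[d < dist_thresh]
        non_ground.append(seg[d >= dist_thresh])
    # Splice the segments back together (step S505).
    return np.vstack(non_ground)

# Flat ground at z ~ 0 plus a box-shaped object at z ~ 1.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 20, (500, 2)), rng.normal(0, 0.02, 500), np.ones(500)]
obj = np.c_[rng.uniform(9, 11, (100, 2)), rng.uniform(0.5, 1.5, 100), np.ones(100)]
targets = remove_ground(np.vstack([ground, obj]))
```

On the toy scene the 500 ground points are fitted and discarded, leaving only the elevated object points as target point cloud data.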
Step S6: clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set;
in the embodiment of the present invention, in step S6, a density-based DBSCAN algorithm is used for clustering the target object point cloud data.
In this embodiment of the present invention, the step S6 includes the following steps:
step S601: randomly selecting an unvisited point P in the target object point cloud data, searching for the points within radius R of P, counting the points in that range, and judging whether the count is greater than a preset point-number threshold;
step S602: when the number of points in the range is greater than the point-number threshold, establishing a cluster C with point P as its centre point and marking P as visited; when the number of points in the range is less than the set threshold, marking P as a noise point and likewise marking it as visited;
step S603: traversing all points within radius R and marking them as members of cluster C, then taking all points within R as core points and expanding outward to their neighbouring points, marking each newly absorbed point as a member of C, until no further points can be found;
step S604: repeating steps S601 to S603 until all points have been visited, and finally generating the point cloud target candidate set.
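Steps S601 to S604 describe a standard DBSCAN. A brute-force sketch follows, with eps (the radius R) and min_pts (the point-number threshold) chosen arbitrarily for the toy data:

```python
import numpy as np
from collections import deque

def dbscan(points, eps=0.5, min_pts=5):
    """DBSCAN sketch (steps S601-S604): labels[i] is the cluster id
    of point i, or -1 for noise. Brute-force neighbour search."""
    xyz = points[:, :3]
    n = len(xyz)
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        neighbours = np.where(np.linalg.norm(xyz - xyz[p], axis=1) <= eps)[0]
        if len(neighbours) < min_pts:
            continue  # point P stays marked as noise (step S602)
        labels[p] = cluster
        queue = deque(neighbours)
        while queue:  # grow cluster C outward (step S603)
            q = queue.popleft()
            if labels[q] == -1:
                labels[q] = cluster
            if visited[q]:
                continue
            visited[q] = True
            q_nb = np.where(np.linalg.norm(xyz - xyz[q], axis=1) <= eps)[0]
            if len(q_nb) >= min_pts:  # q is a core point: keep expanding
                queue.extend(q_nb)
        cluster += 1
    return labels

# Two well-separated blobs plus a lone noise point.
rng = np.random.default_rng(2)
a = np.c_[rng.normal(0, 0.05, (30, 3))]
b = np.c_[rng.normal(5, 0.05, (30, 3))]
noise = np.array([[2.5, 2.5, 2.5]])
labels = dbscan(np.vstack([a, b, noise]), eps=0.5, min_pts=5)
```

Each blob becomes one cluster (a candidate target), and the isolated point is labelled noise.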
Step S7: judging the type of each candidate in the point cloud candidate set with the pre-trained deep learning network model, and, when a candidate is judged to be a preset target object, calculating the spatial position information of that target object.
In the embodiment of the present invention, in step S7 the deep learning network model is generated by training an improved PointNet network and is used to judge the category of each candidate in the point cloud candidate set; the improved PointNet network can extract local features of the target object.
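The spatial position information computed in step S7 (length, width, height, centre point) can be derived from a candidate cluster's bounding box. The axis-aligned box below is a simplifying assumption of this sketch; the patent does not specify the computation, and rotated vehicles would call for an oriented box:

```python
import numpy as np

def spatial_position(cluster):
    """Length, width, height and centre point of a candidate cluster,
    from its axis-aligned bounding box (a simplification: rotated
    targets would need an oriented bounding box instead)."""
    xyz = cluster[:, :3]
    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    length, width, height = hi - lo
    center = (lo + hi) / 2.0
    return length, width, height, center

# A toy car-sized cluster: 4 m x 2 m x 1.5 m, centred at (10, 3, 0.75).
corners = np.array([[8, 2, 0, 1], [12, 4, 1.5, 1]], dtype=float)
l, w, h, c = spatial_position(corners)
```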
Fig. 2 is a schematic block diagram of a laser radar point cloud-based road target 3D detection system in an embodiment of the present invention, and as shown in fig. 2, the laser radar point cloud-based road target 3D detection system provided by the present invention includes the following modules:
the data receiving module, used for receiving point cloud data acquired by a laser radar on a road and storing a time-ordered segment of the point cloud data as a point cloud data packet;
the data decomposition module, used for unpacking the point cloud data packet into individual frames of point cloud data and storing the frames separately in time order;
the data screening module, used for screening each frame of point cloud data against a threshold on the number of points in the selected target area and discarding any frame whose point count is below the threshold;
the data filtering module, used for performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers and generate preprocessed point cloud data;
the ground filtering module, used for filtering the preprocessed point cloud data to remove the ground point cloud data and thereby determine the target object point cloud data;
the data clustering module, used for clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set;
and the data identification module, used for judging the type of each candidate in the point cloud candidate set with a pre-trained deep learning network model and, when a candidate is judged to be a preset target object, calculating the spatial position information of that target object.
In the embodiment of the invention, the spatial position information of the target object is determined by directly processing the raw point cloud data, which avoids the loss of data precision that occurs when three-dimensional point cloud data is mapped onto a two-dimensional plane, and remedies the lack of coordinate position information that traditional image-based target detection suffers in road target detection.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention.

Claims (10)

1. A road target 3D detection method based on laser radar point cloud is characterized by comprising the following steps:
step S1: receiving point cloud data acquired by a laser radar on a road, and storing a time-ordered segment of the point cloud data as a point cloud data packet;
step S2: unpacking each frame of point cloud data from the point cloud data packet, and storing the frames separately in time order;
step S3: screening each frame of point cloud data against a threshold on the number of points in the selected target area, and discarding any frame whose point count is below the threshold;
step S4: performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers, generating preprocessed point cloud data;
step S5: filtering the preprocessed point cloud data to remove ground point cloud data and thereby determine the target object point cloud data;
step S6: clustering the target object point cloud data so that dense point cloud regions merge into a point cloud target candidate set;
step S7: judging the type of each candidate in the point cloud candidate set with a pre-trained deep learning network model, and, when a candidate is judged to be a preset target object, calculating the spatial position information of that target object.
2. The lidar point cloud-based road target 3D detection method of claim 1, wherein the point cloud data packet comprises multiple frames of three-dimensional point cloud spatial data per second, and the features of each point comprise the X, Y and Z axis spatial coordinates and the reflection intensity.
3. The lidar point cloud-based road target 3D detection method according to claim 1, wherein the step S4 comprises the steps of:
step S401: calculating the average distance from each point to its K nearest points, obtaining an average-distance result between each point and its K neighbours;
step S402: determining a standard distance range between neighbouring points from the average-distance results and the 3-sigma criterion;
step S403: comparing each point's average distance to its K neighbours against the standard distance range, and removing the points that fall outside it.
4. The lidar point cloud-based road target 3D detection method according to claim 1, wherein the step S5 comprises the steps of:
step S501: dividing the preprocessed point cloud data into N segments along the road direction;
step S502: for the point cloud data of each segment, first selecting the lowest-height points of the segment as a prior and fitting a plane to them;
step S503: calculating the distance from each point in the segment to the plane, and judging from each point's distance and a set threshold whether the point belongs to the plane, thereby generating a plane point set;
step S504: repeatedly executing step S503 to produce a fitted plane, the fitted plane being the plane that contains the ground point cloud data;
step S505: removing the ground point cloud data of each segment, then splicing the point cloud data of the N segments back together to generate target object point cloud data with the ground removed.
5. The lidar point cloud-based road target 3D detection method according to claim 1, wherein the step S6 comprises the steps of:
step S601: randomly selecting an unvisited point P in the target object point cloud data, searching for the points within radius R of P, counting the points in that range, and judging whether the count is greater than a preset point-number threshold;
step S602: when the number of points in the range is greater than the point-number threshold, establishing a cluster C with point P as its centre point and marking P as visited; when the number of points in the range is less than the set threshold, marking P as a noise point and likewise marking it as visited;
step S603: traversing all points within radius R and marking them as members of cluster C, then taking all points within R as core points and expanding outward to their neighbouring points, marking each newly absorbed point as a member of C, until no further points can be found;
step S604: repeating steps S601 to S603 until all points have been visited, and finally generating the point cloud target candidate set.
6. The lidar point cloud-based road target 3D detection method of claim 1, wherein in step S7 the deep learning network model is generated by training an improved PointNet network, and can be used for local feature extraction of the target object.
7. The lidar point cloud-based road target 3D detection method of claim 1, wherein the spatial position information of the target object comprises a length, a width, a height, and a center point of the target object.
8. The lidar point cloud-based road target 3D detection method of claim 1, wherein a RANSAC algorithm is adopted when performing the statistical filtering operation on the screened point cloud data in step S4.
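Statistical filtering of point clouds is commonly implemented as statistical outlier removal: discard points whose mean distance to their k nearest neighbours exceeds the population mean by several standard deviations. The sketch below shows that generic technique, not necessarily the claimed RANSAC variant; `k` and `n_std` are illustrative values:

```python
import numpy as np

def statistical_outlier_removal(points, k=5, n_std=2.0):
    """Remove points whose mean k-NN distance is anomalously large."""
    # Full pairwise distance matrix (fine for small clouds; use a KD-tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Mean distance to the k nearest neighbours, excluding the point itself.
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    # Keep points within n_std standard deviations of the mean neighbour distance.
    keep = knn_mean <= knn_mean.mean() + n_std * knn_mean.std()
    return points[keep]
```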
9. The lidar point cloud-based road target 3D detection method of claim 1, wherein a DBSCAN algorithm is adopted in clustering the target object point cloud data in step S6.
10. A lidar point cloud-based road target 3D detection system, characterized by comprising the following modules:
the data receiving module is used for receiving point cloud data collected by a lidar on a road and storing a segment of the point cloud data, intercepted in time order, as a point cloud data packet;
the data decomposition module is used for decomposing the point cloud data packet into individual frames of point cloud data and storing each frame in time order;
the data screening module is used for screening each frame of point cloud data against a threshold on the number of points in the selected target area, so as to discard any frame whose point count is smaller than the threshold;
the data filtering module is used for performing a statistical filtering operation on the screened point cloud data, so as to remove noise points and outliers and generate preprocessed point cloud data;
the ground filtering module is used for filtering the preprocessed point cloud data to remove the ground point cloud data, so as to determine the target object point cloud data;
the data clustering module is used for clustering the target object point cloud data so as to merge dense point cloud data and generate a point cloud target candidate set;
and the data identification module is used for judging the type of the point cloud target candidate set according to the pre-trained deep learning network model, and for calculating the spatial position information of a target object when the candidate set is judged to be a preset target object.
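The data screening module's behaviour (discarding frames whose in-region point count falls below a threshold) can be sketched as follows; the region bounds and `min_points` are deployment-specific assumptions, not values from the claims:

```python
import numpy as np

def screen_frames(frames, region_min, region_max, min_points):
    """Keep only frames with at least min_points inside the target region.

    frames: list of (N, 3) arrays; region_min / region_max: (3,) arrays
    bounding an axis-aligned target region.
    """
    kept = []
    for frame in frames:
        # A point is in the region if all three coordinates lie within bounds.
        in_region = np.all((frame >= region_min) & (frame <= region_max), axis=1)
        if in_region.sum() >= min_points:
            kept.append(frame)
    return kept
```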
CN202110107066.0A 2021-01-26 2021-01-26 Road target 3D detection method and system based on laser radar point cloud Active CN112883820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110107066.0A CN112883820B (en) 2021-01-26 2021-01-26 Road target 3D detection method and system based on laser radar point cloud

Publications (2)

Publication Number Publication Date
CN112883820A true CN112883820A (en) 2021-06-01
CN112883820B CN112883820B (en) 2024-04-19

Family

ID=76052426

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018205119A1 (en) * 2017-05-09 2018-11-15 深圳市速腾聚创科技有限公司 Roadside detection method and system based on laser radar scanning
CN110780305A (en) * 2019-10-18 2020-02-11 华南理工大学 Track cone bucket detection and target point tracking method based on multi-line laser radar
CN110879401A (en) * 2019-12-06 2020-03-13 南京理工大学 Unmanned platform real-time target 3D detection method based on camera and laser radar
CN111652926A (en) * 2020-04-30 2020-09-11 南京理工大学 Real-time three-dimensional target detection method based on sparse point cloud data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Zhangfei; LIU Chunyang; SUI Xin; YANG Fang; MA Xiqiang; CHEN Lihai: "3D point cloud target segmentation and collision detection based on depth projection", Optics and Precision Engineering, no. 07 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655477B (en) * 2021-06-11 2023-09-01 成都圭目机器人有限公司 Method for automatically detecting geological diseases by adopting shallow layer ground radar
CN113655477A (en) * 2021-06-11 2021-11-16 成都圭目机器人有限公司 Method for automatically detecting geological diseases by adopting shallow-layer ground radar
CN113705617A (en) * 2021-07-30 2021-11-26 北京万集科技股份有限公司 Point cloud data processing method and device, computer equipment and storage medium
CN113925479A (en) * 2021-08-27 2022-01-14 上海赫千电子科技有限公司 Life monitoring method and device based on intelligent vehicle-mounted box
CN113925479B (en) * 2021-08-27 2023-10-03 上海赫千电子科技有限公司 Life monitoring method and device based on intelligent vehicle-mounted box
CN113744338A (en) * 2021-10-29 2021-12-03 青岛影创信息科技有限公司 Depth video space plane detection method and system
CN114355381A (en) * 2021-12-31 2022-04-15 安徽海博智能科技有限责任公司 Laser radar point cloud quality detection and improvement method
CN114355381B (en) * 2021-12-31 2022-09-09 安徽海博智能科技有限责任公司 Laser radar point cloud quality detection and improvement method
CN114359758B (en) * 2022-03-18 2022-06-14 广东电网有限责任公司东莞供电局 Power transmission line detection method and device, computer equipment and storage medium
CN114359758A (en) * 2022-03-18 2022-04-15 广东电网有限责任公司东莞供电局 Power transmission line detection method and device, computer equipment and storage medium
CN115100491B (en) * 2022-08-25 2022-11-18 山东省凯麟环保设备股份有限公司 Abnormal robust segmentation method and system for complex automatic driving scene
CN115100491A (en) * 2022-08-25 2022-09-23 山东省凯麟环保设备股份有限公司 Abnormal robust segmentation method and system for complex automatic driving scene
US11954917B2 (en) 2022-08-25 2024-04-09 Shandong Kailin Environmental Protection Equipment Co., Ltd. Method of segmenting abnormal robust for complex autonomous driving scenes and system thereof

Similar Documents

Publication Publication Date Title
CN112883820B (en) Road target 3D detection method and system based on laser radar point cloud
US11709058B2 (en) Path planning method and device and mobile device
CN110032962B (en) Object detection method, device, network equipment and storage medium
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN108256577B (en) Obstacle clustering method based on multi-line laser radar
CN109087510B (en) Traffic monitoring method and device
US9142011B2 (en) Shadow detection method and device
CN110942449A (en) Vehicle detection method based on laser and vision fusion
CN110032949A (en) A kind of target detection and localization method based on lightweight convolutional neural networks
CN111563442A (en) SLAM method and system for fusing point cloud and camera image data based on laser radar
CN110674705A (en) Small-sized obstacle detection method and device based on multi-line laser radar
CN115049700A (en) Target detection method and device
CN111091023B (en) Vehicle detection method and device and electronic equipment
CN110363771B (en) Isolation guardrail shape point extraction method and device based on three-dimensional point cloud data
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
CN114179788B (en) Automatic parking method, system, computer readable storage medium and vehicle terminal
CN112528781B (en) Obstacle detection method, device, equipment and computer readable storage medium
EP3703008A1 (en) Object detection and 3d box fitting
CN114089330A (en) Indoor mobile robot glass detection and map updating method based on depth image restoration
US11919160B2 (en) Fall detection and assistance
CN116573017A (en) Urban rail train running clearance foreign matter sensing method, system, device and medium
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN114241448A (en) Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle
CN111813882B (en) Robot map construction method, device and storage medium
CN111899277B (en) Moving object detection method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant