CN112578405B - Method and system for removing ground based on laser radar point cloud data - Google Patents


Publication number
CN112578405B
Authority
CN
China
Prior art keywords
point cloud
laser radar
ground
cloud data
image data
Prior art date
Legal status: Active (assumed; Google has not performed a legal analysis)
Application number
CN202011179679.7A
Other languages
Chinese (zh)
Other versions
CN112578405A
Inventor
周欣
姚明江
Current Assignee (listed assignees may be inaccurate)
SAIC Volkswagen Automotive Co Ltd
Original Assignee
SAIC Volkswagen Automotive Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by SAIC Volkswagen Automotive Co Ltd
Priority to CN202011179679.7A
Publication of CN112578405A
Application granted
Publication of CN112578405B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/42Simultaneous measurement of distance and other co-ordinates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for removing the ground from lidar point cloud data. First, a lidar collects point cloud data, which is converted from the lidar coordinate system into the vehicle coordinate system. The point cloud is then organized into a grid structure, while a camera acquires image data in which the drivable area is identified, producing image data carrying a drivable-area identification. A conversion matrix that projects the lidar point cloud onto the image data is then calibrated, and the gridded point cloud is projected through this matrix onto the identified image data, yielding the ground grid cells on the image. The point cloud is traversed to screen out ground point cloud candidates, and for the candidates falling in each ground grid cell a fitted plane is obtained. Finally, the point cloud is traversed again, the distance from each point to the fitted plane of its grid cell is computed, and points closer than a set distance threshold are removed as ground.

Description

Method and system for removing ground based on laser radar point cloud data
Technical Field
The invention relates to an image processing method, and in particular to an image processing method for the intelligent driving of vehicles.
Background
Three-dimensional lidar has a higher operating frequency than conventional microwave radar, so it can provide environmental information in real time. It also offers high angular, range, and velocity resolution, and is therefore widely used for environment perception in autonomous vehicles.
However, when three-dimensional lidar is used for the environment perception of an autonomous vehicle, the returned point cloud contains both obstacles and the ground, so the ground points must be removed first in order to extract the obstacle information.
Existing ground point cloud detection algorithms include the grid height-difference method and the plane-model fitting method. The two are described below:
Grid height-difference method: this method classifies each grid cell of the point cloud according to the magnitude of the height difference within the cell. Genuine ground points in a cell do exhibit a small height difference, but points on an elevated platform satisfy the same criterion, so after rasterization the platform points are still classified as ground. Moreover, when a cell contains both ground points and other obstacles, the ground points cannot be separated from the obstacle points.
Plane-model fitting method: a plane is fitted to the ground point cloud candidates within each grid cell. If a cell contains many points belonging to an elevated platform or a roof and only a few true ground points, this method removes the platform or roof as if it were the ground.
In view of the above deficiencies in the prior art, the invention aims to provide a method and a system for removing the ground based on lidar point cloud data that can effectively exploit image information and eliminate false ground detections on elevated platforms and roofs within the detection range of the surround-view camera, and that therefore have good prospects for adoption and application.
Disclosure of Invention
One object of the invention is to provide, in view of the deficiencies in the prior art, a method for removing the ground based on lidar point cloud data. The method can effectively exploit image information to eliminate false ground detections on elevated platforms and roofs within the detection range of the surround-view camera, and has good prospects for adoption and application.
To this end, the invention provides a method for removing the ground based on lidar point cloud data, comprising the following steps:
(1) Collect lidar point cloud data with a lidar mounted on the vehicle, and convert the data from the lidar coordinate system into the vehicle coordinate system;
(2) Construct lidar point cloud data with a grid structure, the gridded point cloud having a set of grid cells;
(3) Acquire image data with a camera, and identify the drivable area in the acquired image data to obtain image data carrying a drivable-area identification;
(4) Calibrate a conversion matrix for projecting the lidar point cloud data onto the image data;
(5) Project the gridded lidar point cloud data onto the image data carrying the drivable-area identification using the conversion matrix, obtaining the ground grid cells on the image data;
(6) Traverse the lidar point cloud data of step (2), and screen out ground point cloud candidates according to whether the pixel position to which each point projects falls within the drivable area;
(7) For each ground grid cell, obtain a fitted plane from the ground point cloud candidates distributed in that cell;
(8) Traverse the lidar point cloud data of step (2) and compute the distance from each point to the fitted plane of its ground grid cell; if the distance is smaller than a set distance threshold, judge that the point represents the ground and remove it.
Further, in the method for removing the ground based on lidar point cloud data, step (1) further comprises: performing a filtering pre-processing on the collected lidar point cloud data.
In some embodiments of the above solution, in step (1) the collected lidar point cloud data may additionally be filtered in the x, y and z directions according to a range of interest. Lidar point clouds are generally very large, and some regions are of no concern while the vehicle is driving, such as points that are too high or too far away. The point cloud can therefore be filtered in the x, y and z directions so that only the data within the range of interest is retained for subsequent processing.
Further, in the method for removing the ground based on lidar point cloud data, in step (3) the drivable area of the acquired image data is identified by a deep convolutional neural network.
Further, in the method for removing the ground based on lidar point cloud data, step (6) comprises a pre-screening step: for each point in the lidar point cloud data, preliminarily judge whether it can serve as a ground candidate based on the known drivable-area range in a high-precision map. If the point falls within that range, further judge whether the pixel position to which it projects falls within the drivable area of the image data; if it does not fall within the known drivable-area range of the high-precision map, screen the point out.
Still further, in the method for removing the ground based on lidar point cloud data, step (7) comprises the steps of:
(a) Randomly draw three ground point cloud candidates within each ground grid cell to construct a preliminary fitted plane;
(b) Judge whether the absolute distances from the remaining ground candidates in the cell to the preliminary fitted plane satisfy a plane-fitting condition; if so, take the preliminary fitted plane as the fitted plane of that cell and finish; if not, return to step (a).
Further, in the method for removing the ground based on lidar point cloud data, the plane-fitting condition is that at least 99% of the points in the ground grid cell lie within a set threshold distance of the preliminary fitted plane.
Accordingly, another object of the present invention is to provide a system for removing the ground based on lidar point cloud data. The system can effectively exploit image information to eliminate false ground detections on elevated platforms and roofs within the detection range of the surround-view camera, and has good prospects for adoption and application.
To this end, the invention provides a system for removing the ground based on lidar point cloud data, comprising:
a lidar arranged on a vehicle, which collects lidar point cloud data;
a camera arranged on the vehicle, which collects image data; and
a processing module in data connection with the lidar and the camera, which performs the following steps on the lidar point cloud data transmitted from the lidar and the image data transmitted from the camera:
(1) Convert the lidar point cloud data from the lidar coordinate system into the vehicle coordinate system;
(2) Construct lidar point cloud data with a grid structure, the gridded point cloud having a set of grid cells;
(3) Identify the drivable area in the image data to obtain image data carrying a drivable-area identification;
(4) Calibrate a conversion matrix for projecting the lidar point cloud data onto the image data;
(5) Project the gridded three-dimensional lidar point cloud data onto the image data carrying the drivable-area identification using the conversion matrix, obtaining the ground grid cells on the image data;
(6) Traverse the lidar point cloud data of step (2), and screen out ground point cloud candidates according to whether the pixel position to which each point projects falls within the drivable area;
(7) For each ground grid cell, obtain a fitted plane from the ground point cloud candidates distributed in that cell;
(8) Traverse the lidar point cloud data of step (2) and compute the distance from each point to the fitted plane of its ground grid cell; if the distance is smaller than a set distance threshold, judge that the point represents the ground and remove it.
Furthermore, in the system for removing the ground based on lidar point cloud data, the processing module performs a filtering pre-processing on the collected lidar point cloud data.
Furthermore, in the system for removing the ground based on lidar point cloud data, the processing module identifies the drivable area of the image data by a deep convolutional neural network.
Further, in the system for removing the ground based on lidar point cloud data, step (6) comprises a pre-screening step: for each point in the lidar point cloud data, preliminarily judge whether it can serve as a ground candidate based on the known drivable-area range in a high-precision map; if the point falls within that range, further judge whether the pixel position to which it projects falls within the drivable area of the image data; if it does not, screen the point out; and/or
step (7) comprises the steps of: (a) randomly draw three ground point cloud candidates within each ground grid cell to construct a preliminary fitted plane; (b) judge whether the absolute distances from the remaining ground candidates in the cell to the preliminary fitted plane satisfy a plane-fitting condition; if so, take the preliminary fitted plane as the fitted plane of that cell and finish; if not, return to step (a).
The method for removing the ground based on lidar point cloud data of the invention can effectively exploit image information to eliminate false ground detections on elevated platforms and roofs within the detection range of the surround-view camera, and has good prospects for adoption and application.
Accordingly, the system for removing the ground based on lidar point cloud data of the invention can be used to implement the above method, and likewise enjoys its advantages and beneficial effects.
Drawings
Fig. 1 schematically shows a flowchart of steps of a method for removing ground based on lidar point cloud data according to an embodiment of the present invention.
Detailed Description
The method and system for removing the ground based on lidar point cloud data according to the present invention are further explained and illustrated below with reference to the drawings and specific embodiments; this explanation and illustration, however, should not be construed as an undue limitation of the technical solution of the invention.
The invention discloses a system for removing the ground based on lidar point cloud data, which can be used to implement the method for removing the ground based on lidar point cloud data of the invention.
The system of the invention may comprise: a lidar, a camera, and a processing module. The lidar and the camera are both arranged on the vehicle; the lidar collects lidar point cloud data and the camera collects image data. The processing module is in data connection with the lidar and the camera, and can perform the steps of the method on the lidar point cloud data transmitted from the lidar and the image data transmitted from the camera, as shown in figure 1.
Fig. 1 schematically shows a flowchart of steps of a method for removing ground based on lidar point cloud data according to an embodiment of the present invention.
As shown in fig. 1, in this embodiment the method for removing the ground based on lidar point cloud data of the invention comprises steps (1) to (8):
(1) Collect lidar point cloud data with a lidar mounted on the vehicle, and convert it from the lidar coordinate system into the vehicle coordinate system.
In step (1), the lidar point cloud data collected by the lidar is a set of three-dimensional coordinate points with the lidar as the origin of the coordinate system; through the lidar's calibrated extrinsic parameters, the data can be converted from the lidar coordinate system into the vehicle coordinate system.
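As an illustration, this rigid transform by the calibrated extrinsics can be sketched as follows; the rotation and translation values below are hypothetical mounting parameters, not calibration values from the patent:

```python
import numpy as np

def lidar_to_vehicle(points_lidar, R, t):
    """Apply the calibrated extrinsics (rotation R, translation t) to an
    (N, 3) array of lidar-frame points, returning vehicle-frame points."""
    return points_lidar @ R.T + t

# Illustrative extrinsics: lidar mounted 1.6 m above the vehicle origin,
# axes aligned with the vehicle frame (real values come from calibration).
R = np.eye(3)
t = np.array([0.0, 0.0, 1.6])
pts_vehicle = lidar_to_vehicle(np.array([[10.0, 2.0, -1.6]]), R, t)
```

A point measured 1.6 m below the sensor thus lands at z = 0 in the vehicle frame.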
In this embodiment, in step (1) the collected lidar point cloud data may be filtered in the x, y and z directions according to a range of interest. Lidar point clouds are generally very large, and some regions are of no concern while the vehicle is driving, such as points that are too high or too far away. The point cloud therefore needs to be filtered in the x, y and z directions so that only the data within the range of interest is retained for subsequent processing.
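A minimal sketch of this range-of-interest filtering; the default limits are illustrative assumptions, not values specified in the patent:

```python
import numpy as np

def roi_filter(points, x_lim=(-50.0, 50.0), y_lim=(-20.0, 20.0), z_lim=(-2.0, 1.0)):
    """Keep only the points inside the range of interest (vehicle frame,
    metres). Each axis is clipped independently, as in the x/y/z filtering
    described above."""
    keep = np.ones(len(points), dtype=bool)
    for axis, (lo, hi) in enumerate((x_lim, y_lim, z_lim)):
        keep &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[keep]
```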
(2) Construct lidar point cloud data with a grid structure, the gridded point cloud having a set of grid cells.
In this embodiment, the filtered lidar point cloud data obtained as above can be divided into grid cells in the x and y directions, with the points belonging to each cell stored together in a corresponding space to make subsequent lookup convenient. The cell size can be adjusted to the actual situation; a 20 cm × 20 cm cell is usually chosen. This yields the lidar point cloud data with a grid structure of step (2) of the invention.
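The grid construction above can be sketched as follows, using the 20 cm cell size; the dictionary-of-cells representation is an implementation assumption:

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell=0.2):
    """Bucket (N, 3) points into 20 cm x 20 cm cells in the x-y plane and
    store each cell's points together, so later steps can look up all the
    points of a grid cell directly."""
    cells = defaultdict(list)
    for p in points:
        key = (int(np.floor(p[0] / cell)), int(np.floor(p[1] / cell)))
        cells[key].append(p)
    return {k: np.asarray(v) for k, v in cells.items()}
```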
(3) Acquire image data with a camera, and identify the drivable area in the acquired image data to obtain image data carrying a drivable-area identification.
It should be noted that, in this embodiment, in step (3) of the invention a camera may be used to collect the image data, and the drivable area may be identified on the collected image data by a deep convolutional neural network. This outputs image data carrying the drivable-area identification, i.e. a judgment for every pixel of the forward-looking image collected by the camera as to whether it belongs to the drivable area.
(4) Calibrate a conversion matrix for projecting the lidar point cloud data onto the image data.
In step (4) of the invention, the conversion matrix that projects the lidar point cloud data onto the image data needs to be calibrated. The lidar point cloud data are 3D coordinate points, while the image data collected by the camera consist of 2D coordinate points; the projection from the 3D lidar point cloud to the 2D image pixels can be realized with existing calibration tools. Because this is a projection from 3D to 2D, a single 2D image pixel may correspond to several 3D lidar point cloud coordinates.
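A sketch of such a projection, modelled here as a pinhole camera with hypothetical intrinsic and extrinsic parameters; the patent only requires some calibrated conversion matrix, so this decomposition is an assumption:

```python
import numpy as np

def project_to_image(points_vehicle, K, R_vc, t_vc):
    """Project (N, 3) vehicle-frame points to (N, 2) pixel coordinates.
    K is the camera intrinsic matrix and (R_vc, t_vc) the vehicle-to-camera
    extrinsics; together they play the role of the conversion matrix."""
    cam = points_vehicle @ R_vc.T + t_vc   # vehicle frame -> camera frame
    uvw = cam @ K.T                        # apply the pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide -> (u, v)

# Illustrative intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# A point on the camera's optical axis projects to the principal point.
uv = project_to_image(np.array([[0.0, 0.0, 10.0]]), K, np.eye(3), np.zeros(3))
```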
(5) Project the gridded lidar point cloud data onto the image data carrying the drivable-area identification using the conversion matrix, obtaining the ground grid cells on the image data.
In step (5) of the invention, the grid cells of the gridded lidar point cloud data of step (2) are projected onto the image data carrying the drivable-area identification obtained in step (3), yielding the ground grid cells on the image data.
(6) Traverse the lidar point cloud data of step (2), and screen out ground point cloud candidates according to whether the pixel position to which each point projects falls within the drivable area.
It should be noted that, in this embodiment, step (6) of the invention may include a pre-screening step: for each point in the lidar point cloud data, preliminarily judge whether it can serve as a ground point cloud candidate based on the known drivable-area range in a high-precision map. If the point falls within that range, further judge whether the pixel position to which it projects falls within the drivable area of the image data obtained in step (3); if it does not fall within the known drivable-area range of the high-precision map, screen the point out.
If the pixel position to which a pre-screened point projects falls within the drivable area, the point is stored as a ground point cloud candidate of its grid cell; if it is not in the drivable area, the point is screened out and the next point is processed.
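The per-point screening test can be sketched as a simple mask lookup; representing the drivable area as a boolean pixel mask is an assumption for illustration:

```python
import numpy as np

def is_ground_candidate(uv, drivable_mask):
    """Return True when a projected pixel (u, v) lands inside the
    drivable-area mask (a boolean H x W array from the segmentation step);
    points projecting outside the image are rejected outright."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = drivable_mask.shape
    return 0 <= u < w and 0 <= v < h and bool(drivable_mask[v, u])
```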
(7) For each ground grid cell, obtain a fitted plane from the ground point cloud candidates distributed in that cell.
Accordingly, in this embodiment, step (7) of the invention may further comprise the steps of:
(a) Randomly draw three ground point cloud candidates within each ground grid cell to construct a preliminary fitted plane;
(b) Judge whether the absolute distances from the remaining ground candidates in the cell to the preliminary fitted plane satisfy a plane-fitting condition; if so, take the preliminary fitted plane as the fitted plane of the cell and finish; if not, return to step (a).
In addition, in the method of the invention, the plane-fitting condition may be that at least 99% of the points in the ground grid cell lie within a set threshold distance of the preliminary fitted plane; this threshold can typically be chosen as 20 cm.
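Steps (a)-(b) with this fitting condition amount to a RANSAC-style loop. A sketch under the stated 99% / 20 cm parameters; the sampling scheme and iteration cap are assumptions:

```python
import numpy as np

def fit_ground_plane(points, dist_thresh=0.2, inlier_ratio=0.99,
                     max_iters=100, seed=0):
    """Sample three candidate points, build a plane through them, and accept
    it once at least `inlier_ratio` of the cell's points lie within
    `dist_thresh` of it. Returns (n, d) for the plane n.x + d = 0 with unit
    normal n, or None if no sample satisfies the fitting condition."""
    rng = np.random.default_rng(seed)
    for _ in range(max_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        if (np.abs(points @ n + d) < dist_thresh).mean() >= inlier_ratio:
            return n, d
    return None
```

On nearly flat candidate points the accepted normal points close to the vertical axis, which is what makes the subsequent ground test meaningful.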
(8) Traverse the lidar point cloud data of step (2) and compute the distance from each point to the fitted plane of its ground grid cell; if the distance is smaller than a set distance threshold, judge that the point represents the ground and remove it. The distance threshold here can typically be chosen as 15 cm.
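Step (8) then reduces to a point-to-plane distance test per cell; a sketch with the 15 cm threshold mentioned above:

```python
import numpy as np

def remove_ground(points, plane, dist_thresh=0.15):
    """Drop the points lying within `dist_thresh` of the cell's fitted
    plane; what remains is kept as obstacle points. `plane` is (n, d) with
    unit normal n, i.e. the plane n.x + d = 0, as returned by the fitting
    step."""
    n, d = plane
    return points[np.abs(points @ n + d) >= dist_thresh]
```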
In conclusion, the method for removing the ground based on lidar point cloud data of the invention can effectively exploit image information to eliminate false ground detections on elevated platforms and roofs within the detection range of the surround-view camera, and has good prospects for adoption and application.
Accordingly, the system for removing the ground based on lidar point cloud data of the invention can be used to implement the above method, and likewise enjoys its advantages and beneficial effects.
It should be noted that the prior art within the protection scope of the present invention is not limited to the examples given in this specification; all prior art that is not inconsistent with the technical solution of the invention, including but not limited to earlier patent documents and earlier publications, may fall within the protection scope of the invention.
In addition, the combinations of features in this application are not limited to those described in the claims or in the embodiments; all features described in this application may be freely combined in any manner unless they contradict one another.
It should also be noted that the embodiments listed above are only specific embodiments of the invention. The invention is clearly not limited to them, and similar changes or modifications that a person skilled in the art can derive directly from the disclosure of the invention all fall within its protection scope.

Claims (10)

1. A method for removing the ground based on lidar point cloud data, characterized by comprising the steps of:
(1) collecting lidar point cloud data with a lidar mounted on a vehicle, and converting the data from the lidar coordinate system into the vehicle coordinate system;
(2) constructing lidar point cloud data with a grid structure, the gridded point cloud having a set of grid cells;
(3) acquiring image data with a camera, and identifying the drivable area in the acquired image data to obtain image data carrying a drivable-area identification;
(4) calibrating a conversion matrix for projecting the lidar point cloud data onto the image data;
(5) projecting the gridded lidar point cloud data onto the image data carrying the drivable-area identification using the conversion matrix, obtaining the ground grid cells on the image data;
(6) traversing the lidar point cloud data of step (2), and screening out ground point cloud candidates according to whether the pixel position to which each point projects falls within the drivable area;
(7) for each ground grid cell, obtaining a fitted plane from the ground point cloud candidates distributed in that cell;
(8) traversing the lidar point cloud data of step (2) and computing the distance from each point to the fitted plane of its ground grid cell; if the distance is smaller than a set distance threshold, judging that the point represents the ground and removing it.
2. The method for removing the ground based on lidar point cloud data of claim 1, wherein step (1) further comprises: performing a filtering pre-processing on the collected lidar point cloud data.
3. The method for removing the ground based on lidar point cloud data of claim 1, wherein in step (3) the drivable area of the acquired image data is identified by a deep convolutional neural network.
4. The method for removing the ground based on lidar point cloud data of claim 1, wherein step (6) comprises a pre-screening step: for each point in the lidar point cloud data, preliminarily judging whether it can serve as a ground candidate based on the known drivable-area range in a high-precision map; if the point falls within that range, further judging whether the pixel position to which it projects falls within the drivable area of the image data; and if it does not fall within the known drivable-area range of the high-precision map, screening the point out.
5. The method for removing the ground based on lidar point cloud data of claim 1, wherein step (7) comprises the steps of:
(a) randomly drawing three ground point cloud candidates within each ground grid cell to construct a preliminary fitted plane;
(b) judging whether the absolute distances from the remaining ground candidates in the cell to the preliminary fitted plane satisfy a plane-fitting condition; if so, taking the preliminary fitted plane as the fitted plane of the cell and finishing; if not, returning to step (a).
6. The method of claim 5, wherein the plane-fitting condition is that at least 99% of the points in the ground grid cell lie within a set threshold distance of the preliminary fitted plane.
7. A system for removing the ground based on lidar point cloud data, characterized by comprising:
a lidar arranged on a vehicle, which collects lidar point cloud data;
a camera arranged on the vehicle, which collects image data; and
a processing module in data connection with the lidar and the camera, wherein the processing module performs the following steps on the lidar point cloud data transmitted from the lidar and the image data transmitted from the camera:
(1) converting the lidar point cloud data from the lidar coordinate system into the vehicle coordinate system;
(2) constructing lidar point cloud data with a grid structure, the gridded point cloud having a set of grid cells;
(3) identifying the drivable area in the image data to obtain image data carrying a drivable-area identification;
(4) calibrating a conversion matrix for projecting the lidar point cloud data onto the image data;
(5) projecting the gridded three-dimensional lidar point cloud data onto the image data carrying the drivable-area identification using the conversion matrix, obtaining the ground grid cells on the image data;
(6) traversing the lidar point cloud data of step (2), and screening out ground point cloud candidates according to whether the pixel position to which each point projects falls within the drivable area;
(7) for each ground grid cell, obtaining a fitted plane from the ground point cloud candidates distributed in that cell;
(8) traversing the lidar point cloud data of step (2) and computing the distance from each point to the fitted plane of its ground grid cell; if the distance is smaller than a set distance threshold, judging that the point represents the ground and removing it.
8. The system for removing ground based on laser radar point cloud data of claim 7, wherein the processing module further performs filtering pre-processing on the acquired laser radar point cloud data.
9. The system for removing ground based on laser radar point cloud data of claim 7, wherein the processing module performs travelable area identification on the image data via a deep convolutional neural network.
10. The system for removing ground based on laser radar point cloud data of claim 7, wherein the step (6) comprises a pre-screening step of: preliminarily judging, based on a known travelable area range in a high-precision map, whether each point cloud in the laser radar point cloud data serves as a ground candidate point cloud; if the point cloud falls within the known travelable area range in the high-precision map, further judging whether the pixel position at which the point cloud is projected onto the image data falls within the travelable area of the image data; if the point cloud does not fall within the known travelable area range in the high-precision map, discarding the point cloud; and/or
the step (7) comprises the steps of: (a) Randomly extracting three ground point cloud candidate points within the range of each ground grid to construct a preliminary fitting plane; (b) Judging whether the absolute distances from the remaining ground point cloud candidate points in the ground grid to the preliminary fitting plane satisfy a plane fitting condition; if so, taking the preliminary fitting plane as the fitting plane of the ground grid and ending this step; if not, returning to the step (a).
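The per-grid plane-fitting loop recited in claims 5 and 6 follows the familiar RANSAC pattern and can be sketched as follows. This is a hypothetical numpy implementation; the function name, the 99% inlier-ratio default, the distance threshold, and the iteration cap are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def fit_grid_plane(points, inlier_ratio=0.99, dist_thresh=0.05,
                   max_iters=100, rng=None):
    """RANSAC-style plane fit over the candidate points of one ground grid.

    Mirrors steps (a)-(b) of claim 5: pick three candidate points at
    random, build a preliminary fitting plane, and accept it once the
    fraction of remaining candidates within `dist_thresh` of the plane
    reaches `inlier_ratio` (claim 6's 99% condition).
    """
    rng = rng or np.random.default_rng()
    n = len(points)
    for _ in range(max_iters):
        # (a) three random candidate points define a preliminary plane
        idx = rng.choice(n, size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, retry
            continue
        normal /= norm
        # (b) absolute point-to-plane distance of the remaining candidates
        rest = np.delete(points, idx, axis=0)
        dist = np.abs((rest - p0) @ normal)
        if np.mean(dist < dist_thresh) >= inlier_ratio:
            return normal, p0     # plane as (unit normal, point on plane)
    return None                   # no plane satisfied the fitting condition
```

Returning the plane as a unit normal plus an anchor point keeps the later point-to-plane distance test of step (8) a single dot product.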
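Steps (1) and (4)-(8) of claim 7 — coordinate transformation, projection into the travelable-area mask, and distance-to-plane removal — can likewise be sketched end to end. Again a hypothetical illustration: the 4x4 lidar-to-vehicle transform, the 3x4 lidar-to-image projection matrix, the boolean travelable-area mask, and the `grid_planes` mapping from grid cell to fitted plane are all assumed inputs, and the grid size and distance threshold are placeholder values:

```python
import numpy as np

def remove_ground(points_lidar, T_vehicle, P_img, travelable_mask,
                  grid_planes, grid_size=1.0, dist_thresh=0.1):
    """Remove ground points per claim 7, steps (1) and (4)-(8)."""
    # (1) lidar frame -> vehicle frame via a homogeneous 4x4 transform
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_veh = (T_vehicle @ homo.T).T[:, :3]

    keep = np.ones(len(pts_veh), dtype=bool)
    h, w = travelable_mask.shape
    for i, (pt, pl) in enumerate(zip(pts_veh, homo)):
        # (4)-(5) project the lidar point into the image via P_img
        uvw = P_img @ pl
        if uvw[2] <= 0:
            continue                      # behind the camera: keep the point
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        # (6) candidate only if the pixel falls in the travelable area
        if not (0 <= v < h and 0 <= u < w and travelable_mask[v, u]):
            continue
        # (7)-(8) distance to the fitted plane of the corresponding grid cell
        cell = (int(pt[0] // grid_size), int(pt[1] // grid_size))
        plane = grid_planes.get(cell)
        if plane is None:
            continue
        normal, p0 = plane
        if abs((pt - p0) @ normal) < dist_thresh:
            keep[i] = False               # close to the plane: ground, remove
    return pts_veh[keep]
```

Points outside the travelable area or in a grid cell without a fitted plane are deliberately left untouched, matching the claim's logic of removing only points positively identified as ground.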
CN202011179679.7A 2020-10-29 2020-10-29 Method and system for removing ground based on laser radar point cloud data Active CN112578405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011179679.7A CN112578405B (en) 2020-10-29 2020-10-29 Method and system for removing ground based on laser radar point cloud data


Publications (2)

Publication Number Publication Date
CN112578405A CN112578405A (en) 2021-03-30
CN112578405B true CN112578405B (en) 2023-03-10

Family

ID=75120030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011179679.7A Active CN112578405B (en) 2020-10-29 2020-10-29 Method and system for removing ground based on laser radar point cloud data

Country Status (1)

Country Link
CN (1) CN112578405B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113537049B (en) * 2021-07-14 2023-03-24 广东汇天航空航天科技有限公司 Ground point cloud data processing method and device, terminal equipment and storage medium
CN116229405A (en) * 2023-05-05 2023-06-06 倍基智能科技(四川)有限公司 Method for detecting ground from point cloud data

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427124A (en) * 2018-02-02 2018-08-21 北京智行者科技有限公司 A kind of multi-line laser radar ground point separation method and device, vehicle
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109961440A (en) * 2019-03-11 2019-07-02 重庆邮电大学 A kind of three-dimensional laser radar point cloud Target Segmentation method based on depth map
CN110705458A (en) * 2019-09-29 2020-01-17 北京智行者科技有限公司 Boundary detection method and device
CN111160302A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Obstacle information identification method and device based on automatic driving environment
CN111414848A (en) * 2020-03-19 2020-07-14 深动科技(北京)有限公司 Full-class 3D obstacle detection method, system and medium
CN111665524A (en) * 2020-04-29 2020-09-15 武汉光庭科技有限公司 Method and system for ground rejection by utilizing multi-line laser radar

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011145156A (en) * 2010-01-14 2011-07-28 Ihi Aerospace Co Ltd Apparatus and method for determining traveling area of mobile robot
EP2738570A1 (en) * 2012-11-29 2014-06-04 BAE Systems PLC Controlling laser trackers
CN103645480B (en) * 2013-12-04 2015-11-18 北京理工大学 Based on the topography and landform character construction method of laser radar and fusing image data
US20180211119A1 (en) * 2017-01-23 2018-07-26 Ford Global Technologies, Llc Sign Recognition for Autonomous Vehicles
CN107330925B (en) * 2017-05-11 2020-05-22 北京交通大学 Multi-obstacle detection and tracking method based on laser radar depth image
CN108873013B (en) * 2018-06-27 2022-07-22 江苏大学 Method for acquiring passable road area by adopting multi-line laser radar


Also Published As

Publication number Publication date
CN112578405A (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN111291708B (en) Transformer substation inspection robot obstacle detection and identification method integrated with depth camera
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN110979321B (en) Obstacle avoidance method for unmanned vehicle
CN112581612B (en) Vehicle-mounted grid map generation method and system based on fusion of laser radar and all-round-looking camera
CN108319655B (en) Method and device for generating grid map
EP3171292B1 (en) Driving lane data processing method, device, storage medium and apparatus
CN111160302A (en) Obstacle information identification method and device based on automatic driving environment
CN113111887B (en) Semantic segmentation method and system based on information fusion of camera and laser radar
CN111598916A (en) Preparation method of indoor occupancy grid map based on RGB-D information
CN112578405B (en) Method and system for removing ground based on laser radar point cloud data
CN111699410B (en) Processing method, equipment and computer readable storage medium of point cloud
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
CN111880195A (en) Tower crane anti-collision method and system based on laser radar
CN113085838A (en) Parking space detection method and system based on multi-sensor fusion
CN114910891A (en) Multi-laser radar external parameter calibration method based on non-overlapping fields of view
CN114821526A (en) Obstacle three-dimensional frame detection method based on 4D millimeter wave radar point cloud
CN115097419A (en) External parameter calibration method and device for laser radar IMU
CN113219472B (en) Ranging system and method
EP2772801A1 (en) Matching procedure and device for the digital modelling of objects by stereoscopic images
CN115588047A (en) Three-dimensional target detection method based on scene coding
CN113624223B (en) Indoor parking lot map construction method and device
CN111323026A (en) Ground filtering method based on high-precision point cloud map
CN115235478A (en) Intelligent automobile positioning method and system based on visual label and laser SLAM
CN115077563A (en) Vehicle positioning accuracy evaluation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant