WO2023216470A1 - A drivable area detection method, device and equipment - Google Patents

A drivable area detection method, device and equipment

Info

Publication number
WO2023216470A1
Authority
WO
WIPO (PCT)
Prior art keywords
point
area
grid
boundary
points
Prior art date
Application number
PCT/CN2022/116601
Other languages
English (en)
French (fr)
Inventor
潘奇
张灿
王文爽
赵天坤
Original Assignee
合众新能源汽车股份有限公司
Application filed by 合众新能源汽车股份有限公司
Publication of WO2023216470A1 publication Critical patent/WO2023216470A1/zh

Classifications

    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 2201/07: Target detection
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present application relates to the field of automatic driving perception technology, and in particular to a drivable area detection method, device and equipment.
  • Autonomous driving technology is a complex engineering system that requires the cooperation of various modules to ensure driving safety in all aspects.
  • the free space detection technology is one of the key technologies of the autonomous driving system, which can provide the basis for back-end modules such as path planning and behavioral decision-making.
  • camera images and lidar point clouds are the main data input sources used to detect drivable areas.
  • In image-based extraction of drivable areas, the color or texture features used are susceptible to interference from lighting and weather, and the lack of three-dimensional information also limits the adaptability of such algorithms to different scenarios.
  • Lidar can accurately provide rich road environment data in real time and has the advantages of high data dimension, accurate depth information, fast response frequency and high detection accuracy.
  • The current methods for detecting drivable areas from laser point clouds usually obtain a point cloud with lidar, fit a ground plane to all points, divide the cloud into multiple sector areas, and then detect the drivable area in each sector using a height-threshold method.
  • This approach is not suitable for uneven or frequently undulating complex road conditions, and fitting a single global plane is not robust to conditions such as slopes. Another method computes the global angle, local angle, local height and height above ground within each sector grid to decide whether the grid is a ground grid.
  • Although this handles rugged or undulating road surfaces more effectively, the grid is only a coarse representation of the drivable area, so the boundary accuracy is low.
  • Embodiments of the present application provide a drivable area detection method, device and equipment, which are used to solve problems existing in the prior art such as unstable roadside occlusion recognition, poor robustness on undulating roads, and insufficient detection accuracy.
  • Embodiments of the present application provide a drivable area detection method, including:
  • using lidar to obtain the three-dimensional point cloud of the target detection area, and using a semantic segmentation network to determine the type label of each point in the three-dimensional point cloud, where the type labels include ground and ground objects;
  • using the position of the lidar as the coordinate origin, dividing the three-dimensional point cloud into different sector areas, and dividing each sector area into different grids according to the distance of each point from the origin;
  • performing grid traversal on each sector area, and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids;
  • performing piecewise fitting on the boundary points, and resampling the piecewise-fitted boundary lines at preset intervals to determine the envelope point set of the drivable area.
  • In an optional implementation, dividing the three-dimensional point cloud into different sector areas and dividing each sector area into different grids according to the distance of each point from the origin includes:
  • determining the alignment direction of the lidar as the reference direction, and dividing each point into the corresponding sector area according to its deviation angle relative to the reference direction;
  • dividing each point in the sector area into the corresponding grid according to its distance from the coordinate origin.
  • In an optional implementation, performing grid traversal on each sector area and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids includes:
  • traversing the grids in each sector area, in the order of the deviation angles of the sector areas relative to the reference direction, from the nearest to the farthest distance from the coordinate origin;
  • traversing to a grid that contains points whose type label is ground object, and when the heights of the points in the grid satisfy the boundary area condition, determining the boundary point of the drivable area of the current sector area from the grid.
  • An optional implementation is to determine that the height of the points in the above grid satisfies the boundary area conditions, including any of the following:
  • the average height of the points in the above grid is greater than the set height threshold
  • the maximum height difference of the points in the above grid is greater than the set height threshold
  • the average height difference between the above grid and the points in the previous grid is greater than the set height threshold
  • the above grid contains points whose type label is ground object.
  • In an optional implementation, determining the boundary point of the drivable area of the current sector area from the grid includes:
  • when the current grid contains points with different type labels, determining the point whose type label is ground and whose deviation angle relative to the reference direction is the largest or smallest as the boundary point of the drivable area of the current sector area;
  • when the current grid contains only points whose type label is ground object, determining the point whose type label is ground object and whose deviation angle relative to the reference direction is the largest or smallest as the boundary point of the drivable area of the current sector area.
  • In an optional implementation, the method further includes:
  • performing plane fitting on the three-dimensional point cloud of the target detection area, determining the position of the reference ground in the target detection area, and determining the relative height of each point in each grid with respect to the reference ground;
  • determining, from the relative heights, the average height of the points in each grid, the maximum height difference of the points, and the average height difference from the points in the previous grid.
  • In an optional implementation, before traversing to a grid that contains points with different type labels including ground, or whose type labels include ground object, the method further includes:
  • determining that an empty grid exists, and judging whether the empty grid is the last grid in the current sector area;
  • if the empty grid is the last grid in the current sector area, creating a virtual point in the empty grid according to a preset rule as the boundary point of the drivable area of the current sector area;
  • if the empty grid is not the last grid in the current sector area, obtaining the number of empty grids among the grids already traversed in the current sector area;
  • when the number of empty grids reaches a preset number, obtaining the point in the current sector area that is closest to the empty grid and whose type label is ground, and using it as the boundary point of the drivable area of the current sector area;
  • when no such point can be obtained, creating a virtual point in the empty grid according to the preset rule as the boundary point of the drivable area of the current sector area.
  • In an optional implementation, performing piecewise fitting on the boundary points includes:
  • sorting the boundary points by their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters in turn according to the distance and direction between them;
  • fitting each point cluster into a boundary line segment using the least squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
  • In an optional implementation, dividing the boundary points into multiple point clusters in turn according to the distance and direction between them includes dividing boundary points that simultaneously satisfy all of the following conditions into the same point cluster:
  • the cumulative distance to the first boundary point in the current point cluster is not greater than a first distance threshold;
  • the distance to the previous boundary point is not greater than a second distance threshold;
  • the difference between the angles formed with the next boundary point and with the previous boundary point is within a set range.
  • In an optional implementation, fitting the point clusters into boundary line segments using the least squares method according to the number of boundary points in each point cluster includes:
  • determining, according to the number of boundary points in each point cluster, the polynomial fitting equation corresponding to that number of boundary points;
  • determining the parameter values of the polynomial fitting equation by the least squares method to obtain the fitted boundary line segments.
  • the above method is suitable for various scenarios such as urban roads or unstructured roads in the field of autonomous driving, and can effectively solve the problems existing in the existing technology such as unstable curb occlusion recognition, poor robustness on undulating roads, and insufficient detection accuracy.
  • a drivable area detection device including:
  • the acquisition module is used to use lidar to obtain the three-dimensional point cloud of the target detection area, and use the semantic segmentation network to determine the type label of each point in the three-dimensional point cloud.
  • the above-mentioned type label includes the ground and ground objects;
  • the division module is used to divide the above three-dimensional point cloud into different sector-shaped areas using the position of the lidar as the origin of the coordinates, and divide each sector-shaped area into different grids according to the distance between the points and the origin;
  • the boundary point determination module is used to perform raster traversal of each sector area and determine the boundary points of the drivable area of each sector area based on the type labels of the points contained in the raster;
  • the envelope point set determination module is used to perform segmented fitting of each boundary point, and resample the segmented fitted boundary lines at preset intervals to determine the envelope point set of the drivable area.
  • Embodiments of the present application further provide a drivable area detection device, including a memory and a processor, where the memory stores a computer program that can be run on the processor, and when the computer program is executed by the processor, the following steps are performed:
  • using lidar to obtain the three-dimensional point cloud of the target detection area, and using a semantic segmentation network to determine the type label of each point in the three-dimensional point cloud, where the type labels include ground and ground objects;
  • using the position of the lidar as the coordinate origin, dividing the three-dimensional point cloud into different sector areas, and dividing each sector area into different grids according to the distance of each point from the origin;
  • performing grid traversal on each sector area, and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids;
  • performing piecewise fitting on the boundary points, and resampling the piecewise-fitted boundary lines at preset intervals to determine the envelope point set of the drivable area.
  • embodiments of the present application provide a computer storage medium, including: computer program instructions that, when run on a computer, cause the computer to perform any step in the above-mentioned drivable area detection method.
  • Figure 1 is a schematic flow chart of a drivable area detection method provided by an embodiment of the present application
  • Figure 2 is a schematic diagram of a grid division provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of an envelope point set provided by an embodiment of the present application.
  • Figure 4 is a schematic flowchart of a boundary point determination method provided by an embodiment of the present application.
  • Figure 5 is a schematic structural diagram of a drivable area detection device provided by an embodiment of the present application.
  • Figure 6 is a schematic structural diagram of a drivable area detection device provided by an embodiment of the present application.
  • Autonomous driving technology is a complex engineering system that requires the cooperation of various modules to ensure driving safety in all aspects.
  • the free space detection technology is one of the key technologies of the autonomous driving system, which can provide the basis for back-end modules such as path planning and behavioral decision-making.
  • camera images and lidar point clouds are the main data input sources used to detect drivable areas.
  • In image-based extraction of drivable areas, the color or texture features used are susceptible to interference from lighting and weather, and the lack of three-dimensional information also limits the adaptability of such algorithms to different scenarios.
  • Lidar can accurately provide rich road environment data in real time and has the advantages of high data dimension, accurate depth information, fast response frequency and high detection accuracy.
  • The current methods for detecting drivable areas from laser point clouds usually obtain a point cloud with lidar, fit a ground plane to all points, divide the cloud into multiple sector areas, and then detect the drivable area in each sector using a height-threshold method.
  • This approach is not suitable for uneven or frequently undulating complex road conditions, and fitting a single global plane is not robust to conditions such as slopes. Another method computes the global angle, local angle, local height and height above ground within each sector grid to decide whether the grid is a ground grid.
  • Although this handles rugged or undulating road surfaces more effectively, the grid is only a coarse representation of the drivable area, so the boundary accuracy is low.
  • In addition, the ground cannot always be treated as drivable area; for example, when there are obstacles such as vehicles and curbs, especially the low curbs of urban structured roads, this grid-based extraction cannot handle some of these cases.
  • In addition, an existing encoder-decoder network using residual dilated convolutions obtains network parameters through model training and then determines the drivable area; the accuracy of the drivable area determined in this way is low, and the approach is difficult to implement.
  • the embodiment of this application proposes a drivable area detection method based on the deep learning semantic segmentation network, which can effectively extract higher-precision drivable envelope areas in various road scenarios.
  • FIG. 1 is a schematic flow chart of a drivable area detection method provided by an embodiment of the present application. As shown in Figure 1, an embodiment of the present application provides a drivable area detection method, which includes:
  • Step 101 use lidar to obtain the three-dimensional point cloud of the target detection area, and use the semantic segmentation network to determine the type label of each point in the three-dimensional point cloud.
  • the above-mentioned type label includes the ground and ground objects;
  • In implementation, the lidar installed on the autonomous vehicle can be used to obtain the three-dimensional point cloud of the target detection area, and at the same time the three-dimensional coordinates (x0, y0, z0) and the reflection intensity of each point are obtained in a coordinate system with the lidar as the origin, where the positive x-axis points in the forward direction of the vehicle body, the positive y-axis points to the left of the vehicle body, and the positive z-axis points vertically upward. The embodiments of this application do not limit the installation position of the lidar on the autonomous vehicle.
  • the above-mentioned semantic segmentation network can be RPVNet, Cylinder3D, etc., which currently have good segmentation results in the field of autonomous driving.
  • Here, ground refers to road surfaces on which vehicles can travel without obstruction, and ground objects refer to obstacles that affect driving, such as buildings and vehicles.
  • In the embodiments of this application, the type labels can be divided according to actual needs; for example, the type labels can also be divided into ground, lane lines, targets (vehicles, pedestrians), curbs, fences, vegetation, noise, buildings and other types.
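  • As a concrete illustration of such a label scheme, the sketch below shows one possible mapping from fine-grained semantic-segmentation labels to the two coarse categories used here (ground and ground object). The class names, IDs and group assignments are illustrative assumptions, not values defined by this application.

    # Hypothetical label scheme: the class names/IDs below are illustrative only;
    # any semantic segmentation network (e.g. RPVNet, Cylinder3D) defines its own.
    FINE_LABELS = {
        0: "ground", 1: "lane_line", 2: "vehicle", 3: "pedestrian",
        4: "curb", 5: "fence", 6: "vegetation", 7: "building", 8: "noise",
    }

    # Coarse categories used by the drivable-area method: ground vs. ground object.
    GROUND_CLASSES = {"ground", "lane_line"}            # assumed drivable-surface labels
    GROUND_OBJECT_CLASSES = {"vehicle", "pedestrian",   # assumed obstacle labels
                             "curb", "fence", "vegetation", "building"}

    def to_coarse_label(fine_id: int) -> str:
        """Map a fine semantic label ID to 'ground', 'ground_object' or 'ignore'."""
        name = FINE_LABELS.get(fine_id, "noise")
        if name in GROUND_CLASSES:
            return "ground"
        if name in GROUND_OBJECT_CLASSES:
            return "ground_object"
        return "ignore"  # e.g. noise points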
  • In an optional implementation, after the semantic segmentation network is used to determine the type label of each point in the three-dimensional point cloud, the method further includes:
  • preprocessing the three-dimensional point cloud, where the preprocessing includes region-of-interest (ROI) filtering, noise filtering and downsampling.
  • Specifically, region-of-interest (ROI) filtering is applied to the semantically segmented point cloud, and the ego-vehicle point cloud is filtered out at the same time. The ROI and ego-vehicle filtering ranges can be determined according to actual needs and the installation position of the lidar, for example by removing points farther than 100 meters from the lidar.
  • Noise filtering and downsampling are then applied to the filtered point cloud, where noise filtering can remove outliers by counting the number of neighboring points of each point, and downsampling can use a Voxel Grid based point cloud downsampling algorithm.
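  • A minimal preprocessing sketch of the above steps is given below, assuming the point cloud is an N x 3 (or wider) NumPy array in the lidar frame. The 100-meter ROI radius matches the example above, while the ego-box size, the neighbor-count rule and the voxel size are illustrative values, not parameters fixed by this application.

    import numpy as np
    from scipy.spatial import cKDTree

    def preprocess(points: np.ndarray,
                   roi_radius: float = 100.0,     # example ROI: keep points within 100 m
                   ego_box: float = 2.0,          # assumed half-size of the ego-vehicle box
                   min_neighbors: int = 5,        # assumed outlier rule: minimum neighbor count
                   neighbor_radius: float = 0.5,
                   voxel_size: float = 0.1) -> np.ndarray:
        """ROI filtering, ego-vehicle removal, neighbor-count outlier removal, voxel downsampling."""
        xy_dist = np.linalg.norm(points[:, :2], axis=1)
        keep = xy_dist <= roi_radius                          # ROI filter
        keep &= ~((np.abs(points[:, 0]) < ego_box) &          # drop ego-vehicle returns
                  (np.abs(points[:, 1]) < ego_box))
        pts = points[keep]

        # Noise filtering: drop points with too few neighbors inside a small radius.
        tree = cKDTree(pts[:, :3])
        counts = np.array([len(idx) - 1 for idx in
                           tree.query_ball_point(pts[:, :3], r=neighbor_radius)])
        pts = pts[counts >= min_neighbors]

        # Voxel-grid downsampling: keep one point per occupied voxel.
        voxel_idx = np.floor(pts[:, :3] / voxel_size).astype(np.int64)
        _, first = np.unique(voxel_idx, axis=0, return_index=True)
        return pts[np.sort(first)]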
  • Step 102 using the position of the lidar as the origin of the coordinates, divide the above three-dimensional point cloud into different sector-shaped areas, and divide each sector-shaped area into different grids according to the distance of the point from the origin;
  • As an optional implementation, dividing the three-dimensional point cloud into different sector areas and dividing each sector area into different grids according to the distance of each point from the origin includes:
  • determining the alignment direction of the lidar as the reference direction, and dividing each point into the corresponding sector area according to its deviation angle relative to the reference direction;
  • dividing each point in the sector area into the corresponding grid according to its distance from the coordinate origin.
  • Figure 2 is a schematic diagram of a grid division provided by an embodiment of the present application. As shown in Figure 2, each point of the three-dimensional point cloud is divided into the corresponding grid according to its deviation angle relative to the reference direction and its distance from the coordinate origin. Specifically:
  • First, the deviation angle of each point relative to the reference direction is determined, and the point cloud is divided into multiple sector areas according to this angle, where angle_idx is the deviation angle of the point relative to the reference direction, angle_res is the angular resolution (unit: degrees), and x and y are the coordinates of the point with the lidar as the coordinate origin.
  • Then, the distance of each point from the coordinate origin is determined, and each point in a sector area is divided into the corresponding grid, where radius_idx is the distance of the point from the coordinate origin, radius_res is the distance resolution (unit: meters), and x and y are the coordinates of the point with the lidar as the coordinate origin.
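  • The index computation can be sketched as follows. The exact formulas appear as figure images in the publication and are not reproduced in this text, so the sketch uses the common polar discretization implied by the variable definitions above (deviation angle from atan2, range from the Euclidean distance); the resolutions are illustrative values.

    import numpy as np

    def sector_grid_indices(points: np.ndarray,
                            angle_res_deg: float = 2.0,   # assumed angular resolution (degrees)
                            radius_res_m: float = 0.5):   # assumed distance resolution (meters)
        """Assign each point a (sector, ring) index pair around the lidar origin.

        The published formulas are given as figure images; this uses a common
        polar discretization consistent with the variable definitions above.
        """
        x, y = points[:, 0], points[:, 1]
        angle_deg = np.degrees(np.arctan2(y, x)) % 360.0             # deviation angle vs. reference direction
        angle_idx = np.floor(angle_deg / angle_res_deg).astype(int)  # sector index
        radius = np.hypot(x, y)                                      # distance to the coordinate origin
        radius_idx = np.floor(radius / radius_res_m).astype(int)     # grid (ring) index within a sector
        return angle_idx, radius_idx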
  • Step 103 Perform a grid traversal on each sector area, and determine the boundary points of the drivable area of each sector area based on the type labels of the points included in the grid;
  • In implementation, the beam method can be used to traverse and search each grid, logical judgments can be made based on the type label, height above ground and other characteristics of each point in each grid, and finally the boundary points of the drivable area are extracted according to the ground direction.
  • the grids in each sector area are traversed in order from the nearest to the farthest distance from the coordinate origin;
  • Step 104 Perform piecewise fitting on each boundary point, and resample the piecewise fitted boundary lines according to preset intervals to determine the envelope point set of the drivable area;
  • In implementation, the determined boundary points may be densely distributed with burr points at close range and sparsely distributed at long range. Therefore, to make it easier for downstream modules such as trajectory planning to use the drivable area information, a uniform envelope point set of the drivable area needs to be determined.
  • In addition, since the boundary line formed by fitting the boundary points is long and has many corners, piecewise fitting is required, and the envelope point set of the drivable area is finally determined by resampling, as shown in Figure 3.
  • Specifically, after the boundary points are extracted, they are sorted by their deviation angles relative to the reference direction and then segmented; the point clusters obtained by segmentation are fitted into polynomial curves using the least squares method, and the polynomial curves of consecutive segments are then resampled to form a uniform envelope point set, which determines the final drivable area. An interval of 0.1 to 0.5 meters can be selected for resampling.
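  • The resampling step can be sketched as below: given an ordered boundary polyline (for example, a fitted polynomial evaluated at dense x values), points are re-emitted at an approximately constant spacing along the curve. The 0.2-meter spacing is one value from the 0.1 to 0.5 meter range mentioned above.

    import numpy as np

    def resample_polyline(xy: np.ndarray, spacing: float = 0.2) -> np.ndarray:
        """Resample an ordered (N, 2) polyline at approximately constant arc-length spacing."""
        seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)   # segment lengths
        s = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
        targets = np.arange(0.0, s[-1], spacing)            # e.g. every 0.2 m along the boundary
        x = np.interp(targets, s, xy[:, 0])
        y = np.interp(targets, s, xy[:, 1])
        return np.stack([x, y], axis=1)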
  • Compared with existing solutions, the embodiments of this application address the problem of unstable curb occlusion recognition by using point-cloud category labels from semantic segmentation, which can effectively distinguish road surfaces, curbs, vehicles and so on. For the problem of poor robustness on undulating roads, the method relies on the semantic segmentation results, for which the ground recall of most current segmentation networks exceeds 90%, and additionally judges each grid using feature information such as average height, height difference and angle; the combination of the two provides a good guarantee on undulating road sections.
  • For the problem of insufficient detection accuracy, the embodiments of this application determine boundary points within each grid, achieving higher-precision extraction. In addition, the boundary points are fitted and resampled, so they can be passed directly to downstream modules such as trajectory planning, which is of practical engineering value. Therefore, the drivable area detection method provided by the embodiments of this application is suitable for various scenarios in the field of autonomous driving, such as urban roads or unstructured roads, and can effectively solve the problems of unstable curb occlusion recognition, poor robustness on undulating roads and insufficient detection accuracy existing in the prior art.
  • The specific method of determining the boundary points in step 103 is described below.
  • In an optional implementation, performing grid traversal on each sector area and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids includes:
  • traversing the grids in each sector area, in the order of the deviation angles of the sector areas relative to the reference direction, from the nearest to the farthest distance from the coordinate origin;
  • traversing to a grid that contains points whose type label is ground object, and when the heights of the points in the grid satisfy the boundary area condition, determining the boundary point of the drivable area of the current sector area from the grid.
  • In an optional implementation, determining that the heights of the points in the grid satisfy the boundary area condition includes any of the following:
  • the average height of the points in the above grid is greater than the set height threshold
  • the maximum height difference of the points in the above grid is greater than the set height threshold
  • the average height difference between the above grid and the points in the previous grid is greater than the set height threshold
  • the above grid contains points whose type label is ground object.
  • The specific value of the set height threshold can be set according to actual needs, and the set height thresholds corresponding to the average height, the maximum height difference and the average height difference from the points in the previous grid can be the same or different.
  • In addition, the average height difference from the points in the previous grid can also be expressed as the height-angle difference between the current grid and the previous grid.
  • ground objects can be set to include curbs and obstacles, and points in the grid with type labels as ground objects in the boundary area conditions are set as points in the grid with type labels as curbs.
  • the height of each point in the three-dimensional point cloud can be converted into the height from the ground through plane fitting.
  • As an optional implementation, the method further includes:
  • performing plane fitting on the three-dimensional point cloud of the target detection area, determining the position of the reference ground in the target detection area, and determining the relative height of each point in each grid with respect to the reference ground;
  • determining, from the relative heights, the average height of the points in each grid, the maximum height difference of the points, and the average height difference from the points in the previous grid.
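  • To make the height-based boundary-area conditions concrete, the sketch below computes per-grid statistics from heights taken relative to the fitted reference ground and checks the conditions listed above. The 0.2-meter threshold is illustrative; the application leaves the thresholds configurable.

    import numpy as np

    def grid_stats(rel_heights: np.ndarray) -> dict:
        """Statistics of one grid's points, heights relative to the fitted reference ground."""
        return {"mean": float(rel_heights.mean()),
                "max_diff": float(rel_heights.max() - rel_heights.min())}

    def is_boundary_grid(stats, prev_stats, labels, h_thresh=0.2):
        """Any one of the listed conditions marks the grid as a boundary-area grid."""
        if stats["mean"] > h_thresh:                      # average height above threshold
            return True
        if stats["max_diff"] > h_thresh:                  # in-grid height spread above threshold
            return True
        if prev_stats and abs(stats["mean"] - prev_stats["mean"]) > h_thresh:
            return True                                   # jump vs. the previous grid
        return "ground_object" in labels                  # grid contains obstacle-labelled points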
  • As an optional implementation, determining the boundary point of the drivable area of the current sector area from the grid includes:
  • when the current grid contains points with different type labels, determining the point whose type label is ground and whose deviation angle relative to the reference direction is the largest or smallest as the boundary point of the drivable area of the current sector area;
  • when the current grid contains only points whose type label is ground object, determining the point whose type label is ground object and whose deviation angle relative to the reference direction is the largest or smallest as the boundary point of the drivable area of the current sector area.
  • As an optional implementation, before traversing to a grid that contains points with different type labels including ground, or whose type labels include ground object, the method further includes:
  • determining that an empty grid exists, and judging whether the empty grid is the last grid in the current sector area;
  • if the empty grid is the last grid in the current sector area, creating a virtual point in the empty grid according to a preset rule as the boundary point of the drivable area of the current sector area;
  • if the empty grid is not the last grid in the current sector area, obtaining the number of empty grids among the grids already traversed in the current sector area;
  • when the number of empty grids reaches a preset number, obtaining the point in the current sector area that is closest to the empty grid and whose type label is ground, and using it as the boundary point of the drivable area of the current sector area;
  • when no such point can be obtained, creating a virtual point in the empty grid according to the preset rule as the boundary point of the drivable area of the current sector area.
  • FIG. 4 is a schematic flowchart of a boundary point determination method provided by an embodiment of the present application. The specific steps of the boundary point determination method will be introduced below in conjunction with FIG. 4 .
  • Specifically, the sector areas are traversed in turn starting from a specified deviation angle, and for each angle the grids are traversed outward from the origin:
  • Step 1: if the current grid is an empty grid and is not the last grid in the current sector area, the empty counter is incremented by 1, which gives the number of empty grids (the empty counter count) in the current sector area;
  • Step 2: if the empty counter count is less than or equal to the set number, move on to the next grid;
  • Step 3: if the empty counter count is greater than the set number, search the current sector area for the road point (a point whose type label is ground, hereafter referred to as road) closest to the current grid; if one is found, store it in the boundary memory, and if none is found, create a virtual point and store it in the boundary memory;
  • Step 4: if the condition in step 1 is not met (the current grid is the last grid in the sector area), create a virtual point and store it in the boundary memory;
  • Step 5: if the conditions in steps 1 and 4 are not met (the current grid is not an empty grid), judge whether the grid has only one type label (label), and if so, move on to the next grid;
  • Step 6: if the condition in step 5 is not met, judge whether the number of type labels of the points in the grid is greater than 1 and includes road;
  • Step 7: if the condition in step 6 is met, use the ground plane equation for the height judgment; if the average height or height difference of the points in the grid, or the angle with the previous grid, is greater than the corresponding threshold, or the grid contains points whose type label is curb, then output the road point with the largest or smallest deviation angle in the grid according to the ground direction and store it in the boundary memory;
  • Step 8: if the condition in step 7 is not met, move on to the next grid;
  • Step 9: if the condition in step 6 is not met (the grid has more than zero type labels and road is not included), use the ground plane equation for the height judgment; if the average height or maximum height difference of the points in the grid, or the average height difference from the points in the previous grid, is greater than the corresponding threshold, or the grid contains points whose type label is curb, then output the non-road point (a point whose type label is not road) with the largest or smallest deviation angle in the grid according to the ground direction and store it in the boundary memory;
  • Step 10: if the condition in step 9 is not met, move on to the next grid.
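  • A rough, self-contained reading of steps 1 to 10 for a single sector area is sketched below. The empty-grid limit, the data layout and the choice of the largest deviation angle are illustrative assumptions, and the virtual-point rule is left as a placeholder because the preset rule is not detailed in this text.

    def sector_boundary_point(grids, max_empty=3):
        """Simplified reading of steps 1-10 for one sector.

        grids: rings ordered from near to far; each ring is a dict with
          "points": list of {"x", "y", "angle", "label"} dicts (label "road" means ground),
          "is_boundary": bool, True if the height conditions of the boundary area are met.
        Returns a boundary point dict, possibly a virtual one, or None.
        """
        def virtual_point(ring_idx):
            # Placeholder for the 'preset rule' of the publication, which is not detailed here.
            return {"x": None, "y": None, "angle": None, "label": "virtual", "ring": ring_idx}

        empty = 0
        for i, g in enumerate(grids):
            pts = g["points"]
            if not pts:                                      # steps 1-4: empty-grid handling
                if i == len(grids) - 1:
                    return virtual_point(i)
                empty += 1
                if empty <= max_empty:
                    continue
                road_pts = [p for ring in grids[:i] for p in ring["points"]
                            if p["label"] == "road"]
                # approximate "closest road point" by the most recently seen one
                return road_pts[-1] if road_pts else virtual_point(i)
            labels = {p["label"] for p in pts}
            if len(labels) == 1:                             # step 5: single type label, keep going
                continue
            if g["is_boundary"]:                             # steps 6-9: boundary condition satisfied
                prefer_road = "road" in labels
                cand = [p for p in pts if (p["label"] == "road") == prefer_road]
                return max(cand, key=lambda p: p["angle"])   # e.g. the largest deviation angle
        return None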
  • The method of performing piecewise fitting on the boundary points in step 104 is described in detail below.
  • As an optional implementation, performing piecewise fitting on the boundary points includes:
  • sorting the boundary points by their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters in turn according to the distance and direction between them;
  • fitting each point cluster into a boundary line segment using the least squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
  • In implementation, dividing the boundary points into multiple point clusters in turn according to the distance and direction between them includes dividing boundary points that simultaneously satisfy all of the following conditions into the same point cluster:
  • the cumulative distance to the first boundary point in the current point cluster is not greater than a first distance threshold;
  • the distance to the previous boundary point is not greater than a second distance threshold;
  • the difference between the angles formed with the next boundary point and with the previous boundary point is within a set range.
  • In the embodiments of this application, when dividing the point clusters, the boundary points are first sorted by their deviation angles relative to the reference direction, and the point with the smallest deviation angle is taken as the first boundary point of the first point cluster; then, starting from the second boundary point, the above conditions are checked in turn, and when a boundary point that does not belong to the first point cluster is found, it is taken as the first boundary point of the second point cluster, and the subsequent points are judged in the same way until all boundary points have been processed.
  • Among the above clustering conditions, the cumulative distance to the first boundary point in the current point cluster refers to the sum of the distances between adjacent points from the current boundary point to the first boundary point of the current cluster, after the boundary points have been sorted by deviation angle relative to the reference direction; the previous/next boundary point refers to the adjacent boundary point arranged before/after the current boundary point in that sorted order.
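  • A sketch of the clustering rule is given below, assuming the boundary points are already sorted by deviation angle. The two distance thresholds and the angle tolerance are illustrative values, and the direction check is implemented as the change in heading between consecutive steps.

    import math

    def cluster_boundary_points(points, max_accum=10.0, max_step=2.0, max_angle_diff=0.5):
        """Split angle-sorted boundary points (list of (x, y)) into point clusters.

        A point stays in the current cluster only if (a) the accumulated distance to the
        cluster's first point is <= max_accum, (b) the step to the previous point is
        <= max_step, and (c) the heading change vs. the previous step is <= max_angle_diff (rad).
        Threshold values are illustrative, not taken from the publication.
        """
        if not points:
            return []

        def dist(a, b):
            return math.hypot(b[0] - a[0], b[1] - a[1])

        clusters, cur, accum = [], [points[0]], 0.0
        for prev, p in zip(points, points[1:]):
            step = dist(prev, p)
            heading = math.atan2(p[1] - prev[1], p[0] - prev[0])
            if len(cur) >= 2:
                prev2 = cur[-2]
                prev_heading = math.atan2(prev[1] - prev2[1], prev[0] - prev2[0])
                turn = abs((heading - prev_heading + math.pi) % (2 * math.pi) - math.pi)
            else:
                turn = 0.0
            if accum + step <= max_accum and step <= max_step and turn <= max_angle_diff:
                cur.append(p)
                accum += step
            else:
                clusters.append(cur)
                cur, accum = [p], 0.0
        clusters.append(cur)
        return clusters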
  • In implementation, fitting the point clusters into boundary line segments using the least squares method according to the number of boundary points in each point cluster includes:
  • determining, according to the number of boundary points in each point cluster, the polynomial fitting equation corresponding to that number of boundary points;
  • determining the parameter values of the polynomial fitting equation by the least squares method to obtain the fitted boundary line segment.
  • Specifically, according to the segmentation result, polynomial equations of different degrees are used for point clusters of different sizes, as follows:
  • when the number of points in the cluster is n >= N1, p = 3 and f(x) = a0 + a1*x + a2*x^2 + a3*x^3;
  • when N2 <= n < N1, p = 2 and f(x) = a0 + a1*x + a2*x^2;
  • when 1 <= n < N2, p = 1 and f(x) = a0 + a1*x;
  • where N1 and N2 are set numbers of points and p is the degree of the polynomial equation. For any point cluster data Pi(xi, yi) (i = 1, 2, ..., n, where n is the number of points in the cluster), the fitting problem can be written as XA = Y, and the least squares solution A = (X^T X)^(-1) X^T Y gives the parameters, where A is the matrix formed by the parameters a0, ..., ap.
  • During piecewise fitting, a polynomial equation of y with respect to x is generally used; however, when the span of y is greater than that of x, the polynomial can be changed to a polynomial equation of x with respect to y.
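  • The degree selection and least-squares fit described above can be sketched with NumPy as follows; N1 = 20 and N2 = 6 are illustrative cluster-size thresholds, since the publication leaves N1 and N2 as configurable values.

    import numpy as np

    def fit_cluster(cluster, n1=20, n2=6):
        """Fit one point cluster (list of (x, y)) with a polynomial whose degree
        depends on the cluster size: cubic for n >= N1, quadratic for N2 <= n < N1,
        linear otherwise. Fits x as a function of y when the y-span is larger.
        Returns (coeffs, swapped), where swapped marks an x = f(y) fit.
        """
        pts = np.asarray(cluster, dtype=float)
        n = len(pts)
        degree = 3 if n >= n1 else 2 if n >= n2 else 1
        degree = min(degree, n - 1) if n > 1 else 0          # avoid overfitting tiny clusters
        x, y = pts[:, 0], pts[:, 1]
        swapped = (y.max() - y.min()) > (x.max() - x.min())  # y spans more than x: fit x = f(y)
        if swapped:
            x, y = y, x
        coeffs = np.polyfit(x, y, degree)                    # least-squares polynomial fit
        return coeffs, swapped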
  • Based on the same inventive concept, an embodiment of the present application further provides a drivable area detection device. Since the device corresponds to the method in the embodiments of the present application and solves the problem on a similar principle, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
  • an embodiment of the present application provides a drivable area detection device, which includes:
  • the acquisition module 501 is used to obtain the three-dimensional point cloud of the target detection area using lidar, and use the semantic segmentation network to determine the type label of each point in the three-dimensional point cloud.
  • the type label includes the ground and ground objects;
  • the dividing module 502 is used to divide the above-mentioned three-dimensional point cloud into different sector-shaped areas using the position of the lidar as the origin of the coordinates, and divide each sector-shaped area into different grids according to the distance between the points and the origin;
  • the boundary point determination module 503 is used to perform a grid traversal on each sector area and determine the boundary points of the drivable area of each sector area based on the type tags of the points contained in the grid;
  • the envelope point set determination module 504 is used to perform piecewise fitting on each boundary point, resample the piecewise fitted boundary line according to preset intervals, and determine the envelope point set of the drivable area.
  • the above-mentioned dividing module 502 is used to divide the above-mentioned three-dimensional point cloud into different sector-shaped areas, and divide each sector-shaped area into different grids according to the distance of the point from the origin, including:
  • each point in the above-mentioned sector area is divided into a corresponding grid.
  • the above-mentioned boundary point determination module 503 is used to perform a raster traversal of each sector-shaped area, and determine the boundary points of the drivable area of each sector-shaped area based on the type labels of the points contained in the grid, including:
  • the grids in each sector area are traversed in order from the nearest to the farthest distance from the coordinate origin;
  • the above-mentioned boundary point determination module 503 is used to determine that the height of the above-mentioned grid point satisfies the boundary area condition, including any of the following:
  • the average height of the points in the above grid is greater than the set height threshold
  • the maximum height difference of the points in the above grid is greater than the set height threshold
  • the average height difference between the above grid and the points in the previous grid is greater than the set height threshold
  • the above raster contains points whose type label is ground object.
  • the above-mentioned boundary point determination module 503 is used to determine the boundary points of the drivable area of the current sector area from the grid, including:
  • the above-mentioned boundary point determination module 503 is also used to:
  • the average height of the points in each grid, the maximum height difference of the points, and the average height difference from the points in the previous grid are determined.
  • Optionally, before traversing to a grid whose points have different type labels including ground, or whose type labels include ground object, the boundary point determination module 503 is further configured to:
  • the point in the current fan-shaped area that is closest to the above-mentioned empty grid and whose type label is ground is obtained, and used as the boundary point of the drivable area of the current sector-shaped area;
  • envelope point set determination module 504 is used to perform piecewise fitting of each boundary point, including:
  • the least squares method is used to fit the point clusters into boundary line segments, and according to the principle of connecting end to end, the fitted boundary line segments are synthesized into at least one boundary line.
  • the above-mentioned envelope point set determination module 504 is used to divide each boundary point into multiple point clusters according to the distance and direction between the boundary points, including:
  • the cumulative distance to the first boundary point in the current point cluster is not greater than the first distance threshold
  • the distance from the previous boundary point is not greater than the second distance threshold
  • the difference in angle between the next boundary point and the previous boundary point is within the set range.
  • the above-mentioned envelope point set determination module 504 is used to fit the point clusters into boundary line segments using the least squares method according to the number of boundary points in each point cluster, including:
  • Based on the same inventive concept, an embodiment of the present application further provides a drivable area detection device. Since the device corresponds to the method in the embodiments of the present application and solves the problem on a similar principle, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
  • a device may include at least one processor and at least one memory.
  • the memory stores program code.
  • When the program code is executed by the processor, it causes the processor to perform the steps of the drivable area detection method described above in this specification according to various exemplary embodiments of the present application.
  • This drivable area detection device 600 is described below with reference to FIG. 6 .
  • the device 600 shown in Figure 6 is only an example and should not impose any restrictions on the functions and usage scope of the embodiments of the present application.
  • the device 600 is in the form of a general device.
  • the components of the device 600 may include, but are not limited to: the above-mentioned at least one processor 601, the above-mentioned at least one memory 602, and a bus 603 connecting different system components (including the memory 602 and the processor 601).
  • The memory stores program code which, when executed by the processor, causes the processor to perform the following steps:
  • using lidar to obtain the three-dimensional point cloud of the target detection area, and using a semantic segmentation network to determine the type label of each point in the three-dimensional point cloud, where the type labels include ground and ground objects;
  • using the position of the lidar as the coordinate origin, dividing the three-dimensional point cloud into different sector areas, and dividing each sector area into different grids according to the distance of each point from the origin;
  • performing grid traversal on each sector area, and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids;
  • performing piecewise fitting on the boundary points, and resampling the piecewise-fitted boundary lines at preset intervals to determine the envelope point set of the drivable area.
  • Bus 603 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus structures.
  • Memory 602 may include readable media in the form of volatile memory, such as random access memory (RAM) 6021 and/or cache memory 6022, and may further include read only memory (ROM) 6023.
  • Memory 602 may also include a program/utility 6025 having a set of (at least one) program modules 6024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data. Each of the examples, or some combination thereof, may include the implementation of a network environment.
  • Device 600 may also communicate with one or more external devices 604 (e.g., a keyboard, a pointing device, etc.), with one or more devices that enable a user to interact with device 600, and/or with any device (such as a router or modem) that enables device 600 to communicate with one or more other devices. Such communication may occur through the input/output (I/O) interface 605. In addition, device 600 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) through the network adapter 606, which communicates with the other modules of device 600 over the bus 603.
  • Other hardware and/or software modules may be used in conjunction with device 600, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
  • the above-mentioned processor is used to divide the above-mentioned three-dimensional point cloud into different sector-shaped areas, and divide each sector-shaped area into different grids according to the distance of the point from the origin, including:
  • each point in the above-mentioned sector area is divided into a corresponding grid.
  • the above-mentioned processor is used to perform raster traversal on each sector area, and determine the boundary points of the drivable area of each sector area based on the type labels of the points contained in the raster, including:
  • the grids in each sector area are traversed in order from the nearest to the farthest distance from the coordinate origin;
  • the above processor is used to determine that the height of the point in the above grid satisfies the boundary area condition, including any of the following:
  • the average height of the points in the above grid is greater than the set height threshold
  • the maximum height difference of the points in the above grid is greater than the set height threshold
  • the average height difference between the above grid and the points in the previous grid is greater than the set height threshold
  • the above raster contains points whose type label is ground object.
  • the above processor is used to determine the boundary points of the drivable area of the current sector area from the grid, including:
  • the above processor is also used for:
  • the average height of the points in each grid, the maximum height difference of the points, and the average height difference from the points in the previous grid are determined.
  • Optionally, before the processor traverses to a grid whose points have different type labels including ground, or whose type labels include ground object, the processor is further configured to:
  • the point in the current fan-shaped area that is closest to the above-mentioned empty grid and whose type label is ground is obtained, and used as the boundary point of the drivable area of the current sector-shaped area;
  • the above processor is used to perform piecewise fitting of each boundary point, including:
  • the least squares method is used to fit the point clusters into boundary line segments, and according to the principle of connecting end to end, the fitted boundary line segments are synthesized into at least one boundary line.
  • the above processor is used to divide each boundary point into multiple point clusters based on the distance and direction between the boundary points, including:
  • the cumulative distance to the first boundary point in the current point cluster is not greater than the first distance threshold
  • the distance from the previous boundary point is not greater than the second distance threshold
  • the difference in angle between the next boundary point and the previous boundary point is within the set range.
  • the above processor is used to fit the point clusters into boundary line segments using the least squares method according to the number of boundary points in each point cluster, including:
  • In some possible implementations, various aspects of the drivable area detection method provided by this application can also be implemented in the form of a program product, which includes program code. When the program product runs on a computer device, the program code is used to cause the computer device to execute the steps of the drivable area detection method described above according to various exemplary embodiments of the present application.
  • the Program Product may take the form of one or more readable media in any combination.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • The program product may adopt a portable compact disc read-only memory (CD-ROM), include the program code, and be run on a device; however, the program product of the present application is not limited thereto.
  • a readable storage medium may be any tangible medium containing or storing a program that may be used by or in combination with an instruction execution system, apparatus or device.
  • the readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
  • The program code for performing the operations of the present application can be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's device, partly on the user's device, as a stand-alone software package, partly on the user's device and partly on a remote device, or entirely on the remote device or server.
  • The remote device can be connected to the user device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external device (for example, through the Internet using an Internet service provider).
  • embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment that combines software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The present application provides a drivable area detection method, device and equipment. The method includes: using lidar to obtain a three-dimensional point cloud of a target detection area, and using a semantic segmentation network to determine the type label of each point in the three-dimensional point cloud, the type labels including ground and ground objects; using the position of the lidar as the coordinate origin, dividing the three-dimensional point cloud into different sector areas, and dividing each sector area into different grids according to the distance of each point from the origin; performing grid traversal on each sector area, and determining the boundary points of the drivable area of each sector area based on the type labels of the points contained in the grids; and performing piecewise fitting on the boundary points, and resampling the piecewise-fitted boundary lines at preset intervals to determine the envelope point set of the drivable area. The method effectively solves problems in the prior art such as unstable curb occlusion recognition, poor robustness on undulating roads and insufficient detection accuracy.

Description

一种可行驶区域检测方法、装置及设备
相关申请的交叉引用
本申请要求在2022年05月11日提交中国专利局、申请号为202210511306.8、申请名称为“一种可行驶区域检测方法、装置及设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及自动驾驶感知技术领域,尤其涉及一种可行驶区域检测方法、装置及设备。
背景技术
自动驾驶技术是一个复杂的工程体系,需要各个模块相互配合才能全方位保证驾驶的安全性。其中,可行驶区域(Free Space)检测技术是自动驾驶系统的关键技术之一,可以为路径规划和行为决策等后端模块提供依据。目前相机图像和激光雷达点云是用来检测可行驶区域主要的数据输入源。基于图像来提取可行驶区域方法中,使用的颜色或纹理特征易受光照和天气的干扰,三维信息的缺失也限制了此类算法在不同场景的适应性。而激光雷达能够实时精准地提供丰富的道路环境数据,具有数据维度高、深度信息准确、响应频率快和检测精度高的优点。
目前基于激光点云检测可行驶区域的方法通常为利用激光雷达获取点云后,对所有点云进行地面平面拟合,并划分为多个扇形区域后在各扇形区域根据高度阈值的方法进行可驾驶区域的检测,这种方法并不适用于不平整或者起伏频繁的复杂路况,且整体拟合平面的方式对斜坡等路况的鲁棒性较差;另一种方法为通过计算每个扇状栅格内的全局角度、局部角度、局部高度和距地面高度来判断所属栅格是否为地面栅格,这种方法虽然比较有效解决了道路起伏崎岖等地面情况,但栅格只是可行驶区域的一个粗提取概念,边界 精度较低。另外地面并不能完全当做可行驶区域,比如遇到车辆等障碍物和路沿,尤其是城市结构化道路低矮路沿,这种栅格提取方式有些情况是不能判断的,因此,目前缺乏一种适用性广且检测精度高的可驾驶区域的检测方法。
发明内容
本申请实施例提供了一种可行驶区域检测方法、装置及设备,用于解决现有技术中存在的路沿遮挡识别不稳定、起伏道路鲁棒性差及检测精度不够等问题。
第一方面,本申请实施例提供了一种可行驶区域检测方法,包括:
利用激光雷达获取目标检测区域的三维点云,并利用语义分割网络确定三维点云中每个点的类型标签,上述类型标签包括地面、地面物体;
以激光雷达的位置为坐标原点,将上述三维点云划分为不同的扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格;
对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点;
对各边界点进行分段拟合,并对分段拟合后的边界线按照预设间隔进行重采样,确定可行驶区域的包络点集。
一种可选的实施方式为,将上述三维点云划分为不同的扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格,包括:
将上述激光雷达对准方向确定为参考方向,根据上述三维点云中点相对参考方向的偏差角度,将每个点划分到对应扇形区域中;
根据各扇形区域中每个点与上述坐标原点的距离,将上述扇形区域中每个点划分到对应栅格中。
一种可选的实施方式为,对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点,包括:
按照不同扇形区域相对参考方向的偏差角度,依次对每个扇形区域内的 栅格按照与上述坐标原点的距离由近至远的顺序进行遍历;
遍历至包含类型标签为地面物体的点的栅格,确定上述栅格内点的高度满足边界区域条件时,从栅格中确定出当前扇形区域的可行驶区域的边界点。
一种可选的实施方式为,确定上述栅格内点的高度满足边界区域条件,包括如下任一项:
上述栅格内点的平均高度大于设定高度阈值;
上述栅格内点的最大高度差大于设定高度阈值;
上述栅格与上一个栅格内点的平均高度差大于设定高度阈值;
上述栅格内存在类型标签为地面物体的点。
一种可选的实施方式为,从栅格中确定出当前扇形区域的可行驶区域的边界点,包括:
确定当前栅格中包含类型标签不同的点,将当前栅格中类型标签为地面且相对参考方向的偏差角度最大/小的点确定为当前扇形区域的可行驶区域的边界点;
确定当前栅格中仅包含类型标签为地面物体的点,将当前栅格中类型标签为地面物体且相对参考方向的偏差角度最大/小的点确定为当前扇形区域的可行驶区域的边界点。
一种可选的实施方式为,上述方法还包括:
对目标检测区域的三维点云进行平面拟合,确定目标检测区域内基准地面的位置,并确定各栅格内的每个点基于上述基准地面的相对高度;
根据上述相对高度确定各栅格内点的平均高度、点的最大高度差、与上一个栅格内点的平均高度差。
一种可选的实施方式为,遍历至包含的点的类型标签不同且包含地面或者类型标签包含地面物体的栅格之前,还包括:
确定存在空栅格,判断上述空栅格是否为当前扇形区域内的最后一个栅格;
判定上述空栅格为当前扇形区域内最后一个栅格,根据预设原则在上述 空栅格中创建一个虚拟点作为当前扇形区域的可行驶区域的边界点;
判定上述空栅格非当前扇形区域内最后一个栅格,获取当前扇形区域已遍历的栅格中空栅格的个数;
确定空栅格的个数到达预设个数时,获取当前扇形区域内距离上述空栅格距离最近且类型标签为地面的点,并作为当前扇形区域的可行驶区域的边界点;
未获取到当前扇形区域内距离上述空栅格距离最近且类型标签为地面的点时,根据预设原则在上述空栅格中创建一个虚拟点作为当前扇形区域的可行驶区域的边界点。
一种可选的实施方式为,对各边界点进行分段拟合,包括:
根据各边界点相对参考方向的偏差角度对各边界点进行排序,并依次根据各边界点间的距离与走向将各边界点划分为多个点簇;
根据每个点簇中边界点的数量,利用最小二乘法将点簇拟合为边界线段,并按照首尾相连的原则,将拟合后的各边界线段合成至少一条边界线。
一种可选的实施方式为,依次根据各边界点间的距离与走向将各边界点划分为多个点簇,包括:
将同时满足如下多个条件的边界点划分为同一点簇:
与当前点簇中第一个边界点的累加距离不大于第一距离阈值;
与上一个边界点的距离不大于第二距离阈值;
与下一个边界点和上一个边界点形成角度的差值在设定范围内。
一种可选的实施方式为,根据每个点簇中边界点的数量,利用最小二乘法将点簇拟合为边界线段,包括:
根据每个点簇中边界点的数量,确定与上述边界点的数量对应的多项式拟合方程;
利用最小二乘法确定上述多项式拟合方程中的参数值,得到拟合后的边界线段。
上述方法适用于自动驾驶领域城市道路或非结构化道路等各类场景,可 以有效解决现有技术中存在的路沿遮挡识别不稳定、起伏道路鲁棒性差及检测精度不够等问题。
第二方面,本申请实施例提供了一种可行驶区域检测装置,包括:
获取模块,用于利用激光雷达获取目标检测区域的三维点云,并利用语义分割网络确定三维点云中每个点的类型标签,上述类型标签包括地面、地面物体;
划分模块,用于以激光雷达的位置为坐标原点,将上述三维点云划分为不同的扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格;
边界点确定模块,用于对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点;
包络点集确定模块,用于对各边界点进行分段拟合,并对分段拟合后的边界线按照预设间隔进行重采样,确定可行驶区域的包络点集。
第三方面,本申请实施例提供了一种可行驶区域检测设备,包括:存储器和处理器,上述存储器上存储有可在上述处理器上运行的计算机程序,当上述计算机程序被上述处理器执行时,实现以下步骤:
利用激光雷达获取目标检测区域的三维点云,并利用语义分割网络确定三维点云中每个点的类型标签,上述类型标签包括地面、地面物体;
以激光雷达的位置为坐标原点,将上述三维点云划分为不同的扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格;
对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点;
对各边界点进行分段拟合,并对分段拟合后的边界线按照预设间隔进行重采样,确定可行驶区域的包络点集。
第四方面,本申请实施例提供的一种计算机存储介质,包括:计算机程序指令,当其在计算机上运行时,使得计算机执行上述可行驶区域检测方法中的任一步骤。
另外,上述可行驶区域检测装置、设备及计算机可读存储介质中任一种 实现方式所带来的技术效果可参见上述可行驶区域检测方法中不同实现方式所带来的技术效果,此处不再赘述。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简要介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种可行驶区域检测方法的流程示意图;
图2为本申请实施例提供的一种栅格划分的示意图;
图3为本申请实施例提供的一种包络点集的示意图;
图4为本申请实施例提供的一种边界点确定方法的流程示意图;
图5为本申请实施例提供的一种可行驶区域检测装置的结构示意图;
图6为本申请实施例提供的一种可行驶区域检测设备的结构示意图。
具体实施方式
为了使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明作进一步地详细描述。
本申请实施例描述的应用场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着新应用场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。其中,在本申请的描述中,除非另有说明,“多个”的含义是两个或两个以上。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施 例中所描述的实施方式并不代表与本发明相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本申请保护的范围。
自动驾驶技术是一个复杂的工程体系,需要各个模块相互配合才能全方位保证驾驶的安全性。其中,可行驶区域(Free Space)检测技术是自动驾驶系统的关键技术之一,可以为路径规划和行为决策等后端模块提供依据。目前相机图像和激光雷达点云是用来检测可行驶区域主要的数据输入源。基于图像来提取可行驶区域方法中,使用的颜色或纹理特征易受光照和天气的干扰,三维信息的缺失也限制了此类算法在不同场景的适应性。而激光雷达能够实时精准地提供丰富的道路环境数据,具有数据维度高、深度信息准确、响应频率快和检测精度高的优点。
目前基于激光点云检测可行驶区域的方法通常为利用激光雷达获取点云后,对所有点云进行地面平面拟合,并划分为多个扇形区域后在各扇形区域根据高度阈值的方法进行可驾驶区域的检测,这种方法并不适用于不平整或者起伏频繁的复杂路况,且整体拟合平面的方式对斜坡等的鲁棒性较差;另一种方法为通过计算每个扇状栅格内的全局角度、局部角度、局部高度和距地面高度来判断所属栅格是否为地面栅格,这种方法虽然比较有效解决了道路起伏崎岖等地面情况,但栅格只是可行驶区域的一个粗提取概念,边界精度较低。另外地面并不能完全当做可行驶区域,比如遇到车辆等障碍物和路沿,尤其是城市结构化道路低矮路沿,这种栅格提取方式有些情况是不能判断的。
另外,目前存在的利用残差扩张卷积的编码-解码网络,利用网络模型训练获得网络参数,进而确定可行驶区域,这种方法确定出的可行驶区域准确率较低,且难以实现。
本申请实施例为解决上述问题,在深度学习语义分割网络的基础上提出了一种可行驶区域检测方法,可以有效提取到各种道路场景下较高精度可行 驶包络区域。
图1为本申请实施例提供的一种可行驶区域检测方法的流程示意图,如图1所示,本申请实施例提供一种可行驶区域检测方法,包括:
步骤101,利用激光雷达获取目标检测区域的三维点云,并利用语义分割网络确定三维点云中每个点的类型标签,上述类型标签包括地面、地面物体;
实施中,可以利用安装在自动驾驶车辆上的激光雷达获取目标检测区域的三维点云,同时获取三维点云中每个点以激光雷达为坐标原点的三维坐标x 0,y 0,z 0以及反射强度,其中,x轴正方向为车体前进方向,y轴正方向为向车体左侧,z轴正方向为竖直向上,并且,本申请实施例中对激光雷达在自动驾驶车辆上的安装位置不做限定。
本申请实施例中上述语义分割网络可以为目前自动驾驶领域分割效果较好的RPVNet、Cylinder3D等。
上述地面指不存在障碍物遮挡的车辆可行驶的路面等,地面物体指存在影响车辆行驶的障碍物,如建筑、车辆等,在本申请实施例中上述类型标签可根据实际需求进行划分,例如类型标签还可划分为地面、车道线,目标(车辆、行人),路沿、栅栏、绿植、噪声、建筑等多种类型。
作为一种可选的实施方式,利用语义分割网络确定三维点云中每个点的类型标签后,还包括:
对上述三维点云进行预处理,上述预处理包括ROI(感兴趣区域)过滤、噪声过滤和下采样处理。
具体的,对语义分割后的三维点云进行感兴趣区域(ROI)过滤,同时滤除本车点云,ROI和本车点云过滤范围可根据实际需求以及激光雷达的安装位置确定,例如滤除距离激光雷达距离大于100米的点云;然后对过滤后的三维点云进行噪声过滤和下采样,其中,噪声滤除可采用计算每个点近邻点数量的方式剔除离群点,下采样可以采用基于Voxel Grid(网格栅格)的点云下采样算法。
步骤102,以激光雷达的位置为坐标原点,将上述三维点云划分为不同的 扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格;
作为一种可选的实施方式,上述将三维点云划分为不同的扇形区域,并对各扇形区域按照点距离原点的距离划分为不同栅格,包括:
将上述激光雷达对准方向确定为参考方向,根据上述三维点云中点相对参考方向的偏差角度,将每个点划分到对应扇形区域中;
根据各扇形区域中每个点与上述坐标原点的距离,将上述扇形区域中每个点划分到对应栅格中。
图2为本申请实施例提供的一种栅格划分的示意图,如图2所示,根据三维点云中点相对参考方向的偏差角度以及与上述坐标原点的距离将其划分到对应的栅格中,具体的:
首先利用以下公式确定三维点云中点相对参考方向的偏差角度,并按该偏差角度将三维点云划分为多个扇形区域:
Figure PCTCN2022116601-appb-000001
其中,angle_idx为三维点云中点相对参考方向的偏差角度,angle_res为角度分辨率(单位:度),x、y为以激光雷达为坐标原点的坐标。
然后利用以下公式确定三维点云中点与上述坐标原点的距离,并将上述扇形区域中每个点划分到对应栅格中:
Figure PCTCN2022116601-appb-000002
其中,radius_idx为三维点云中点与上述坐标原点的距离,radius_res为距离分辨率(单位:米),x、y为以激光雷达为坐标原点的坐标。
步骤103,对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点;
实施中,可利用光束法遍历搜索每个栅格,根据每个栅格内三维点云中每个点的类型标签、距地高度等特征进行逻辑判断,最后根据地面走向提取出可行驶区域的边界点。
作为一种可选的实施方式,对各扇形区域进行栅格遍历,基于栅格中包 含的点的类型标签确定各扇形区域的可行驶区域的边界点,包括:
按照不同扇形区域相对参考方向的偏差角度,依次对每个扇形区域内的栅格按照与上述坐标原点的距离由近至远的顺序进行遍历;
遍历至包含类型标签为地面物体的点的栅格,确定上述栅格内点的高度满足边界区域条件时,从栅格中确定出当前扇形区域的可行驶区域的边界点。
步骤104,对各边界点进行分段拟合,并对分段拟合后的边界线按照预设间隔进行重采样,确定可行驶区域的包络点集;
实施中,由于确定的边界点可能出现距离近的点分布密集且存在毛刺点,距离远的点分布稀疏,因此,为了使轨迹规划等下游模块更方便使用可行驶区域信息,需要确定出可行驶区域的均匀包络点集。
另外,由于边界点拟合形成的边界线距离较长且存在较多折角,因此需要进行分段拟合,最后再通过重采样确定出可行驶区域的包络点集,如图3所示。
具体的,提取得到边界点后按相对参考方向的偏差角度顺序排列后进行分段处理,对分段后确定的点簇采用最小二乘法拟合成多项式曲线,再对连续段的多项式曲线进行重采样形成均匀包络点集,从而确定出最终的可行驶区域;其中,可选取0.1~0.5米作为间隔进行重采样。
与现有技术方案相比,本申请实施例中针对路沿遮挡识别不稳定问题,采用了语义分割的点云类别标签,可以有效区分路面、路沿及车辆等;针对起伏道路鲁棒性差问题,根据语义分割结果,使目前大部分的分割网络地面的召回率均超过90%,同时在每个栅格内,还会根据平均高度、高度差、角度等特征信息判断,两者结合使得对于起伏路段有较好的保证作用;针对检测精度不够问题,本申请实施例在栅格内确定边界点,实现了较高精度的提取;另外,本申请实施例对边界点做拟合及重采样,可以直接传输给轨迹规划等下游模块方便使用,具有工程意义。因此,本申请实施例提供的可驾驶区域检测方法适用于自动驾驶领域城市道路或非结构化道路等各类场景,可以有效解决现有技术中存在的路沿遮挡识别不稳定、起伏道路鲁棒性差及检 测精度不够等问题。
以下对上述步骤103中边界点的具体确定方法加以阐述。
作为一种可选的实施方式,对各扇形区域进行栅格遍历,基于栅格中包含的点的类型标签确定各扇形区域的可行驶区域的边界点,包括:
按照不同扇形区域相对参考方向的偏差角度,依次对每个扇形区域内的栅格按照与上述坐标原点的距离由近至远的顺序进行遍历;
遍历至包含类型标签为地面物体的点的栅格,确定上述栅格内点的高度满足边界区域条件时,从栅格中确定出当前扇形区域的可行驶区域的边界点。
作为一种可选的实施方式,确定上述栅格内点的高度满足边界区域条件,包括如下任一项:
上述栅格内点的平均高度大于设定高度阈值;
上述栅格内点的最大高度差大于设定高度阈值;
上述栅格与上一个栅格内点的平均高度差大于设定高度阈值;
上述栅格内存在类型标签为地面物体的点。
上述设定高度阈值的具体数值可根据实际需求进行设定,并且上述平均高度、最大高度差,与上一个栅格内点的平均高度差对应的设定高度阈值可以相同也可以不同。
并且,上述与上一个栅格内点的平均高度差也可以体现为当前栅格与上一个栅格的高度角度差。
在实施中,可以设定上述地面物体包括路沿和障碍物,并将边界区域条件中栅格内存在类型标签为地面物体的点,设定为栅格内存在类型标签为路沿的点。
在实施中,为了便于计算,可以通过平面拟合将三维点云中每个点的高度转化为距离地面的高度。
作为一种可选的实施方式,上述方法还包括:
对目标检测区域的三维点云进行平面拟合,确定目标检测区域内基准地面的位置,并确定各栅格内的每个点基于上述基准地面的相对高度;
根据上述相对高度确定各栅格内点的平均高度、点的最大高度差、与上一个栅格内点的平均高度差。
作为一种可选的实施方式,从栅格中确定出当前扇形区域的可行驶区域的边界点,包括:
确定当前栅格中包含类型标签不同的点,将当前栅格中类型标签为地面且相对参考方向的偏差角度最大/小的点确定为当前扇形区域的可行驶区域的边界点;
确定当前栅格中仅包含类型标签为地面物体的点,将当前栅格中类型标签为地面物体且相对参考方向的偏差角度最大/小的点确定为当前扇形区域的可行驶区域的边界点。
作为一种可选的实施方式,遍历至包含的点的类型标签不同且包含地面或者类型标签包含地面物体的栅格之前,还包括:
确定存在空栅格,判断上述空栅格是否为当前扇形区域内的最后一个栅格;
判定上述空栅格为当前扇形区域内最后一个栅格,根据预设原则在上述空栅格中创建一个虚拟点作为当前扇形区域的可行驶区域的边界点;
判定上述空栅格非当前扇形区域内最后一个栅格,获取当前扇形区域已遍历的栅格中空栅格的个数;
确定空栅格的个数到达预设个数时,获取当前扇形区域内距离上述空栅格距离最近且类型标签为地面的点,并作为当前扇形区域的可行驶区域的边界点;
未获取到当前扇形区域内距离上述空栅格距离最近且类型标签为地面的点时,根据预设原则在上述空栅格中创建一个虚拟点作为当前扇形区域的可行驶区域的边界点。
Fig. 4 is a schematic flowchart of a boundary point determination method provided by an embodiment of the present application. The specific steps of the boundary point determination method are described below with reference to Fig. 4.
Specifically, the sector regions are traversed in turn starting from a specified deviation angle, and for a given angle the grid cells are traversed outward from the origin:
Step 1: if the current cell is empty and is not the last cell of the current sector region, increment the empty counter by 1 and determine the number of empty cells (the empty counter value) in the current sector region;
Step 2: if the empty counter value is less than or equal to the set number, move on to the next cell;
Step 3: if the empty counter value is greater than the set number, search the current sector region for the road point closest to the current cell (a point whose type label is ground, hereinafter referred to as a road point); if one is found, store it in the boundary buffer, otherwise create a virtual point and store it in the boundary buffer;
Step 4: if the condition in step 1 is not satisfied because the current cell is the last cell in the sector region, create a virtual point and store it in the boundary buffer;
Step 5: if the conditions in steps 1 and 4 are both not satisfied (the current cell is not empty), judge whether the cell has only one type label (label); if so, move on to the next cell;
Step 6: if the condition in step 5 is not satisfied, judge whether the number of type labels of the points in the cell is greater than 1 and includes road;
Step 7: if the condition in step 6 is satisfied, perform a height check using the ground-plane equation; if the average height of the points in the cell, or the height difference, or the angle with respect to the previous cell is greater than the corresponding threshold, or the cell contains a point labeled as curb, output the road point with the largest or smallest deviation angle in the cell according to the ground direction and store it in the boundary buffer;
Step 8: if the condition in step 7 is not satisfied, move on to the next cell;
Step 9: if the condition in step 6 is not satisfied (there is more than one type label, none of which is road), perform a height check using the ground-plane equation; if the average height of the points in the cell, or the maximum height difference, or the average-height difference with respect to the previous cell is greater than the corresponding threshold, or the cell contains a point labeled as curb, output the non-road point (a point whose type label is not road) with the largest or smallest angle in the cell according to the ground direction and store it in the boundary buffer;
Step 10: if the condition in step 9 is not satisfied, move on to the next cell.
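The Python sketch below condenses the Fig. 4 flow for a single sector. It reuses meets_boundary_condition() from the earlier sketch; the cell and point data layout, max_empty, the choice of the maximum deviation angle, and the marker dictionary used for virtual points are all assumptions introduced only to make the sketch self-contained.

```python
def sector_boundary_point(cells, max_empty=3):
    """cells: one sector's grid cells ordered near -> far; each cell is a list of point
    dicts {'h': ground-relative height, 'label': type label, 'angle': deviation angle}.
    Returns one boundary point (or a virtual-point marker), or None if none is found."""
    empty_count, prev = 0, None
    for i, cell in enumerate(cells):
        if not cell:                                      # steps 1-4: empty-cell handling
            empty_count += 1
            if i == len(cells) - 1:                       # last cell -> create a virtual point
                return {"label": "virtual", "cell_index": i}
            if empty_count > max_empty:                   # too many empties -> nearest road point
                road = [p for c in cells[:i] for p in c if p["label"] == "road"]
                return road[-1] if road else {"label": "virtual", "cell_index": i}
            continue
        labels = {p["label"] for p in cell}
        summary = {"heights": [p["h"] for p in cell], "labels": labels}
        if len(labels) == 1:                              # step 5: single-label cell -> keep going
            prev = summary
            continue
        if meets_boundary_condition(summary, prev):       # steps 6-9: height / label test
            road = [p for p in cell if p["label"] == "road"]
            pool = road if road else cell                 # road point if present, else non-road
            return max(pool, key=lambda p: p["angle"])    # extreme deviation-angle point
        prev = summary                                    # steps 8 / 10: move to the next cell
    return None
```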
The method of piecewise fitting of the boundary points in step 104 above is described in detail below.
In an optional implementation, performing piecewise fitting on the boundary points includes:
sorting the boundary points according to their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points;
fitting each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
In implementation, dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points includes:
assigning to the same point cluster the boundary points that simultaneously satisfy the following conditions:
the accumulated distance to the first boundary point of the current cluster is not greater than a first distance threshold;
the distance to the previous boundary point is not greater than a second distance threshold;
the difference between the angles formed with the next boundary point and with the previous boundary point is within a set range.
In the embodiments of the present application, when dividing the point clusters, the boundary points are first sorted according to their deviation angles relative to the reference direction, and the point with the smallest deviation angle relative to the reference direction is taken as the first boundary point of the first cluster. Then, starting from the second boundary point, the above conditions are checked one by one; when a boundary point is found not to belong to the first cluster, it is taken as the first boundary point of the second cluster, and the next point continues to be judged in the same way until all boundary points have been processed.
Among the above clustering conditions, the accumulated distance to the first boundary point of the current cluster refers to the sum of the distances between adjacent points from the current boundary point back to the first boundary point of the current cluster after the boundary points have been sorted by their deviation angles relative to the reference direction; the previous/next boundary point refers to the adjacent boundary point ranked before/after the current boundary point after the boundary points have been sorted by their deviation angles relative to the reference direction.
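As a sketch of this clustering rule, the function below walks the angle-sorted boundary points once and opens a new cluster whenever any of the three conditions fails; the thresholds and the heading-change reading of the third condition are assumptions made for illustration.

```python
import math

def split_into_clusters(points, d_cum_max=8.0, d_step_max=1.0, angle_diff_max=30.0):
    """points: boundary points sorted by deviation angle, each a dict {'x': ..., 'y': ...}."""
    if not points:
        return []

    def dist(a, b):
        return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

    def heading(a, b):
        return math.degrees(math.atan2(b["y"] - a["y"], b["x"] - a["x"]))

    clusters, current, cum = [], [points[0]], 0.0
    for prev, cur, nxt in zip(points, points[1:], points[2:] + [None]):
        step = dist(prev, cur)
        if nxt is None:
            turn_ok = True
        else:
            diff = abs(heading(cur, nxt) - heading(prev, cur)) % 360.0
            turn_ok = min(diff, 360.0 - diff) <= angle_diff_max
        if cum + step <= d_cum_max and step <= d_step_max and turn_ok:
            current.append(cur)            # all three conditions hold: extend the current cluster
            cum += step
        else:
            clusters.append(current)       # otherwise the current point starts a new cluster
            current, cum = [cur], 0.0
    clusters.append(current)
    return clusters
```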
In implementation, fitting a point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster includes:
determining, according to the number of boundary points in the cluster, the polynomial fitting equation corresponding to that number of boundary points;
determining the parameter values of the polynomial fitting equation by the least-squares method to obtain the fitted boundary line segment.
Specifically, according to the segmentation result, polynomial equations of different degrees are used to fit point clusters of different sizes. The specific method is as follows:
when the number of points in the cluster is n ≥ N₁: p = 3, f(x) = a₀ + a₁x + a₂x² + a₃x³;
when the number of points in the cluster is N₂ ≤ n < N₁: p = 2, f(x) = a₀ + a₁x + a₂x²;
when the number of points in the cluster is 1 ≤ n < N₂: p = 1, f(x) = a₀ + a₁x;
where N₁ and N₂ are set point counts and p is the degree of the polynomial equation. For any point cluster Pᵢ(xᵢ, yᵢ), i = 1, 2, …, n, where n is the number of points in the cluster, the parameters a₀, …, aₚ can be determined as follows:
The fitting system can be written as X·A = Y, where X is the n×(p+1) matrix whose i-th row is [1, xᵢ, xᵢ², …, xᵢᵖ], A = [a₀, a₁, …, aₚ]ᵀ is the coefficient vector, and Y = [y₁, y₂, …, yₙ]ᵀ.
Letting XA = Y, the least-squares derivation gives the solution A = (XᵀX)⁻¹XᵀY, where A is the matrix formed by the parameters a₀, …, aₚ.
In the piecewise fitting process, a polynomial equation of y with respect to x is generally used for fitting; however, when the span of y is larger than that of x, the polynomial can be changed to a polynomial equation of x with respect to y.
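Combining the degree selection, the least-squares fit and the resampling, a compact sketch could look like the following; n1, n2 and the 0.3 m resampling step are placeholder values, and the x/y swap for steep segments is omitted for brevity.

```python
import numpy as np

def fit_and_resample(clusters, n1=20, n2=5, step=0.3):
    """clusters: list of point clusters, each a list of dicts {'x': ..., 'y': ...}.
    Returns a uniform envelope point list sampled from the fitted polynomials."""
    envelope = []
    for pts in clusters:
        x = np.array([p["x"] for p in pts])
        y = np.array([p["y"] for p in pts])
        degree = 3 if len(pts) >= n1 else 2 if len(pts) >= n2 else 1
        degree = min(degree, max(len(pts) - 1, 0))       # keep the fit well-posed
        coeffs = np.polyfit(x, y, degree)                # least-squares fit, highest power first
        xs = np.arange(x.min(), x.max() + step, step)    # resample at a fixed interval
        envelope.extend((float(xi), float(np.polyval(coeffs, xi))) for xi in xs)
    return envelope
```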
Based on the same disclosed concept, an embodiment of the present application further provides a drivable area detection apparatus. Since this apparatus is the apparatus corresponding to the method in the embodiments of the present application and the principle by which it solves the problem is similar to that of the method, reference may be made to the implementation of the method for the implementation of the apparatus, and repeated descriptions are omitted.
As shown in Fig. 5, an embodiment of the present application provides a drivable area detection apparatus, which includes:
an acquisition module 501, configured to acquire a 3-D point cloud of a target detection area using a lidar, and determine the type label of each point in the 3-D point cloud using a semantic segmentation network, the type labels including ground and ground object;
a division module 502, configured to take the position of the lidar as the coordinate origin, divide the 3-D point cloud into different sector regions, and divide each sector region into different grid cells according to the distance of its points from the origin;
a boundary point determination module 503, configured to traverse the grid cells of each sector region and determine the boundary points of the drivable area of each sector region based on the type labels of the points contained in the cells;
an envelope point set determination module 504, configured to perform piecewise fitting on the boundary points, resample the piecewise-fitted boundary lines at a preset interval, and determine the envelope point set of the drivable area.
Optionally, the division module 502 is configured to divide the 3-D point cloud into different sector regions and divide each sector region into different grid cells according to the distance of its points from the origin by:
determining the direction in which the lidar is aimed as the reference direction, and assigning each point to the corresponding sector region according to the deviation angle of the point relative to the reference direction;
assigning each point in a sector region to the corresponding grid cell according to the distance between the point and the coordinate origin.
Optionally, the boundary point determination module 503 is configured to traverse the grid cells of each sector region and determine the boundary points of the drivable area of each sector region based on the type labels of the points contained in the cells by:
traversing, in order of the sectors' deviation angles relative to the reference direction, the grid cells of each sector region from near to far according to their distance from the coordinate origin;
when a cell containing points whose type label is a ground object is reached and the heights of the points in that cell are determined to satisfy the boundary-region condition, determining a boundary point of the drivable area of the current sector region from that cell.
Optionally, the boundary point determination module 503 is configured to determine that the heights of the points in the cell satisfy the boundary-region condition by any one of the following:
the average height of the points in the cell is greater than a set height threshold;
the maximum height difference of the points in the cell is greater than a set height threshold;
the difference between the average heights of the points in the cell and in the previous cell is greater than a set height threshold;
the cell contains a point whose type label is a ground object.
Optionally, the boundary point determination module 503 is configured to determine a boundary point of the drivable area of the current sector region from a grid cell by:
when it is determined that the current cell contains points with different type labels, determining the point in the current cell whose type label is ground and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region;
when it is determined that the current cell contains only points whose type label is a ground object, determining the point in the current cell whose type label is a ground object and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region.
Optionally, the boundary point determination module 503 is further configured to:
perform plane fitting on the 3-D point cloud of the target detection area, determine the position of the reference ground plane in the target detection area, and determine the relative height of each point in each grid cell with respect to the reference ground plane;
determine, according to the relative heights, the average height of the points in each cell, the maximum height difference of the points, and the average-height difference with respect to the previous cell.
Optionally, before a cell is reached that contains points with different type labels including ground, or whose type labels include a ground object, the boundary point determination module 503 is further configured to:
when an empty cell is found, judge whether the empty cell is the last cell in the current sector region;
if the empty cell is judged to be the last cell in the current sector region, create a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region;
if the empty cell is judged not to be the last cell in the current sector region, obtain the number of empty cells among the cells already traversed in the current sector region;
when the number of empty cells is determined to have reached the preset number, obtain the point in the current sector region that is closest to the empty cell and whose type label is ground, and use it as a boundary point of the drivable area of the current sector region;
when no point that is closest to the empty cell and whose type label is ground can be obtained in the current sector region, create a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region.
Optionally, the envelope point set determination module 504 is configured to perform piecewise fitting on the boundary points by:
sorting the boundary points according to their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points;
fitting each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
Optionally, the envelope point set determination module 504 is configured to divide the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points by:
assigning to the same point cluster the boundary points that simultaneously satisfy the following conditions:
the accumulated distance to the first boundary point of the current cluster is not greater than the first distance threshold;
the distance to the previous boundary point is not greater than the second distance threshold;
the difference between the angles formed with the next boundary point and with the previous boundary point is within the set range.
Optionally, the envelope point set determination module 504 is configured to fit each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster by:
determining, according to the number of boundary points in the cluster, the polynomial fitting equation corresponding to that number of boundary points;
determining the parameter values of the polynomial fitting equation by the least-squares method to obtain the fitted boundary line segment.
Based on the same disclosed concept, an embodiment of the present application further provides a drivable area detection device. Since this device is the device corresponding to the method in the embodiments of the present application and the principle by which it solves the problem is similar to that of the method, reference may be made to the implementation of the method for the implementation of the device, and repeated descriptions are omitted.
Those skilled in the art will understand that the various aspects of the present application may be implemented as a system, a method or a program product. Therefore, the various aspects of the present application may be embodied in the following forms: an entirely hardware implementation, an entirely software implementation (including firmware, microcode and the like), or an implementation combining hardware and software aspects, which may be collectively referred to herein as a "circuit", a "module" or a "system".
In some possible implementations, the device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to execute the steps of the drivable area detection method according to the various exemplary embodiments of the present application described above in this specification.
The drivable area detection device 600 according to the present application is described below with reference to Fig. 6. The device 600 shown in Fig. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 6, the device 600 is embodied in the form of a general-purpose device. The components of the device 600 may include, but are not limited to, the at least one processor 601, the at least one memory 602, and a bus 603 connecting different system components (including the memory 602 and the processor 601), wherein the memory stores program code which, when executed by the processor, causes the processor to execute the following steps:
acquiring a 3-D point cloud of a target detection area using a lidar, and determining the type label of each point in the 3-D point cloud using a semantic segmentation network, the type labels including ground and ground object;
taking the position of the lidar as the coordinate origin, dividing the 3-D point cloud into different sector regions, and dividing each sector region into different grid cells according to the distance of its points from the origin;
traversing the grid cells of each sector region, and determining the boundary points of the drivable area of each sector region based on the type labels of the points contained in the cells;
performing piecewise fitting on the boundary points, resampling the piecewise-fitted boundary lines at a preset interval, and determining the envelope point set of the drivable area.
The bus 603 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus structures.
The memory 602 may include readable media in the form of volatile memory, such as a random access memory (RAM) 6021 and/or a cache memory 6022, and may further include a read-only memory (ROM) 6023.
The memory 602 may also include a program/utility 6025 having a set of (at least one) program modules 6024, such program modules 6024 including but not limited to an operating system, one or more application programs, other program modules and program data; each of these examples or some combination thereof may include an implementation of a network environment.
The device 600 may also communicate with one or more external devices 604 (such as a keyboard, a pointing device, and the like), with one or more devices that enable a user to interact with the device 600, and/or with any device (such as a router, a modem, and the like) that enables the device 600 to communicate with one or more other devices. Such communication may take place via an input/output (I/O) interface 605. Moreover, the device 600 may also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) via a network adapter 606. As shown in the figure, the network adapter 606 communicates with the other modules of the device 600 via the bus 603. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the device 600, including but not limited to microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
Optionally, the processor is configured to divide the 3-D point cloud into different sector regions and divide each sector region into different grid cells according to the distance of its points from the origin by:
determining the direction in which the lidar is aimed as the reference direction, and assigning each point to the corresponding sector region according to the deviation angle of the point relative to the reference direction;
assigning each point in a sector region to the corresponding grid cell according to the distance between the point and the coordinate origin.
Optionally, the processor is configured to traverse the grid cells of each sector region and determine the boundary points of the drivable area of each sector region based on the type labels of the points contained in the cells by:
traversing, in order of the sectors' deviation angles relative to the reference direction, the grid cells of each sector region from near to far according to their distance from the coordinate origin;
when a cell containing points whose type label is a ground object is reached and the heights of the points in that cell are determined to satisfy the boundary-region condition, determining a boundary point of the drivable area of the current sector region from that cell.
Optionally, the processor is configured to determine that the heights of the points in the cell satisfy the boundary-region condition by any one of the following:
the average height of the points in the cell is greater than a set height threshold;
the maximum height difference of the points in the cell is greater than a set height threshold;
the difference between the average heights of the points in the cell and in the previous cell is greater than a set height threshold;
the cell contains a point whose type label is a ground object.
Optionally, the processor is configured to determine a boundary point of the drivable area of the current sector region from a grid cell by:
when it is determined that the current cell contains points with different type labels, determining the point in the current cell whose type label is ground and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region;
when it is determined that the current cell contains only points whose type label is a ground object, determining the point in the current cell whose type label is a ground object and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region.
Optionally, the processor is further configured to:
perform plane fitting on the 3-D point cloud of the target detection area, determine the position of the reference ground plane in the target detection area, and determine the relative height of each point in each grid cell with respect to the reference ground plane;
determine, according to the relative heights, the average height of the points in each cell, the maximum height difference of the points, and the average-height difference with respect to the previous cell.
Optionally, before a cell is reached that contains points with different type labels including ground, or whose type labels include a ground object, the processor is further configured to:
when an empty cell is found, judge whether the empty cell is the last cell in the current sector region;
if the empty cell is judged to be the last cell in the current sector region, create a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region;
if the empty cell is judged not to be the last cell in the current sector region, obtain the number of empty cells among the cells already traversed in the current sector region;
when the number of empty cells is determined to have reached the preset number, obtain the point in the current sector region that is closest to the empty cell and whose type label is ground, and use it as a boundary point of the drivable area of the current sector region;
when no point that is closest to the empty cell and whose type label is ground can be obtained in the current sector region, create a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region.
Optionally, the processor is configured to perform piecewise fitting on the boundary points by:
sorting the boundary points according to their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points;
fitting each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
Optionally, the processor is configured to divide the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points by:
assigning to the same point cluster the boundary points that simultaneously satisfy the following conditions:
the accumulated distance to the first boundary point of the current cluster is not greater than the first distance threshold;
the distance to the previous boundary point is not greater than the second distance threshold;
the difference between the angles formed with the next boundary point and with the previous boundary point is within the set range.
Optionally, the processor is configured to fit each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster by:
determining, according to the number of boundary points in the cluster, the polynomial fitting equation corresponding to that number of boundary points;
determining the parameter values of the polynomial fitting equation by the least-squares method to obtain the fitted boundary line segment.
In some possible implementations, the various aspects of the drivable area detection method provided by the present application may also be implemented in the form of a program product, which includes program code; when the program product runs on a computer device, the program code is used to cause the computer device to execute the steps of the drivable area detection method according to the various exemplary embodiments of the present application described above in this specification.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The program product for monitoring according to the implementations of the present application may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium, and the readable medium may send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device.
The program code contained on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical cable, RF and the like, or any suitable combination of the above.
Program code for carrying out the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device or a server. In cases involving a remote device, the remote device may be connected to the user device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external device (for example, through the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the detailed description above, such division is merely exemplary and not mandatory. In fact, according to the implementations of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided and embodied by multiple units.
Furthermore, although the operations of the method of the present application are described in a particular order in the drawings, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Those skilled in the art will understand that the embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to the embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or the other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or the other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or the other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from the spirit and scope of the present application. Thus, if these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include these changes and variations.

Claims (13)

  1. A drivable area detection method, characterized by comprising:
    acquiring a 3-D point cloud of a target detection area using a lidar, and determining a type label of each point in the 3-D point cloud using a semantic segmentation network, the type labels comprising ground and ground object;
    taking the position of the lidar as a coordinate origin, dividing the 3-D point cloud into different sector regions, and dividing each sector region into different grid cells according to the distance of its points from the origin;
    traversing the grid cells of each sector region, and determining boundary points of a drivable area of each sector region based on the type labels of the points contained in the cells;
    performing piecewise fitting on the boundary points, resampling the piecewise-fitted boundary lines at a preset interval, and determining an envelope point set of the drivable area.
  2. The method according to claim 1, characterized in that dividing the 3-D point cloud into different sector regions and dividing each sector region into different grid cells according to the distance of its points from the origin comprises:
    determining the direction in which the lidar is aimed as a reference direction, and assigning each point to the corresponding sector region according to the deviation angle of the point relative to the reference direction;
    assigning each point in a sector region to the corresponding grid cell according to the distance between the point and the coordinate origin.
  3. The method according to claim 1 or 2, characterized in that traversing the grid cells of each sector region and determining the boundary points of the drivable area of each sector region based on the type labels of the points contained in the cells comprises:
    traversing, in order of the sectors' deviation angles relative to the reference direction, the grid cells of each sector region from near to far according to their distance from the coordinate origin;
    when a cell containing points whose type label is a ground object is reached and the heights of the points in the cell are determined to satisfy a boundary-region condition, determining a boundary point of the drivable area of the current sector region from the cell.
  4. The method according to claim 3, characterized in that determining that the heights of the points in the cell satisfy the boundary-region condition comprises any one of the following:
    the average height of the points in the cell is greater than a set height threshold;
    the maximum height difference of the points in the cell is greater than a set height threshold;
    the difference between the average heights of the points in the cell and in the previous cell is greater than a set height threshold;
    the cell contains a point whose type label is a ground object.
  5. The method according to claim 3, characterized in that determining a boundary point of the drivable area of the current sector region from the cell comprises:
    when it is determined that the current cell contains points with different type labels, determining the point in the current cell whose type label is ground and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region;
    when it is determined that the current cell contains only points whose type label is a ground object, determining the point in the current cell whose type label is a ground object and whose deviation angle relative to the reference direction is the largest/smallest as a boundary point of the drivable area of the current sector region.
  6. The method according to claim 4, characterized by further comprising:
    performing plane fitting on the 3-D point cloud of the target detection area, determining the position of a reference ground plane in the target detection area, and determining the relative height of each point in each grid cell with respect to the reference ground plane;
    determining, according to the relative heights, the average height of the points in each cell, the maximum height difference of the points, and the average-height difference with respect to the previous cell.
  7. The method according to claim 3, characterized in that, before a cell is reached that contains points with different type labels including ground, or whose type labels include a ground object, the method further comprises:
    when an empty cell is found, judging whether the empty cell is the last cell in the current sector region;
    if the empty cell is judged to be the last cell in the current sector region, creating a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region;
    if the empty cell is judged not to be the last cell in the current sector region, obtaining the number of empty cells among the cells already traversed in the current sector region;
    when the number of empty cells is determined to have reached a preset number, obtaining the point in the current sector region that is closest to the empty cell and whose type label is ground, and using it as a boundary point of the drivable area of the current sector region;
    when no point that is closest to the empty cell and whose type label is ground can be obtained in the current sector region, creating a virtual point in the empty cell according to a preset rule as a boundary point of the drivable area of the current sector region.
  8. The method according to claim 1, characterized in that performing piecewise fitting on the boundary points comprises:
    sorting the boundary points according to their deviation angles relative to the reference direction, and dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points;
    fitting each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster, and combining the fitted boundary line segments end to end into at least one boundary line.
  9. The method according to claim 8, characterized in that dividing the boundary points into multiple point clusters one by one according to the distances and the trend between the boundary points comprises:
    assigning to the same point cluster the boundary points that simultaneously satisfy the following conditions:
    the accumulated distance to the first boundary point of the current cluster is not greater than a first distance threshold;
    the distance to the previous boundary point is not greater than a second distance threshold;
    the difference between the angles formed with the next boundary point and with the previous boundary point is within a set range.
  10. The method according to claim 8, characterized in that fitting each point cluster to a boundary line segment by the least-squares method according to the number of boundary points in the cluster comprises:
    determining, according to the number of boundary points in the cluster, a polynomial fitting equation corresponding to the number of boundary points;
    determining the parameter values of the polynomial fitting equation by the least-squares method to obtain the fitted boundary line segment.
  11. A drivable area detection apparatus, characterized by comprising:
    an acquisition module, configured to acquire a 3-D point cloud of a target detection area using a lidar, and determine a type label of each point in the 3-D point cloud using a semantic segmentation network, the type labels comprising ground and ground object;
    a division module, configured to take the position of the lidar as a coordinate origin, divide the 3-D point cloud into different sector regions, and divide each sector region into different grid cells according to the distance of its points from the origin;
    a boundary point determination module, configured to traverse the grid cells of each sector region and determine boundary points of a drivable area of each sector region based on the type labels of the points contained in the cells;
    an envelope point set determination module, configured to perform piecewise fitting on the boundary points, resample the piecewise-fitted boundary lines at a preset interval, and determine an envelope point set of the drivable area.
  12. A drivable area detection device, characterized in that the device comprises a memory and a processor, the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the method according to any one of claims 1 to 10 is implemented.
  13. A computer-readable storage medium, characterized by comprising computer program instructions which, when run on a computer, cause the computer to execute the method according to any one of claims 1 to 10.
PCT/CN2022/116601 2022-05-11 2022-09-01 Drivable area detection method, device and equipment WO2023216470A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210511306.8A CN114842450B (zh) 2022-05-11 2022-05-11 Drivable area detection method, device and equipment
CN202210511306.8 2022-05-11

Publications (1)

Publication Number Publication Date
WO2023216470A1 true WO2023216470A1 (zh) 2023-11-16

Family

ID=82570762

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/116601 WO2023216470A1 (zh) 2022-05-11 2022-09-01 Drivable area detection method, device and equipment

Country Status (2)

Country Link
CN (1) CN114842450B (zh)
WO (1) WO2023216470A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842450B (zh) * 2022-05-11 2023-06-30 合众新能源汽车股份有限公司 一种可行驶区域检测方法、装置及设备
CN116520353B (zh) * 2023-06-29 2023-09-26 广汽埃安新能源汽车股份有限公司 基于激光点云的地面检测方法、装置、存储介质及设备

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102083482B1 (ko) * 2018-12-13 2020-03-02 국민대학교산학협력단 Lidar-based apparatus and method for detecting a vehicle drivable area
CN110320504A (zh) * 2019-07-29 2019-10-11 浙江大学 Unstructured road detection method based on a statistical geometric model of lidar point clouds
CN110569749A (zh) * 2019-08-22 2019-12-13 江苏徐工工程机械研究院有限公司 Method and system for detecting boundary lines and drivable areas of mine roads
CN111208533A (zh) * 2020-01-09 2020-05-29 上海工程技术大学 Real-time ground detection method based on lidar
KR102306083B1 (ko) * 2021-02-03 2021-09-29 국방과학연구소 Apparatus and method for identifying a vehicle's drivable area using images and lidar
CN113792707A (zh) * 2021-11-10 2021-12-14 北京中科慧眼科技有限公司 Terrain environment detection method and system based on a binocular stereo camera, and intelligent terminal
CN114842450A (zh) * 2022-05-11 2022-08-02 合众新能源汽车有限公司 Drivable area detection method, device and equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351449A (zh) * 2023-12-04 2024-01-05 上海几何伙伴智能驾驶有限公司 Road passable area boundary optimization method based on polar-coordinate weighting
CN117351449B (zh) * 2023-12-04 2024-02-09 上海几何伙伴智能驾驶有限公司 Road passable area boundary optimization method based on polar-coordinate weighting
CN117491983A (zh) * 2024-01-02 2024-02-02 上海几何伙伴智能驾驶有限公司 Method for obtaining passable area boundaries and determining the relative positions of targets
CN117491983B (zh) * 2024-01-02 2024-03-08 上海几何伙伴智能驾驶有限公司 Method for obtaining passable area boundaries and determining the relative positions of targets
CN118038027A (zh) * 2024-04-11 2024-05-14 深圳市木牛机器人科技有限公司 Point-of-interest detection method for a flatbed transport vehicle

Also Published As

Publication number Publication date
CN114842450B (zh) 2023-06-30
CN114842450A (zh) 2022-08-02

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22941411

Country of ref document: EP

Kind code of ref document: A1