CN115330969A - Local static environment vectorization description method for ground unmanned vehicle - Google Patents
- Publication number: CN115330969A
- Application number: CN202211250177.8A
- Authority
- CN
- China
- Prior art keywords
- ground
- obstacle
- grid map
- point cloud
- unmanned vehicle
- Prior art date
- Legal status (assumption, not a legal conclusion): Pending
Classifications
- G06T17/05 — Geographic models (under G06T17/00, Three dimensional [3D] modelling; G—Physics, G06—Computing, G06T—Image data processing or generation, in general)
- G06T5/70 — Denoising; Smoothing (under G06T5/00, Image enhancement or restoration)
- G06T7/0002 — Inspection of images, e.g. flaw detection (under G06T7/00, Image analysis)
- G06T7/11 — Region-based segmentation (under G06T7/10, Segmentation; Edge detection)
Abstract
The invention discloses a local static environment vectorization description method for a ground unmanned vehicle. First, the attribute of each laser point and its ground clearance are obtained by a ground segmentation method; then a two-dimensional obstacle point cloud is generated by combining the ground segmentation result with real-time pose information; a local grid map is generated and maintained from the two-dimensional obstacle point cloud and the real-time pose information; a binary image describing occupancy information is obtained through binarization and morphological operations; finally, the static environment is divided into several convex polygons through obstacle edge extraction and convex polygon vectorization segmentation. The method has simple, clear logic and a flexible implementation scheme; it robustly detects static obstacles and describes the environment compactly, overcoming the large memory footprint and increased computational cost of the traditional occupancy grid map.
Description
Technical Field
The invention relates to the field of environment perception for ground unmanned vehicles, and in particular to a local static environment vectorization description method for a ground unmanned vehicle.
Background
Ground unmanned vehicles are an important current research field involving many application directions, including autonomous driving, intelligent inspection, and military security. In practical applications, robust obstacle detection based on various exteroceptive sensors is an indispensable capability of an unmanned ground platform executing complex tasks. Obstacles can generally be classified into two types according to their dynamic or static nature. For the former, mainly vehicles, pedestrians, and bicycles, dynamic obstacle detection has made important progress in recent years, especially through object detection methods based on deep learning. For static obstacles in complex environments, on the other hand, many scholars, inspired by the robotics field, have proposed various general obstacle detection methods.
Although the above methods have made great progress in the field of ground unmanned vehicles, robust perception of local static environments is not yet mature. In particular, for the varied static obstacles found in complex environments, methods based on deep learning cannot achieve robust detection. General obstacle detection methods based on occupancy grid maps can overcome this challenge, but, given the requirements of unmanned vehicle motion planning, the traditional occupancy grid map suffers from a large memory footprint and increased computational cost.
Disclosure of Invention
The object of the invention is to provide, in view of the deficiencies of the prior art, a local static environment vectorization description method for a ground unmanned vehicle.
This object is achieved by the following technical scheme: a local static environment vectorization description method for a ground unmanned vehicle, comprising the following steps:
(1) processing the laser point cloud obtained by a multi-line laser sensor with ground segmentation to obtain the attribute of each laser point in the current laser scanning frame and its ground clearance;
(2) extracting the obstacle point cloud according to the attribute and ground clearance of each laser point obtained in step (1), and converting the three-dimensional obstacle point cloud into a two-dimensional obstacle point cloud;
(3) acquiring the pose of the ground unmanned vehicle from the point cloud timestamp, and constructing a local probability grid map from the pose and the two-dimensional obstacle point cloud obtained in step (2);
(4) obtaining a binary image describing occupancy information through binarization and morphological operations on the probability grid map obtained in step (3);
(5) extracting obstacle outlines and performing convex polygon segmentation on the binary image obtained in step (4).
Further, the step (1) comprises the following sub-steps:
(1.1) projecting all laser points into a sector grid map according to a given angular resolution α and range resolution β;
(1.2) determining the laser point with the minimum z value in each sector grid, and completing ground extraction by a region growing method to obtain the ground expression;
(1.3) assigning each laser point a ground or non-ground attribute by computing the height difference between the point and the ground expression of its region, and obtaining the ground clearance of the laser point.
Further, the step (2) includes the following sub-steps:
(2.1) removing ground points and high-altitude points from the point cloud obtained by ground segmentation, according to the ground clearance, to obtain an effective three-dimensional obstacle point cloud;
(2.2) generating ⌈360°/γ⌉ obstacle point queues according to the angular resolution γ inherent to the laser sensor, where ⌈·⌉ denotes rounding up, and sequentially storing all obstacle points into the corresponding queues;
(2.3) finally, selecting the nearest obstacle point in each queue by a sorting algorithm to obtain the final two-dimensional obstacle point cloud.
Further, the step (3) includes the following sub-steps:
(3.1) in the initialization stage of the probability grid map, determining the map center position from the current vehicle body center as origin and a given offset, and updating the probabilities of unoccupied and occupied areas through a ray tracing model;
(3.2) in the maintenance stage of the local probability grid map, first obtaining the current vehicle body center coordinates and the center position of the local grid map at the previous moment, then translating the map while retaining the overlapping area, and finally updating the unoccupied and occupied areas respectively through the ray tracing model.
Further, the step (4) comprises the following sub-steps:
(4.1) converting the probability grid map into a binary map through a binarization operation;
(4.2) closing regions containing gaps with the image morphological closing operation, which also effectively removes noise present in the original grid map.
Further, the step (5) comprises the following sub-steps:
(5.1) acquiring the outline edge information of occupied areas by an edge extraction method, and smoothing the obstacle outline edges with the IEPF algorithm;
(5.2) judging whether each obstacle outline is a concave polygon;
(5.3) if it is concave, further dividing the obstacle outline, splitting the current concave polygon into several convex polygons using a vector method from computer graphics.
The technical scheme of the invention is summarized as follows: the invention provides a robust and compact vectorization description method for the local static environment of a ground unmanned vehicle. First, point attributes and ground clearance are obtained by ground segmentation, and a two-dimensional obstacle point cloud is further generated. Then the advantage of the probability grid map in multi-frame fusion is combined with the advantage of convex polygons in compactly describing the environment, yielding a novel general obstacle detection method for local static environments. The invention thus provides a feasible solution for general obstacle detection in the field of ground unmanned vehicles.
The beneficial effects of the invention are: the proposed local static environment description method has simple, clear algorithm logic and a flexible implementation scheme, and achieves both robust detection of static obstacles and compact description of the environment. It exploits the advantage of the probability grid map in multi-frame fusion to detect static obstacles robustly, while using convex polygons as vectorization elements to describe the local static environment efficiently and compactly, thereby overcoming the large memory footprint and increased computational cost of the traditional occupancy grid map.
Drawings
Fig. 1 is a flowchart of vectorization of a local static environment in the present invention.
FIG. 2 is a schematic diagram showing a translation of a probability grid map in accordance with the present invention.
Fig. 3 shows static obstacle detection results for a typical case of the present invention, in which (a) is the original laser point cloud, (b) is the occupancy grid map, and (c) is the static obstacle detection result.
Fig. 4 is a vectorization description result diagram showing a local static environment in an uphill and downhill scene in the present invention, where (a) is an uphill environment and (b) is a downhill environment.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, and are not limiting.
The invention provides a local static environment vectorization description method for a ground unmanned vehicle, addressing the facts that related work on general obstacle detection in the current unmanned vehicle field is scarce and that most existing description methods are concentrated on grid maps.
The invention discloses a local static environment vectorization description method for a ground unmanned vehicle, which has the overall implementation flow as shown in figure 1 and comprises the following steps:
(1) Processing the laser point cloud obtained by a multi-line laser sensor with ground segmentation to obtain the attribute of each laser point in the current laser scanning frame and its ground clearance; this specifically comprises the following sub-steps:
(1.1) projecting all laser points into a sector grid map according to a given angular resolution α and range resolution β;
the sector grid map can divide the free space around the ground unmanned vehicle into a certain number of grids, and based on the grids, the ground division process can be independent of a huge number of point clouds, so that the ground extraction efficiency is further improved; on the other hand, the information fusion of a plurality of point clouds in the same grid enables the ground extraction to be more robust.
(1.2) determining the laser point with the minimum z value within each sector grid, and completing ground extraction by a region growing method to obtain the ground expression;
Considering the prior that ground points have the minimum height, for the laser points falling into the same grid only the one with the minimum z value is retained for information fusion, realizing robust ground extraction. Then, for consecutive grids along a specified direction, ground extraction is completed by a line extraction method based on region growing; the resulting ground expression is h = k·r + b, where h and r are the ground height and the laser point range respectively, and k and b are the line parameters;
(1.3) assigning each laser point a ground or non-ground attribute by computing the height difference between the point and the ground expression of the region where it lies, and obtaining its ground clearance;
Substituting the coordinates of a laser point into the expression of its local ground yields the point's ground clearance. A ground clearance threshold Δh is set: if a point's clearance is smaller than Δh, the point is given the ground attribute; otherwise it is given the non-ground attribute, and its ground clearance information is retained;
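As an illustrative sketch (not the patent's exact implementation), the sector-grid projection and ground-clearance labeling of step (1) can be written in Python. Function and parameter names are assumptions, and the region-growing line fit of sub-step (1.2) is simplified here to a per-cell lowest-point ground estimate:

```python
import math

def label_ground_points(points, alpha=2.0, beta=0.5, dh=0.2):
    """Project points into a sector (polar) grid, take the lowest-z point
    per cell as the local ground estimate, and label every point by its
    ground clearance. points: list of (x, y, z); alpha: angular resolution
    in degrees; beta: range resolution in metres; dh: clearance threshold.
    Returns a list of (is_ground, clearance) tuples, one per input point.
    All names and default values are illustrative, not from the patent."""
    def cell_of(x, y):
        ang = math.degrees(math.atan2(y, x)) % 360.0
        rng = math.hypot(x, y)
        return (int(ang // alpha), int(rng // beta))

    # Group point indices by sector-grid cell.
    cells = {}
    for i, (x, y, z) in enumerate(points):
        cells.setdefault(cell_of(x, y), []).append(i)

    # Ground seed per cell: the point with the minimum z value.
    ground_z = {k: min(points[i][2] for i in idx) for k, idx in cells.items()}

    labels = []
    for x, y, z in points:
        clearance = z - ground_z[cell_of(x, y)]  # height above local ground
        labels.append((clearance < dh, clearance))
    return labels
```

A point well above the lowest point of its cell is labeled non-ground while still carrying its clearance value, matching the attribute-plus-clearance output described in sub-step (1.3).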
(2) Extracting the obstacle point cloud according to the attribute and ground clearance of each laser point obtained in step (1), and converting the three-dimensional obstacle point cloud into a two-dimensional obstacle point cloud; this specifically comprises the following sub-steps:
(2.1) removing ground points and high-altitude points from the point cloud obtained by ground segmentation, according to the ground clearance, to obtain an effective three-dimensional obstacle point cloud;
the high-altitude obstacles, such as cross beams, street lamps and the like, are used as non-ground information and do not influence the normal running of the ground unmanned vehicle. Therefore, in the process of generating the effective three-dimensional obstacle point cloud, firstly, a height threshold value delta h of a high altitude point is given, which is generally slightly larger than the height of the ground unmanned vehicle, and if the height of the current laser point from the ground is larger than the delta h, the laser point is taken as the high altitude point and needs to be removed;
(2.2) generating ⌈360°/γ⌉ obstacle point queues according to the angular resolution γ inherent to the laser sensor, where ⌈·⌉ denotes rounding up, and sequentially storing all obstacle points into the corresponding queues;
An obstacle point p is expressed by its planar coordinates (x, y); its horizontal deflection angle is computed from the angle formula θ = arctan(y/x), resolved to [0°, 360°) by quadrant, and the point is stored into the corresponding queue according to this angle value;
(2.3) finally, selecting the nearest obstacle point in each queue by a sorting algorithm to obtain the final two-dimensional obstacle point cloud.
For a ground unmanned vehicle whose workspace is a two-dimensional plane, retaining reliable two-dimensional obstacle information suffices to express the surrounding drivable space. Therefore, the nearest obstacle point in each queue is retained by traversal, reducing the dimension of the obstacle point cloud and improving subsequent processing efficiency.
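The queue generation of sub-step (2.2) and the nearest-point selection of sub-step (2.3) can be sketched as follows. Names and the value of γ are illustrative assumptions; a single minimum-tracking pass stands in for the sorting algorithm, since only the closest point per queue is needed:

```python
import math

def to_2d_obstacles(points, gamma=0.2):
    """Bin 3D obstacle points (x, y, z) into ceil(360/gamma) angular queues
    and keep only the nearest point per queue, yielding the 2D obstacle
    point cloud. gamma is the sensor's horizontal angular resolution in
    degrees (0.2 is an illustrative value, not from the patent)."""
    n_queues = math.ceil(360.0 / gamma)
    nearest = [None] * n_queues   # per queue: (range, x, y) of the closest hit
    for x, y, z in points:
        ang = math.degrees(math.atan2(y, x)) % 360.0  # horizontal deflection angle
        q = min(int(ang // gamma), n_queues - 1)
        rng = math.hypot(x, y)
        if nearest[q] is None or rng < nearest[q][0]:
            nearest[q] = (rng, x, y)
    # Drop z: project the surviving nearest points onto the plane.
    return [(item[1], item[2]) for item in nearest if item is not None]
```

Two points sharing the same bearing collapse to the closer one, which is exactly the dimension reduction the paragraph above motivates.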
(3) Acquiring the pose of the ground unmanned vehicle from the point cloud timestamp, and constructing a local probability grid map from the pose and the two-dimensional obstacle point cloud obtained in step (2); this specifically comprises the following sub-steps:
(3.1) in the initialization stage of the probability grid map, determining the map center position from the current vehicle body center as origin and a given offset, and updating the probabilities of unoccupied and occupied areas through a ray tracing model;
In this embodiment, the probability grid map is updated with the log odds representation commonly used in robotics. For a Bayes filter, updating probability information in log odds form is computationally elegant and avoids the truncation problems that occur when probabilities approach 0 or 1.
(3.2) in the maintenance stage of the local probability grid map, first obtaining the current vehicle body center coordinates and the center position of the local grid map at the previous moment, then translating the map while retaining the overlapping area, and finally updating the unoccupied and occupied areas respectively through the ray tracing model.
For unmanned ground vehicles operating in large outdoor environments, maintaining a global grid map of the surroundings is prohibitively expensive given limited memory. Therefore, as the vehicle moves, the invention maintains local static environment information only within a fixed range, avoiding the loss of map information while keeping map maintenance efficient; the specific map translation process is shown in Fig. 2.
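A minimal sketch of the log odds update described above, under assumed increment and clamping constants; simple straight-line stepping stands in for the full ray tracing model, and the grid is a dictionary keyed by integer cell coordinates:

```python
import math

L_OCC, L_FREE = 0.85, -0.4   # illustrative log-odds increments (assumptions)
L_MIN, L_MAX = -2.0, 3.5     # clamping bounds to avoid saturation

def update_ray(grid, origin, hit):
    """Update a dict-based log-odds grid along one ray: cells between the
    sensor origin and the obstacle hit become freer, the hit cell more
    occupied. origin and hit are distinct integer cell coordinates."""
    (x0, y0), (x1, y1) = origin, hit
    steps = max(abs(x1 - x0), abs(y1 - y0))
    for s in range(steps):                       # free cells before the hit
        cx = x0 + round(s * (x1 - x0) / steps)
        cy = y0 + round(s * (y1 - y0) / steps)
        grid[(cx, cy)] = max(L_MIN, grid.get((cx, cy), 0.0) + L_FREE)
    grid[(x1, y1)] = min(L_MAX, grid.get((x1, y1), 0.0) + L_OCC)

def occupancy(grid, cell):
    """Recover probability from log odds: p = 1 - 1/(1 + exp(l)).
    Unobserved cells have l = 0, i.e. p = 0.5."""
    return 1.0 - 1.0 / (1.0 + math.exp(grid.get(cell, 0.0)))
```

Because the update is additive in log odds, repeated observations accumulate evidence without the truncation issues of multiplying raw probabilities near 0 or 1.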
(4) Obtaining a binary image describing occupancy information through binarization and morphological operations on the probability grid map obtained in step (3); this specifically comprises the following sub-steps:
(4.1) converting the probability grid map into a binary map through a binarization operation;
Given an occupancy probability threshold, the probability grid map is traversed; a grid whose occupancy probability exceeds the threshold is marked with state 1, otherwise with state 0, thereby converting the probability grid map into a binary map;
(4.2) closing regions containing gaps with the image morphological closing operation, which also effectively removes noise present in the original grid map.
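Sub-steps (4.1) and (4.2) can be sketched in pure Python as follows; the threshold value and the 3x3 structuring element are assumptions, and border cells use a truncated neighborhood:

```python
def binarize(prob_grid, p_th=0.6):
    """Threshold a 2D list of occupancy probabilities into a 0/1 map.
    p_th = 0.6 is an illustrative threshold, not from the patent."""
    return [[1 if p > p_th else 0 for p in row] for row in prob_grid]

def closing(img):
    """Morphological closing (dilation followed by erosion) with a 3x3
    neighborhood, filling narrow gaps in occupied regions of a 0/1 map."""
    def apply_3x3(src, combine):
        h, w = len(src), len(src[0])
        out = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                vals = [src[a][b]
                        for a in range(max(0, i - 1), min(h, i + 2))
                        for b in range(max(0, j - 1), min(w, j + 2))]
                out[i][j] = combine(vals)
        return out
    dilated = apply_3x3(img, lambda v: 1 if any(v) else 0)   # dilation
    return apply_3x3(dilated, lambda v: 1 if all(v) else 0)  # erosion
```

A one-cell gap between two occupied cells is filled by the dilation and survives the erosion, which is the gap-closing behavior sub-step (4.2) relies on.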
(5) Extracting obstacle outlines and performing convex polygon segmentation on the binary image obtained in step (4); this specifically comprises the following sub-steps:
(5.1) acquiring outline edge information of an occupied area by an edge extraction method, and smoothing the outline edge of the obstacle by using an IEPF (Iterative End Point Fit) algorithm;
(5.2) judging whether each obstacle outline is a concave polygon;
The following strategy can be adopted for concave polygon discrimination: for vertices taken in clockwise order along the obstacle outline, the cross products of consecutive edge vectors are examined; if any cross product has a sign opposite to the others, i.e. a reflex vertex exists, the polygon is judged to be concave and is further divided;
(5.3) if it is concave, further dividing the obstacle outline, splitting the current concave polygon into several convex polygons using a vector method from computer graphics.
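The concavity judgment of sub-step (5.2) can be sketched with the standard cross-product test; since the patent text does not spell out its exact criterion, this formulation is an assumption:

```python
def is_concave(polygon):
    """Return True if a simple polygon (list of (x, y) vertices in one
    consistent winding order) has a reflex vertex, i.e. the z components
    of the cross products of consecutive edge vectors do not all share
    the same sign."""
    n = len(polygon)
    signs = set()
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        # z component of (b - a) x (c - b)
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) > 1   # mixed signs imply at least one reflex vertex
```

A rectangle yields cross products of a single sign (convex), while an arrow-shaped outline yields mixed signs and would be passed on to the convex polygon division of sub-step (5.3).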
In summary, the method first obtains high-frequency pose information through pose interpolation based on laser point cloud timestamps and IMU information; then obtains point attributes and ground clearance via ground segmentation and further generates the two-dimensional obstacle point cloud; and finally combines the probability grid map and convex polygons to complete the vectorized description of the local static environment. Fig. 3 illustrates the obstacle detection effect of the invention on a typical case where deep learning fails; Fig. 4 shows the vectorization effect of the invention for a local static environment.
The above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (6)
1. A local static environment vectorization description method for a ground unmanned vehicle is characterized by comprising the following steps:
(1) processing the laser point cloud obtained by a multi-line laser sensor with ground segmentation to obtain the attribute of each laser point in the current laser scanning frame and its ground clearance;
(2) extracting the obstacle point cloud according to the attribute and ground clearance of each laser point obtained in step (1), and converting the three-dimensional obstacle point cloud into a two-dimensional obstacle point cloud;
(3) acquiring the pose of the ground unmanned vehicle from the point cloud timestamp, and constructing a local probability grid map from the pose and the two-dimensional obstacle point cloud obtained in step (2);
(4) obtaining a binary image describing occupancy information through binarization and morphological operations on the probability grid map obtained in step (3);
(5) extracting obstacle outlines and performing convex polygon segmentation on the binary image obtained in step (4).
2. The local static environment vectorization description method for a ground unmanned vehicle according to claim 1, characterized in that said step (1) comprises the following sub-steps:
(1.1) projecting all laser points into a sector grid map according to a given angular resolution α and range resolution β;
(1.2) determining the laser point with the minimum z value in each sector grid, and completing ground extraction by a region growing method to obtain the ground expression;
(1.3) assigning each laser point a ground or non-ground attribute by computing the height difference between the point and the ground expression of its region, and obtaining the ground clearance of the laser point.
3. The local static environment vectorization description method for a ground unmanned vehicle according to claim 1, wherein said step (2) comprises the sub-steps of:
(2.1) removing ground points and high-altitude points from the point cloud obtained by ground segmentation, according to the ground clearance, to obtain an effective three-dimensional obstacle point cloud;
(2.2) generating ⌈360°/γ⌉ obstacle point queues according to the angular resolution γ inherent to the laser sensor, where ⌈·⌉ denotes rounding up, and sequentially storing all obstacle points into the corresponding queues;
(2.3) finally, selecting the nearest obstacle point in each queue by a sorting algorithm to obtain the final two-dimensional obstacle point cloud.
4. The local static environment vectorization description method for a ground unmanned vehicle according to claim 1, characterized in that said step (3) comprises the following sub-steps:
(3.1) in the initialization stage of the probability grid map, determining the map center position from the current vehicle body center as origin and a given offset, and updating the probabilities of unoccupied and occupied areas through a ray tracing model;
(3.2) in the maintenance stage of the local probability grid map, first obtaining the current vehicle body center coordinates and the center position of the local grid map at the previous moment, then translating the map while retaining the overlapping area, and finally updating the unoccupied and occupied areas respectively through the ray tracing model.
5. The local static environment vectorization description method for a ground unmanned vehicle according to claim 1, characterized in that said step (4) comprises the following sub-steps:
(4.1) converting the probability grid map into a binary map through a binarization operation;
(4.2) closing regions containing gaps with the image morphological closing operation, which also effectively removes noise present in the original grid map.
6. The local static environment vectorization description method for a ground unmanned vehicle according to claim 1, characterized in that said step (5) comprises the following sub-steps:
(5.1) acquiring the outline edge information of occupied areas by an edge extraction method, and smoothing the obstacle outline edges with the IEPF algorithm;
(5.2) judging whether each obstacle outline is a concave polygon; if it is concave, further dividing the obstacle outline, splitting the current concave polygon into several convex polygons using a vector method from computer graphics; otherwise, performing no operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211250177.8A CN115330969A (en) | 2022-10-12 | 2022-10-12 | Local static environment vectorization description method for ground unmanned vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115330969A (en) | 2022-11-11
Family
ID=83914552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211250177.8A Pending CN115330969A (en) | 2022-10-12 | 2022-10-12 | Local static environment vectorization description method for ground unmanned vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115330969A (en) |
2022-10-12: application CN202211250177.8A filed in China (CN); status: Pending.
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110749895A (en) * | 2019-12-23 | 2020-02-04 | Guangzhou Saite Intelligent Technology Co., Ltd. | Positioning method based on laser radar point cloud data |
CN111507973A (en) * | 2020-04-20 | 2020-08-07 | Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. | Target detection method and device, electronic equipment and storage medium |
US20210365697A1 (en) * | 2020-05-20 | 2021-11-25 | Toyota Research Institute, Inc. | System and method for generating feature space data |
CN112543938A (en) * | 2020-09-29 | 2021-03-23 | Huawei Technologies Co., Ltd. | Method and device for generating an occupancy grid map |
WO2022067534A1 (en) * | 2020-09-29 | 2022-04-07 | Huawei Technologies Co., Ltd. | Occupancy grid map generation method and device |
WO2022111723A1 (en) * | 2020-11-30 | 2022-06-02 | Shenzhen Pudu Technology Co., Ltd. | Road edge detection method and robot |
CN114219905A (en) * | 2021-11-12 | 2022-03-22 | Shenzhen UBTECH Technology Co., Ltd. | Map construction method and device, terminal equipment and storage medium |
CN114638934A (en) * | 2022-03-07 | 2022-06-17 | Nanjing University of Aeronautics and Astronautics | Post-processing method for dynamic obstacles in 3D laser SLAM mapping |
CN114758063A (en) * | 2022-03-18 | 2022-07-15 | Institute of Computing Technology, Chinese Academy of Sciences | Local obstacle grid map construction method and system based on octree structure |
CN115016507A (en) * | 2022-07-27 | 2022-09-06 | Shenzhen Pudu Technology Co., Ltd. | Mapping method, positioning method, device, robot and storage medium |
Non-Patent Citations (3)
Title |
---|
Haiming Gao: "CVR-LSE: Compact Vectorization Representation of Local Static Environments for Unmanned Ground Vehicles", arXiv * |
Wang Can et al.: "Road Boundary Extraction and Obstacle Detection Algorithm Based on 3D Lidar", Pattern Recognition and Artificial Intelligence * |
Xie Desheng et al.: "Obstacle Detection and Tracking for Unmanned Vehicles Based on 3D Lidar", Automotive Engineering * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115903853A (en) * | 2023-01-06 | 2023-04-04 | Beijing Institute of Technology | Safe feasible region generation method and system based on effective obstacles |
CN115903853B (en) * | 2023-01-06 | 2023-05-30 | Beijing Institute of Technology | Safe feasible region generation method and system based on effective obstacles |
CN116434183A (en) * | 2023-03-08 | 2023-07-14 | Zhejiang Lab | Road static environment description method based on multi-point-cloud collaborative fusion |
CN116434183B (en) * | 2023-03-08 | 2023-11-14 | Zhejiang Lab | Road static environment description method based on multi-point-cloud collaborative fusion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111583369B (en) | Laser SLAM method based on plane, line and corner point feature extraction | |
CN115330969A (en) | Local static environment vectorization description method for ground unmanned vehicle | |
CN111598916A (en) | Preparation method of indoor occupancy grid map based on RGB-D information | |
CN112258618A (en) | Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map | |
Broggi et al. | A full-3D voxel-based dynamic obstacle detection for urban scenario using stereo vision | |
CN112613378B (en) | 3D target detection method, system, medium and terminal | |
CN106199558A (en) | Rapid obstacle detection method | |
CN112801022A (en) | Method for rapidly detecting and updating road boundary of unmanned mine card operation area | |
CN113345008B (en) | Laser radar dynamic obstacle detection method considering wheel type robot position and posture estimation | |
CN113902860A (en) | Multi-scale static map construction method based on multi-line laser radar point cloud | |
Han et al. | Urban scene LOD vectorized modeling from photogrammetry meshes | |
Ouyang et al. | A cgans-based scene reconstruction model using lidar point cloud | |
CN114066773A (en) | Dynamic object removal method based on point cloud characteristics and Monte Carlo expansion method | |
CN117419738A (en) | Path planning method and system based on visibility graph and D* Lite algorithm | |
Zhang et al. | Accurate real-time SLAM based on two-step registration and multimodal loop detection | |
Lei et al. | Automatic identification of street trees with improved RandLA-Net and accurate calculation of shading area with density-based iterative α-shape | |
CN116993750A (en) | Laser radar SLAM method based on multi-mode structure semantic features | |
CN114353779B (en) | Method for rapidly updating robot local cost map by adopting point cloud projection | |
Han et al. | GardenMap: Static point cloud mapping for Garden environment | |
CN115641346A (en) | Method for rapidly extracting ground point cloud of laser radar | |
CN112348950B (en) | Topological map node generation method based on laser point cloud distribution characteristics | |
CN114926637A (en) | Garden map construction method based on multi-scale distance map and point cloud semantic segmentation | |
CN114202567A (en) | Point cloud processing obstacle avoidance method based on vision | |
CN113671511A (en) | Laser radar high-precision positioning method for regional scene | |
CN113420460A (en) | Urban building height limit rapid analysis method and system based on OSG data skyline |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20221111 ||