CN117419738A - Path planning method and system based on visibility graph and D* Lite algorithm - Google Patents

Path planning method and system based on visibility graph and D* Lite algorithm

Info

Publication number
CN117419738A
CN117419738A (application CN202311427749.XA)
Authority
CN
China
Prior art keywords
point
points
path
current
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311427749.XA
Other languages
Chinese (zh)
Inventor
周乐来
曹路阳
李贻斌
马昕
田新诚
宋锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202311427749.XA priority Critical patent/CN117419738A/en
Publication of CN117419738A publication Critical patent/CN117419738A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3446 Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a path planning method and system based on a visibility graph and the D* Lite algorithm, comprising the following steps: acquiring map point cloud data within a window centered on the robot; filtering the map point cloud data; mapping the filtered map point cloud data into an image, and sequentially performing blurring, edge detection and dilation on the image; extracting obstacle contours from the dilated image, performing polygonal approximation on the contours, extracting a convex hull from the approximated contours, and expanding each convex point on the convex hull outward to obtain the visible vertices; extracting update points from the two sets of visibility-graph vertices obtained in two adjacent rounds using a nearest-neighbor search algorithm, updating a hash table based on the update points, and maintaining the visibility graph based on the updated hash table; computing the shortest path over the maintained visibility graph with the D* Lite algorithm, and moving the robot along the shortest path.

Description

Path planning method and system based on visibility graph and D* Lite algorithm
Technical Field
The invention relates to the technical field of mobile robot path planning, and in particular to a path planning method and system based on a visibility graph and the D* Lite algorithm.
Background
The statements in this section merely relate to the background of the present disclosure and may not necessarily constitute prior art.
In recent years, along with the development of mobile robots, the autonomy of the robots is particularly important in the face of complex tasks such as search and rescue, construction and exploration of unknown environments, and the like. For autonomous mobile robots, path planning is an important technology, and a suitable path planning algorithm can enable the robot to quickly find a collision-free feasible path between a starting point and a target point, wherein the path length, the time efficiency, the safety, the feasibility, the instantaneity and the like are important evaluation indexes for measuring the path planning algorithm.
Many mainstream approaches to the path planning problem exist, such as the sampling-based RRT algorithm, the search-based D* algorithm, and learning-based algorithms. However, when facing an unknown environment or a large complex environment, these algorithms have unavoidable drawbacks: sampling-based methods suffer from low sampling efficiency, may miss potentially feasible paths, and incur high maintenance and computation costs for trees and graphs; search-based algorithms often rely on grid maps, where low grid resolution degrades path planning accuracy and the map occupies substantial memory in large environments; learning-based methods require large amounts of training data, being data-driven in nature, and the size and quality of the data limit their ability to generalize to different environments.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a path planning method and system based on a visibility graph and the D* Lite algorithm; it solves the problems described in the background art: difficulty planning paths in unknown environments, high memory occupancy of grid maps, low planning efficiency, redundant visibility information, poor robustness of existing schemes, and poor real-time performance.
In one aspect, a path planning method based on a visibility graph and the D* Lite algorithm is provided;
the path planning method based on the visibility graph and the D* Lite algorithm comprises the following steps:
acquiring map point cloud data within a window centered on the robot; filtering the map point cloud data;
mapping the filtered map point cloud data into an image, and sequentially performing blurring, edge detection and dilation on the image;
extracting obstacle contours from the dilated image, performing polygonal approximation on the contours, extracting a convex hull from the approximated contours, and expanding each convex point on the convex hull outward to obtain the visible vertices;
extracting update points from the two sets of visibility-graph vertices obtained in two adjacent rounds using a nearest-neighbor search algorithm, updating a hash table based on the update points, and maintaining the visibility graph based on the updated hash table;
computing the shortest path over the maintained visibility graph with the D* Lite algorithm, and moving the robot along the shortest path.
In another aspect, a path planning system based on a visibility graph and the D* Lite algorithm is provided;
the path planning system based on the visibility graph and the D* Lite algorithm comprises:
an acquisition module configured to: acquire map point cloud data within a window centered on the robot; filter the map point cloud data;
a first processing module configured to: map the filtered map point cloud data into an image, and sequentially perform blurring, edge detection and dilation on the image;
a second processing module configured to: extract obstacle contours from the dilated image, perform polygonal approximation on the contours, extract a convex hull from the approximated contours, and expand each convex point on the convex hull outward to obtain the visible vertices;
a third processing module configured to: extract update points from the two sets of visibility-graph vertices obtained in two adjacent rounds using a nearest-neighbor search algorithm, update a hash table based on the update points, and maintain the visibility graph based on the updated hash table;
a path generation module configured to: compute the shortest path over the maintained visibility graph with the D* Lite algorithm, and move the robot along the shortest path.
In still another aspect, there is provided an electronic device including:
a memory for non-transitory storage of computer readable instructions; and
a processor for executing the computer-readable instructions,
wherein the computer readable instructions, when executed by the processor, perform the method of the first aspect described above.
In yet another aspect, there is also provided a storage medium that non-transitorily stores computer-readable instructions, wherein the method of the first aspect is performed when the non-transitory computer-readable instructions are executed by a computer.
In a further aspect, there is also provided a computer program product comprising a computer program for implementing the method of the first aspect described above when run on one or more processors.
The technical scheme has the following advantages or beneficial effects:
the method provided by the invention filters point cloud data obtained by the laser SLAM algorithm LeGO-LOAM within a window centered on the robot and maps it to an image; extracts obstacle contours with image processing techniques; updates the dynamic environment incrementally with the contour points to realize a local-to-global map fusion process; extracts convex points with a convex hull algorithm and expands them outward along the bisector of the interior angle to create visible vertices; detects the vertices updated in each round with a kd-tree; maintains and updates the visibility-graph information with a hash table; and combines the visibility graph with D* Lite to perform incremental search according to the map changes at each round's update time. The invention has the following advantages:
according to the invention, the contour points are extracted from the point cloud in the window with the robot as the center by means of the laser SLAM algorithm LeGO-LOAM, the map is updated in an increment mode by using the contour point set to represent global map information, the fusion from local to global is realized, the storage cost of barrier information in the map is reduced, and the real-time performance of path planning is improved. The outline points are thinned by using polygon approximation and convex hull algorithm, unnecessary pits on the surface of the obstacle are avoided, the number of visible vertexes is reduced, and redundant information is avoided. The visual view is maintained in the form of a hash table, and a data structure based on the hash table has a faster data processing speed. Extracting visual vertexes and D, which change in the motion process of a robot, by using nearest neighbor search kd-tree * Lite algorithm combines to update this part of the points and calculate the shortest path because of D * The characteristic of Lite incremental search can avoid unnecessary repeated calculation when searching the global path in an unknown environment, shortens the time of path planning and reduces the waste of exploration space.
The D* Lite algorithm is well suited to dynamic planning in unknown environments and, based on the idea of incremental planning, can update paths efficiently in real time. The invention provides a method that incrementally builds global map information from contour points extracted from local point cloud information and combines a visibility graph with the D* Lite algorithm to search for the shortest global path. When the environment is large and changes frequently, the method can store and update map information with less memory, and the combination of the hash-table-maintained visibility graph with the incremental search algorithm D* Lite consumes less computation when planning paths in real time. The method is suitable for fast dynamic environments: local point cloud information is acquired within a robot-centered window; the point cloud is mapped to a two-dimensional image; obstacle contour points in the map are extracted with image processing; the contour points are used to store and incrementally update the dynamic environment, realizing a local-to-global map fusion process; convex points are extracted from the obstacle contours with a convex hull algorithm and expanded along the interior-angle bisector to create visible vertices; the vertices updated in each round are detected with a nearest-neighbor-search kd-tree; the visibility information is maintained and updated with a hash table; and the visibility graph is combined with D* Lite to perform incremental search according to the map changes in each round. Compared with traditional path planning methods, this method has faster processing speed and lower computation cost, and better copes with unknown, complex and large scenes.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is an overall flow chart of the method of the present invention;
fig. 2 (a) and 2 (b) are schematic views of an obstacle after image processing in the present invention;
FIGS. 3 (a) and 3 (b) are schematic diagrams of simplified visual vertices in the present invention;
fig. 4 is a schematic view of a global waypoint of the robot of the present invention.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
A mobile robot simulation platform is built with the open-source Robot Operating System (ROS). The laser SLAM algorithm LeGO-LOAM receives the lidar and IMU inputs; as the robot moves, the map is continuously updated and the robot's position in the map is updated in real time, helping the robot plan paths and avoid obstacles through the map information and thereby achieve autonomous navigation. Even in long-term operation, the SLAM algorithm continuously provides accurate positioning through the fusion of sensor data and the updating of the map. The map environment point cloud information and the robot's position information are obtained with the LeGO-LOAM algorithm. The overall flow of the invention is shown in FIG. 1.
Example 1
The embodiment provides a path planning method based on a visibility graph and the D* Lite algorithm;
as shown in FIG. 1, the path planning method based on the visibility graph and the D* Lite algorithm comprises the following steps:
S101: acquiring map point cloud data within a window centered on the robot; filtering the map point cloud data;
S102: mapping the filtered map point cloud data into an image, and sequentially performing blurring, edge detection and dilation on the image;
S103: extracting obstacle contours from the dilated image, performing polygonal approximation on the contours, extracting a convex hull from the approximated contours, and expanding each convex point on the convex hull outward to obtain the visible vertices;
S104: extracting update points from the two sets of visibility-graph vertices obtained in two adjacent rounds using a nearest-neighbor search algorithm, updating a hash table based on the update points, and maintaining the visibility graph based on the updated hash table;
S105: computing the shortest path over the maintained visibility graph with the D* Lite algorithm, moving the robot along the shortest path, and returning to S101.
Further, the step S101: acquiring map point cloud data within a window centered on the robot, specifically comprises: obtaining the map point cloud data within a robot-centered window via the laser SLAM (Simultaneous Localization and Mapping) algorithm LeGO-LOAM (Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain).
Illustratively, the window range may be 40m by 40m.
Further, the step S101: filtering the map point cloud data, specifically including:
setting a height threshold according to the height of the mobile robot;
filtering the map point cloud data according to the height threshold, reserving the map point cloud data in the range of the height threshold, and deleting the map point cloud data outside the range of the height threshold;
and compressing the map point cloud data according to the set resolution by using the voxel grid.
Illustratively, the set resolution is 0.3 meters.
It should be understood that since the travel space of a mobile robot is generally limited, only the obstacles around it need to be considered while it travels; therefore only a local map within a 40 m x 40 m window centered on the robot is perceived, and a height threshold is set according to the height of the mobile robot model built in simulation. Objects above the threshold are generally not obstacles that must be bypassed or avoided, so the point cloud is filtered according to the z-axis height of the point cloud information, allowing the robot to perceive the environment more effectively, avoid collision risks and simplify the path planning process; the z-axis height range is set to 0.2-1.0 m. Since point cloud maps typically occupy a large amount of memory and provide many unnecessary details, the point cloud is compressed with a voxel grid at a resolution of 0.3 m.
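As an illustration of this filtering step, the following is a minimal NumPy sketch (not the patent's PCL implementation; the height band 0.2-1.0 m and the 0.3 m voxel size follow the text, while the function name and data layout are assumptions):

```python
import numpy as np

def filter_and_downsample(points, z_min=0.2, z_max=1.0, voxel=0.3):
    """Keep points within the height band, then keep one point per voxel cell."""
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    # Voxel-grid compression: quantize coordinates and drop duplicate cells.
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]
```

A real pipeline would use pcl::VoxelGrid, which averages points per voxel rather than picking one, but the memory-reduction effect is the same.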
Further, the step S102: mapping the filtered map point cloud data into an image, and sequentially performing blurring, edge detection and dilation on the image, specifically comprises:
S102-1: obtaining the maximum and minimum x and y coordinates over the points of the point cloud according to the point distribution; setting the spacing between pixels to 0.05 m, the coordinate extremes determining the width and height of the image; traversing the points in the point cloud and mapping each point's three-dimensional coordinates to two-dimensional image coordinates, a pixel value of 255 indicating that the pixel is an obstacle;
S102-2: blurring the image, smoothing discontinuous obstacle edges and noise in the point cloud data;
S102-3: performing edge extraction on the blurred image with an edge detection algorithm;
S102-4: performing dilation on the edge-extracted image.
Illustratively, the S102-2: blurring the image according to the formula:

O(x, y) = (1/N²) Σ_{(i,j)∈K} I(x+i, y+j)

a filter kernel K of fixed size (5, 5) is set to blur the image;

where O(x, y) is the value of pixel (x, y) in the blurred output image, N is the size of the kernel, 1/N² normalizes the result, the sum runs over the pixels covered by the kernel, and I(x+i, y+j) are the pixel values of the input image.
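The normalized box filter above can be reproduced in a few lines of NumPy (a hedged stand-in for the OpenCV-style blur the text implies; edge padding at the border is an assumption):

```python
import numpy as np

def box_blur(img, n=5):
    """O(x,y) = (1/n^2) * sum of I over the n x n window centered at (x,y)."""
    pad = n // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    # Accumulate each shifted copy of the image, then normalize by n^2.
    for dy in range(n):
        for dx in range(n):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (n * n)
```

A constant image is unchanged by the filter, while an isolated bright pixel is spread over the 5 x 5 neighborhood, which is exactly the smoothing effect used here to suppress point-cloud noise.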
Illustratively, the S102-3: edge extraction is performed on the blurred image with an edge detection algorithm, implemented with the Canny edge detection algorithm.
It should be understood that in S102-4 the edge-extracted image is dilated: the extracted edges are expanded in consideration of the robot's safety while traveling and of the subsequent extraction of the visibility-graph vertices.
Illustratively, S102-4: the edge-extracted image is dilated, specifically comprising:
setting a structural element S of a size of (3, 3), according to the formula:
D(x,y)=max (i,j)∈S I(x+i,y+j)
the expansion core S slides on the image, the covered pixels are compared and the maximum value is selected, so that the expansion of the image is realized, and the iteration number of expansion selection is set to be 8;
wherein D (x, y) represents the pixel value, max, in the output image after dilation (i,j)∈S Representing the maximum value, I (x+i, y+j) representing the pixel value in the input image, (-)i, j) traverses all points in the expansion core where (x, y) is located.
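A plain-NumPy rendering of this dilation formula may help; the 3 x 3 element and iterated application follow the text, while zero padding at the border is an assumption:

```python
import numpy as np

def dilate(img, size=3, iterations=1):
    """D(x,y) = max over (i,j) in S of I(x+i, y+j), applied repeatedly."""
    pad = size // 2
    out = img.copy()
    for _ in range(iterations):
        padded = np.pad(out, pad, mode="constant", constant_values=0)
        # Take the pixel-wise maximum over all shifts covered by the kernel.
        shifted = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(size) for dx in range(size)]
        out = np.max(shifted, axis=0)
    return out
```

Each iteration grows every obstacle pixel by one pixel in all eight directions, so 8 iterations of a 3 x 3 element inflate obstacles by roughly 8 pixels (0.4 m at 0.05 m per pixel), providing the safety margin discussed above.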
It should be understood that the filtered point cloud data is processed with image processing techniques: the obstacle information is mapped to a gray-scale image according to the point cloud distribution, and the image is blurred, edge-detected and dilated in preparation for constructing the visibility graph in the next step.
The point cloud data is processed based on the PCL library; the maximum and minimum values of the coordinates x and y are obtained and recorded according to the point cloud distribution; each pixel is set to 0.05 m, and the width and height of the image are determined by Row = (max_y - min_y)/0.05 and Col = (max_x - min_x)/0.05. Each point p of the received point cloud is traversed, its coordinates are converted to image coordinates by (x, y) = ((p.x, p.y) - (min_x, min_y))/0.05, and the pixel value at the corresponding position is set to 255 to mark an obstacle.
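This point-to-pixel mapping can be sketched as follows, assuming points arrive as an N x 3 NumPy array; the Row/Col formulas and the 0.05 m pixel spacing follow the text, everything else is illustrative:

```python
import numpy as np

def points_to_image(points, res=0.05):
    """Rasterize (x, y) point coordinates into a binary obstacle image."""
    min_x, min_y = points[:, 0].min(), points[:, 1].min()
    max_x, max_y = points[:, 0].max(), points[:, 1].max()
    rows = int(np.ceil((max_y - min_y) / res)) + 1
    cols = int(np.ceil((max_x - min_x) / res)) + 1
    img = np.zeros((rows, cols), dtype=np.uint8)
    for p in points:
        x = int((p[0] - min_x) / res)
        y = int((p[1] - min_y) / res)
        img[y, x] = 255  # 255 marks an obstacle pixel
    return img, (min_x, min_y)
```

Returning the (min_x, min_y) origin lets later steps convert contour pixels back to real-world coordinates by the inverse transform.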
The gray-scale image is then manipulated with image processing techniques. Because of non-smooth, discontinuous obstacle edges and noise in the point cloud data, the image is first blurred according to the formula O(x, y) = (1/25) Σ_{(i,j)∈k} I(x+i, y+j): a block k of fixed size (5, 5) is set as the filtering kernel, all pixel values P within the kernel are summed during filtering, and the mean is taken according to the kernel size, achieving the blurring effect.
After blurring, edges are extracted from the image with the Canny edge detection algorithm. Considering that the robot must keep a certain distance from obstacles to remain safe while traveling along the path points, the selected path points need a certain safety margin from obstacles, and the extracted edges are dilated to facilitate the extraction of the visibility-graph vertices in step 3. A structuring element S of size (3, 3) dilates the input image I according to the formula D(x, y) = max_{(i,j)∈S} I(x+i, y+j): the structuring element S slides over the image, the covered pixels are compared and the maximum value replaces the center value, thereby dilating the image; the number of dilation iterations is set to 8. Dilating the point cloud of FIG. 2 (a) yields the result shown in FIG. 2 (b); the dilation filters out unnecessary information in the point cloud while simplifying complex obstacle structures.
Further, the step S103: extracting obstacle contours from the dilated image, performing polygonal approximation on the contours, extracting a convex hull from the approximated contours, and expanding each convex point on the convex hull outward to obtain the visible vertices, specifically comprises:
S103-1: on the dilated image, marking pixels that have the same value and are mutually adjacent with a connected-region analysis algorithm; extracting the obstacle contours from the marked connected regions with a contour tracing algorithm; converting the obstacle contour point coordinates to real-world coordinates and storing them in the obstacle contour point set, which is used to represent global map information;
S103-2: filling the obstacle contour point set and re-extracting the contour points; approximating the contour polygon by polyline approximation; extracting a convex hull from the polygon-approximated obstacles; and placing the convex points at positions extended outward by a set distance along the interior-angle bisector of the convex hull to obtain the visibility-graph vertices.
It should be appreciated that, based on connected-region analysis and a contour tracing algorithm, the obstacle contours are extracted from the dilated image, the contour point coordinates in the image are converted to real-world coordinates according to the positioning information of LeGO-LOAM, and they are stored in the contour point set contoursGlobal, which represents global map information. The set contoursGlobal is filled and the contour points re-extracted; following the Douglas-Peucker algorithm, the given contour is approximated by a polyline, thinning the points on the contour to reduce their number while approximating the contour's shape; a convex hull is extracted from the polygon-approximated obstacle contour, the convex points are expanded along the interior-angle bisector, and the visible vertices are created to reduce redundant visibility information.
Contour detection and extraction are performed on the preliminarily processed image; the obstacle information perceived in each round is stored with the contour point set; the obstacle contours are polygon-approximated; and convex points are extracted with a convex hull algorithm, expanded along the interior-angle bisector, and used to create the visible vertices.
In the dilated image, sets of adjacent white pixels form several connected regions representing obstacles. The contours, i.e. point sets consisting of the pixels on obstacle boundaries, are extracted based on connected-region analysis and a contour tracing algorithm; each round's perception range is a robot-centered window; the contour point coordinates in the image are converted to real-world coordinates according to the positioning information of the laser SLAM algorithm LeGO-LOAM and stored in the contour point set contoursGlobal, which represents global map information.
Several groups of obstacle contour points in contoursGlobal are filled and re-extracted with the OpenCV library; following the Douglas-Peucker algorithm, the given contour is approximated by a polyline, thinning the contour points to reduce their number while approximating the contour's shape, so the contour curve is approximated by a simpler polygonal curve. A convex hull is extracted from the polygon-approximated obstacle contour; for each convex point on the hull, vectors are computed from the neighboring contour points to obtain the bisector of the corresponding interior angle, and the convex point is expanded outward by 16 pixels along it; the convex points expanded by 16 pixels are taken as the vertices of the visibility graph, so that the visible vertices are kept away from obstacles and lie within safely passable areas, preventing burrs of obstacles in the image from affecting the creation of the visibility graph. FIG. 3 (a) shows the extracted obstacle contour points and FIG. 3 (b) the convex points expanded along the interior-angle bisector; the convex hull algorithm avoids unnecessary concave points of the obstacle and reduces redundant visibility-graph information.
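The convex-hull extraction and the outward expansion along the interior-angle bisector can be sketched as follows (a Python illustration, not the patent's OpenCV-based implementation; the hull uses the monotone-chain algorithm and the offset parameter stands in for the 16-pixel expansion):

```python
import math

def convex_hull(points):
    """Andrew's monotone-chain convex hull, returned counter-clockwise."""
    pts = sorted(set(points))
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def expand_vertices(hull, offset):
    """Push each hull vertex outward along its interior-angle bisector."""
    n, out = len(hull), []
    for i, v in enumerate(hull):
        prev, nxt = hull[(i-1) % n], hull[(i+1) % n]
        # Unit vectors toward the two neighbours; their sum bisects the
        # interior angle and points into the polygon for a convex vertex.
        ux, uy = prev[0]-v[0], prev[1]-v[1]
        wx, wy = nxt[0]-v[0], nxt[1]-v[1]
        lu, lw = math.hypot(ux, uy), math.hypot(wx, wy)
        bx, by = ux/lu + wx/lw, uy/lu + wy/lw
        lb = math.hypot(bx, by)
        # Move against the bisector, i.e. away from the polygon interior.
        out.append((v[0] - offset*bx/lb, v[1] - offset*by/lb))
    return out
```

For a unit square, each corner moves diagonally outward, which matches the intent of placing visible vertices in free space away from the obstacle.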
Further, the step S104: extracting update points from the two sets of visibility-graph vertices obtained in two adjacent rounds using a nearest-neighbor search algorithm, updating a hash table based on the update points, and maintaining the visibility graph based on the updated hash table, specifically comprises:
As the robot moves, the global map information is continuously updated. A kd-tree is created over the visibility-graph vertices of two successive rounds for nearest-neighbor search, with a distance threshold of 0.3 m; if the distance between a vertex's positions in the two rounds exceeds the threshold, the vertex is regarded as a vanishing point or an added point; both vanishing points and added points are treated as update points;
if the coordinates of the robot position information p_start or the goal point information p_goal change, the old position and goal point are added to the vanishing-point set difIndicesRemove, and the latest position and goal point are added to the added-point set difIndicesAdd.
In the first round of path planning, the points in the added-point set difIndicesAdd are used directly to create a hash table that maintains the visibility-graph information. The key of the hash table is the coordinate of a visibility-graph vertex s; the value contains the two estimates rhs(s) and g(s) of vertex s and its adjacent-point set neighbor. Both rhs(s) and g(s) estimate the cost from the current node s to the end point, i.e. the actual cost of planning to the current node; during computation rhs(s) is a look-ahead value computed on the basis of g(s), is updated earlier, and records node states in real time, while g(s) is usually obtained by assignment from rhs(s). The adjacent-point set neighbor is likewise created as a hash table, keyed by the coordinates of the adjacent points, whose values are the edge cost values cost from the visibility-graph vertex s to the adjacent points.
For each point in the added-point set difIndicesAdd, it is taken as a starting endpoint p_src and the remaining points as terminating endpoints p_dest. On the segment between p_src and p_dest, the pixel values are traversed in steps of 5 pixels; if no obstacle lies on the segment, a visible connecting edge is created between the two endpoints: p_src and p_dest are created in the hash table with their rhs(s) and g(s) set to infinity, and each is added as an adjacent point of the other when the path cost exceeds 1 meter, the edge cost being the Euclidean distance between them:

cost = √((x_p_src - x_p_dest)² + (y_p_src - y_p_dest)²)

where cost is the edge cost value from vertex s to the adjacent point, x_p_src and y_p_src are the coordinates of p_src, and x_p_dest and y_p_dest are the coordinates of p_dest;
when the path planning round is not the first, the hash table is updated: the points s in the vanishing-point set difIndicesRemove are traversed and their key-value pairs deleted from the hash table; then, because the map information has been updated, obstacles between the remaining points in the hash table and their adjacent points are re-detected to determine visibility. The points in difIndicesAdd are then traversed to add visibility information to the hash table.
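The hash-table bookkeeping and the sampled line-of-sight test described above might look like this in Python (function names and the obstacle predicate are assumptions, and the 1 m cost condition is omitted for brevity; the 5-pixel sampling step and the infinity initialization of rhs(s) and g(s) follow the text):

```python
import math

INF = float("inf")

def line_of_sight(p_src, p_dest, is_obstacle, step=5.0):
    """Sample the segment between two vertices every `step` units."""
    dist = math.dist(p_src, p_dest)
    n = max(int(dist / step), 1)
    for k in range(n + 1):
        t = k / n
        x = p_src[0] + t * (p_dest[0] - p_src[0])
        y = p_src[1] + t * (p_dest[1] - p_src[1])
        if is_obstacle((x, y)):
            return False
    return True

def add_vertex(graph, p_src, candidates, is_obstacle):
    """Insert p_src into the vertex hash table and connect visible edges."""
    node = graph.setdefault(p_src, {"g": INF, "rhs": INF, "neighbors": {}})
    for p_dest in candidates:
        if p_dest == p_src or not line_of_sight(p_src, p_dest, is_obstacle):
            continue
        cost = math.dist(p_src, p_dest)  # Euclidean edge cost
        node["neighbors"][p_dest] = cost
        other = graph.setdefault(p_dest, {"g": INF, "rhs": INF, "neighbors": {}})
        other["neighbors"][p_src] = cost  # visibility edges are symmetric
    return graph
```

Keying vertices by coordinate gives O(1) lookup and deletion, which is why the text prefers a hash table for maintaining the visibility graph.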
It should be appreciated that the changed points are extracted by nearest-neighbor search on a kd-tree built over the visible vertices extracted in round n and those extracted in round n+1, thereby creating and updating the visual view information; the visual view is combined with D* Lite, and a hash table is used to maintain the visual view.
A k-d tree (short for k-dimensional tree) is a data structure that partitions a k-dimensional data space. It is mainly applied to searches over multi-dimensional key data (such as range search and nearest-neighbor search).
As the robot moves and perceives the surrounding environment, the global map information is updated continuously: some of the visible vertices vanish, while new visible vertices are created.
A kd-tree is created and traversed for two consecutive sets of visible vertices, with a distance threshold of 0.3 m; points beyond this threshold are regarded as vanishing or added points, denoted difIndicesRemove and difIndicesAdd respectively. The robot position information p_start obtained by a laser SLAM algorithm and the target point information p_goal published in the simulation environment are subscribed to via ROS topic communication; if their coordinates change, the old position information and target point are added to difIndicesRemove, and the latest position information and target point are added to difIndicesAdd.
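The change-point extraction between two rounds can be sketched as follows; for brevity a brute-force nearest-neighbor search stands in for the kd-tree, and the function name diff_vertices and the sample coordinates are hypothetical:

```python
import math

def diff_vertices(prev_pts, curr_pts, threshold=0.3):
    """Compare two rounds of visible vertices: a vertex whose nearest
    neighbor in the other set lies farther than `threshold` is treated
    as vanished (only in prev) or added (only in curr)."""
    def far_from(p, pts):
        return all(math.dist(p, q) > threshold for q in pts) if pts else True
    dif_remove = [p for p in prev_pts if far_from(p, curr_pts)]
    dif_add = [p for p in curr_pts if far_from(p, prev_pts)]
    return dif_remove, dif_add

prev_round = [(0.0, 0.0), (2.0, 2.0)]   # vertices extracted in round n
curr_round = [(0.1, 0.1), (5.0, 5.0)]   # vertices extracted in round n+1
removed, added = diff_vertices(prev_round, curr_round)
```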
Further, the step S105: calculating the maintained visual view based on the D* Lite algorithm to obtain the shortest path, and moving the robot along the shortest path, specifically comprises the following steps:
storing the changed points in S104 into a set nodesChange, wherein the changed points comprise the points in the sets difIndicesRemove and difIndicesAdd as well as the points whose adjacent points have changed; a priority queue openlist is used to store the points to be updated and detected during the subsequent shortest-path computation, and when a new point s is added to the queue, its priority is determined by the corresponding key(s) value;
key(s) = [k1(s); k2(s)],
k1(s) = min(g(s), rhs(s)) + h(s, p_start) + km,
k2(s) = min(g(s), rhs(s)),
km = km + h(s_start, s_startLast),
wherein h(s_start, s_startLast) represents the heuristic value between the starting point s_start of the current round of search and the starting point s_startLast of the previous round of search, the heuristic taking the Euclidean distance between the two points; h(s, p_start) represents the heuristic value from node s to the starting point; k1(s) represents the estimated total cost of a path passing through node s, composed of the actual minimum cost from the end point to that point plus the heuristic function; k2(s) represents the minimum cost from the end point to node s; p_start represents the starting point; km is used to modify key values, its initial value is set to 0, and the heuristic value Δh is accumulated with each movement of the robot, where Δh represents the heuristic value between the starting point of the current round of search and the starting point of the previous round;
When comparing the priorities determined by the key(s) values of different points, first compare their k1(s) values;
if the k1(s) values are not equal, the point with the smaller k1(s) value has the higher priority, and the point with the larger k1(s) value has the lower priority;
if the k1(s) values are equal, compare the k2(s) values of the points: the point with the smaller k2(s) value has the higher priority, and the point with the larger k2(s) value has the lower priority;
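The key computation and the lexicographic priority comparison can be sketched with a binary heap, where the lexicographically smallest key pops first; this is a minimal sketch, and calc_key, the cost dictionaries g and rhs, and the sample coordinates are hypothetical:

```python
import heapq
import math

def calc_key(s, g, rhs, p_start, km):
    """key(s) = [k1(s); k2(s)] stored as a tuple, so that Python's tuple
    comparison gives the lexicographic (k1 first, then k2) ordering."""
    m = min(g.get(s, math.inf), rhs.get(s, math.inf))
    return (m + math.dist(s, p_start) + km, m)

p_start, km = (0.0, 0.0), 0.0
g = {}
rhs = {(3.0, 4.0): 0.0,            # e.g. the goal vertex itself
       (6.0, 8.0): 2.0}

openlist = []
for s in rhs:
    heapq.heappush(openlist, (calc_key(s, g, rhs, p_start, km), s))

key, best = heapq.heappop(openlist)   # smallest key = highest priority
```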
wherein the formula of rhs(s) is:
rhs(s) = 0, if s = p_goal; otherwise rhs(s) = min over s' in neighbor(s) of (cost(s, s') + g(s')),
g(s) is actually assigned from rhs(s), and each point in the hash table maintains the two values rhs(s) and g(s);
wherein rhs(s) represents the cost from node s to the end point p_goal, cost(s, s') represents the movement cost from node s to s', g(s') represents the shortest path cost of s', s' represents an adjacent point, and p_next represents all adjacent points of node s;
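The rhs(s) formula above can be sketched as follows (compute_rhs is a hypothetical helper; graph follows the hash-table layout described earlier, mapping a vertex to its neighbor set):

```python
import math

def compute_rhs(s, p_goal, graph, g):
    """rhs(s) = 0 at the goal; otherwise the minimum over the adjacent
    points s' of cost(s, s') + g(s')."""
    if s == p_goal:
        return 0.0
    neighbors = graph[s]["neighbor"]
    if not neighbors:
        return math.inf
    return min(c + g.get(sp, math.inf) for sp, c in neighbors.items())

# hypothetical vertex with two adjacent points
graph = {(0.0, 0.0): {"neighbor": {(1.0, 0.0): 1.0, (0.0, 1.0): 2.0}}}
g = {(1.0, 0.0): 3.0, (0.0, 1.0): 0.5}
value = compute_rhs((0.0, 0.0), (9.0, 9.0), graph, g)   # min(1+3, 2+0.5)
```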
The specific steps of the D* Lite algorithm are as follows:
(1) If the first search or the target point is changed, the priority queue openlist is emptied, the values of rhs(s) and g(s) of each point in the hash table are ensured to be infinity, and
km=0,
rhs(p_goal)=0,
key(p_goal)=[h(p_start,p_goal);0],
[key(p_goal); p_goal] is stored into the priority queue openlist;
where km has an initial value of 0, rhs(p_goal) represents the cost from the target point to itself, p_goal represents the path target point, key(p_goal) represents the key value of the target point, p_start represents the path starting point, and h(p_start, p_goal) represents the heuristic value between the starting point and the target point.
If the target point has not changed, km is accumulated, and the non-target points in the change point set nodesChange are traversed to update their rhs(s) and key(s) values.
(2) The calculation of the shortest path is then started:
(21) When the priority queue openlist is not empty, popping up a point with the highest priority as a current node p_current, recording a key value corresponding to the p_current in the priority queue as key_old, and recalculating a key value key_new of the p_current;
(22) If key_old < key_new, [key_new; p_current] is stored back into the priority queue openlist and the next point is accessed; otherwise, proceed to judgment (23); wherein key_old represents the old key value of the current node p_current;
(23) If g(p_current) > rhs(p_current), let g(p_current) = rhs(p_current); otherwise set g(p_current) to infinity, recalculate rhs(p_current), and update the key of p_current in the queue; wherein g(p_current) and rhs(p_current) represent cost values from the current node p_current to the end point;
(24) Then updating the adjacent points according to the p_current;
(25) When key_old is greater than or equal to key (p_start) and g (p_start) =rhs (p_start), marking that the shortest path is found and ending;
wherein key_old represents the old key value of the current node p_current, key (p_start) represents the key value of the path start point, p_start represents the path start point, g (p_start) and rhs (p_start) represent the cost from the start point to the end point;
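Steps (21) to (25) can be condensed into the following sketch; it is an illustrative, simplified rendering under the assumptions above (lazy deletion is used instead of explicit queue removal, and all names and the tiny triangle map are hypothetical):

```python
import heapq
import math

def calc_key(s, g, rhs, p_start, km):
    """key(s) = [k1(s); k2(s)]; smaller keys pop first."""
    m = min(g.get(s, math.inf), rhs.get(s, math.inf))
    return (m + math.dist(s, p_start) + km, m)

def compute_shortest_path(graph, g, rhs, openlist, p_start, p_goal, km):
    """Pop the smallest-key point, make it consistent, and push affected
    neighbors back; stop once the start point is consistent and its key
    is no longer beaten by the queue top."""
    def update_vertex(s):
        if s != p_goal:
            nbrs = graph[s]["neighbor"]
            rhs[s] = min((c + g.get(sp, math.inf) for sp, c in nbrs.items()),
                         default=math.inf)
        if g.get(s, math.inf) != rhs.get(s, math.inf):
            heapq.heappush(openlist, (calc_key(s, g, rhs, p_start, km), s))

    while openlist:
        key_old, p_current = heapq.heappop(openlist)
        if (key_old >= calc_key(p_start, g, rhs, p_start, km)
                and g.get(p_start, math.inf) == rhs.get(p_start, math.inf)):
            break                                  # shortest path found
        key_new = calc_key(p_current, g, rhs, p_start, km)
        if key_old < key_new:                      # stale entry: reinsert
            heapq.heappush(openlist, (key_new, p_current))
        elif g.get(p_current, math.inf) > rhs.get(p_current, math.inf):
            g[p_current] = rhs[p_current]          # over-consistent
            for sp in graph[p_current]["neighbor"]:
                update_vertex(sp)
        elif g.get(p_current, math.inf) < rhs.get(p_current, math.inf):
            g[p_current] = math.inf                # under-consistent
            update_vertex(p_current)
            for sp in graph[p_current]["neighbor"]:
                update_vertex(sp)
        # otherwise: already consistent, drop the duplicate entry

# tiny triangle map: start A, goal C (edge A-C is shorter than A-B-C)
A, B, C = (0.0, 0.0), (3.0, 0.0), (3.0, 4.0)
graph = {A: {"neighbor": {B: 3.0, C: 5.0}},
         B: {"neighbor": {A: 3.0, C: 4.0}},
         C: {"neighbor": {A: 5.0, B: 4.0}}}
g, rhs, km = {}, {C: 0.0}, 0.0
openlist = [(calc_key(C, g, rhs, A, km), C)]
compute_shortest_path(graph, g, rhs, openlist, A, C, km)
```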
(3) If the shortest path is found, the path points are extracted from the calculation result starting from p_start:
add p_start to the path point set path and mark it as p_current; access its adjacent points p_next and add the point with the smallest g(p_next) to the path, marking that point as the new p_current; then continue from the newly marked p_current, again accessing its adjacent points p_next and adding the point with the smallest g(p_next) to the path;
when p_current = p_goal, the complete path point set path is obtained.
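Step (3) can be sketched as follows (extract_path is a hypothetical helper following the smallest-g(p_next) rule above; g is the cost-to-goal table produced by the search):

```python
import math

def extract_path(p_start, p_goal, graph, g):
    """From p_start, repeatedly step to the adjacent point with the
    smallest g value until p_goal is reached."""
    path = [p_start]
    p_current = p_start
    while p_current != p_goal:
        neighbors = graph[p_current]["neighbor"]
        p_next = min(neighbors, key=lambda p: g.get(p, math.inf))
        if math.isinf(g.get(p_next, math.inf)):
            return None                # no path to the target point
        path.append(p_next)
        p_current = p_next
    return path
```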
(4) The global path points are published and the robot moves to the next path point; meanwhile, map information is constructed and the robot is localized through a SLAM (Simultaneous Localization and Mapping) algorithm, after which the process returns to S101 and the shortest path is recalculated. When there is no path to the target point, the robot waits in place for a new target point to be issued.
After the visual view is constructed, the D* Lite algorithm performs a graph search on the constructed map to find the shortest path. Like the A* algorithm, it is a heuristic search; it adopts a reverse search strategy from the target point to the starting point, and its incremental search strategy ensures that only the affected partial paths are computed in each iteration without recalculating the whole path, thereby saving computation time. As the robot travels along the selected path points, it incrementally updates the global map information and, from step S101, updates the hash table maintaining the visual view information.
As shown in fig. 4, after the global path points are calculated and extracted on the visual view in step S104, the path points are displayed in the image mapped from the original point cloud in step S102 and connected by lines.
Example two
This embodiment provides a path planning system based on visual view and the D* Lite algorithm;
the path planning system based on visual view and the D* Lite algorithm comprises:
an acquisition module configured to: acquiring map point cloud data in a window range with a robot as a center; filtering the map point cloud data;
a first processing module configured to: mapping the filtered map point cloud data into an image, and sequentially carrying out fuzzy processing, edge detection and expansion processing on the image;
a second processing module configured to: extracting an obstacle outline from the expanded image, performing polygonal approximation on the obstacle outline, extracting a convex hull from the polygonal approximated obstacle outline, and performing outward expansion processing on each salient point on the convex hull to obtain a visible vertex;
A third processing module configured to: extracting update points from the vertexes of two groups of visual views extracted in two adjacent rounds by using a nearest neighbor search algorithm, updating a hash table based on the update points, and maintaining the visual views based on the updated hash table;
a path generation module configured to: calculate the maintained visual view based on the D* Lite algorithm to obtain the shortest path, and move the robot along the shortest path.
Here, the acquisition module, the first processing module, the second processing module, the third processing module, and the path generation module correspond to steps S101 to S105 in the first embodiment; the examples and application scenarios implemented by these modules are the same as those of the corresponding steps, but are not limited to what is disclosed in the first embodiment. It should be noted that the modules described above may be implemented as part of a system in a computer system, such as a set of computer-executable instructions.
The foregoing embodiments each have their own emphasis; for details not described in one embodiment, reference may be made to the related description of another embodiment.
The proposed system may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division into the modules described above is merely a logical functional division, and other divisions are possible in actual implementation, for example multiple modules may be combined or integrated into another system, or some features may be omitted or not performed.
Example III
The embodiment also provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein the processor is coupled to the memory, the one or more computer programs being stored in the memory, the processor executing the one or more computer programs stored in the memory when the electronic device is running, to cause the electronic device to perform the method of the first embodiment.
It should be understood that in this embodiment, the processor may be a central processing unit CPU, and the processor may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate array FPGA or other programmable logic device, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include read only memory and random access memory and provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software.
The method in the first embodiment may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in a memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, a detailed description is not provided herein.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Example IV
The present embodiment also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, perform the method of embodiment one.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A path planning method based on visual view and the D* Lite algorithm, characterized by comprising the following steps:
acquiring map point cloud data in a window range with a robot as a center; filtering the map point cloud data;
mapping the filtered map point cloud data into an image, and sequentially carrying out fuzzy processing, edge detection and expansion processing on the image;
extracting an obstacle outline from the expanded image, performing polygonal approximation on the obstacle outline, extracting a convex hull from the polygonal approximated obstacle outline, and performing outward expansion processing on each salient point on the convex hull to obtain a visible vertex;
Extracting update points from the vertexes of two groups of visual views extracted in two adjacent rounds by using a nearest neighbor search algorithm, updating a hash table based on the update points, and maintaining the visual views based on the updated hash table;
calculating the maintained visual view based on the D* Lite algorithm to obtain a shortest path, and moving the robot along the shortest path.
2. The path planning method based on visual view and the D* Lite algorithm according to claim 1, wherein mapping the filtered map point cloud data into an image and sequentially carrying out fuzzy processing, edge detection and expansion processing on the image specifically comprises the following steps:
obtaining maximum values and minimum values of x and y of coordinates of each point in the point cloud according to the point cloud distribution, setting the distance between pixel points, and determining the width and length of an image by the coordinate extreme value; traversing points in the point cloud, mapping three-dimensional coordinates of the points in the point cloud to two-dimensional coordinates in the image, and if the pixel value of the image is 255, indicating that the current pixel point is an obstacle;
blurring processing is carried out on the image, and discontinuous edges in the obstacle and noise of point cloud data are smoothed;
carrying out edge extraction on the blurred image by adopting an edge detection algorithm;
And performing expansion processing on the image after edge extraction.
3. The path planning method based on visual view and the D* Lite algorithm according to claim 1, wherein extracting the obstacle outline from the image after expansion processing, performing polygonal approximation on the obstacle outline, extracting a convex hull from the polygon-approximated obstacle outline, and performing outward expansion processing on each salient point on the convex hull to obtain the visible vertices specifically comprises the following steps:
marking the pixels which have the same pixel value and are adjacent to each other by adopting a connected region analysis algorithm on the image after the expansion processing, extracting the outline of the obstacle by adopting a connected region marked by an outline tracking algorithm, converting the coordinates of the outline points of the obstacle into the real world coordinates and storing the real world coordinates into an outline point set of the obstacle, and using the outline point set to represent global map information;
filling the obstacle contour point set, extracting contour points, approximating the polygons of the contour by fold line approximation, extracting convex hulls from the obstacles after the polygons are approximated, and arranging salient points at positions extending outwards by set distances along the inner angle central line direction of the convex hulls to obtain the vertexes of the visible view.
4. The path planning method based on visual view and the D* Lite algorithm according to claim 1, wherein, for the vertices of the two groups of visual views extracted in two adjacent rounds, extracting update points by using a nearest-neighbor search algorithm, updating the hash table based on the update points, and maintaining the visual view based on the updated hash table specifically comprises the following steps:
along with the movement of the robot, the global map information is continuously updated; a kd-tree is created over the vertices of the visual views at two successive moments for nearest-neighbor search, with the distance threshold set to 0.3 m; if the distance between the position points of the same vertex at the two moments exceeds the distance threshold, the point is regarded as a vanishing point or an added point, and either is considered an update point;
if the coordinates of the robot position information p_start or the target point information p_goal change, the old position information and target point are added to the vanishing point set difIndicesRemove, and the latest position information and target point are added to the added point set difIndicesAdd.
5. The path planning method based on visual view and the D* Lite algorithm according to claim 4, wherein, in the first round of robot path planning, the points in the added point set difIndicesAdd are used directly to create a hash table that maintains the visual view information; the key of the hash table is the coordinate of a vertex s of the visual view, and the value comprises the two values rhs(s) and g(s) of the vertex s and the adjacent point set neighbor of the vertex; both rhs(s) and g(s) represent the cost from the current node s to the end point: g(s) represents the actual cost planned to the current node, while rhs(s) is a look-ahead value calculated on the basis of g(s) during the computation, is updated earlier and can record node states in real time, and g(s) is usually obtained by assignment from rhs(s); the adjacent point set neighbor is also created with a hash table, in which the coordinate of an adjacent point is the key and the edge cost value cost from the visual-view vertex s to that adjacent point is the value;
for each point in the added point set difIndicesAdd, take it as the starting endpoint p_src and each of the remaining points as the terminating endpoint p_dest; on the line between p_src and p_dest, traverse the pixel values of the points on the line in steps of n pixels; if no obstacle lies on the line, create a visible connecting edge between the two endpoints: create the points p_src and p_dest in the hash table with their rhs(s) and g(s) set to infinity, and, if the path cost value from the visible vertex s to the adjacent point is more than 1 meter, add them to each other's adjacent point sets, the edge cost taking the distance between the two points:
cost = sqrt((x_p_src − x_p_dest)^2 + (y_p_src − y_p_dest)^2),
where cost represents the edge cost value from vertex s to the adjacent point, x_p_src and y_p_src represent the coordinates of p_src, and x_p_dest and y_p_dest represent the coordinates of p_dest;
when the robot path planning is not in the first round, the hash table is updated: traverse the points s in the vanishing point set difIndicesRemove and delete the key-value pairs corresponding to these points from the hash table; then, because the map information has been updated, re-detect obstacles between the remaining points in the hash table and their adjacent points to determine visibility; the points in difIndicesAdd are then traversed to add their visibility information to the hash table.
6. The path planning method based on visual view and the D* Lite algorithm according to claim 5, wherein calculating the maintained visual view based on the D* Lite algorithm to obtain the shortest path and moving the robot along the shortest path specifically comprises the following steps:
storing the changed points into a set nodesChange, wherein the changed points comprise the points in the sets difIndicesRemove and difIndicesAdd as well as the points whose adjacent points have changed; a priority queue openlist is used to store the points to be updated and detected during the subsequent shortest-path computation, and when a new point s is added to the queue, its priority is determined by the corresponding key(s) value;
key(s) = [k1(s); k2(s)],
k1(s) = min(g(s), rhs(s)) + h(s, p_start) + km,
k2(s) = min(g(s), rhs(s)),
km = km + h(s_start, s_startLast),
wherein h(s_start, s_startLast) represents the heuristic value between the starting point s_start of the current round of search and the starting point s_startLast of the previous round of search, the heuristic taking the Euclidean distance between the two points; h(s, p_start) represents the heuristic value from node s to the starting point; k1(s) represents the estimated total cost of a path passing through node s, composed of the actual minimum cost from the end point to that point plus the heuristic function; k2(s) represents the minimum cost from the end point to node s; p_start represents the starting point; km is used to modify key values, its initial value is set to 0, and the heuristic value Δh is accumulated with each movement of the robot, where Δh represents the heuristic value between the starting point of the current round of search and the starting point of the previous round;
when comparing the priorities determined by the key(s) values of different points, first compare their k1(s) values;
if the k1(s) values are not equal, the point with the smaller k1(s) value has the higher priority, and the point with the larger k1(s) value has the lower priority;
if the k1(s) values are equal, compare the k2(s) values of the points: the point with the smaller k2(s) value has the higher priority, and the point with the larger k2(s) value has the lower priority;
wherein the formula of rhs(s) is:
rhs(s) = 0, if s = p_goal; otherwise rhs(s) = min over s' in neighbor(s) of (cost(s, s') + g(s')),
g(s) is actually assigned from rhs(s), and each point in the hash table maintains the two values rhs(s) and g(s);
wherein rhs(s) represents the cost from node s to the end point p_goal, cost(s, s') represents the movement cost from node s to s', g(s') represents the shortest path cost of s', s' represents an adjacent point, and p_next represents all adjacent points of node s.
7. The path planning method based on visual view and the D* Lite algorithm according to claim 1, wherein the specific steps of the D* Lite algorithm are as follows:
(1) If the first search or the target point is changed, the priority queue openlist is emptied, the values of rhs(s) and g(s) of each point in the hash table are ensured to be infinity, and
km=0,
rhs(p_goal)=0,
key(p_goal)=[h(p_start,p_goal);0],
[key(p_goal); p_goal] is stored into the priority queue openlist;
wherein km has an initial value of 0, rhs(p_goal) represents the cost from the target point to itself, p_goal represents the path target point, key(p_goal) represents the key value of the target point, p_start represents the path starting point, and h(p_start, p_goal) represents the heuristic value between the starting point and the target point;
if the target point has not changed, km is accumulated, and the non-target points in the change point set nodesChange are traversed to update their rhs(s) and key(s) values;
(2) The calculation of the shortest path is then started:
(21) When the priority queue openlist is not empty, popping up a point with the highest priority as a current node p_current, recording a key value corresponding to the p_current in the priority queue as key_old, and recalculating a key value key_new of the p_current;
(22) If key_old < key_new, [key_new; p_current] is stored back into the priority queue openlist and the next point is accessed; otherwise, proceed to judgment (23); wherein key_old represents the old key value of the current node p_current;
(23) If g (p_current) > rhs (p_current), let:
g(p_current)=rhs(p_current),
otherwise set g(p_current) to infinity, recalculate rhs(p_current), and update the key of p_current in the queue; wherein g(p_current) and rhs(p_current) represent cost values from the current node p_current to the end point;
(24) Then updating the adjacent points according to the p_current;
(25) When key_old is greater than or equal to key (p_start) and g (p_start) =rhs (p_start), marking that the shortest path is found and ending;
wherein key_old represents the old key value of the current node p_current, key (p_start) represents the key value of the path start point, p_start represents the path start point, g (p_start) and rhs (p_start) represent the cost from the start point to the end point;
(3) If the shortest path is found, the path points are extracted from the calculation result starting from p_start:
add p_start to the path point set path and mark it as p_current; access its adjacent points p_next and add the point with the smallest g(p_next) to the path, marking that point as the new p_current; then continue from the newly marked p_current, again accessing its adjacent points p_next and adding the point with the smallest g(p_next) to the path;
when p_current = p_goal, the complete path point set path is obtained;
(4) The global path points are published and the robot moves to the next path point; meanwhile, map information is constructed and the robot is localized through a SLAM algorithm, and the shortest path is recalculated; when there is no path to the target point, the robot waits in place for a new target point to be issued.
8. A path planning system based on visual view and the D* Lite algorithm, characterized by comprising:
an acquisition module configured to: acquiring map point cloud data in a window range with a robot as a center; filtering the map point cloud data;
a first processing module configured to: mapping the filtered map point cloud data into an image, and sequentially carrying out fuzzy processing, edge detection and expansion processing on the image;
A second processing module configured to: extracting an obstacle outline from the expanded image, performing polygonal approximation on the obstacle outline, extracting a convex hull from the polygonal approximated obstacle outline, and performing outward expansion processing on each salient point on the convex hull to obtain a visible vertex;
a third processing module configured to: extracting update points from the vertexes of two groups of visual views extracted in two adjacent rounds by using a nearest neighbor search algorithm, updating a hash table based on the update points, and maintaining the visual views based on the updated hash table;
a path generation module configured to: calculate the maintained visual view based on the D* Lite algorithm to obtain the shortest path, and move the robot along the shortest path.
9. An electronic device, comprising:
a memory for non-transitory storage of computer readable instructions; and
a processor for executing the computer-readable instructions,
wherein the computer readable instructions, when executed by the processor, perform the method of any of the preceding claims 1-7.
10. A storage medium, characterized by non-transitory storage of computer readable instructions, wherein the instructions of the method of any of claims 1-7 are performed when the non-transitory computer readable instructions are executed by a computer.
CN202311427749.XA 2023-10-30 2023-10-30 Based on visual view and D * Path planning method and system of Lite algorithm Pending CN117419738A (en)

Publications (1)

Publication Number Publication Date
CN117419738A true CN117419738A (en) 2024-01-19


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117951240A (en) * 2024-03-27 2024-04-30 国能龙源环保有限公司 Global three-dimensional point cloud map storage and real-time voxel retrieval method, device and equipment
CN117951240B (en) * 2024-03-27 2024-07-23 国能龙源环保有限公司 Global three-dimensional point cloud map storage and real-time voxel retrieval method, device and equipment
CN118464028A (en) * 2024-07-15 2024-08-09 龙门实验室 Agricultural machinery path planning method based on divide-and-conquer strategy

Similar Documents

Publication Publication Date Title
US11354856B2 (en) Unmanned aerial vehicle navigation map construction system and method based on three-dimensional image reconstruction technology
US11709058B2 (en) Path planning method and device and mobile device
CN113269837B (en) Positioning navigation method suitable for complex three-dimensional environment
CN117419738A (en) Based on visual view and D * Path planning method and system of Lite algorithm
CN108280866B (en) Road point cloud data processing method and system
US10402978B1 (en) Method for detecting pseudo-3D bounding box based on CNN capable of converting modes according to poses of objects using instance segmentation and device using the same
CN115564926B (en) Three-dimensional patch model construction method based on image building structure learning
CN110645998A (en) Dynamic object-free map segmentation establishing method based on laser point cloud
CN113110455B (en) Multi-robot collaborative exploration method, device and system for unknown initial state
CN112835064B (en) Mapping positioning method, system, terminal and medium
CN113592891B (en) Unmanned vehicle passable domain analysis method and navigation grid map manufacturing method
EP3931657B1 (en) System and method for surface feature detection and traversal
US10445611B1 (en) Method for detecting pseudo-3D bounding box to be used for military purpose, smart phone or virtual driving based-on CNN capable of converting modes according to conditions of objects and device using the same
Wang et al. Map-enhanced ego-lane detection in the missing feature scenarios
CN116416366A (en) 3D model construction method and device and electronic equipment
CN114549286A (en) Lane line generation method and device, computer-readable storage medium and terminal
CN114140758A (en) Target detection method and device and computer equipment
CN115330969A (en) Local static environment vectorization description method for ground unmanned vehicle
Rau et al. LoD generation for 3D polyhedral building models
CN117075612A (en) Rapid path planning based on improved visual view and track optimization method based on ESDF map
Meng et al. Berm detection for autonomous truck in surface mine dump area
CN111127474B (en) Airborne LiDAR point cloud assisted orthophoto mosaic line automatic selection method and system
CN114791936A (en) Storage, efficient editing and calling method for passable area of unmanned vehicle
CN114202567A (en) Point cloud processing obstacle avoidance method based on vision
CN113538677A (en) Positioning method, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination