CN113741438A - Path planning method and device, storage medium, chip and robot - Google Patents


Info

Publication number
CN113741438A
Authority
CN
China
Prior art keywords
obstacle
static
robot
cost map
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110960320.1A
Other languages
Chinese (zh)
Other versions
CN113741438B (en)
Inventor
沈孝通
何若男
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202110960320.1A
Publication of CN113741438A
Application granted
Publication of CN113741438B
Active legal status
Anticipated expiration of legal status


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a path planning method and device, a storage medium, a chip and a robot. The method is executed by the robot. Compared with prior-art global cost map navigation schemes, it addresses the problem that, in actual working scenes such as encounters with pedestrians and moving objects, the robot frequently gives way and re-plans its navigation path. A static cost map of a preset space is generated and maintained and serves as the basis for global path planning each time the robot executes a task, and a local path plan that optimizes the global path is generated on a local cost map marked with obstacle information sensed in real time, so that the robot avoids obstacles reasonably and its navigation planning result is optimized.

Description

Path planning method and device, storage medium, chip and robot
Technical Field
The application relates to the technical field of robots, in particular to a path planning method, a path planning device, a storage medium, a chip and a robot.
Background
With the development of artificial intelligence technology, various intelligent robots have appeared, such as floor-sweeping robots, floor-mopping robots, vacuum cleaners and weeding machines. These cleaning robots can automatically identify surrounding obstacles during operation and avoid them; they not only free up labor and save labor costs, but also improve cleaning efficiency.
At present, when a cleaning robot performs cleaning operations according to a navigation map, the existing global cost map navigation scheme carries no semantic information that distinguishes dynamic obstacles from static ones, so the type of an obstacle cannot be determined. In an actual working scene, a moving pedestrian or object crossing the robot's planned path blocks it only temporarily; if the robot nevertheless treats the pedestrian or moving object as a static obstacle, it frequently gives way and re-plans its navigation path, which harms the fluency of robot navigation.
Disclosure of Invention
In view of this, embodiments of the present invention provide a path planning method and apparatus, a storage medium, a chip and a robot, so as to solve the prior-art problem that frequent way-giving and navigation-path re-planning affect the robot's navigation fluency.
According to an aspect of the invention, there is provided a path planning method, the method being performed by a robot, the method comprising: generating and maintaining a static cost map corresponding to a preset space; generating a global path plan for the task based on the static cost map before the robot executes the task; in the motion process of the robot, sensing whether an obstacle exists in a preset range of a current travel path of the robot in real time, updating a local cost map and the static cost map based on a sensing result, and then generating a local path plan for optimizing the global path plan based on the local cost map; the local cost map marks the position information of all the obstacles in the local area monitored in real time, and the static cost map marks the position information of all the static obstacles in the global area.
According to yet another aspect of the present invention, there is provided a path planning apparatus for a robot, the apparatus including: the system comprises a static map generation module, a first path planning module and a second path planning module;
the static map generation module is used for generating and maintaining a static cost map corresponding to a preset space;
the first path planning module is used for generating a global path plan for the task based on the static cost map before the robot executes the task;
the second path planning module is used for sensing whether an obstacle exists in a preset range of a current travel path of the robot in real time in the motion process of the robot, updating a local cost map and the static cost map based on a sensing result, and then generating a local path plan for optimizing the global path plan based on the local cost map;
the local cost map marks the position information of all the obstacles in the local area monitored in real time, and the static cost map marks the position information of all the static obstacles in the global area.
According to a further aspect of the invention, there is provided a storage medium storing a computer program which can be loaded by a processor to perform the steps of any of the path planning methods described above.
According to yet another aspect of the invention, there is provided a chip comprising at least one processor and an interface; wherein the interface is configured to provide the at least one processor with program instructions or data; and the at least one processor is configured to execute the program instructions to perform the steps of any of the path planning methods described above.
According to yet another aspect of the present invention, there is provided a robot comprising a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of any of the path planning methods described above.
The path planning method, apparatus, storage medium, chip and robot provided by the embodiments of the invention generate and maintain a static cost map of a preset space, generate a global path plan from the static cost map for each navigation run of the robot, update a local cost map and the static cost map according to obstacle information sensed in real time, and generate a local path plan that optimizes the global path plan based on the local cost map. Compared with the prior-art global cost map navigation scheme, this solves the problem of the robot frequently giving way and re-planning its navigation path in actual working scenes, such as encounters with pedestrians and moving objects. Because the static cost map (which marks the position information of all static obstacles in the global area) is generated and maintained as the basis of global path planning before each task, and a local path plan optimizing the global path is generated from obstacle information marked in real time on the local cost map, the robot can avoid obstacles reasonably and its navigation planning result is optimized.
Drawings
The technical solution and other advantages of the present invention will become apparent from the following detailed description of specific embodiments of the present invention, which is to be read in connection with the accompanying drawings.
Fig. 1 is a schematic flow chart of a path planning method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the substeps of step S100 shown in FIG. 1;
FIG. 3 is a schematic flow chart of the sub-steps of step S110 shown in FIG. 2;
fig. 4 is a schematic diagram of connection of multiple feature points corresponding to multiple frames of point cloud data in the same stereoscopic space according to an embodiment of the present invention;
fig. 5 is a schematic diagram of connection of a plurality of feature points corresponding to another plurality of frames of point cloud data in the same stereo space according to the embodiment of the present invention;
fig. 6 is a schematic diagram of the connection of a plurality of feature points corresponding to another plurality of frames of point cloud data in the same stereo space and the main direction corresponding to the edge point set according to the embodiment of the present invention;
fig. 7 is a schematic diagram of dividing a target obstacle according to a connection relationship between points and a plurality of feature points corresponding to multiple frames of point cloud data according to an embodiment of the present invention;
fig. 8a is a schematic structural diagram of a path planning apparatus for a robot according to an embodiment of the present invention;
FIG. 8b is an architectural diagram of a sub-module of the static map generation module shown in FIG. 8 a;
FIG. 8c is a block diagram of sub-modules of the obstacle detection module shown in FIG. 8 b;
FIG. 8d is a schematic diagram of the structure of the sub-unit of the obstacle dividing unit shown in FIG. 8 c;
FIG. 9 is a schematic diagram of a chip according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a robot provided according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second" are used herein for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical connection, an electrical connection, or a communicative connection; a direct connection, or an indirect connection through an intervening medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The following disclosure provides many different embodiments or examples for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present invention. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples, such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
In view of the problems mentioned in the background, the present invention aims to provide a more efficient robot path planning method to improve the fluency of robot navigation.
The core idea of the invention is to add a static cost map to the two traditional maps (the global cost map and the local cost map). The static cost map marks the position information of all static obstacles in a preset space and serves as the basis of global path planning before the robot executes each task. Meanwhile, the position information of static and dynamic obstacles sensed in real time is added to, or updated in, the corresponding local cost map and static cost map; a local path plan optimizing the global path plan is then generated based on the local cost map, which helps improve the fluency of the robot's path planning. Specific embodiments are described in detail below.
Fig. 1 is a flowchart of a path planning method according to an embodiment of the present invention, and referring to fig. 1, an embodiment of the present invention provides a path planning method, which is performed by a robot, and includes: step S100, generating and maintaining a static cost map corresponding to a preset space; step S200, before the robot executes a task, generating a global path plan for the task based on the static cost map; step S300, in the motion process of the robot, sensing whether an obstacle exists in a preset range of the current travel path of the robot in real time, updating a local cost map and the static cost map based on a sensing result, and then generating a local path plan for optimizing the global path plan based on the local cost map. The local cost map marks the position information of all the obstacles in the local area monitored in real time, and the static cost map marks the position information of all the static obstacles in the global area.
In this embodiment, the robot may be, for example, an intelligent cleaning robot. In some other embodiments, the robot may also be a robot with other specific functions, such as a delivery robot, a navigation robot, etc.
In step S100, a static cost map corresponding to a preset space is generated and maintained. For example, when the initial static cost map is created, the robot may be driven to patrol a preset space specified by the user and pre-construct a basic static map with a 3D laser SLAM (Simultaneous Localization and Mapping) algorithm; alternatively, the basic static map may be acquired from a network or input by the user through an application program, in which case the map constructed through the application program can be displayed visually on the smart device.
In step S200, before the robot executes a task, a global path plan for the task is generated based on the static cost map. The static cost map mainly reflects the cost value of the area where each static obstacle is located in the preset space: the larger the cost value, the higher the probability that the robot collides there, so the robot should be controlled to avoid such areas as far as possible, and where a point's cost value is maximal the robot is certain to collide with an obstacle. For example, if the cost value of a point on the static cost map ranges over [0, 254], the robot preferentially selects a path through smaller cost values when planning, and at a point whose cost value is the maximum of 254 it would inevitably collide with a static obstacle.
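As a minimal sketch of the cost convention just described (not the patent's implementation; the cost blend and function names are illustrative assumptions, following the common [0, 254] costmap scale where 254 means certain collision):

```python
LETHAL_COST = 254  # maximum cost value: the robot would certainly collide here

def traversal_cost(cell_cost: int, step_length: float = 1.0) -> float:
    """Cost of stepping onto a cell: infinite if lethal, otherwise the
    geometric step length plus a penalty proportional to collision risk."""
    if cell_cost >= LETHAL_COST:
        return float("inf")  # never enter a cell marked as a static obstacle
    return step_length + cell_cost / LETHAL_COST

# a planner comparing candidate cells would prefer the smaller result
low_risk = traversal_cost(10)
high_risk = traversal_cost(200)
blocked = traversal_cost(254)
```

A planner using such a function naturally "preferentially selects a path with a smaller cost value", since lower-cost cells yield lower traversal costs.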
Illustratively, the robot generates a global path plan for the task from the static cost map and drives forward along it to perform the predetermined task, such as cleaning the floor. While traveling, the robot senses whether an obstacle exists within a preset range (e.g., 10 m to 30 m) through its own sensing devices (e.g., a laser radar, a camera, etc.). For example, the robot can scan the environment with an on-board laser radar, collect the distance between itself and an obstacle from the reflected laser wave, and then calculate the obstacle's position from that distance. It may also output structured information about obstacles in the current environment, such as a grid occupancy map or obstacle convex-hull contour information, from the input laser point cloud data and the image information collected by the camera. In addition, Simultaneous Localization and Mapping (SLAM) technology can be adopted during travel to realize autonomous movement and localization, finally ensuring that the robot reaches the target point.
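The "calculate position information of the obstacle using the distance information" step above amounts to a polar-to-Cartesian conversion of each range reading; a hedged sketch (function and parameter names are illustrative, not from the patent):

```python
import math

def obstacle_position(robot_x: float, robot_y: float, robot_yaw: float,
                      beam_angle: float, distance: float) -> tuple:
    """Convert one laser range reading into a world-frame obstacle point.

    beam_angle is the laser beam's angle relative to the robot's heading;
    distance is derived from the reflected wave's time of flight.
    """
    theta = robot_yaw + beam_angle
    return (robot_x + distance * math.cos(theta),
            robot_y + distance * math.sin(theta))

# robot at the origin facing +x, beam straight ahead, obstacle 2 m away
x, y = obstacle_position(0.0, 0.0, 0.0, 0.0, 2.0)
```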
In step S300, there may be several reachable paths from the current point to the target point within the preset space, a reachable path being any path from the current point to the target point. To prevent detours and improve walking efficiency, the robot first generates the global path plan (a first walking path) for the task from the static cost map. While the robot runs, obstacle information acquired in real time is marked in the local cost map and the static cost map respectively, where the local cost map marks the position information of all obstacles monitored in real time in the local area and the static cost map marks the position information of all static obstacles in the global area. The local cost map and the static cost map are updated according to the obstacle sensing result, and a local path plan (a second walking path) that optimizes the global path plan is then generated based on the local cost map. Both the global path planning and the subsequent local path optimization use a path planning algorithm that finds the path with the minimum sum of path costs from the current point to the target point; for example, a global path with the minimum cost sum (i.e., the first walking path) is obtained from the static cost map. Such algorithms typically comprise a front-end path search and a back-end trajectory optimization. The path search finds the shortest path between two points in a known map, e.g., the A* algorithm, Dijkstra's algorithm or the RRT algorithm, without specific limitation here.
The trajectory optimization refines the path output by the path search so that it better satisfies kinematic and dynamic constraints, energy constraints, and safety constraints.
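A hedged sketch of the front-end search named above: Dijkstra's algorithm over a small [0, 254] cost grid, treating 254 as impassable and blending step length with cell cost. The grid, the cost blend and all names are illustrative assumptions, not the patent's implementation:

```python
import heapq

def dijkstra(grid, start, goal):
    """Minimum-total-cost path on a cost grid; cells valued 254 are lethal.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, cur = heapq.heappop(heap)
        if cur == goal:  # reconstruct the path by walking predecessors back
            path = [cur]
            while cur in prev:
                cur = prev[cur]
                path.append(cur)
            return path[::-1]
        if d > dist.get(cur, float("inf")):
            continue  # stale heap entry
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] < 254:
                nd = d + 1.0 + grid[nr][nc] / 254.0  # step + risk penalty
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

# a wall of lethal cells forces the path around the right side
grid = [[0, 0, 0],
        [254, 254, 0],
        [0, 0, 0]]
path = dijkstra(grid, (0, 0), (2, 0))
```

A* would differ only in adding an admissible heuristic to the priority; the trajectory optimization stage described above would then smooth this cell sequence.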
It should be noted that the local cost map mainly reflects the cost values of the areas occupied by obstacles within the preset range of the robot's current travel path point during motion. According to the local cost map, the robot selects the lowest-cost path (the second walking path) and avoids local obstacles by controlling its travel speed. While continuing forward along the second walking path, the robot keeps monitoring whether an obstacle appears within its preset range; that is, until it reaches the target point, the robot detects obstacles on the navigation path in real time, performing the detection continuously at a preset frequency.
Compared with the prior-art global cost map navigation scheme, the technical solution provided by the embodiment of the invention solves the problem that the robot frequently gives way and re-plans its navigation path in actual working scenes, such as encounters with pedestrians and moving objects, which harms navigation fluency. A static cost map of the preset space (marking the position information of all static obstacles in the global area) is generated and maintained as the basis of global path planning before the robot executes each task, and a local path plan optimizing the global path is generated from the obstacle information marked in real time on the local cost map, so that the robot avoids obstacles reasonably and its navigation planning result is optimized.
Fig. 2 is a flowchart illustrating the sub-steps of step S100 shown in fig. 1. As shown in fig. 2, generating and maintaining the static cost map corresponding to the preset space includes: step S110, detecting obstacles in the preset space based on detection data and determining the type of each detected obstacle; and step S120, for each obstacle, if it is determined to be a static obstacle, adding or updating its position information in the static cost map, and if it is determined to be a dynamic obstacle, adding or updating its position information only in the local cost map.
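The dispatch rule of step S120 can be sketched as follows; the map class and field names are illustrative assumptions, not from the patent text:

```python
class CostMap:
    """Toy cost map: a dict from grid cell to cost value."""
    def __init__(self):
        self.marks = {}  # (x, y) cell -> cost

    def mark(self, cell, cost=254):
        self.marks[cell] = cost  # add or update the obstacle's position

def update_maps(obstacle_cell, is_static, static_map, local_map):
    """Step S120: every sensed obstacle goes into the local cost map,
    but only static obstacles also persist in the static cost map."""
    local_map.mark(obstacle_cell)
    if is_static:
        static_map.mark(obstacle_cell)

static_map, local_map = CostMap(), CostMap()
update_maps((5, 7), is_static=True, static_map=static_map, local_map=local_map)
update_maps((2, 3), is_static=False, static_map=static_map, local_map=local_map)
```

This keeps the static cost map free of transient obstacles, which is what lets the next task's global plan ignore pedestrians that have since walked away.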
When the robot moves according to the global path plan generated from the static cost map (which marks the position information of all static obstacles in the global area), it is likely to encounter dynamic obstacles such as pedestrians and moving objects. Since a moving pedestrian or object blocks the planned path only temporarily, the robot needs to detect in real time whether an obstacle not marked in the original map appears within the preset range of its current path point, so that it can anticipate. The robot then updates the corresponding local cost map and static cost map according to the obstacle type and generates a local path plan that optimizes the global path plan based on the local cost map. In this way the robot can avoid obstacles reasonably and its navigation planning result is optimized.
In step S110, obstacles in the preset space are detected based on the detection data, and the type of each detected obstacle is determined. Specifically, in one embodiment, data containing obstacle information may be collected by sensors installed on the robot, such as a flat-mounted laser, a flat-mounted camera, an oblique camera or a bumper strip, and used as the source of the marking data for labeling obstacles on a map. After obtaining the obstacle information monitored by the laser, camera or other sensor, the robot directly marks the corresponding current obstacle in the local cost map and the static cost map it maintains.
In another embodiment, during the robot's movement, the existence duration of a current obstacle may be continuously monitored by the sensors installed on the robot (a flat-mounted laser, a flat-mounted camera, a tilted camera, a bumper strip, etc.), where the existence duration can be understood as the length of time for which a sensor can continuously sense the obstacle. If this duration exceeds a preset threshold, the robot marks the current obstacle as a static obstacle in the local cost map and the static cost map it maintains. For example, single-line laser radar point cloud information carries semantic labels such as dynamic points, static points, static-pedestrian points and noise points. When marking static obstacles on the map, static points from the single-line laser radar can be accumulated directly over time; if a static point exists stably for longer than the preset threshold, the robot marks it on the static cost map it maintains. If a static-pedestrian point exists stably for a long time, the robot applies a certain inflation (expansion) to it before marking it in the maintained static cost map. The reason for the inflation is that a single-line laser radar is mounted at a certain height above the ground, so when actually monitoring obstacle information the robot can only sense a stationary pedestrian's legs; inflating the stationary-pedestrian point prevents the robot from running over the pedestrian's feet. Furthermore, noise points and dynamic points may be left unmarked in the static cost map maintained by the robot.
In addition, for obstacle information acquired from images captured by a flat-mounted camera or from sensors such as bumper strips, no semantic information distinguishing dynamic from static obstacles is available, so only the stable existence duration of the current obstacle can be continuously monitored. If that duration exceeds the preset threshold, the robot likewise marks the stably existing static obstacle in the static cost map it maintains.
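The existence-duration rule and the pedestrian-point inflation described above can be sketched as follows. The threshold value, inflation radius and all names are assumptions for illustration; the patent specifies neither:

```python
STATIC_THRESHOLD_S = 5.0   # assumed duration threshold, in seconds
PEDESTRIAN_INFLATION = 2   # assumed inflation radius, in grid cells

def classify(first_seen: float, last_seen: float,
             threshold: float = STATIC_THRESHOLD_S) -> str:
    """Promote a sensed point to 'static' once it has persisted
    continuously for longer than the threshold."""
    return "static" if (last_seen - first_seen) >= threshold else "dynamic"

def inflated_cells(cell: tuple, radius: int) -> set:
    """Cells within a square inflation radius around a pedestrian point,
    so that marking the legs alone does not leave the feet unprotected."""
    x, y = cell
    return {(x + dx, y + dy)
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

# a point sensed continuously for 6 s becomes static; 2 s stays dynamic
kind_a = classify(0.0, 6.0)
kind_b = classify(0.0, 2.0)
footprint = inflated_cells((10, 10), PEDESTRIAN_INFLATION)
```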
In step S120, for each obstacle, if it is determined to be a static obstacle, its position information is added or updated in the static cost map; if it is determined to be a dynamic obstacle, its position information is added or updated only in the local cost map. That is, static obstacles in the local area are detected in real time and maintained in the static cost map, while both static and dynamic obstacles in the local area are detected in real time and marked in the local cost map; a local path plan (the second walking path) optimizing the global path plan is then generated from the local cost map, so that the robot avoids obstacles reasonably on its subsequent walking path and finally reaches the target point.
Fig. 3 is a schematic flowchart of the sub-step of step S110 shown in fig. 2, and fig. 4 is a schematic diagram of connection of multiple feature points corresponding to multiple frames of point cloud data in the same stereo space according to an embodiment of the present invention.
As shown in fig. 3 to 4, the detecting of the obstacles in the preset space based on the detection data and the determining of the type of each of the detected obstacles includes: step S111, collecting continuous multi-frame point cloud data, wherein each frame of point cloud data comprises a plurality of feature points; step S112, stacking the collected multi-frame point cloud data in the same three-dimensional space; step S113, connecting the characteristic points in the three-dimensional space in a point-to-point mode based on a preset rule to form a plurality of edge lines; and step S114, dividing each target obstacle according to the connection relation between the points and the points, and then judging the dynamic and static attribute information of each target obstacle based on the edge line set corresponding to each target obstacle.
In step S111, point cloud data of obstacles within the preset range of the robot's current path point are collected by a laser radar. For example, the scanning angle range of the laser radar is -120° to 120°, so all obstacles within the robot's field of view can be sensed. The laser radar emits laser pulses at a frequency of 10 Hz, i.e., it rotates 10 turns per second and issues 10 frames of data; the distance between the robot and an obstacle is collected from the reflected laser wave, the obstacle's position information is calculated from that distance, and the position information is converted into point cloud data of the obstacle.
In step S112, for example, the robot continuously acquires 8 frames of point cloud data through the laser radar. After noise filtering, each frame of point cloud data can be placed in a displayable three-dimensional space, for example using the Rviz software, in which a time-sequence label corresponding to each frame is generated automatically so as to distinguish the point cloud data obtained at each frame time. The 8 consecutive frames are stacked in the same stereo space, i.e., they are constructed simultaneously in that space as 8 stacked layers of feature points, where each frame of point cloud data corresponds to one layer of feature points. Illustratively, a three-dimensional perspective view of the 8 layers of feature points in the same stereo space can be viewed through the software's graphical interface.
In step S113, the feature points in the three-dimensional space are connected point to point according to a preset rule to establish a plurality of edge lines corresponding to the obstacles. Exemplarily, while establishing the edge lines, the Euclidean distance between each feature point and its neighboring feature points in the same frame or in adjacent frames may be calculated through software (Rviz), and a feature point is connected to another target feature point whenever the Euclidean distance between them satisfies a preset threshold, thereby establishing the edge lines corresponding to each obstacle. After all edge lines are established, the target obstacles can be divided according to the point-to-point connection relationships.
In step S114, the target obstacles are divided according to the point-to-point connection relationships, and the dynamic and static attribute information of each target obstacle is then determined based on its corresponding edge line set. Dynamic and static attribute information indicates whether an obstacle is of the dynamic type or the static type.
The criterion for dividing the target obstacles is the point-to-point connection relationship among the feature points of the current frame and the feature points of the 7 preceding frames traced back from it, for example, the connections between all feature points of the current frame and all feature points within the two frames preceding it. If every edge line in an edge line set is connected with one or more other edge lines in the same set, that edge line set is regarded as one target obstacle.
It should be noted that collecting 8 consecutive frames of point cloud data is only an example; in an actual application, the number of consecutive frames collected may be set according to the specific scenario. The embodiments of the present invention are not limited thereto.
Fig. 5 is a schematic diagram of connection of a plurality of feature points corresponding to another plurality of frames of point cloud data in the same stereo space according to the embodiment of the present invention; fig. 6 is a schematic diagram of a plurality of feature points corresponding to another plurality of frames of point cloud data connected in the same stereo space and corresponding to main directions of edge line sets according to an embodiment of the present invention; fig. 7 is a schematic diagram of dividing a target obstacle according to a connection relationship between points and a plurality of feature points corresponding to multi-frame point cloud data according to an embodiment of the present invention.
As shown in fig. 4 to 7, the step of connecting the feature points in the three-dimensional space in a point-to-point manner based on the preset rule includes: aiming at the feature points belonging to the same frame, connecting each feature point with all feature points which are adjacent to the feature point on the index measurement and have Euclidean distance from the feature point smaller than a first preset threshold value in a point-to-point mode to form a first edge line group; and for each feature point, taking all feature points included in two frames before the frame to which the feature point belongs as target feature points, and connecting the feature point and all target feature points with Euclidean distances from the feature point smaller than a second preset threshold value in a point-to-point manner to form a second edge line group.
For example, in the process of establishing edge lines, for feature points belonging to the same frame, the software (e.g., Rviz) may find, for each feature point, all feature points adjacent to it in the index metric; if the Euclidean distance between the two feature points is smaller than the first preset threshold, the neighbor is marked as a valid feature point of the same frame, and the two points are connected point-to-point to form a first edge line. All first edge lines of all frames form the first edge line group. The Euclidean distance here is the straight-line distance between the two feature points expressed in the same plane coordinate system, and may be calculated with the Pythagorean theorem.
Also in the process of establishing edge lines, for each feature point, all feature points included in the two frames preceding the frame to which it belongs are taken as target feature points; every target feature point whose Euclidean distance from the feature point is smaller than the second preset threshold is marked as an inter-frame valid feature point, and the feature point is then connected point-to-point with all of its inter-frame valid feature points to form a plurality of second edge lines. All second edge lines form the second edge line group.
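The two connection rules above can be sketched as follows. The threshold values and the representation of frames as lists of (x, y) points are illustrative assumptions; the intra-frame rule (index-adjacent neighbors within the first threshold) and the inter-frame rule (all points of the two preceding frames within the second threshold) follow the text.

```python
import math

def euclid(p, q):
    """Straight-line distance between two 2D points (Pythagorean theorem)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_edge_lines(frames, t1=0.3, t2=0.5):
    """frames: list of frames (oldest first), each a list of (x, y) points.
    Returns (first_group, second_group); each edge is ((frame, idx), (frame, idx)).
    t1 and t2 stand in for the first and second preset thresholds."""
    first, second = [], []
    for f, pts in enumerate(frames):
        # first edge line group: index-adjacent points of the same frame
        for i in range(len(pts) - 1):
            if euclid(pts[i], pts[i + 1]) < t1:
                first.append(((f, i), (f, i + 1)))
        # second edge line group: all points of the two preceding frames
        for g in (f - 1, f - 2):
            if g < 0:
                continue
            for i, p in enumerate(pts):
                for j, q in enumerate(frames[g]):
                    if euclid(p, q) < t2:
                        second.append(((f, i), (g, j)))
    return first, second
```

The first group captures the contour of an obstacle within one frame, while the second group links the same obstacle across consecutive frames, which is what the dynamic and static judgment below relies on.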
Further, the step of determining the dynamic and static attribute information of each target obstacle based on its corresponding edge line set includes: counting, among the edge lines belonging to the second edge line group corresponding to each target obstacle, the number of moving edge lines and the number of continuous edge lines. If the projection length of an edge line on the first plane is greater than a third preset threshold, the edge line is marked as a moving edge line; if the included angle θ between the direction vector of an edge line's projection on the first plane and the main direction vector is smaller than a fourth preset threshold, the edge line is marked as a continuous edge line. The first plane is the base plane of the stereo space (in the figures, the plane spanned by the x and y directions, with the z direction perpendicular to it), and the main direction vector is the direction vector of the projection, on the first plane, of the line connecting the geometric center of the bottommost layer of feature points and the geometric center of the topmost layer of feature points corresponding to the target obstacle.
In fig. 5, for example, the projection on the first plane of a second edge line, formed by connecting a feature point from the 4th frame of point cloud data of obstacle a with a feature point from its 5th frame according to the preset rule, has length s1; if s1 exceeds the third preset threshold, the second edge line is marked as a moving edge line. It should be noted that the third preset threshold may be obtained by geometric mapping of the real physical model of the robot.
The embodiment of the present invention introduces the concept of a main direction vector. As shown in fig. 6, the projection on the first plane of the line connecting the geometric center of the feature points of the 8th (bottommost) frame of obstacle c and the geometric center of the feature points of its 1st (topmost) frame is defined as the main direction vector. If the included angle θ between the projection of any second edge line on the first plane and the main direction vector is smaller than the fourth preset threshold, the second edge line is marked as a continuous edge line. In other words, the continuity of a second edge line in the second edge line group of a target obstacle is judged by the angle between its projection vector and the main direction vector: the smaller the acute angle θ, the better the continuity, and the fourth preset threshold may be, for example, 60°.
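The marking of moving and continuous edge lines can be sketched as follows, working on edges already projected onto the first plane. The 60° continuity threshold is the example value from the text; the length threshold and function signature are illustrative assumptions.

```python
import math

def centroid(points):
    """Geometric center of a set of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify_edges(bottom_pts, top_pts, edges, t3=0.05, t4_deg=60.0):
    """bottom_pts / top_pts: feature points of the bottommost and topmost
    layers of one obstacle, projected onto the first plane.
    edges: second-group edge lines as ((x1, y1), (x2, y2)) projections.
    Returns (moving, continuous) edge lists; t3 stands in for the third
    preset threshold, t4_deg for the fourth (60 deg in the example)."""
    cb, ct = centroid(bottom_pts), centroid(top_pts)
    main = (ct[0] - cb[0], ct[1] - cb[1])  # main direction vector
    moving, continuous = [], []
    for (p, q) in edges:
        v = (q[0] - p[0], q[1] - p[1])
        length = math.hypot(*v)
        if length > t3:
            moving.append((p, q))          # projection length exceeds t3
        denom = length * math.hypot(*main)
        if denom > 0:
            dot = v[0] * main[0] + v[1] * main[1]
            theta = math.degrees(math.acos(max(-1.0, min(1.0, dot / denom))))
            if theta < t4_deg:
                continuous.append((p, q))  # angle to main direction below t4
    return moving, continuous
```

The counts of the two returned lists feed directly into the mobility and continuity ratios computed next.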
In the embodiment of the present invention, the target obstacles are divided according to the point-to-point connection relationships among the feature points of the multi-frame point cloud data. For example, fig. 7 shows four divided target obstacles (Obj1, Obj2, Obj3, Obj4) with four corresponding edge line sets; the dynamic and static attributes of each target obstacle then need to be judged from the mobility index and the continuity index of the edge lines belonging to the second edge line group in each set.
For each target obstacle, based on the counted number of moving edge lines, the ratio of moving edge lines to all edge lines belonging to the corresponding second edge line group (i.e., the mobility index of the edge lines) is calculated; and based on the counted number of continuous edge lines, the ratio of continuous edge lines to moving edge lines (i.e., the continuity index of the edge lines) is calculated.
Further, the step of determining the dynamic and static attribute information of each target obstacle based on its corresponding edge line set further includes: for each target obstacle, judging its dynamic and static attributes according to the mobility index (the ratio of moving edge lines to all edge lines of the corresponding second edge line group) and the continuity index (the ratio of continuous edge lines to moving edge lines). If the mobility index is smaller than a fifth preset threshold, the type of the target obstacle is determined to be a static obstacle; if the mobility index is greater than a sixth preset threshold and the continuity index is greater than a seventh preset threshold, the type of the target obstacle is determined to be a dynamic obstacle.
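The two-ratio judgment above can be sketched as a small decision function. The fifth to seventh thresholds are not fixed by the text, so the values here are placeholders; the 'unstable' outcome for obstacles meeting neither condition mirrors the unstable point clouds mentioned later in the tracking-learning discussion.

```python
def judge_obstacle(n_moving, n_total_second, n_continuous,
                   t5=0.1, t6=0.5, t7=0.6):
    """Judge one obstacle's dynamic/static attribute from edge-line counts.

    mobility  = moving edges / all second-group edges
    continuity = continuous edges / moving edges
    t5..t7 stand in for the fifth to seventh preset thresholds
    (placeholder values). Returns 'static', 'dynamic' or 'unstable'."""
    if n_total_second == 0:
        return 'unstable'
    mobility = n_moving / n_total_second
    if mobility < t5:
        return 'static'
    continuity = n_continuous / n_moving if n_moving else 0.0
    if mobility > t6 and continuity > t7:
        return 'dynamic'
    return 'unstable'
```

An obstacle whose second-group edges barely project onto the base plane is static; one with many long, direction-consistent edges is dynamic.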
According to the technical scheme provided by the embodiment of the invention, while the robot moves along a specific route, the target obstacles can be divided based on the point-to-point connection relationships among the feature points corresponding to the multi-frame point cloud data in the laser point cloud detection data; at the same time, an edge line set is obtained for each target obstacle from the edge lines formed by point-to-point connections satisfying the preset conditions, and the dynamic and static attribute information of the target obstacle corresponding to each edge line set can be obtained by judging the mobility index and the continuity index of its edge lines.
If the mobility index of the edge lines belonging to the second edge line group in an edge line set is smaller than the fifth preset threshold, the target obstacle corresponding to that set is determined to be a static obstacle; if the mobility index is greater than the sixth preset threshold and the continuity index is greater than the seventh preset threshold, the target obstacle is determined to be a dynamic obstacle. On this basis, the dynamic and static attributes of the four illustrated target obstacles (Obj1, Obj2, Obj3, Obj4) can be distinguished.
To improve the accuracy of identifying the dynamic and static attributes of target obstacles from the laser point cloud detection data, after the dynamic and static attributes of the four target obstacles (Obj1, Obj2, Obj3, Obj4) are distinguished, tracking learning is performed on them. During tracking learning, the static, moving and unstable point clouds in the laser point cloud detection data can be counted over time to distinguish which static obstacles exist stably from the dynamic and unstable obstacles, so that distinguishable dynamic and static semantic information is finally formed for the corresponding obstacles; this semantic information is then assigned in turn to the corresponding grid points of the static cost map and of the local cost map. In addition, for other sensor data without dynamic and static semantics, different stability counts are set according to the reliability of each sensor, and the data are marked on the static map once a certain threshold is met.
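The stability counting used in tracking learning can be sketched as follows. The class, the per-cell counter and the promotion threshold are all illustrative assumptions; only the idea of counting consistent static observations over time before marking a cell on the static map comes from the text.

```python
class StabilityTracker:
    """Count, per grid cell, how many consecutive observations classified
    the cell as static; a cell is promoted to the static map only after
    promote_after consistent observations (placeholder threshold). Any
    non-static observation resets the count, modelling unstable clouds."""

    def __init__(self, promote_after=5):
        self.promote_after = promote_after
        self.static_counts = {}

    def observe(self, cell, kind):
        """Record one observation; return True when the cell is stable
        enough to be marked on the static map."""
        if kind == 'static':
            self.static_counts[cell] = self.static_counts.get(cell, 0) + 1
        else:
            self.static_counts[cell] = 0
        return self.static_counts[cell] >= self.promote_after
```

For sensor data without dynamic and static semantics, a per-sensor promote_after value could encode the differing reliability mentioned above.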
In the embodiment of the present invention, the target obstacles may be divided according to the point-to-point connection relationships among the feature points corresponding to the multi-frame point cloud data, an edge line set is obtained for each target obstacle from the point-to-point connections satisfying the preset conditions, and the dynamic and static attribute information of each target obstacle can be obtained by judging the mobility index and the continuity index of the edge lines belonging to the second edge line group in each set. However, the laser point cloud detection data can only determine the dynamic or static type of a target obstacle in the preset space; the category of the obstacle cannot be known. For example, it cannot be determined whether a target obstacle is a person, a cat, a dog, an automobile or some other object, so accurate semantic information cannot be set for each target obstacle in the navigation map. To solve this problem, in an embodiment of the present invention, the dynamic and static information of each target obstacle obtained from the laser point cloud detection data is fused with the image information of the target obstacle collected by the camera device to obtain fused dynamic and static information; the corresponding grid points of the static cost map and of the local cost map are then assigned based on the fused information, and after the fused dynamic and static information is obtained, the dynamic and static semantic information of each target obstacle can finally be set in the navigation map.
It should be noted that the fusion of the laser point cloud detection data with the image information collected by the camera device may be performed in real time, that is, continuously while the robot travels along the preset navigation path.
According to the technical scheme provided by the embodiment of the invention, while the robot moves along a specific route, the target obstacles can be divided based on the point-to-point connection relationships among the feature points corresponding to the multi-frame point cloud data in the laser point cloud detection data, the edge line set corresponding to each target obstacle is obtained from the edge lines formed by point-to-point connections satisfying the preset conditions, and the dynamic and static attribute information of each target obstacle can be obtained by judging the mobility index and the continuity index of the edge lines in its set. A static cost map of the preset space is generated and maintained, a global path plan for each navigation run of the robot is generated based on the static cost map, the local cost map and the static cost map are updated according to the obstacle information sensed in real time, and a local path plan for optimizing the global path plan is generated based on the local cost map. Compared with the global cost map navigation scheme in the prior art, this can solve the problem that the robot frequently gives way and replans its navigation path in actual working scenes, for example when meeting pedestrians or moving objects. By generating and maintaining a static cost map of the preset space (marking the position information of all static obstacles in the global area) for the global path plan before each task, and generating on the local cost map a local path plan that optimizes the global path according to the obstacle information sensed in real time, the robot can avoid obstacles reasonably and the navigation planning result is optimized.
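The division of labor between the two cost maps can be sketched as a simple update routine. Representing the maps as dicts of grid cells and removing a cell from the static map when it is re-observed as dynamic are illustrative assumptions; the routing rule itself (static obstacles added or updated on the static cost map, dynamic obstacles marked only on the local cost map) follows the scheme above.

```python
def update_maps(static_map, local_map, obstacles):
    """Route each sensed obstacle to the appropriate cost map.

    obstacles: iterable of (cell, kind) where cell is a grid index and
    kind is 'static' or 'dynamic'. Maps are plain {cell: cost} dicts
    in this sketch; cost 1.0 marks an occupied cell."""
    for cell, kind in obstacles:
        local_map[cell] = 1.0           # local map marks all sensed obstacles
        if kind == 'static':
            static_map[cell] = 1.0      # static map keeps only static ones
        else:
            # assumption: a cell now seen as dynamic is cleared from the
            # static map so that moving objects never persist there
            static_map.pop(cell, None)
    return static_map, local_map
```

Global planning then reads only the static map, while local replanning reads the local map, which is what keeps transient pedestrians out of the long-lived global plan.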
Fig. 8a is a schematic structural diagram of a path planning apparatus for a robot according to an embodiment of the present invention, and as shown in fig. 8a, the apparatus includes a static map generation module 8100, a first path planning module 8200, and a second path planning module 8300.
The static map generation module 8100 generates and maintains a static cost map corresponding to the preset space. The initial static cost map may be a basic static map obtained in advance by the robot through laser radar sensing, or a basic static map input by an external program. For example, the robot may, in response to a user trigger, patrol a preset space determined by the user to automatically generate the initial static cost map; during travel, SLAM techniques may be employed to achieve autonomous movement and positioning.
The first path planning module 8200 is configured to generate a global path plan for the task based on the static cost map before the robot executes the task;
the second path planning module 8300 is configured to sense whether an obstacle exists in a preset range of a current travel path of the robot in real time during a movement process of the robot, update a local cost map and the static cost map based on a sensing result, and then generate a local path plan for optimizing the global path plan based on the local cost map;
the local cost map marks the position information of all the obstacles in the local area monitored in real time, and the static cost map marks the position information of all the static obstacles in the global area.
Fig. 8b is a schematic structural diagram of sub-modules of the static map generation module shown in fig. 8a, and as shown in fig. 8b, the static map generation module 8100 further includes an obstacle detection module 8400, and the obstacle detection module 8400 is configured to detect obstacles in the preset space based on the detection data, and determine the type of each detected obstacle.
It should be noted that, the path planning apparatus for a robot provided in this embodiment may execute the path planning method described in this embodiment (for example, the embodiment that executes step S110 to step S120), and the implementation principle and the technical effect are similar, and are not described herein again.
Fig. 8c is a schematic diagram of an architecture of sub-modules of the obstacle detection module shown in fig. 8b, and as shown in fig. 8c, the obstacle detection module 8400 includes:
a point cloud data collecting unit 8410, configured to collect continuous multi-frame point cloud data of an obstacle in a preset range on a current path point of the robot, where each frame of point cloud data includes a plurality of feature points;
a point cloud data stacking unit 8420, configured to stack the collected multiple frames of point cloud data in the same three-dimensional space;
an edge line construction unit 8430, configured to connect feature points in the three-dimensional space in a point-to-point manner based on a preset rule to form a plurality of edge lines;
the obstacle dividing unit 8440 is configured to divide each target obstacle according to a connection relationship between a point and a point, and then determine dynamic and static attribute information of each target obstacle based on an edge line set corresponding to each target obstacle.
It should be noted that the path planning apparatus for a robot provided in this embodiment may execute the path planning method described in this embodiment (for example, the embodiment that executes step S111 to step S114), and the implementation principle and the technical effect are similar, and are not described herein again.
Fig. 8d is an architecture diagram of a subunit of the obstacle dividing unit shown in fig. 8c, and as shown in fig. 8d, in some embodiments, the obstacle dividing unit 8440 may include the following subunits, specifically, the obstacle dividing unit 8440 includes:
a counting unit 8441, configured to count, for each target obstacle, the number of moving edge lines and the number of continuous edge lines in the edge lines belonging to the second edge line group corresponding to the target obstacle;
a calculating unit 8442, configured to calculate, for each target obstacle, a number ratio between the moving edge line and all edge lines belonging to the second edge line group corresponding to the target obstacle; and calculating the number ratio of the continuous edge lines to the moving edge lines;
the determining unit 8443 is configured to determine, for each target obstacle, a dynamic and static attribute of the target obstacle according to a ratio of the number of the moving edge lines in the edge line set corresponding to the target obstacle to all edge lines belonging to the second edge line group corresponding to the target obstacle, and a ratio of the number of the continuous edge lines to the moving edge lines.
Compared with the global cost map navigation scheme in the prior art, the path planning apparatus for a robot provided by this embodiment can solve the problem that the robot frequently gives way and replans its navigation path in actual working scenes, for example when meeting pedestrians or moving objects. Before the robot performs each task, a static cost map of the preset space (marking the position information of all static obstacles in the global area) is generated and maintained for the global path plan, and a local path plan for optimizing the global path is generated on the local cost map according to the obstacle information sensed in real time, so that the robot can avoid obstacles reasonably and the navigation planning result is optimized.
Other aspects of the path planning apparatus for a robot proposed in this embodiment are the same as or similar to the path planning method described above, and are not described herein again. Furthermore, the present invention also provides a storage medium storing a computer program that can be loaded by a processor to perform the steps of any of the path planning methods described above.
Illustratively, the storage medium may be any one of the following: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
In addition, the present invention further provides a chip, fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present invention, and as shown in fig. 9, the chip 900 includes one or more processors 901 and an interface circuit 902. Optionally, chip 900 may also contain bus 903. Wherein:
the processor 901 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware or instructions in the form of software in the processor 901. The processor 901 described above may be one or more of a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, an MCU, MPU, CPU, or co-processor. The methods, steps disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The interface circuit 902 may be used for sending or receiving data, instructions or information, and the processor 901 may perform processing by using the data, instructions or other information received by the interface circuit 902, and may send out processing completion information through the interface circuit 902.
Optionally, the chip also includes memory, which may include read-only memory and random access memory, and provides operating instructions and data to the processor. The portion of memory may also include non-volatile random access memory (NVRAM).
Alternatively, the memory stores executable software modules or data structures, and the processor may perform corresponding operations by calling the operation instructions stored in the memory (the operation instructions may be stored in an operating system).
Alternatively, the chip can be used in the device for robot path planning according to the embodiment of the present invention. Alternatively, the interface circuit 902 may be used to output the execution result of the processor 901. For the path planning method provided by one or more embodiments of the present invention, reference may be made to the foregoing embodiments, and details are not described here.
It should be noted that the functions corresponding to the processor 901 and the interface circuit 902 may be implemented by hardware design, software design, or a combination of software and hardware, which is not limited herein.
In addition, the present invention further provides a robot, fig. 10 is a schematic structural diagram of the robot according to the embodiment of the present invention, and as shown in fig. 10, a robot 1000 includes: a processor 1001 and a memory 1002; optionally, the robot 1000 further comprises a communication bus 1003 and a communication interface 1004, wherein the processor 1001, the communication interface 1004 and the memory 1002 are adapted to communicate with each other via the communication bus 1003, and the memory 1002 stores a computer program adapted to be loaded by the processor 1001 and to perform the steps of any of the previously described path planning methods.
The communication bus 1003 may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For convenience of illustration, the buses in the figures disclosed in the embodiments of the present invention are not limited to only one bus or one type of bus.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The path planning method, apparatus, storage medium, chip and robot provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementation of the invention, and the description of the embodiments is only intended to help understand the technical solution and core idea of the invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (12)

1. A path planning method, the method being performed by a robot, the method comprising:
generating and maintaining a static cost map corresponding to a preset space;
generating a global path plan for the task based on the static cost map before the robot executes the task;
in the motion process of the robot, sensing whether an obstacle exists in a preset range of a current travel path of the robot in real time, updating a local cost map and the static cost map based on a sensing result, and then generating a local path plan for optimizing the global path plan based on the local cost map;
the local cost map marks the position information of all the obstacles in the local area monitored in real time, and the static cost map marks the position information of all the static obstacles in the global area.
2. The method according to claim 1, wherein the step of generating and maintaining the static cost map corresponding to the preset space comprises:
detecting obstacles in the preset space based on the detection data, and determining the type of each detected obstacle;
and for each obstacle, if the obstacle is determined to be a static obstacle, adding or updating the position information of the obstacle in the static cost map, and if the obstacle is determined to be a dynamic obstacle, only adding or updating the position information of the obstacle in the local cost map.
3. The method of claim 2, wherein the step of detecting obstacles within the preset space based on the probe data and determining the type of each detected obstacle comprises:
collecting continuous multi-frame point cloud data, wherein each frame of point cloud data comprises a plurality of characteristic points;
stacking the collected multi-frame point cloud data in the same three-dimensional space;
connecting the characteristic points in the three-dimensional space in a point-to-point mode based on a preset rule to form a plurality of edge lines;
and dividing each target obstacle according to the connection relation between the points and the points, and then judging the dynamic and static attribute information of each target obstacle based on the edge line set corresponding to each target obstacle.
4. The method according to claim 3, wherein the step of connecting the feature points in the three-dimensional space in a point-to-point manner based on a preset rule comprises:
aiming at the feature points belonging to the same frame, connecting each feature point with all feature points which are adjacent to the feature point on the index measurement and have Euclidean distance from the feature point smaller than a first preset threshold value in a point-to-point mode to form a first edge line group;
and for each feature point, taking all feature points included in two frames before the frame to which the feature point belongs as target feature points, and connecting the feature point and all target feature points with Euclidean distances from the feature point smaller than a second preset threshold value in a point-to-point manner to form a second edge line group.
5. The method according to claim 4, wherein the step of determining the dynamic and static attribute information of each target obstacle based on the edge line set corresponding to each target obstacle comprises:
counting, among the edge lines belonging to the second edge line group corresponding to each target obstacle, the number of moving edge lines and the number of continuous edge lines;
wherein an edge line is marked as a moving edge line if the length of its projection on a first plane is greater than a third preset threshold;
an edge line is marked as a continuous edge line if the included angle θ between the direction vector of its projection on the first plane and a main direction vector is smaller than a fourth preset threshold;
and the first plane is the base plane of the three-dimensional space, and the main direction vector is the direction vector of the projection, on the first plane, of the line connecting the geometric center enclosed by the bottommost-layer feature points and the geometric center enclosed by the topmost-layer feature points, among the feature points corresponding to the target obstacle.
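Claim 5's two tests can be pictured on the ground plane: project each cross-frame edge down, mark long projections as "moving", and mark projections roughly aligned with the obstacle's main direction as "continuous". A hedged sketch of those criteria (the thresholds are placeholders, and treating the two tests as independent of each other is one reading of the claim):

```python
import math

def classify_edges(edges, main_dir, len_thresh=0.2, angle_thresh=math.radians(20)):
    """edges: list of ((x1, y1, z1), (x2, y2, z2)) cross-frame edge lines.
    main_dir: 2-D main direction vector (projection on the base plane of the
    line from the bottom-layer center to the top-layer center).

    Returns (moving, continuous) edge lists per the claim-5 criteria.
    """
    moving, continuous = [], []
    mdx, mdy = main_dir
    mnorm = math.hypot(mdx, mdy)
    for edge in edges:
        (x1, y1, _), (x2, y2, _) = edge
        vx, vy = x2 - x1, y2 - y1          # projection of the edge on the base plane
        length = math.hypot(vx, vy)
        if length > len_thresh:            # long projection -> apparent motion
            moving.append(edge)
        if length > 0 and mnorm > 0:
            # Included angle between the projected edge and the main direction.
            cos_t = (vx * mdx + vy * mdy) / (length * mnorm)
            theta = math.acos(max(-1.0, min(1.0, cos_t)))
            if theta < angle_thresh:       # aligned -> consistent motion direction
                continuous.append(edge)
    return moving, continuous
```

Intuitively, a rigid obstacle that is actually translating produces many long, mutually aligned projections, while sensor noise on a static obstacle produces short projections with scattered directions.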
6. The method according to claim 5, wherein the step of determining the dynamic and static attribute information of each target obstacle based on the edge line set corresponding to each target obstacle further comprises:
for each target obstacle, calculating, based on the counted number of moving edge lines, the ratio of the number of moving edge lines to the number of all edge lines belonging to the second edge line group corresponding to the target obstacle; and
for each target obstacle, calculating, based on the counted number of continuous edge lines, the ratio of the number of continuous edge lines to the number of moving edge lines.
7. The method according to claim 6, wherein the step of determining the dynamic and static attribute information of each target obstacle based on the edge line set corresponding to each target obstacle further comprises:
for each target obstacle, determining the dynamic or static attribute of the target obstacle according to the ratio of the number of moving edge lines in the edge line set corresponding to the target obstacle to the number of all edge lines belonging to the corresponding second edge line group, and the ratio of the number of continuous edge lines to the number of moving edge lines;
wherein the target obstacle is determined to be a static obstacle if the ratio of the number of moving edge lines to the number of all edge lines belonging to the corresponding second edge line group is smaller than a fifth preset threshold;
and the target obstacle is determined to be a dynamic obstacle if the ratio of the number of moving edge lines to the number of all edge lines belonging to the corresponding second edge line group is greater than a sixth preset threshold and the ratio of the number of continuous edge lines to the number of moving edge lines is greater than a seventh preset threshold.
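Claims 6–7 reduce the per-obstacle edge statistics to two ratios and threshold them. A sketch of that decision rule; the threshold values are illustrative, and note that the claims leave a gap between the fifth and sixth thresholds where the attribute is simply not decided:

```python
def dynamic_static_attribute(n_moving, n_continuous, n_edges,
                             static_max=0.1, dynamic_min=0.3, cont_min=0.5):
    """Decide an obstacle's dynamic/static attribute from edge-line counts.

    n_moving     - moving edge lines in the obstacle's second edge line group
    n_continuous - continuous edge lines in that group
    n_edges      - all edge lines in that group
    """
    if n_edges == 0:
        return "unknown"
    moving_ratio = n_moving / n_edges
    if moving_ratio < static_max:          # fifth threshold: almost nothing moved
        return "static"
    if (moving_ratio > dynamic_min and n_moving > 0
            and n_continuous / n_moving > cont_min):
        # sixth + seventh thresholds: many edges moved, and their motion
        # directions agree with the obstacle's main direction.
        return "dynamic"
    return "unknown"                       # the in-between case claim 7 leaves open
```

Requiring the continuity ratio on top of the moving ratio is what separates coherent motion of one object from incoherent jitter across frames.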
8. The method of claim 7, further comprising:
and fusing the dynamic and static information of the target obstacle obtained from the point cloud data with image information of the target obstacle acquired by a camera device, to obtain fused dynamic and static information of the target obstacle, and then assigning values to the corresponding grid points of the static cost map and of the local cost map based on the fused dynamic and static information.
9. A path planning apparatus for a robot, the apparatus comprising:
the static map generation module is used for generating and maintaining a static cost map corresponding to a preset space;
the first path planning module is used for generating a global path plan for the task based on the static cost map before the robot executes the task;
the second path planning module is used for sensing, in real time while the robot is moving, whether an obstacle exists within a preset range of the robot's current travel path, updating a local cost map and the static cost map based on the sensing result, and then generating, based on the local cost map, a local path plan that optimizes the global path plan;
wherein the local cost map marks the position information of all obstacles monitored in real time in the local area, and the static cost map marks the position information of all static obstacles in the global area.
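Claim 9's two-map arrangement can be pictured as two occupancy grids over the same space: static obstacles are written into both the persistent global map and the rolling local map, while dynamic obstacles touch only the local one. An illustrative sketch, under the assumption of a simple per-cell cost grid (the class, cost values, and method names are invented for this example):

```python
FREE, OCCUPIED = 0, 100  # illustrative cost values

class CostMaps:
    """Minimal two-layer cost map: a global static map plus a local map."""

    def __init__(self, width, height):
        self.static_map = [[FREE] * width for _ in range(height)]  # static obstacles only
        self.local_map = [[FREE] * width for _ in range(height)]   # all real-time obstacles

    def observe(self, x, y, attribute):
        """Record one observed obstacle cell; attribute is 'static' or 'dynamic'."""
        self.local_map[y][x] = OCCUPIED       # every obstacle blocks local planning
        if attribute == "static":
            self.static_map[y][x] = OCCUPIED  # persists for global (re)planning
```

This split is why the claims distinguish the two planners: the global planner only pays for obstacles that will still be there later, while the local planner reacts to everything currently sensed.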
10. A storage medium, characterized in that the storage medium stores a computer program that can be loaded by a processor to perform the steps in the path planning method according to any of claims 1 to 8.
11. A chip comprising at least one processor and an interface;
the interface is used for providing program instructions or data for the at least one processor;
the at least one processor is configured to execute the program instructions to perform the steps of the path planning method according to any one of claims 1 to 8.
12. A robot, characterized in that the robot comprises a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the path planning method according to any of claims 1 to 8.
CN202110960320.1A 2021-08-20 2021-08-20 Path planning method, path planning device, storage medium, chip and robot Active CN113741438B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110960320.1A CN113741438B (en) 2021-08-20 2021-08-20 Path planning method, path planning device, storage medium, chip and robot

Publications (2)

Publication Number Publication Date
CN113741438A true CN113741438A (en) 2021-12-03
CN113741438B CN113741438B (en) 2024-03-26

Family

ID=78732058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110960320.1A Active CN113741438B (en) 2021-08-20 2021-08-20 Path planning method, path planning device, storage medium, chip and robot

Country Status (1)

Country Link
CN (1) CN113741438B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060241827A1 (en) * 2005-03-04 2006-10-26 Masaki Fukuchi Obstacle avoiding apparatus, obstacle avoiding method, obstacle avoiding program and mobile robot apparatus
CN109146972A (en) * 2018-08-21 2019-01-04 南京师范大学镇江创新发展研究院 Vision navigation method based on rapid characteristic points extraction and gridding triangle restriction
CN109509260A (en) * 2017-09-14 2019-03-22 百度在线网络技术(北京)有限公司 Mask method, equipment and the readable medium of dynamic disorder object point cloud
CN109709945A (en) * 2017-10-26 2019-05-03 深圳市优必选科技有限公司 A kind of paths planning method based on obstacle classification, device and robot
CN111429514A (en) * 2020-03-11 2020-07-17 浙江大学 Laser radar 3D real-time target detection method fusing multi-frame time sequence point clouds
CN111966089A (en) * 2020-07-16 2020-11-20 上海澳悦智能科技有限公司 Method for estimating speed of dynamic obstacle by using cost map in mobile robot
CN112284389A (en) * 2020-09-28 2021-01-29 深圳优地科技有限公司 Mobile robot path planning method and device, mobile robot and storage medium
CN112859859A (en) * 2021-01-13 2021-05-28 中南大学 Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114166228A (en) * 2021-12-07 2022-03-11 武汉科技大学 Unmanned aerial vehicle continuous monitoring path planning method
CN114166228B (en) * 2021-12-07 2024-05-28 武汉科技大学 Unmanned aerial vehicle continuous monitoring path planning method
CN114185353A (en) * 2021-12-09 2022-03-15 吉林大学 Autonomous operation loader reversing obstacle avoidance and path planning method
CN114185353B (en) * 2021-12-09 2024-03-22 吉林大学 Backing obstacle avoidance and path planning method for autonomous operation loader
CN114360274A (en) * 2021-12-13 2022-04-15 珠海格力智能装备有限公司 Distribution vehicle navigation method, system, computer equipment and storage medium
CN114674333A (en) * 2022-03-22 2022-06-28 中国电信股份有限公司 Navigation method, device, system and medium for mobile device
CN114674333B (en) * 2022-03-22 2023-12-08 中国电信股份有限公司 Navigation method, device, system and medium of mobile device
WO2023191723A1 (en) * 2022-03-30 2023-10-05 Agency For Science, Technology And Research Method and system for navigating a robot
TWI828136B (en) * 2022-05-03 2024-01-01 優式機器人股份有限公司 Environmental finishing control method and system
CN115711624A (en) * 2022-10-18 2023-02-24 中国科学院半导体研究所 Motion cost map construction method and device, unmanned equipment and storage medium
CN117232531A (en) * 2023-11-14 2023-12-15 长沙小钴科技有限公司 Robot navigation planning method, storage medium and terminal equipment
CN117232531B (en) * 2023-11-14 2024-01-30 长沙小钴科技有限公司 Robot navigation planning method, storage medium and terminal equipment

Similar Documents

Publication Publication Date Title
CN113741438B (en) Path planning method, path planning device, storage medium, chip and robot
CN107981790B (en) Indoor area dividing method and sweeping robot
CN106980320B (en) Robot charging method and device
CN110675307B (en) Implementation method from 3D sparse point cloud to 2D grid graph based on VSLAM
US20230305573A1 (en) Method for detecting obstacle, self-moving robot, and non-transitory computer readable storage medium
JP2020181572A (en) Route planning method, electronic device, robot, and computer-readable storage medium
US6728608B2 (en) System and method for the creation of a terrain density model
EP3974778B1 (en) Method and apparatus for updating working map of mobile robot, and storage medium
EP3770711A1 (en) Method for repositioning robot
CN108628318B (en) Congestion environment detection method and device, robot and storage medium
US20210172741A1 (en) Accompanying service method and device for intelligent robot
Jaspers et al. Multi-modal local terrain maps from vision and lidar
US11004332B2 (en) Cooperative mapping for autonomous vehicles, robots or multi-agent systems
CN113741435A (en) Obstacle avoidance method, device, decision maker, storage medium, chip and robot
RU2740229C1 (en) Method of localizing and constructing navigation maps of mobile service robot
CN111990930B (en) Distance measuring method, distance measuring device, robot and storage medium
CN113475977B (en) Robot path planning method and device and robot
JP2017083663A (en) Coincidence evaluation device and coincidence evaluation method
CN113566808A (en) Navigation path planning method, device, equipment and readable storage medium
WO2022188333A1 (en) Walking method and apparatus, and computer storage medium
CN114779777A (en) Sensor control method and device for self-moving robot, medium and robot
CN112033423B (en) Robot path planning method and device based on road consensus and robot
CN113503877A (en) Robot partition map establishing method and device and robot
CN112182122A (en) Method and device for acquiring navigation map of working environment of mobile robot
CN116820074A (en) Method and device for determining congestion point position of robot, robot and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant