CN112097762B - Path planning method and device, robot and storage medium


Info

Publication number
CN112097762B
Authority
CN
China
Prior art keywords
robot
target
area
map
determining
Prior art date
Legal status
Active
Application number
CN202010954696.7A
Other languages
Chinese (zh)
Other versions
CN112097762A (en)
Inventor
卜大鹏
黎文正
霍峰
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010954696.7A
Publication of CN112097762A
Application granted
Publication of CN112097762B

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a path planning method and device, a robot and a storage medium. The method comprises the following steps: determining a target point in an uncovered area of a map and a target position in the map according to the map and the current position of the robot; determining a planned path between the current position of the robot and the target position; and, in the process of operating according to the planned path, marking the areas of the uncovered area that fall within the robot's field of view as covered areas. The method first determines a target point in the uncovered area and a target position from which the robot's field of view can cover that target point, then operates along the planned path between the robot's current position and the target position, marking every part of the uncovered area that enters the robot's field of view as covered. Because the field of view is always steered towards the current target point first, full coverage of the map is achieved with simple control logic and high efficiency, which improves the working efficiency of the robot.

Description

Path planning method and device, robot and storage medium
Technical Field
Embodiments of the invention relate to the technical field of robots, and in particular to a path planning method and device, a robot and a storage medium.
Background
With the continuous development of computer technology, the maturing of robot sensor equipment and the popularization of robot operating systems, robots are increasingly taking over work that used to be done manually. During the operation of such a robot, how to plan a full-coverage path over the robot's map is very important.
At present, a robot can perform full-coverage path planning in the following manner:
Step 1: on the constructed grid map, the robot runs along an inward-spiral trajectory.
Step 2: during operation, detect in real time whether uncovered grids exist on the outer side; if not, go to Step 3; if so, go to Step 5.
Step 3: the robot keeps moving forward and detects in real time whether an obstacle or a covered grid lies ahead; if so, go to Step 4; if not, continue moving forward to clean and go to Step 2.
Step 4: the robot turns 90 degrees towards the inner side and continues forward to clean, detecting in real time whether an obstacle or a covered grid exists on its left side; if so, return to Step 3; if not, go to Step 5.
Step 5: the robot turns 90 degrees towards the outer side and continues forward.
Step 6: detect in real time whether the outer grid contains an obstacle or a covered grid; if so, return to Step 3; if not, return to Step 5.
However, in the above process the robot must detect the outer grid and the grid ahead in real time and jump to different subsequent steps depending on the detection results. This is complex to implement, so the working efficiency of a robot that relies on this path planning is low.
Disclosure of Invention
The invention provides a path planning method, a path planning device, a robot and a storage medium, which are used for solving the technical problem of low working efficiency of the robot caused by the current path planning.
In a first aspect, an embodiment of the present invention provides a path planning method, including:
determining a target point in an uncovered area of a map and a target position in the map according to the map and the current position of the robot; when the robot is at the target position, the target point is within the visual field range of the robot;
determining a planned path between the current position of the robot and the target position;
and in the process of realizing operation according to the planned path, marking the area in the uncovered area, which is within the visual field range of the robot, as a covered area.
In the method as shown above, after marking the area in the uncovered area that is within the field of view of the robot as the covered area, the method further comprises:
determining whether a path replanning condition is met;
and when the path replanning condition is determined to be met, updating the uncovered area in the map and the current position of the robot, and returning to execute the steps of determining a target point in the uncovered area of the map and a target position in the map according to the map and the current position of the robot until the uncovered area in the map is marked as the covered area.
In this implementation, full coverage of the map is achieved by planning paths cyclically until all uncovered areas in the map are marked as covered areas, which ensures the completeness of the map coverage.
In the method as shown above, the determining a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot includes:
determining the target point according to the map and the current position of the robot; the target point is a point, in the uncovered area, of which the distance from the current position of the robot meets a preset condition;
and determining the target position according to the target point and the map.
In the implementation mode, the target point is determined according to the map and the current position of the robot, and then the target position is determined based on the target point and the map, so that the accuracy of the determined target position can be ensured.
In the method as shown above, the determining the target point according to the map and the current position of the robot includes:
and determining the point which is closest to the current position of the robot in the uncovered area as the target point.
In the implementation mode, the point closest to the current position of the robot is determined as the target point, and then the view coverage of the target point is realized, which is equivalent to realizing the full coverage of a map by a greedy algorithm, thereby further improving the efficiency.
In the method as shown above, the map further comprises an obstacle area;
the determining the target position according to the target point and the map includes:
determining a set of candidate target locations in the map;
determining the target position according to the candidate target position set;
wherein the candidate target positions in the set of candidate target positions all satisfy the following condition:
when the robot is at the candidate target position, the target point is within the visual field range of the robot;
the robot cannot collide with the obstacle region at the candidate target position; and
a line segment between the target point and the candidate target location cannot pass through the obstacle region.
In the implementation mode, the determined target position can also meet the three conditions that the vision coverage of the target point can be realized, the robot cannot collide with the obstacle region at the target position, and the line segment between the target point and the target position cannot pass through the obstacle region, so that the accuracy of the target position is guaranteed.
In the method shown above, the determining the target location according to the candidate target location set includes:
determining the area of the uncovered area in the visual field range of the robot at each candidate target position;
comparing the areas to determine the maximum area;
and determining the candidate target position corresponding to the maximum area as the target position.
In the implementation mode, the determined target position can ensure that the area of an uncovered area in the visual field range is the largest when the robot is at the target position, and the efficiency of the robot in carrying out full map coverage is further improved.
In the method as shown above, the target position includes: a target coordinate position, and a target course angle of the robot when at the target coordinate position.
In the implementation mode, the target position comprises the target coordinate position and the target course angle of the robot at the target coordinate position, so that the accuracy and comprehensiveness of describing the target position are guaranteed, and the accuracy of a path planned based on the target position subsequently is improved.
In the method shown above, the path replanning condition is: the target point is marked as a covered area.
In this implementation, once the target point is marked as covered while an uncovered area still exists in the map, the operation according to the planned path is stopped, the uncovered area in the map and the current position of the robot are updated, and the step of determining the target point is executed again, which further improves the path planning efficiency.
In the method as shown above, the marking, in the course of performing the operation according to the planned path, an area in the uncovered area within a visual field of the robot as a covered area includes:
determining whether the robot reaches a target coordinate position of the target position;
performing position movement along the planned path when it is determined that the robot does not reach a target coordinate position of the target position;
and marking the area in the uncovered area within the visual field range of the robot as a covered area in the process of carrying out position movement of the robot along the planned path.
In the method as described above, after the determining whether the robot reaches the target coordinate position of the target position, the method further includes:
when the robot is determined to reach the target coordinate position of the target position, rotating the robot by taking a target course angle as a target so as to adjust the course angle of the robot;
in the process of rotating the robot, marking the area in the uncovered area which is within the visual field range of the robot as a covered area.
In the two implementation modes, when the robot does not reach the target coordinate position of the target position, the position is moved along the planned path, and the area in the uncovered area, which is in the visual field range of the robot, is marked as the covered area; and when the robot reaches the target coordinate position of the target position, taking the target course angle in the target position as a target, rotating the robot to adjust the course angle of the robot, and marking the area which is positioned in the uncovered area and is within the visual field range of the robot as a covered area. The robot can carry out different operations at different stages, and the accuracy of the robot in running according to the planned path is ensured.
In the method shown above, the determining whether the path replanning condition is satisfied includes:
acquiring the state of the target point;
and when the state of the target point is determined to be the covered state, stopping the position movement or the rotation of the robot, and determining that the path replanning condition is met.
In this implementation, once the robot determines that the state of the target point is the covered state, it stops its position movement or its rotation and determines that the path replanning condition is met, so that the next round of path planning can start, which further improves the path planning efficiency.
In the method shown above, the path replanning condition is: a target object is present in the covered area and the robot acquires the target object.
In the method as shown above, before determining whether a path replanning condition is satisfied after marking an area in the uncovered area within the field of view of the robot as a covered area, the method further includes:
and when the target object exists in the covered area, the position of the target object is reached, and the target object is obtained.
In the two implementation modes, the target object can be obtained in the process that the robot runs according to the planned path, so that the robot can complete other work in the process of covering the visual field, the working efficiency of the robot is improved, and the application scene of the path planning method of the embodiment is expanded.
In a second aspect, an embodiment of the present invention provides a path planning apparatus, including:
the first determination module is used for determining a target point in an uncovered area of a map and a target position in the map according to the map and the current position of the robot; when the robot is at the target position, the target point is within the visual field range of the robot;
the second determination module is used for determining a planned path between the current position of the robot and the target position;
and the marking module is used for marking the area in the uncovered area within the visual field range of the robot as a covered area in the process of realizing operation according to the planned path.
In a third aspect, an embodiment of the present invention further provides a robot, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the path planning method as provided in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the path planning method provided in the first aspect.
The embodiment of the invention provides a path planning method and device, a robot and a storage medium, wherein the method comprises the following steps: determining a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot, wherein when the robot is at the target position, the target point is within the field of view of the robot; determining a planned path between the current position of the robot and the target position; and marking the area which is within the field of view of the robot in the uncovered area as a covered area in the process of operating according to the planned path. The method first determines a target point in the uncovered area and a target position in the map from which the field of view can cover that target point, then operates along the planned path between the robot's current position and the target position, and during the operation marks the areas of the uncovered area that fall within the robot's field of view as covered areas. This amounts to covering each target point with the field of view first, so that full coverage of the map is achieved with simple control logic and high efficiency, which improves the working efficiency of the robot.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a path planning method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a path planning method according to an embodiment of the present invention;
fig. 3 is a schematic flow chart of a path planning method according to another embodiment of the present invention;
fig. 4 is a schematic diagram of a process of the robot according to a planned path in an embodiment of the present invention;
FIG. 5A is a schematic diagram of a planned path;
FIG. 5B is a schematic illustration of marking an area in the uncovered area that is within the field of view of the robot as a covered area;
FIG. 5C is a schematic diagram of a planned path of the robot;
fig. 6 is a schematic structural diagram of a path planning apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a robot according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Fig. 1 is a schematic diagram of an application scenario of a path planning method according to an embodiment of the present invention. As shown in fig. 1, the robot 11 needs to fully cover the uncovered area in the map 12 during operation. Coverage in this embodiment refers to field-of-view coverage. That is, as the robot 11 moves along its travel track, all uncovered areas in the map 12 eventually fall within the visual field 13 of the robot 11. In this process, how to plan the path of the robot is very important. This embodiment provides a path planning method with which the robot can achieve efficient full coverage of the map, improving the working efficiency of the robot.
Fig. 2 is a schematic flow chart of a path planning method according to an embodiment of the present invention. The embodiment is suitable for a scene of realizing full coverage on the uncovered area in the map in the working process of the robot. The path planning method may be performed by a path planning device, which may be implemented in software and/or hardware, and which may be integrated into a robot. As shown in fig. 2, the path planning method provided in this embodiment includes the following steps:
step 201: and determining a target point in the uncovered area of the map and a target position in the map according to the map and the current position of the robot.
When the robot is at the target position, the target point is in the visual field range of the robot.
Specifically, the robot in the present embodiment may be a cleaning robot, a patrol robot, or the like that needs to move on a map and cover the map in view.
The map in this embodiment is a preset map on which uncovered areas have been marked. An uncovered area in this embodiment refers to an area that has never been within the field of view of the robot; in other words, an area that has not yet been covered by the field of view.
The visual field range of the robot in this embodiment may be a visual field range of a sensor provided in the robot, for example, a visual field range of a sensor such as a laser radar, a millimeter wave radar, or a vision acquisition device. In the present embodiment, the field of view of the robot may be represented by a field of view frame. Illustratively, the view frame may be a trapezoidal frame.
The target point in the present embodiment may be represented by coordinate values (x, y) in a map coordinate system. The positions involved in this embodiment, such as the target position, the current position of the robot, and the candidate target position, may be represented by three elements (x, y, yaw), where x and y represent coordinate values in the map coordinate system, and yaw represents the heading angle.
The map in this embodiment may be a grid map composed of several grid cells. Each cell of the grid map corresponds to a region of preset size in the real world; the preset size may be, for example, a square of 5 cm by 5 cm. A cell of the grid map may be in one of three states: an uncovered state, a covered state, and an obstacle state. In a specific implementation, different state identifiers may be set to characterize the state of each cell.
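The following Python sketch (not part of the original disclosure) shows one possible way to represent the grid map with its three cell states and the (x, y, yaw) pose description mentioned above; all class and attribute names are assumptions introduced here for illustration.

from dataclasses import dataclass
from enum import Enum

class CellState(Enum):
    UNCOVERED = 0   # has never been inside the robot's field of view
    COVERED = 1     # has been inside the field of view at least once
    OBSTACLE = 2    # occupied by an obstacle

@dataclass
class Pose:
    x: float    # coordinate in the map frame
    y: float    # coordinate in the map frame
    yaw: float  # heading angle

class GridMap:
    def __init__(self, width, height, resolution=0.05):
        # resolution 0.05 m: one cell corresponds to a 5 cm by 5 cm square in the real world
        self.width, self.height, self.resolution = width, height, resolution
        self.cells = [[CellState.UNCOVERED for _ in range(width)] for _ in range(height)]

    def state(self, ix, iy):
        return self.cells[iy][ix]

    def mark_covered(self, ix, iy):
        # marking is implemented by modifying the state identifier of the cell
        if self.cells[iy][ix] is CellState.UNCOVERED:
            self.cells[iy][ix] = CellState.COVERED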
Alternatively, the target point in this embodiment may be a point in the uncovered area where the distance from the current position of the robot satisfies a preset condition. When the robot is at the target position, the target point is within the field of view of the robot, that is, the target position in this embodiment refers to a position where the field of view coverage of the target point can be achieved. Further, the target point in the present embodiment may be a point closest to the current position of the robot in the uncovered area.
In this embodiment, the position of one point on the robot may be used as the current position of the robot, for example, the position of the center point of the connecting line between the two rear wheels of the robot may be used as the current position of the robot.
In one implementation manner, the implementation procedure of step 201 may be: determining a target point according to the map and the current position of the robot; and determining the target position according to the target point and the map. In the implementation mode, the target point is determined according to the map and the current position of the robot, and then the target position is determined based on the target point and the map, so that the accuracy of the determined target position can be ensured.
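As a minimal sketch, assuming the "preset condition" is simply "nearest to the robot", the target point could be found with a breadth-first search over the grid map sketched above; the function name and the 4-connected neighbourhood are illustrative choices, not requirements of the patent.

from collections import deque

def nearest_uncovered_cell(grid, start_ix, start_iy):
    # Breadth-first search outward from the robot's cell; the first uncovered cell
    # reached is the closest one in grid steps.
    queue = deque([(start_ix, start_iy)])
    seen = {(start_ix, start_iy)}
    while queue:
        ix, iy = queue.popleft()
        if grid.state(ix, iy) is CellState.UNCOVERED:
            return ix, iy
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = ix + dx, iy + dy
            if (0 <= nx < grid.width and 0 <= ny < grid.height
                    and (nx, ny) not in seen
                    and grid.state(nx, ny) is not CellState.OBSTACLE):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return None  # no uncovered cell reachable: the map is fully covered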
Further, the map in this embodiment may also include an obstacle region. When the target position is determined according to the target point and the map, a set of candidate target positions in the map may first be determined, and the target position is then determined from the set of candidate target positions.
Wherein the candidate target positions in the candidate target position set all satisfy the following three conditions: when the robot is at the candidate target position, the target point is within the field of view of the robot; the robot cannot collide with the obstacle region at the candidate target position; and the line segment between the target point and the candidate target position cannot pass through the obstacle region.
In the process of determining the target position, a set of candidate target positions meeting the three conditions is determined first, and the target position is then chosen from this set. This ensures that the determined target position also satisfies the three requirements: the field of view can cover the target point, the robot cannot collide with the obstacle region at the target position, and the line segment between the target point and the target position cannot pass through the obstacle region, so the accuracy of the target position is guaranteed.
Alternatively, when determining the target position from the set of candidate target positions, the target position may be determined according to the following steps: determining the area of an uncovered area in the visual field range when the robot is at each candidate target position; comparing the areas to determine the maximum area; and determining the candidate target position corresponding to the maximum area as the target position.
The target position determined according to the method can ensure that the area of the uncovered area in the visual field range is the largest when the robot is at the target position, and the efficiency of the robot for carrying out full map coverage is further improved.
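A sketch of how the candidate screening and the maximum-area selection described above could be written, assuming helper predicates for the three conditions are supplied by the caller (their names and signatures are illustrative, and fov_cells is assumed to return only cells that lie inside the map):

def choose_target_position(candidates, target_cell, grid,
                           fov_cells, in_collision, segment_hits_obstacle):
    # fov_cells(pose): cells inside the robot's view frame when the robot is at pose
    # in_collision(pose): True if the robot would collide with the obstacle region at pose
    # segment_hits_obstacle(cell, pose): True if the segment between the target point
    #                                    and the candidate position crosses the obstacle region
    best_pose, best_area = None, -1
    for pose in candidates:
        visible = set(fov_cells(pose))
        if target_cell not in visible:                # condition 1: target point in the field of view
            continue
        if in_collision(pose):                        # condition 2: no collision at the candidate position
            continue
        if segment_hits_obstacle(target_cell, pose):  # condition 3: clear line segment to the target point
            continue
        # area of the uncovered region inside the field of view, counted in cells
        area = sum(1 for c in visible if grid.state(*c) is CellState.UNCOVERED)
        if area > best_area:
            best_pose, best_area = pose, area
    return best_pose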
Step 202: and determining a planned path between the current position and the target position of the robot.
Specifically, a planned path between the current position and the target position of the robot may be determined based on an existing path planning algorithm.
Alternatively, the planned path may be a shortest path between the current position and the target position of the robot.
In a scenario where the map is a grid map, the planned path may be represented by a sequence of grid numbers.
It should be noted that step 202 determines the planned path between the current position of the robot and the target position, not between the target point and the target position. In some special scenarios, the current position of the robot may coincide with the target point.
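The patent leaves the planner open ("an existing path planning algorithm"); as one hedged possibility, A* over the grid map would yield a shortest 4-connected path represented as a sequence of cells, consistent with the grid-number sequence mentioned above. This is only an illustrative choice of algorithm.

import heapq

def astar_grid_path(grid, start, goal):
    # A* on the grid map with a Manhattan-distance heuristic; obstacle cells are not traversed.
    def heuristic(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(heuristic(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:
            path = []
            while cell is not None:        # walk back to the start to recover the path
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < grid.width and 0 <= nxt[1] < grid.height):
                continue
            if grid.state(*nxt) is CellState.OBSTACLE:
                continue
            tentative = g_cost[cell] + 1
            if tentative < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = tentative
                came_from[nxt] = cell
                heapq.heappush(open_heap, (tentative + heuristic(nxt), nxt))
    return None  # no path exists between start and goal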
Step 203: and marking the area which is in the range of the vision field of the robot in the uncovered area as a covered area in the process of realizing operation according to the planned path.
Specifically, after the planned path is determined, the robot runs according to the planned path.
In the process of realizing the operation, the area in the uncovered area of the map, which is within the visual field range of the robot, is marked as the covered area. In other words, in implementing the operations, the areas of the map that are covered by the field of view in the uncovered areas are marked as covered areas.
In a scenario where the map is a grid map, the robot may mark a grid in the uncovered grid of the grid map that is within the field of view of the robot as a covered grid. More specifically, the tagging may be implemented by modifying the state identification of the grid.
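One way to perform this marking, assuming the trapezoidal view frame is available as a convex polygon in map coordinates and that the map origin is at (0, 0) (both assumptions of this sketch, not statements from the patent):

def point_in_convex_polygon(px, py, polygon):
    # polygon: list of (x, y) vertices given in counter-clockwise order
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False                    # point lies to the right of an edge: outside
    return True

def mark_view_coverage(grid, view_polygon):
    # Mark every uncovered cell whose centre falls inside the current view frame as covered.
    # (A real implementation would only scan the polygon's bounding box rather than the whole grid.)
    for iy in range(grid.height):
        for ix in range(grid.width):
            if grid.state(ix, iy) is not CellState.UNCOVERED:
                continue
            cx = (ix + 0.5) * grid.resolution    # cell centre in map coordinates
            cy = (iy + 0.5) * grid.resolution
            if point_in_convex_polygon(cx, cy, view_polygon):
                grid.mark_covered(ix, iy)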
In the path planning method provided in this embodiment, a target point in an uncovered area and a target position in a map where visual coverage can be achieved for the target point are determined, then operation is achieved based on a planned path between the current position and the target position of the robot, and in the implementation operation process, an area in the uncovered area and within the visual range of the robot is marked as a covered area. The method is equivalent to the method for realizing the view coverage of the target point in advance, so that the full coverage of the map is realized, the control logic is simple, and the efficiency is high.
Further, when determining the target point, a point in the uncovered area closest to the current position of the robot may be determined as the target point. In the implementation mode, the point closest to the current position of the robot is determined as the target point, and then the view coverage of the target point is realized, which is equivalent to realizing the full coverage of a map by a greedy algorithm, thereby further improving the efficiency.
The path planning method provided by this embodiment comprises: determining a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot, wherein when the robot is at the target position, the target point is within the field of view of the robot; determining a planned path between the current position of the robot and the target position; and marking the area which is within the field of view of the robot in the uncovered area as a covered area in the process of operating according to the planned path. The method first determines a target point in the uncovered area and a target position from which the field of view can cover that target point, then operates along the planned path between the robot's current position and the target position, and during the operation marks the areas of the uncovered area that fall within the robot's field of view as covered areas. This amounts to covering the target point with the field of view first, so that full coverage of the map is achieved with simple control logic and high efficiency, which improves the working efficiency of the robot.
Fig. 3 is a schematic flow chart of a path planning method according to another embodiment of the present invention. This embodiment builds on the embodiment shown in fig. 2 and its various alternative implementations, and describes in detail the steps performed after "marking the area in the uncovered area within the field of view of the robot as the covered area". As shown in fig. 3, the path planning method provided in this embodiment includes the following steps:
step 301: and determining a target point in the uncovered area of the map and a target position in the map according to the map and the current position of the robot.
When the robot is at the target position, the target point is in the visual field range of the robot.
Step 302: and determining a planned path between the current position and the target position of the robot.
Step 303: and marking the area which is in the range of the vision field of the robot in the uncovered area as a covered area in the process of realizing operation according to the planned path.
The implementation processes and technical principles of step 301 and step 201, step 302 and step 202, and step 303 and step 203 are similar, and are not described herein again.
Step 304: it is determined whether a path re-planning condition is satisfied.
Step 305: and when the path replanning condition is determined to be met, updating the uncovered area in the map and the current position of the robot, and returning to execute the step 301 until the uncovered areas in the map are marked as covered areas.
In this implementation, full coverage of the map is achieved by planning paths cyclically until all uncovered areas in the map are marked as covered areas, which ensures the completeness of the map coverage.
In this embodiment, step 304 and step 303 may be executed in parallel. While the robot runs according to the planned path, it determines whether the path replanning condition is met. When the condition is determined to be met, the robot stops running according to the planned path, updates the uncovered area in the map and its current position, and returns to step 301.
The path re-planning condition in this embodiment has two implementation manners, which are described below.
In a first implementation, the path re-planning condition is: the target point is marked as a covered area. In this implementation, once the target point is marked as covered while an uncovered area still exists in the map, the operation according to the planned path is stopped, the uncovered area in the map and the current position of the robot are updated, and the step of determining the target point is executed again, which further improves the path planning efficiency.
In this implementation, optionally, the target location includes: a target coordinate position, and a target heading angle of the robot when at the target coordinate position. In the implementation mode, the target position comprises the target coordinate position and the target course angle of the robot at the target coordinate position, so that the accuracy and comprehensiveness of describing the target position are guaranteed, and the accuracy of a path planned based on the target position subsequently is improved.
In this implementation, step 303 may specifically include: determining whether the robot reaches a target coordinate position of the target position; when the robot is determined not to reach the target coordinate position of the target position, carrying out position movement along the planned path; and marking the area which is in the range of the vision field of the robot in the uncovered area as the covered area in the process of the position movement of the robot along the planned path.
Further, when the robot is determined to reach the target coordinate position of the target position, the target course angle is taken as a target, and the robot is rotated to adjust the course angle of the robot; in the process of rotating the robot, the area in the uncovered area that is within the field of view of the robot is marked as covered area.
From the above description, it can be seen that step 303 may include two stages: when the robot does not reach the target coordinate position of the target position, carrying out position movement along the planned path, and marking the area which is in the range of the visual field of the robot in the uncovered area as a covered area; and when the robot reaches the target coordinate position of the target position, taking the target course angle in the target position as a target, rotating the robot to adjust the course angle of the robot, and marking the area which is positioned in the uncovered area and is within the visual field range of the robot as a covered area. The robot can perform different operations in different stages, and the accuracy of the robot in operation according to the planned path is ensured.
Optionally, step 304 in this implementation may include: acquiring the state of the target point; and when the state of the target point is determined to be the covered state, stopping the position movement or the rotation of the robot and determining that the path replanning condition is met.
Optionally, step 303 and step 304 may be executed concurrently. When the state of the target point is determined to be the covered state in step 304: if the robot is performing position movement along the planned path, the position movement is stopped; and if the robot is rotating towards the target course angle, the rotation is stopped.
Once the robot determines that the state of the target point is the covered state, it stops its position movement or its rotation and the path re-planning condition is considered to be met, so that the next round of path planning starts; this further improves the efficiency of path planning.
It should be noted that if the target point is still in the uncovered state when the robot reaches the target coordinate position of the target position, the state of the target point will inevitably change to the covered state during the rotation of the robot. This is because the target position is a position from which the target point can be covered by the field of view; therefore, the state of the target point changes to the covered state no later than during the rotation of the robot.
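Putting the earlier sketches together, the loop of this implementation might look as follows. The robot interface (current_cell, cell_of, choose_target_position, move_to, rotate_towards, view_polygon) is an assumed abstraction introduced for illustration, not an API defined by the patent; the helper functions are the sketches given above.

def full_coverage(robot, grid):
    while True:
        target_cell = nearest_uncovered_cell(grid, *robot.current_cell())
        if target_cell is None:
            break                                   # every reachable cell is covered: done
        target_pose = robot.choose_target_position(grid, target_cell)
        path = astar_grid_path(grid, robot.current_cell(), robot.cell_of(target_pose))
        if path is None:
            break
        # stage 1: move along the planned path, marking coverage as the view frame sweeps the map
        for waypoint in path:
            robot.move_to(waypoint)
            mark_view_coverage(grid, robot.view_polygon())
            if grid.state(*target_cell) is CellState.COVERED:
                break                               # replanning condition met: stop the movement
        else:
            # stage 2: at the target coordinate position, rotate towards the target course angle;
            # the target point is guaranteed to enter the field of view during this rotation
            while grid.state(*target_cell) is not CellState.COVERED:
                robot.rotate_towards(target_pose.yaw)
                mark_view_coverage(grid, robot.view_polygon())
        # loop back: replan from the updated map and the robot's new current position

The for/else construct mirrors the two stages of the embodiment: the else branch runs only when the robot reaches the target coordinate position without the target point having been covered on the way.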
Fig. 4 is a schematic diagram of the robot operating according to the planned path. As shown in diagram a of fig. 4, the blank area of the map 47 represents an uncovered area and the diagonal area represents a covered area. The trapezoidal box 46 represents the view frame of the robot. While the robot 41 operates according to the planned path 44 between the current position 42 of the robot (the position at which the target point and the target position were determined) and the target position 43, the target point 45 enters the field of view of the robot after the robot has travelled some distance along the planned path 44, before the target coordinate position of the target position 43 is reached; that is, the target point 45 is marked as covered. At this point the position movement is stopped, the uncovered area in the map and the current position of the robot are updated, and the process returns to step 301 to re-select the target point and the target position.
As shown in diagram b of fig. 4, the blank areas represent uncovered areas and the diagonal areas represent covered areas. The trapezoidal frame 46 represents the view frame of the robot. While the robot 41 operates according to the planned path 44 between the current position 42 of the robot (the position at which the target point and the target position were determined) and the target position 43, the state of the target point is still uncovered when the target coordinate position of the target position 43 is reached. The robot then rotates, and as its course angle turns towards the target course angle, the target point 45 will necessarily be marked as covered. When it is determined that the target point 45 is within the field of view of the robot 41, the rotation of the robot 41 is stopped. The uncovered area in the map and the current position of the robot are updated, the process returns to step 301, and the target point and the target position are re-selected. In diagram b, the robot 41 marks the areas of the uncovered area of the map that fall within its field of view as covered areas both while moving from the current position 42 to the target coordinate position of the target position 43 and while rotating at that target coordinate position. For clarity, diagram b only shows the target point 45 within the field of view of the robot 41; other areas that have already been marked as covered are not shown.
In a second implementation, the path re-planning condition is: a target object exists in the covered area, and the robot acquires the target object.
In this implementation, after marking an area in the uncovered area that is within the field of view of the robot as a covered area, before determining whether the path re-planning condition is satisfied, the method further includes: and when the target object exists in the covered area, the position of the target object is reached, and the target object is obtained.
In the implementation mode, the target object can be obtained in the process that the robot runs according to the planned path, so that the robot can complete other work in the process of covering the visual field, the working efficiency of the robot is improved, and the application scene of the path planning method of the embodiment is expanded.
In this implementation, when the robot determines that a target object exists in the covered area, it moves to the position of the target object; once the target object has been obtained, the path replanning condition is considered to be met. The position of the robot at that moment is taken as its new current position, the current map is taken as the new map, and the process returns to step 301.
It should be noted that, in the process of acquiring the target object by the robot, an area in the uncovered area within the field of view of the robot may also be marked as a covered area.
Alternatively, the robot in the present embodiment may be a cleaning robot. The target object can be garbage in the robot field and other objects needing to be cleaned.
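For this second re-planning condition, a hedged sketch of how the target-object branch could hook into the same loop; the perception call detect_target_object and the acquire action are assumed interfaces, not part of the patent.

def handle_target_object(robot, grid):
    # Returns True if a target object (e.g. garbage, for a cleaning robot) was found in the
    # covered area and acquired, in which case the caller should replan from the new position.
    obj = robot.detect_target_object()
    if obj is None or grid.state(*obj.cell) is not CellState.COVERED:
        return False
    robot.move_to(obj.cell)
    mark_view_coverage(grid, robot.view_polygon())   # coverage is still updated on the way
    robot.acquire(obj)
    return True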
The following describes the path planning method provided in this embodiment with a specific example.
Fig. 5A is a schematic diagram of a planned path. As shown in fig. 5A, the gray area in the map is an obstacle area and the blank area is an uncovered area. The trapezoidal frame 52 represents the view frame of the robot 56. The target point 51 in the uncovered area and the target position 53 are determined from the map and the current position of the robot. A planned path 54 between the current position of the robot and the target position 53 is determined. The robot then operates according to the planned path, and the areas of the uncovered area that fall within the robot's field of view are marked as covered areas.
Fig. 5B is a schematic diagram of marking an area in the uncovered area that is within the field of view of the robot as a covered area. As shown in fig. 5B, the stripe region is a covered region. In fig. 5B, the robot first performs a position shift along the planned path 54, and during the position shift, the area in the uncovered area that is within the field of view of the robot is marked as covered area. As shown in fig. 5B, the robot has its view frame facing downward at this time. After reaching the target coordinate position of the target position 53, the robot rotates, and during the rotation, the area in the uncovered area within the field of view of the robot is marked as the covered area. Illustratively, the robot is rotated counterclockwise in fig. 5B. After determining that the target point 51 is marked as covered, the rotation is stopped.
After stopping the rotation, the current position of the robot and the uncovered area of the map are updated. The current position of the robot is set as a new current position of the robot, and the current map is set as a new map, and the process returns to step 301. And repeating the steps until all uncovered areas in the map are marked as covered areas.
Fig. 5C is a schematic diagram of the planned path of the robot, showing the travel trajectory 55 of the robot. After the robot follows the travel trajectory 55, all uncovered areas in the map are marked as covered areas.
The path planning method provided by this embodiment achieves full coverage of the map by planning paths cyclically, which ensures the completeness of the map coverage.
Fig. 6 is a schematic structural diagram of a path planning apparatus according to an embodiment of the present invention. As shown in fig. 6, the path planning apparatus provided in this embodiment includes the following modules: a first determining module 61, a second determining module 62 and a marking module 63.
The first determining module 61 is configured to determine a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot.
When the robot is at the target position, the target point is in the visual field range of the robot.
Optionally, the first determining module 61 may include: a first determination submodule and a second determination submodule.
And the first determining submodule is used for determining a target point according to the map and the current position of the robot.
The target point is a point, in the uncovered area, of which the distance from the current position of the robot meets a preset condition.
And the second determining submodule is used for determining the position of the target according to the target point and the map.
Further, the first determining submodule is specifically configured to: and determining a point which is closest to the current position of the robot in the uncovered area as a target point.
Still further, the map also includes an obstacle region. The second determination submodule is specifically configured to: determine a set of candidate target positions in the map; and determine the target position according to the set of candidate target positions. Wherein the candidate target positions in the set all satisfy the following conditions: when the robot is at the candidate target position, the target point is within the field of view of the robot; the robot cannot collide with the obstacle region at the candidate target position; and the line segment between the target point and the candidate target position cannot pass through the obstacle region.
In determining the target location from the set of candidate target locations, the second determining sub-module is specifically configured to: determining the area of an uncovered area in the visual field range when the robot is at each candidate target position; comparing the areas to determine the maximum area; and determining the candidate target position corresponding to the largest area as the target position.
And a second determining module 62 for determining a planned path between the current position and the target position of the robot.
And the marking module 63 is used for marking the area which is in the uncovered area and is within the visual field range of the robot as the covered area in the process of realizing operation according to the planned path.
Optionally, the apparatus may further include a third determining module 64 and a fourth determining module 65.
A third determining module 64, configured to determine whether the path replanning condition is satisfied.
And a fourth determining module 65, configured to, when it is determined that the path replanning condition is satisfied, update the uncovered area in the map and the current position of the robot, and return to performing the steps of "determining the target point in the uncovered area of the map and the target position in the map according to the map and the current position of the robot", until all the uncovered areas in the map are marked as covered areas.
Optionally, the target location comprises: a target coordinate position, and a target heading angle of the robot when at the target coordinate position.
In one implementation, the path replanning condition is: the target point is marked as a covered area.
In this implementation, the marking module 63 is specifically configured to: determining whether the robot reaches a target coordinate position of the target position; when the robot is determined not to reach the target coordinate position of the target position, carrying out position movement along the planned path; and marking the area which is in the range of the vision of the robot in the uncovered area as the covered area in the process of carrying out position movement of the robot along the planned path.
Further, the marking module 63 is further configured to: when the robot is determined to reach the target coordinate position of the target position, rotating the robot by taking the target course angle as a target so as to adjust the course angle of the robot; in the process of rotating the robot, the area in the uncovered area which is within the visual field range of the robot is marked as the covered area.
In this implementation, the third determining module 64 is specifically configured to: acquire the state of the target point; and when the state of the target point is determined to be the covered state, stop the position movement or the rotation of the robot and determine that the path replanning condition is met.
In another implementation, the path replanning condition is as follows: a target object exists in the covered area, and the robot acquires the target object.
Optionally, in this implementation, the apparatus further includes: and the acquisition module is used for reaching the position of the target object to acquire the target object when judging that the target object exists in the covered area.
The path planning device provided by the embodiment of the invention can execute the path planning method provided by any of the above embodiments and their optional implementations, and has the corresponding functional modules and beneficial effects of the executed method.
Fig. 7 is a schematic structural diagram of a robot according to an embodiment of the present invention. As shown in fig. 7, the robot comprises a processor 70 and a memory 71. The number of processors 70 in the robot can be one or more, and one processor 70 is taken as an example in fig. 7; the processor 70 and the memory 71 of the robot may be connected by a bus or other means, a bus connection being taken as an example in fig. 7.
The memory 71 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions and modules corresponding to the path planning method in the embodiment of the present invention (for example, the first determining module 61, the second determining module 62, and the marking module 63 in the path planning apparatus). The processor 70 executes various functional applications and data processing of the robot by running software programs, instructions and modules stored in the memory 71, that is, implements the path planning method described above.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the robot, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 71 may further include memory remotely located from the processor 70, which may be connected to the robot through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Optionally, the robot may further include: a power component 72, an audio component 73, a communication component 74, and a sensor component 75. The power component 72, audio component 73, communication component 74, and sensor component 75 may all be connected to the processor 70 via a bus.
The power supply assembly 72 provides power to the various components of the robot. The power components 72 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the robot.
The audio component 73 is configured to output and/or input an audio signal. For example, the audio component 73 comprises a microphone configured to receive an external audio signal when the robot is in an operation mode, such as a recording mode and a speech recognition mode. The received audio signal may further be stored in the memory 71 or transmitted via the communication component 74. In some embodiments, audio assembly 73 also includes a speaker for outputting audio signals.
The communication component 74 is configured to facilitate wired or wireless communication between the robot and other devices. The robot may access a wireless network based on a communication standard. In an exemplary embodiment, the communication component 74 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the Communication component 74 also includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association technology, ultra wideband technology, bluetooth technology, and other technologies.
The sensor assembly 75 includes one or more sensors for providing various aspects of status assessment for the robot. The sensor assembly 75 may include a laser sensor for collecting point cloud data. In some embodiments, the sensor assembly 75 may also include an acceleration sensor, a magnetic sensor, a pressure sensor, a temperature sensor, or the like.
Fig. 8 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention. As shown in fig. 8, the present invention also provides a computer-readable storage medium 82 containing computer-executable instructions 81 which, when executed by a processor 83, perform a path planning method comprising:
determining a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot; when the robot is at the target position, the target point is within the visual field range of the robot;
determining a planned path between the current position of the robot and the target position;
and in the process of realizing operation according to the planned path, marking the region in the uncovered region, which is within the visual field range of the robot, as a covered region.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the path planning method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, and includes instructions for enabling a robot (which may be a personal computer, a vehicle, or a network device) to execute the path planning method according to the embodiments of the present invention.
It should be noted that, in the embodiment of the path planning apparatus, each included unit and each included module are only divided according to functional logic, but are not limited to the above division, as long as the corresponding function can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in some detail by the above embodiments, the invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the invention, and the scope of the invention is determined by the scope of the appended claims.

Claims (13)

1. A method of path planning, comprising:
determining a target point in an uncovered area of a map and a target position in the map according to the map and the current position of the robot, wherein, when the robot is at the target position, the target point is within the field of view of the robot;
determining a planned path between the current position of the robot and the target position;
marking, while the robot operates along the planned path, the area of the uncovered area that is within the field of view of the robot as a covered area;
wherein the determining a target point in an uncovered area of the map and a target position in the map according to the map and the current position of the robot comprises:
determining the target point according to the map and the current position of the robot, wherein the target point is a point in the uncovered area whose distance from the current position of the robot meets a preset condition;
determining the target position according to the target point and the map;
the map further comprises an obstacle region;
the determining the target position according to the target point and the map includes:
determining a set of candidate target locations in the map;
determining the target position according to the candidate target position set;
wherein the candidate target positions in the set of candidate target positions all satisfy the following condition:
when the robot is at the candidate target position, the target point is within the visual field range of the robot;
the robot is unable to collide with the obstacle region at the candidate target location; and the number of the first and second groups,
a line segment between the target point and the candidate target location that cannot pass through the obstacle region;
said determining said target location from said set of candidate target locations comprises:
determining the area of the uncovered area in the visual field range when the robot is at each candidate target position;
comparing the areas to determine the maximum area;
and determining the candidate target position corresponding to the maximum area as the target position.
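By way of non-limiting illustration of the selection step at the end of claim 1, the following Python sketch compares, for each already-filtered candidate target position, the area of the uncovered region that would fall within the field of view, and returns the candidate with the maximum such area. The grid representation and circular field-of-view model are assumptions carried over from the earlier sketch; the filtering conditions themselves (target point visible, no collision with the obstacle region, obstacle-free segment to the target point) are assumed to have been applied when building the candidate set.

```python
import numpy as np

UNCOVERED = 0  # same cell-state encoding as the earlier sketch

def choose_target_position(grid, candidates, view_radius):
    """Return the candidate (row, col) position whose field of view would cover
    the largest area of still-UNCOVERED cells of the grid map."""
    rows, cols = np.indices(grid.shape)
    best_cand, best_area = None, -1
    for r, c in candidates:
        in_view = np.hypot(rows - r, cols - c) <= view_radius  # circular FOV assumption
        area = int(np.count_nonzero(in_view & (grid == UNCOVERED)))
        if area > best_area:
            best_cand, best_area = (r, c), area
    return best_cand
```

Ties between candidates of equal area are broken by iteration order in this sketch; the claim itself only requires that the maximum-area candidate be chosen.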
2. The method of claim 1, wherein after marking an area of the uncovered area that is within a field of view of the robot as a covered area, the method further comprises:
determining whether a path re-planning condition is satisfied;
and when it is determined that the path re-planning condition is satisfied, updating the uncovered area in the map and the current position of the robot, and returning to the step of determining a target point in the uncovered area of the map and a target position in the map according to the map and the current position of the robot, until the uncovered area in the map is entirely marked as a covered area.
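A minimal Python sketch of claim 2's outer loop, under the assumption that the map is an occupancy grid and that one plan-and-execute round (claims 1 and 6 to 8) is available as a callable, might read as follows; `plan_and_execute` is a hypothetical stand-in, not an element of the claim.

```python
UNCOVERED = 0  # same cell-state encoding as the sketches above

def coverage_loop(grid, robot_position, plan_and_execute):
    """Re-plan until nothing in the (numpy) grid map is left UNCOVERED.

    Each round re-reads the current map and robot position; `plan_and_execute`
    is expected to mark newly seen cells in `grid` and to return the robot's
    updated position once the re-planning condition is met."""
    while (grid == UNCOVERED).any():
        robot_position = plan_and_execute(grid, robot_position)
    return grid, robot_position
```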
3. The method of claim 1, wherein the determining the target point according to the map and the current position of the robot comprises:
determining the point in the uncovered area that is closest to the current position of the robot as the target point.
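One non-limiting way to realize claim 3 is a breadth-first search over traversable cells starting from the robot's current cell. Whether "closest" means Euclidean or path distance is an implementation choice not fixed here; the sketch below uses 4-connected path distance over an occupancy grid with the same cell-state encoding as the earlier sketches.

```python
from collections import deque

UNCOVERED, COVERED, OBSTACLE = 0, 1, 2  # assumed cell-state encoding

def nearest_uncovered_point(grid, start_rc):
    """Return the first UNCOVERED cell reached by a 4-connected BFS over
    non-obstacle cells of a 2-D numpy grid, or None if none is reachable."""
    queue, seen = deque([tuple(start_rc)]), {tuple(start_rc)}
    while queue:
        r, c = queue.popleft()
        if grid[r, c] == UNCOVERED:
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]
                    and (nr, nc) not in seen and grid[nr, nc] != OBSTACLE):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # every reachable cell is already covered
```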
4. The method of claim 2, wherein the target position comprises: a target coordinate position, and a target course angle of the robot when the robot is at the target coordinate position.
5. The method of claim 4, wherein the path re-planning condition is that the target point has been marked as covered.
6. The method of claim 5, wherein the marking, while the robot operates along the planned path, the area of the uncovered area that is within the field of view of the robot as a covered area comprises:
determining whether the robot has reached the target coordinate position of the target position;
when it is determined that the robot has not reached the target coordinate position of the target position, moving the robot along the planned path;
and, while the robot moves along the planned path, marking the area of the uncovered area that is within the field of view of the robot as a covered area.
7. The method of claim 6, wherein after the determining whether the robot has reached the target coordinate position of the target position, the method further comprises:
when it is determined that the robot has reached the target coordinate position of the target position, rotating the robot toward the target course angle so as to adjust the course angle of the robot;
and, while the robot rotates, marking the area of the uncovered area that is within the field of view of the robot as a covered area.
8. The method of claim 7, wherein the determining whether a path re-planning condition is satisfied comprises:
acquiring the state of the target point;
and when the state of the target point is determined to be covered, stopping the movement or rotation of the robot, and determining that the path re-planning condition is satisfied.
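Claims 6 to 8 together describe moving along the planned path (or rotating at the target coordinate position) while marking newly visible cells, and stopping as soon as the target point itself has been covered. A compressed Python sketch of that inner loop, with the rotation phase omitted and the same grid and circular field-of-view assumptions as above, might look as follows.

```python
import numpy as np

UNCOVERED, COVERED, OBSTACLE = 0, 1, 2  # assumed cell-state encoding

def follow_path_and_mark(grid, path, target_rc, view_radius):
    """Walk the planned path cell by cell, mark whatever uncovered area comes
    into view, and stop early once the target point's cell has been covered.

    path      : list of (row, col) way-points of the planned path
    target_rc : (row, col) of the target point (claim 8's stop condition)
    Returns True as soon as the re-planning condition is met."""
    rows, cols = np.indices(grid.shape)
    for r, c in path:
        dist = np.hypot(rows - r, cols - c)
        grid[(dist <= view_radius) & (grid == UNCOVERED)] = COVERED
        if grid[target_rc] == COVERED:   # target point covered -> stop and re-plan
            return True
    return grid[target_rc] == COVERED
```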
9. The method of claim 2, wherein the path re-planning condition is that a target object is present in the covered area and the robot has acquired the target object.
10. The method of claim 9, wherein after the marking the area of the uncovered area that is within the field of view of the robot as a covered area and before the determining whether a path re-planning condition is satisfied, the method further comprises:
when the target object exists in the covered area, moving to the position of the target object and acquiring the target object.
11. A path planning apparatus, comprising:
the first determining module is used for determining a target point in an uncovered area of a map and a target position in the map according to the map and the current position of the robot, wherein, when the robot is at the target position, the target point is within the field of view of the robot;
the second determining module is used for determining a planned path between the current position of the robot and the target position;
the marking module is used for marking, while the robot operates along the planned path, the area of the uncovered area that is within the field of view of the robot as a covered area;
the first determining module comprises: a first determining submodule and a second determining submodule;
the first determining submodule is used for determining the target point according to the map and the current position of the robot;
wherein the target point is a point in the uncovered area whose distance from the current position of the robot meets a preset condition;
the second determining submodule is used for determining the target position according to the target point and the map;
the map further comprises an obstacle region;
the second determining submodule is specifically configured to: determine a set of candidate target positions in the map; and determine the target position according to the set of candidate target positions; wherein each candidate target position in the set of candidate target positions satisfies the following conditions: when the robot is at the candidate target position, the target point is within the field of view of the robot; when the robot is at the candidate target position, the robot does not collide with the obstacle region; and a line segment between the target point and the candidate target position does not pass through the obstacle region;
the second determining submodule is further specifically configured to: determine, for each candidate target position, the area of the portion of the uncovered area that is within the field of view of the robot when the robot is at the candidate target position; compare the areas to determine the maximum area; and determine the candidate target position corresponding to the maximum area as the target position.
12. A robot, characterized in that the robot comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the path planning method according to any one of claims 1 to 10.
13. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the path planning method according to any one of claims 1 to 10.
CN202010954696.7A 2020-09-11 2020-09-11 Path planning method and device, robot and storage medium Active CN112097762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010954696.7A CN112097762B (en) 2020-09-11 2020-09-11 Path planning method and device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010954696.7A CN112097762B (en) 2020-09-11 2020-09-11 Path planning method and device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN112097762A CN112097762A (en) 2020-12-18
CN112097762B true CN112097762B (en) 2022-09-09

Family

ID=73750852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010954696.7A Active CN112097762B (en) 2020-09-11 2020-09-11 Path planning method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN112097762B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114371692B (en) * 2021-11-26 2023-10-13 中国人民解放军军事科学院国防科技创新研究院 Patrol ship area coverage path planning method, system and device under energy constraint

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104714551A (en) * 2015-03-23 2015-06-17 中国科学技术大学 Indoor area covering method suitable for vehicle type mobile robot
CN108571979A (en) * 2018-04-16 2018-09-25 绍兴文理学院 The method for covering triangle and spanning tree realization robot path planning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8224516B2 (en) * 2009-12-17 2012-07-17 Deere & Company System and method for area coverage using sector decomposition
CN104898660B (en) * 2015-03-27 2017-10-03 中国科学技术大学 A kind of indoor map construction method for improving robot path planning's efficiency
CN106840168B (en) * 2017-03-16 2019-10-01 苏州大学 Complete coverage path planning method under clean robot and its dynamic environment
CN107843262A (en) * 2017-10-30 2018-03-27 洛阳中科龙网创新科技有限公司 A kind of method of farm machinery all standing trajectory path planning
CN108008728B (en) * 2017-12-12 2020-01-17 深圳市银星智能科技股份有限公司 Cleaning robot and shortest path planning method based on cleaning robot
CN111012251B (en) * 2019-12-17 2021-09-03 哈工大机器人(合肥)国际创新研究院 Planning method and device for full-coverage path of cleaning robot

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104714551A (en) * 2015-03-23 2015-06-17 中国科学技术大学 Indoor area covering method suitable for vehicle type mobile robot
CN108571979A (en) * 2018-04-16 2018-09-25 绍兴文理学院 The method for covering triangle and spanning tree realization robot path planning

Also Published As

Publication number Publication date
CN112097762A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
US10278333B2 (en) Pruning robot system
WO2018210059A9 (en) Method and apparatus for charging robot
CN104914869B (en) Discrete Production Workshop material distributing trolley control system based on UWB
US20200110173A1 (en) Obstacle detection method and device
CN106840169B (en) Improved method for robot path planning
CN109068278B (en) Indoor obstacle avoidance method and device, computer equipment and storage medium
US20200073363A1 (en) Method for coordinating and monitoring objects
CN111474947A (en) Robot obstacle avoidance method, device and system
CN112097762B (en) Path planning method and device, robot and storage medium
CN111829525A (en) UWB (ultra wide band) indoor and outdoor integrated intelligent navigation positioning method and system
CN111986232B (en) Target object detection method, target object detection device, robot and storage medium
US20230205234A1 (en) Information processing device, information processing system, method, and program
WO2016069593A1 (en) Cooperative communication link mapping and classification
US11132898B2 (en) Systems and methods for self-driving vehicle traffic management
CN111060110A (en) Robot navigation method, robot navigation device and robot
CN114488065A (en) Track data processing method, device, vehicle and medium
CN113741412B (en) Control method and device for automatic driving equipment and storage medium
CN110340935A (en) A kind of method and robot of robot fusion positioning
CN114526724A (en) Positioning method and equipment for inspection robot
EP3097435B1 (en) Method for estimating the position of a portable device
CN113359705A (en) Path planning method, formation cooperative operation method and equipment
CN111897348A (en) Control method and system of cloud robot, cloud robot and cloud server
CN111487972A (en) Kickball gait planning method and device, readable storage medium and robot
CN111545375B (en) Positioning spraying method, device and system and storage medium
CN111443700A (en) Robot and navigation control method and device thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant