WO2022262743A1 - Robot task execution method and apparatus, robot, and storage medium - Google Patents

Robot task execution method and apparatus, robot, and storage medium

Info

Publication number
WO2022262743A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
robot
area
task
points
Prior art date
Application number
PCT/CN2022/098817
Other languages
English (en)
French (fr)
Inventor
闫东坤
王帅帅
范东
刘梓文
Original Assignee
北京盈迪曼德科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京盈迪曼德科技有限公司
Priority to EP22824222.8A (EP4357871A1)
Publication of WO2022262743A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20 Control system inputs
    • G05D1/22 Command input arrangements
    • G05D1/229 Command input data, e.g. waypoints
    • G05D1/2297 Command input data, e.g. waypoints positional data taught by the user, e.g. paths
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20 Control system inputs
    • G05D1/24 Arrangements for determining position or orientation
    • G05D1/246 Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM]
    • G05D1/2464 Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM] using an occupancy grid
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/20 Control system inputs
    • G05D1/22 Command input arrangements
    • G05D1/229 Command input data, e.g. waypoints
    • G05D1/2295 Command input data, e.g. waypoints defining restricted zones, e.g. no-flight zones or geofences
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/60 Intended control result
    • G05D1/648 Performing a task within a working area or space, e.g. cleaning
    • G05D1/6482 Performing a task within a working area or space, e.g. cleaning by dividing the whole area or space in sectors to be processed separately
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2105/00 Specific applications of the controlled vehicles
    • G05D2105/10 Specific applications of the controlled vehicles for cleaning, vacuuming or polishing
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2107/00 Specific environments of the controlled vehicles
    • G05D2107/40 Indoor domestic environment
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D2109/00 Types of controlled vehicles
    • G05D2109/10 Land vehicles

Definitions

  • The present application relates to the field of communications, and in particular to a robot task execution method and apparatus, a robot, and a storage medium.
  • A cleaning robot mainly performs cleaning work in household and typical commercial scenes; its core task is to traverse and clean a target scene.
  • The core capability of a cleaning robot lies in its path planning; in particular, full-coverage path planning reflects the robot's level of intelligence and its market prospects.
  • At present, full-coverage path planning for cleaning robots mainly adopts the following two methods:
  • Manual teaching (training) method: the user first teaches (trains) a path that fully covers the target area, and the cleaning robot completely tracks the taught (trained) path to complete the whole coverage cleaning process. Since the robot's walking path is determined by a human, the taught path can easily avoid complex structured obstacles. This solution is generally applied to complex structured scenes, such as shelves and overhead complex obstacles, as shown in Figure 1.
  • Boundary-based full-coverage method: a boundary is first taught (trained), and the cleaning robot performs fully autonomous full-coverage planning within the enveloped area according to the boundary constraint, then completes the cleaning along the planned path. However, because the environment inside the cleaning-area envelope is uncertain, it is difficult for the robot to fully complete coverage cleaning in a complex structured scene. This solution is usually applied to relatively simple structured scenes (open lobbies, squares, etc.), as shown in Figure 2.
  • For the first method, it adapts well to the environment, but once teaching (training) is completed the path cannot be changed, and repetition of some areas caused by human error leads to low efficiency and unnecessary wear of the cleaning equipment.
  • For the second method, it is only suitable for scenes with a relatively simple structure; its adaptability is weak and its application scenarios are very limited.
  • The main purpose of this application is to disclose a robot task execution method and apparatus, a robot, and a storage medium, so as to at least solve the problems of low cleaning efficiency, weak adaptability, and limited application scenarios of the full-coverage path planning methods in the related art.
  • According to one aspect of the present application, a robot task execution method is provided.
  • The robot task execution method includes: acquiring a training trajectory and an environment map in a training mode; generating, by combining the environment map and the training trajectory, a target area in which the robot is to perform a task, wherein the target area is the maximum envelope area in which the robot can complete the task autonomously; and controlling the robot to traverse the target area until the robot completes the task to be performed.
  • According to another aspect of the present application, a robot task execution apparatus is provided.
  • The robot task execution apparatus includes: an acquisition module configured to acquire a training trajectory and an environment map in a training mode; a generation module configured to generate, according to the environment map and the training trajectory, a target area in which a robot is to perform a task, wherein the target area is the maximum envelope area in which the robot can complete the task autonomously; and an execution module configured to control the robot to traverse the target area until the robot completes the task to be performed.
  • According to yet another aspect of the present application, a robot is provided.
  • The robot according to the present application includes a memory and a processor; the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions stored in the memory, so that the robot performs any one of the above methods.
  • According to still another aspect of the present application, a computer-readable storage medium is provided.
  • The computer-readable storage medium according to the present application stores computer-executable instructions; when a processor executes the computer-executable instructions, the robot is caused to perform any one of the above methods.
  • According to the present application, the training trajectory and the environment map in the teaching (training) mode are obtained, and the target area of the task to be performed by the robot (for example, sweeping or floor washing) is generated automatically, the target area being the maximum envelope area in which the robot can complete the task autonomously.
  • The robot is then controlled to traverse the target area until it completes the task to be performed. This solves the problems in the related art of low efficiency of the manual teaching method and of weak adaptability and limited application scenarios of the boundary-based full-coverage method; tasks can be performed stably and efficiently in a variety of environments, and the solution is applicable to a variety of application scenarios.
  • FIG. 1 is a schematic diagram of path planning based on the teaching method in the related art;
  • FIG. 2 is a schematic diagram of path planning based on the boundary full-coverage method in the related art;
  • FIG. 3 is a flowchart of a robot task execution method according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of acquiring an environment map and a training trajectory in the teaching mode according to a preferred embodiment of the present application;
  • FIG. 5 is a schematic diagram of a target area of a task to be performed according to a preferred embodiment of the present application;
  • FIG. 6 is a schematic diagram of restricted areas and the boundary of the target area according to a preferred embodiment of the present application;
  • FIG. 7 is a schematic diagram of divided areas according to a preferred embodiment of the present application;
  • FIG. 8 is a schematic diagram of a robot traversing sub-areas to perform a task according to a preferred embodiment of the present application;
  • FIG. 9 is a schematic diagram of two traversal modes within a sub-area according to a preferred embodiment of the present application;
  • FIG. 10 is a structural block diagram of a robot task execution apparatus according to an embodiment of the present application.
  • According to an embodiment of the present application, a robot task execution method is also provided.
  • Fig. 3 is a flowchart of a robot task execution method according to an embodiment of the present application. As shown in Figure 3, the robot task execution method includes:
  • Step S301: obtain the training trajectory and the environment map in the training mode;
  • Step S302: generate, according to the environment map and the training trajectory, the target area of the task to be performed by the robot, wherein the target area is the maximum envelope area in which the robot can complete the task autonomously;
  • Step S303: control the robot to traverse the target area until the robot completes the task to be performed.
  • In the related art, the manual teaching (training) method, owing to human subjectivity, causes a large amount of over-cleaning and missed cleaning, so its efficiency and stability are low, while the boundary-based full-coverage method adapts poorly to scenes containing complex obstacle areas. With the scheme shown in Fig. 3, the training trajectory and the environment map in the teaching (training) mode are first obtained, and the target area of the task to be performed by the robot (for example, sweeping or floor washing) is generated automatically by combining the training trajectory and the environment map, the target area being the maximum envelope area in which the robot can complete the task autonomously.
  • The robot is then controlled to traverse the target area until it completes the task to be performed. Tasks can thus be performed stably and efficiently in a variety of environments, and the solution is applicable to a variety of application scenarios.
  • the aforementioned environment map may be a world map established based on a world coordinate system, and may further be in the form of a 2D grid map or a 3D grid map.
  • Preferably, obtaining the training trajectory and the environment map in the training mode may further include: when the robot is controlled to start moving from an initial position, establishing a world coordinate system based on the initial position and constructing a world map based on the world coordinate system; during the movement of the robot, recording the training trajectory of the robot, at least one piece of target position information on the training trajectory, and the business operation corresponding to each piece of target position information (for example, business operations include turning, arm movements, spraying water, vacuuming, detouring, switching on a corresponding device, and so on); and binding relevant information to the world map, where the relevant information includes the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information.
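As an illustration only, the following minimal sketch shows one possible way to hold the data recorded in training mode: the trajectory (list1), the target positions with their bound business operations (list2), and their binding to a grid map (map1). All class names and operation codes here are hypothetical, not part of the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TargetPoint:
    x: float                 # coordinates in the world frame (metres)
    y: float
    operation_code: str      # e.g. "SPRAY_ON", "TURN_CIRCLE" (hypothetical codes)

@dataclass
class TrainingRecord:
    """Hypothetical container binding the recorded training data to a grid map (map1)."""
    grid_map: list = field(default_factory=list)       # map1: 2D occupancy grid
    trajectory: list = field(default_factory=list)     # list1: (x, y) waypoints in order
    target_points: list = field(default_factory=list)  # list2: TargetPoint entries

    def record_pose(self, x, y):
        self.trajectory.append((x, y))

    def record_target(self, x, y, operation_code):
        self.target_points.append(TargetPoint(x, y, operation_code))

# usage while driving the robot by remote control in training mode
record = TrainingRecord(grid_map=[[0] * 100 for _ in range(100)])
record.record_pose(0.0, 0.0)                     # initial position O
record.record_pose(0.5, 0.0)
record.record_target(3.2, 1.5, "TURN_CIRCLE")    # target position A and its bound operation
```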
  • In the mapping stage, the robot establishes a world coordinate system based on the initial position O and builds a world map based on that coordinate system. For example, the robot is controlled to start from the initial position O and move (for example, by remote control), covers the work scene partially or traverses it according to the business function requirements, runs to a target position A, and performs the corresponding business operation at target position A.
  • While performing these operations, the robot perceives external information through multiple sensors installed at different positions on the robot, such as radar sensors, vision sensors, infrared sensors, ultrasonic sensors, and collision sensors; each sensor's data is projected vertically onto the 2D plane and filled into a grid map. A 3D grid map may also be created according to the height of the robot body and, when used, reduced in dimension and compressed into a 2D grid map. The grid map created by the robot is recorded as map1, and the robot's training trajectory, the target position A, and the business operations performed during the movement are bound to map1 according to the robot's real-time coordinate records.
  • Similarly, for multiple target positions such as target position B and target position C, the above operations are repeated: the robot is controlled to move from target position A to target position B, from target position B to target position C, from target position C to further target positions, and so on. In this process the robot builds an environment map (for example, the above world map) of the target scene. Target positions such as A, B, and C may be selected manually according to business requirements; the number of target positions may be greater than 0 and less than or equal to the number of business requirements.
  • During the movement of the robot, the training trajectory of the robot can be recorded, for example by recording the coordinate information of each path point on the training trajectory in list1; the at least one piece of target position information on the training trajectory, together with the business operation corresponding to each piece of target position information, is also recorded, for example by recording the coordinate information of target position A and the business operation corresponding to target position A (for example, as a business operation code) in list2. After that, list1 and list2 are each bound to map1.
  • After mapping of the target scene is completed, and the training trajectory of the robot, the at least one piece of target position information on the training trajectory, and the business operations corresponding to each piece of target position information have been recorded, the robot is controlled to return to the initial position O.
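The vertical projection of sensor data and the 3D-to-2D compression mentioned above can be sketched as follows; the array shapes, cell size, and function names are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def project_points_to_grid(points_xyz, cell_size=0.05, grid_shape=(200, 200)):
    """Project 3D sensor hits vertically onto a 2D occupancy grid (True = occupied)."""
    grid = np.zeros(grid_shape, dtype=bool)
    for x, y, _z in points_xyz:                       # the height coordinate is dropped
        i, j = int(x / cell_size), int(y / cell_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            grid[i, j] = True
    return grid

def compress_3d_grid(grid_3d):
    """Compress a 3D grid indexed [x, y, z] into 2D: occupied if any z layer is occupied."""
    return grid_3d.any(axis=2)

# example: two laser hits plus an (empty) 3D body-height grid
hits = [(1.00, 0.55, 0.3), (1.05, 0.60, 1.1)]
map_2d = project_points_to_grid(hits)
map_2d |= compress_3d_grid(np.zeros((200, 200, 20), dtype=bool))
```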
  • Preferably, in step S302, generating the target area of the task to be performed by the robot may further include: in the environment map, dilating the training trajectory to obtain the maximum envelope area in which the robot can complete the task autonomously.
  • In a preferred implementation, dilating the training trajectory in the environment map (taking a grid map as an example) may further include the following processing: importing the grid information of M layers of grid cells on the left side of the training trajectory and N layers of grid cells on the right side of the training trajectory into a newly created coverage area map, where the robot's single-pass coverage width is equal to M+N+1 and both M and N are integers greater than or equal to 1; and extracting the inner envelope and the outer envelope of the coverage area in the coverage area map, and superimposing the inward expandable area of the inner envelope and the outward expandable area of the outer envelope to obtain the maximum envelope area in which the robot can complete the task autonomously.
  • M may be equal to N, or M may differ from N. When M equals N, the grid areas dilated to the left and right of the training trajectory are symmetric, that is, the number of grid layers dilated to the left of the training trajectory equals the number dilated to the right; for example, if the robot's single-pass coverage width is W, then from W = M+N+1 it follows that M = N = (W-1)/2. When M is not equal to N, the grid areas dilated to the two sides are asymmetric, that is, the number of grid layers dilated to the left of the training trajectory is not equal to the number dilated to the right.
  • Below, a small household sweeping robot (which also has a mopping function) is taken as an example. For such a sweeper, the virtual direction keys of an app can usually be used to control the robot's movement, so that the cleaning traversal of the entire target area is completed and the environment map of the target area and the cleaning training trajectory are obtained.
  • As shown in Figure 4, panel (a) shows the target environment to be cleaned, which contains a complex obstacle area; panel (b) shows the trajectory obtained by using the app to control the small sweeper to complete the traversal cleaning, from which it can be seen that the teaching (training) trajectory completely avoids the complex obstacle area; finally, the grid map shown in panel (c) and the training trajectory shown in panel (d) are obtained.
  • Afterwards, based on the grid map and the teaching (training) trajectory, the training trajectory is dilated to obtain the maximum envelope area in which the robot can complete the task autonomously, as shown in FIG. 5.
  • Specifically, on the grid map, the number of grid layers by which each training trajectory needs to be expanded is determined; for example, the robot's single-pass coverage width corresponds to the width of M+N+1 grid layers, so the training trajectory is expanded by M layers of grid cells to the left and by N layers of grid cells to the right.
  • The grid cells in the expanded area are taken as the virtual dilation grid of the training trajectory, and the information of this virtual dilation grid is imported into a newly created coverage area map, covermap. The inner and outer envelopes of the coverage area are then extracted in covermap, and the inward expandable area of the inner envelope and the outward expandable area of the outer envelope are superimposed to finally obtain the maximum envelope area in which the robot can complete the task autonomously. As can be seen from Figure 5, this maximum envelope area completely covers the cleaning area and excludes the complex obstacle areas that the robot cannot handle fully autonomously.
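A rough sketch of the dilation step follows, assuming the trajectory has already been rasterised to grid cells and approximating the left/right expansion by an isotropic expansion of `layers` cells around each trajectory cell (an assumption; the application expands M layers to the robot's left and N layers to its right along the path).

```python
def dilate_trajectory(traj_cells, layers, grid_shape):
    """Return the set of cells within `layers` grid layers (Chebyshev distance)
    of any trajectory cell: a simplified, isotropic stand-in for the M/N expansion."""
    rows, cols = grid_shape
    covered = set()
    for r, c in traj_cells:
        for dr in range(-layers, layers + 1):
            for dc in range(-layers, layers + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    covered.add((rr, cc))
    return covered

# example: single-pass coverage width W = M + N + 1 = 5 cells, so M = N = 2
traj = [(10, c) for c in range(10, 40)] + [(r, 40) for r in range(10, 30)]
covermap = dilate_trajectory(traj, layers=2, grid_shape=(100, 100))
```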
  • Preferably, after the target area of the task to be performed by the robot is generated, the following processing may also be included: determining the boundary of the target area and generating a restricted-area fence at the boundary of the target area, wherein the robot performs the task within the target area enclosed by the restricted-area fence.
  • In a preferred implementation, when the robot navigates autonomously and avoids obstacles, certain areas do not need to be passed through by the robot; for example, when the robot performs a mopping task, it does not need to pass through a carpeted area.
  • For example, the target area shown in Figure 5 is a combined cleaning area. When the robot does not need to pass through certain areas (i.e. restricted areas), a restricted-area fence needs to be set on the outer boundary of the target area and within the restricted areas, as shown by the shaded parts in Figure 6; this prevents the robot from leaving the working area while performing the task.
  • It should be noted that, since the robot does not pass through restricted areas such as carpets, complex areas, or drop-off areas during manual teaching, the corresponding restricted-area ranges can be determined from the training trajectory, thereby ensuring that the robot will not pass through these areas.
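One simple way to realise a fence on a grid is to mark every envelope cell that borders a cell outside the envelope as a no-go boundary cell; the sketch below assumes the envelope is given as a set of (row, col) cells, which is an illustrative representation only.

```python
def boundary_fence(covered_cells):
    """Cells of the coverage envelope that border a cell outside it (4-connectivity)."""
    fence = set()
    for r, c in covered_cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) not in covered_cells:
                fence.add((r, c))
                break
    return fence

# toy envelope: a filled 6x6 block of cells
envelope = {(r, c) for r in range(10, 16) for c in range(10, 16)}
fence_cells = boundary_fence(envelope)   # the planner treats these as the fence boundary
```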
  • Preferably, in step S303, controlling the robot to traverse the target area to perform the task until the robot completes the task may further include: dividing the target area into multiple sub-areas; and controlling the robot to traverse each sub-area to perform the task until the robot completes the task to be performed within the target area.
  • In a preferred implementation, when the target area still has a complex contour, in order to allow the robot to perform the task (for example, sweeping or mopping) more stably and efficiently, the complex contour can be transformed into simpler contours, for example by partitioning the target area into multiple sub-areas. For example, for the target area shown in FIG. 5, the entire target area may be divided into five sub-areas, as shown in FIG. 7.
  • Intelligent partitioning can adopt the following method: divide the environment grid map into multiple grid areas, with adjacent grid areas partially overlapping; find the intersecting line segments in each grid area and determine candidate areas according to the intersecting line segments; and merge the candidate areas that fall into the overlapping parts of the grid areas to obtain the sub-area division result.
  • Intelligent partitioning can also be implemented with the existing BCD (boustrophedon cellular decomposition) algorithm. BCD is a grid-map partitioning method that automatically partitions the entire target cleaning area; each resulting partition can be traversed with a boustrophedon (bow-shaped) path, and a loop-shaped path can also be used.
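As a rough illustration of cell decomposition on a grid map, the sketch below performs a very simplified boustrophedon-style column sweep; it is not the algorithm claimed in the application, only an approximation of the idea: a new sub-area (cell) is opened whenever a column's free interval does not overlap exactly one open cell.

```python
import numpy as np

def column_intervals(grid, col):
    """Contiguous free (0) row-intervals in one column of a 0/1 occupancy grid."""
    intervals, start = [], None
    for r, val in enumerate(grid[:, col].tolist() + [1]):   # sentinel closes the last run
        if val == 0 and start is None:
            start = r
        elif val != 0 and start is not None:
            intervals.append((start, r - 1))
            start = None
    return intervals

def decompose(grid):
    """Very simplified boustrophedon-style decomposition into column-slice cells."""
    open_cells, closed = {}, []                # open cell key: its last column interval
    for col in range(grid.shape[1]):
        new_open = {}
        for iv in column_intervals(grid, col):
            overlaps = [k for k in open_cells if not (iv[1] < k[0] or k[1] < iv[0])]
            if len(overlaps) == 1:                           # continue the same cell
                new_open[iv] = open_cells.pop(overlaps[0]) + [(col, iv)]
            else:                                            # split/merge/start: new cell
                new_open[iv] = [(col, iv)]
        closed.extend(open_cells.values())                   # cells with no continuation end
        open_cells = new_open
    return closed + list(open_cells.values())

grid = np.zeros((20, 30), dtype=int)
grid[5:15, 12:18] = 1                       # one rectangular obstacle in the middle
sub_areas = decompose(grid)                 # each sub-area is a list of (col, row-interval)
print(len(sub_areas))                       # -> 3 sub-areas around the obstacle
```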
  • Preferably, traversing each sub-area to perform the task to be performed, until the robot completes the task within the whole target area, may further include: determining the order in which the robot performs the task in each sub-area; and controlling the robot to traverse the sub-areas one by one in the determined order until the robot completes the task to be performed in all sub-areas.
  • After the target area has been divided into multiple sub-areas, the robot may be controlled to traverse each sub-area to perform the task until the robot completes the task to be performed within the target area.
  • Of course, with multiple robots, the task can be performed in the sub-areas in parallel and independently, thereby improving task-execution efficiency.
  • Preferably, the robot can also be controlled to traverse the sub-areas one by one, that is, after completing the task in one sub-area it traverses the next sub-area to complete the task there.
  • The order in which the robot performs the task in the sub-areas can be determined first, and the robot is then controlled to traverse the sub-areas one by one in that order until it completes the task in all sub-areas. For example, the starting position of the robot is determined first, then the sub-areas nearest to that position are determined, and the order of the areas the robot passes through is decided according to the minimum distance to these sub-areas.
  • The task-execution order of the sub-areas may also be determined in a clockwise or counterclockwise direction.
  • Preferably, controlling the robot to traverse each sub-area to perform the task, until the robot completes the task to be performed within the target area, may further include the following processing:
  • S1: according to the initial positioning information of the robot, determine the sub-area closest to the robot;
  • S2: control the robot to traverse the closest sub-area to perform the task;
  • S3: after the task in the closest sub-area is completed, determine the positioning information of the end point at which the robot finished the task, and determine the next sub-area closest to that end point;
  • S4: execute S2 to S3 cyclically until the robot completes the task to be performed in all sub-areas of the target area.
  • For example, the ordering adopts the nearest-neighbour principle: each time an area has been cleaned, the block closest to the robot is selected as the next sub-area according to the robot's current positioning (either the boundary of the sub-area is closest to the robot's current position, or a corner point of the sub-area is closest to the robot's current position).
  • Specifically, the boundary grid points of the sub-areas in which the robot has not yet performed the task can be determined; each boundary grid point (including the corner points of the sub-areas) is traversed, and the distance between each boundary grid point and the above end point is computed. For example, if the coordinate of the end point in the world-coordinate-system map is X and the coordinate of a boundary grid point is Y(i), then ||Y(i) - X|| denotes the norm between Y(i) and X, i.e. the Euclidean distance, where i is a natural number. The sub-area to which the nearest boundary grid point belongs is taken as the next sub-area; if several boundary grid points are equally close to the end point, the sub-area to which one of them belongs can be chosen at random, or a sub-area can be selected from the multiple qualifying sub-areas according to the task-execution direction (for example, clockwise or counterclockwise).
  • As shown in Figure 8, after the sub-area numbered 1 has been cleaned, the robot is at the end point shown in the figure; the boundary of the sub-area numbered 2 at the upper right of Figure 8 is then closest to the end point, so after traversing the sub-area numbered 1 the robot is controlled to traverse the sub-area numbered 2 to perform the task. The above steps are executed cyclically, and each sub-area of the target area is traversed, so that the task is completed within the entire target area with a complex contour.
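The nearest-neighbour selection of the next sub-area can be sketched as follows; the dictionary-of-boundary-points representation is an assumption made purely for illustration.

```python
import math

def next_subarea(end_point, remaining_subareas):
    """Pick the unvisited sub-area whose boundary grid point is closest, by Euclidean
    distance ||Y(i) - X||, to the end point X of the task just finished.
    `remaining_subareas` maps a sub-area id to its list of boundary grid points."""
    best_id, best_dist = None, math.inf
    for area_id, boundary_points in remaining_subareas.items():
        for y in boundary_points:
            d = math.dist(end_point, y)          # Euclidean distance ||Y(i) - X||
            if d < best_dist:
                best_id, best_dist = area_id, d
    return best_id

# toy example: the robot finished sub-area 1 at X = (4.0, 9.0)
remaining = {2: [(5.0, 9.5), (5.0, 12.0)], 3: [(0.0, 0.0), (2.0, 1.0)]}
print(next_subarea((4.0, 9.0), remaining))       # -> 2
```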
  • Preferably, before the robot is controlled to traverse the target area, the traversal mode also needs to be selected. For example, as shown in panel (a) of Figure 9, the loop-shaped traversal mode can be selected; as shown in panel (b) of Figure 9, the bow-shaped traversal mode can be selected; or a mixed traversal mode can be adopted (for example, the sub-areas numbered 1, 2, 3, and 5 in Figure 8 use the bow-shaped traversal mode and the sub-area numbered 4 uses the loop-shaped traversal mode).
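To make the bow-shaped mode concrete, the following minimal sketch generates a serpentine cell sequence for a rectangular sub-area; the function name and the lane-by-lane simplification are assumptions, not the planner disclosed here.

```python
def bow_shaped_path(row_min, row_max, col_min, col_max, lane_width=1):
    """Generate a bow-shaped (boustrophedon) sequence of grid cells that sweeps a
    rectangular sub-area lane by lane, alternating the sweep direction."""
    path, left_to_right = [], True
    for row in range(row_min, row_max + 1, lane_width):
        cols = range(col_min, col_max + 1)
        path.extend((row, c) for c in (cols if left_to_right else reversed(cols)))
        left_to_right = not left_to_right
    return path

waypoints = bow_shaped_path(0, 3, 0, 5)   # 4 lanes x 6 cells, visited in serpentine order
```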
  • Preferably, before the target area of the task to be performed by the robot is generated in step S302, the following processing may be included: receiving a first request instructing the robot to autonomously traverse the target area; and loading the world map and the relevant information bound to the world map. Then, in step S303, controlling the robot to traverse the target area until the robot completes the task to be performed may further include: re-executing traversal path planning within the range of the target area; and the robot autonomously tracking the path obtained from the traversal path planning, where, for each piece of target position information in the at least one piece of target position information, when the robot moves to that target position, the robot executes the business operation corresponding to that target position information.
  • In a preferred implementation, when the robot needs to autonomously traverse the target area, the robot receives a first request instructing it to do so, and loads the world map built for the target scene (for example, the 2D grid map map1) together with the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information bound to that world map (for example, list1 and list2 bound to map1). First, on the basis of map1, the training trajectory from the mapping process is dilated according to the robot's own horizontal size (for example, the robot's single-pass coverage width) and then merged to form a coverage area map, covermap, of the mapping process; the inner and outer envelopes of covermap are extracted, and, after superimposing the robot's expandable dilation areas (for example, the inward expandable area of the inner envelope and the outward expandable area of the outer envelope), the robot's final coverage area map covermap is formed.
  • The robot performs traversal path planning based on the coverage area map covermap. The path planning methods include one-shot full-map path planning and real-time local path planning; after the traversal path planning is re-executed, the path-planning trajectory over the robot's coverage area map is obtained. Depending on the robot's operating mode, the path planning may use the loop-shaped traversal mode, the bow-shaped traversal mode, or a mixed mode (the loop-shaped mode in some areas and the bow-shaped mode in others).
  • The robot is automatically driven to move along the trajectory obtained by the traversal path planning. Since the at least one piece of target position information and the business operation information corresponding to each piece of target position information are bound to the world map, for each piece of target position information bound to the world map, when the robot moves to that target position it executes the business operation corresponding to the coordinate information of that target position. For example, if the business operation corresponding to the coordinate information of target position A is to turn one full circle, then when the robot reaches target position A it performs the operation of turning one full circle. After the robot finally completes the traversal of the full-coverage target area, it is controlled to return to the initial point. While autonomously traversing the full-coverage target area, the robot continuously updates the current environment map in real time, so that the environment map reflects the actual situation of the current scene.
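The coupling between trajectory tracking and the bound business operations can be illustrated with the following sketch; `move_to` and `execute_operation` are hypothetical placeholders for the robot's drive and device interfaces, and the distance-based trigger is an assumption.

```python
import math

def follow_and_execute(planned_path, target_operations, reach_tolerance=0.1):
    """Walk the planned path and trigger the business operation bound to a target
    position once the robot comes within `reach_tolerance` of it."""
    pending = list(target_operations)              # [(x, y, operation_code), ...]
    for x, y in planned_path:
        move_to(x, y)                              # placeholder: send the pose to the base
        for tx, ty, op in list(pending):
            if math.dist((x, y), (tx, ty)) <= reach_tolerance:
                execute_operation(op)              # placeholder: e.g. "TURN_CIRCLE"
                pending.remove((tx, ty, op))

def move_to(x, y):
    print(f"driving to ({x:.2f}, {y:.2f})")

def execute_operation(code):
    print(f"executing business operation: {code}")

follow_and_execute([(0.0, 0.0), (3.2, 1.5)], [(3.2, 1.5, "TURN_CIRCLE")])
```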
  • In a preferred implementation, the step of recording the training trajectory of the robot may further include: recording the position information of each path point in the robot's training trajectory in sequence, according to the order in which the robot traverses the path points.
  • Then, re-executing the traversal path planning within the range of the target area may further include: for all target points corresponding to the at least one piece of target position information, dividing multiple target points into groups of target-point sets according to the distances between the target points on the training trajectory and/or the association information between the business operation information corresponding to the respective pieces of target position information, so as to obtain at least one target-point set, and determining, according to the above order, the traversal order of the target points within each target-point set; and, when re-executing the traversal path planning, performing point-to-point path planning for the target points in each target-point set according to their traversal order, or keeping, in the re-planned path, the training trajectory corresponding to each target-point set and the traversal order of the target points in each set.
  • It should be noted that, in a specific implementation, the business operations corresponding to some pieces of target position information form a series of business operations that must be executed coherently; if the logic of these business operations is not taken into account, the business logic of the robot's task may be disrupted. For example, at target position 1 the robot needs to perform the business operation of switching on the water-spraying device, and then at target position 2, target position 3, and target position 4 it needs to perform the water-spraying operation continuously. If path planning is not performed according to the order of this business logic, the robot cannot be guaranteed to complete the task successfully. Therefore, corresponding policies can be set to ensure that the business logic of task execution is not disrupted.
  • Preferably, dividing multiple target points into a group of target-point sets according to the distances between the target points on the training trajectory and/or the association information between the business operation information corresponding to the respective pieces of target position information includes:
  • for all target points corresponding to the at least one piece of target position information, determining the distance between every two closest target points on the training trajectory, and, when the distances between every two closest target points among multiple adjacent target points are all smaller than a first predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a first predetermined condition;
  • decomposing the business functions of the robot to obtain at least one business operation (for example, the robot's water-spraying function can be decomposed into business operations such as switching on the water-spraying device and performing the spraying operation), dividing business operations that have an association relationship into the same business function group, pre-establishing a first business function library including one or more business function groups, matching the recorded business operations corresponding to the respective pieces of target position information against the first business function library, grouping the business operations corresponding to the respective pieces of target position information according to the matching result, and determining the target points corresponding to the target position information whose business operations fall into the same business function group as target points satisfying a second predetermined condition;
  • delimiting the target points satisfying the first predetermined condition and/or the second predetermined condition into one group of target-point sets.
  • That is, three schemes can be adopted. Scheme 1: for all target points corresponding to the at least one piece of target position information, the multiple target points satisfying the first predetermined condition are delimited into one group of target-point sets according to the distances between the target points on the training trajectory. For example, when, among multiple adjacent target points 1, 2, 3, and 4, the distance between every two closest target points is smaller than the first predetermined distance threshold, the adjacent target points 1, 2, 3, and 4 are delimited into the same target-point set, and the traversal order of the target points in the set {1, 2, 3, 4} is determined according to the order in which the robot originally traversed these target points on the training trajectory. When the traversal path planning is re-executed, if that traversal order is target point 1, target point 2, target point 3, target point 4, point-to-point path planning is performed for target points 1, 2, 3, and 4 in that order, i.e. the path from target point 1 to target point 2 is planned first, then the path from target point 2 to target point 3, and then the path from target point 3 to target point 4; alternatively, the segment of the training trajectory formed by the original target points 1, 2, 3, and 4 is kept in the re-planned path together with its traversal order, i.e. from target point 1 to target point 2, then to target point 3, and finally to target point 4, rather than the reverse or any other order.
  • Scheme 2: according to the association information between the business operation information corresponding to the respective pieces of target position information, the multiple target points satisfying the second predetermined condition are delimited into one group of target-point sets. For example, when recorded business operations 1, 2, 3, and 4 are matched against the pre-established first business function library, and business operations 1, 3, and 4 have an association relationship and are matched into the same business function group, the target points corresponding to the target position information of business operations 1, 3, and 4 are determined as target points satisfying the second predetermined condition and are delimited into one group of target-point sets.
  • Scheme 3: only the target points satisfying both the first predetermined condition and the second predetermined condition are delimited into the same group of target-point sets, i.e. the intersection of the target points satisfying the two conditions is taken. If only the distances between target points on the training trajectory are considered, or only the association information between business operations is considered, a certain grouping error may arise; combining the two strategies in this way can reduce the grouping error as much as possible.
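A minimal sketch of the distance-based grouping (the "first predetermined condition"), assuming the target points are given in training-trajectory order and using a hypothetical distance threshold:

```python
import math

def group_by_distance(target_points, threshold):
    """Group consecutive target points (in training-trajectory order) into one set
    whenever each pair of neighbouring points is closer than `threshold`."""
    groups, current = [], [target_points[0]]
    for prev, nxt in zip(target_points, target_points[1:]):
        if math.dist(prev, nxt) < threshold:
            current.append(nxt)
        else:
            groups.append(current)
            current = [nxt]
    groups.append(current)
    return groups

# target points 1-4 recorded along the training trajectory, threshold of 1.0 m
points = [(0.0, 0.0), (0.4, 0.0), (0.8, 0.2), (5.0, 5.0)]
print(group_by_distance(points, 1.0))   # first three points form one set, the last its own
```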
  • It can thus be seen that, in the stage of controlling the robot's movement and building the environment map, information such as the robot's training trajectory, the robot's business operations, and the robot's target positions can be bound to the environment map to form an information set.
  • When the robot needs to autonomously traverse the target area, it can load this information set and dilate the training trajectory to obtain the maximum envelope area in which it can complete the task autonomously.
  • Specifically, the dilated coverage-area information can be imported into a newly created coverage area map, the inner and outer envelopes of the coverage area in that map are extracted, and, after superimposing the robot's expandable areas, the final robot coverage area map is obtained; path planning is then re-executed within the coverage range of this map.
  • The robot autonomously tracks the path obtained from the traversal path planning, and, for each target position that has a corresponding business operation, the robot executes that business operation when it moves to the target position. In this way, the entire business logic of the robot traversing the full-coverage target area is realised.
  • Preferably, after the training trajectory and the environment map in the training mode are acquired, the following processing may also be included: receiving a second request instructing the robot to autonomously traverse target positions; loading the world map and the relevant information bound to the world map; determining one or more target positions, corresponding to the second request, that the robot needs to traverse autonomously; adding the one or more target positions and the business operations corresponding to them to a to-be-executed list; performing a path planning operation on the target points corresponding to each target position in the to-be-executed list to obtain the motion trajectory along which the robot traverses the target positions; and the robot tracking the planned motion trajectory and, when it moves to the one or more target positions, executing the business operations corresponding to the one or more target positions.
  • In a preferred implementation, when the robot needs to autonomously traverse target positions, it receives a second request instructing it to do so; the robot loads the map built for the target scene (for example, the above world map) and the relevant information bound to the map, including the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information.
  • It is then necessary to determine the one or more target positions, corresponding to the second request, that the robot needs to traverse autonomously. For example, based on business requirements, the one or more target positions to be traversed can be selected manually or autonomously by the robot; these target positions and the business operations corresponding to them are added to the to-be-executed list, and the robot performs a path planning operation on the target points corresponding to each target position in the to-be-executed list to obtain the motion trajectory along which it traverses the target positions.
  • For example, the robot can perform point-to-point path planning on the target scene map based on the sequence formed by the target positions in the to-be-executed list, or perform tracking planning based on the training trajectory from the mapping stage, to obtain the robot's motion trajectory.
  • The robot is automatically driven to move along the motion trajectory obtained by the above path planning. Since the at least one piece of target position information and the business operation information corresponding to each piece of target position information are bound to the world map, for each piece of target position information bound to the world map, when the robot moves to that target position it executes the business operation corresponding to the coordinate information of that target position. For example, if the business operation corresponding to the coordinate information of target position B is raising the right arm, then when the robot moves to target position B it performs the operation of raising the right arm. After the robot finally completes the traversal of the target positions, it is controlled to return to the initial point. While autonomously traversing the target positions, the robot continuously updates the current environment map in real time, so that the environment map reflects the actual situation of the current scene.
  • Preferably, the step of recording the training trajectory of the robot may further include: recording the position information of each path point in the robot's training trajectory in sequence, according to the order in which the robot traverses the path points.
  • Then the step of performing a path planning operation on the target points corresponding to each target position in the to-be-executed list, to obtain the motion trajectory along which the robot traverses the target positions, may further include: for the target points corresponding to the target positions in the to-be-executed list, dividing multiple target points into groups of target-point sets according to the distances between the target points on the training trajectory and/or the association information between the business operation information corresponding to the respective target positions in the to-be-executed list, so as to obtain at least one target-point set, and determining, according to the above order, the traversal order of the target points within each target-point set; and, when performing the path planning operation, for each of the delimited target-point sets, performing point-to-point path planning for the target points in that set according to their traversal order, or keeping, in the planned path, the training trajectory corresponding to each target-point set and the traversal order of the target points in each set.
  • Preferably, dividing multiple target points into a group of target-point sets according to the distances between the target points on the training trajectory and/or the association information between the business operation information may further include:
  • for the target points corresponding to the target positions in the to-be-executed list, determining the distance between every two closest target points on the training trajectory, and, when the distances between every two closest target points among multiple adjacent target points are all smaller than a predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a third predetermined condition;
  • decomposing the business functions of the robot to obtain at least one business operation, dividing business operations that have an association relationship into the same business function group, pre-establishing a second business function library including one or more business function groups, matching the business operations corresponding to the target positions in the to-be-executed list against the second business function library, grouping those business operations according to the matching result, and determining the target points corresponding to the target position information whose business operations fall into the same business function group as target points satisfying a fourth predetermined condition;
  • delimiting the target points satisfying the third predetermined condition and/or the fourth predetermined condition into one group of target-point sets. That is, similarly to the schemes described above for the target area, the grouping may be based on the third predetermined condition alone (distances on the training trajectory), on the fourth predetermined condition alone (the association information between the business operation information corresponding to the target positions in the to-be-executed list), or on both conditions at the same time.
  • According to an embodiment of the present application, a robot task execution apparatus is also provided.
  • FIG. 10 is a structural block diagram of a robot task execution apparatus according to an embodiment of the present application.
  • The robot task execution apparatus includes: an acquisition module 10 configured to acquire a training trajectory and an environment map in the training mode; a generation module 12 configured to generate, according to the environment map and the training trajectory, a target area in which the robot is to perform a task, wherein the target area is the maximum envelope area in which the robot can complete the task autonomously; and an execution module 14 configured to control the robot to traverse the target area until the robot completes the task to be performed.
  • Preferably, the execution module 14 may further include: a dividing unit 140 (not shown in FIG. 10), configured to divide the target area into multiple sub-areas; and a control unit 142 (not shown in FIG. 10), configured to control the robot to traverse each sub-area to perform the task until the robot completes the task to be performed within the target area.
  • According to an embodiment of the present application, a robot is also provided.
  • The robot according to the present application includes a memory and a processor; the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions stored in the memory, so that the robot performs the task execution method provided by the above embodiments.
  • According to an embodiment of the present application, a computer-readable storage medium is also provided.
  • The computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the robot task execution method provided in the above embodiments is implemented.
  • The storage medium containing computer-executable instructions in the embodiments of the present application can be used to store the computer-executable instructions of the robot task execution method provided in the foregoing embodiments; for details, reference may be made to the descriptions of the foregoing figures, which are not repeated here.
  • To sum up, with the above embodiments of the present application, the maximum envelope area in which the robot can complete the task autonomously (i.e. the target area) is generated based on the training path and the grid map; the target area is intelligently partitioned; the task-execution order of the partitions is sorted; and on this basis a traversal mode suitable for each partition (for example, the bow-shaped traversal mode or the loop-shaped traversal mode) is selected automatically. In this way, multiple scattered target areas are first merged, and the merged target area is then split into multiple independent partitions, realising a design idea of combining and then dividing.
  • As a result, the robot can stably and efficiently perform tasks (for example, disinfection, cleaning, and mopping) in a variety of environments (for example, complex structured environments), and the solution can be applied to a variety of application scenarios.
  • Moreover, in the stage of controlling the robot's movement and building the environment map, information such as the robot's training trajectory, the robot's business operations, and the robot's target positions can be bound to the environment map to form an information set.
  • When the robot needs to autonomously traverse the target area, this information set can be loaded, and, based on the above scheme of obtaining the maximum envelope area in which the robot can complete the task autonomously, path planning can be re-executed within the maximum envelope area.
  • The resulting optimal path-coverage trajectory avoids the problem of repeated coverage.
  • When the robot needs to autonomously traverse target positions, it can, based on the target positions and the business operation logic, perform point-to-point path planning on the scene map or trajectory-tracking planning based on the motion trajectory from the mapping stage, to obtain the robot's motion trajectory.
  • During the robot's movement, the environment map can be updated in real time, and the motion trajectory can be re-planned in real time based on the updated environment map.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

This application discloses a robot task execution method and apparatus, a robot, and a storage medium. The method includes: acquiring a training trajectory and an environment map in a training mode; generating, by combining the environment map and the training trajectory, a target area in which a robot is to perform a task, wherein the target area is the maximum envelope area in which the robot can complete the task autonomously; and controlling the robot to traverse the target area until the robot completes the task to be performed. According to the above technical solution, tasks can be performed stably and efficiently in a variety of environments, and the solution is applicable to a variety of application scenarios.

Description

Robot task execution method and apparatus, robot, and storage medium
This application claims priority to Chinese patent application No. 202110681640.3, entitled "Robot task execution method and apparatus, robot, and storage medium", filed with the China National Intellectual Property Administration on June 18, 2021, the entire contents of which are incorporated herein by reference.
技术领域
本申请涉及通信领域,具体而言,涉及一种机器人任务执行方法、装置、机器人及存储介质。
背景技术
随着人工成本提升及机器人智能化技术日渐成熟,越来越多的机器人被应用在一定场景下解决特定问题。例如,清洁机器人主要完成家庭及典型商用场景的清洁工作,其核心工作内容是对目标场景完成遍历清扫工作。清洁机器人的核心功能体现在机器人的路径规划,尤其全覆盖路径规划能够反映机器人的智能程度及市场应用前景。
目前,清洁机器人的全覆盖路径规划主要采用以下两种方法:
1、基于人工教学(训练)方法:使用者首先教学(训练)一条全覆盖目标区域路径,清洁机器人完全跟踪教学(训练)路径从而完成整个覆盖清扫过程。由于机器人行走路径由人决定,所以机器人教学路径可以轻松避开结构化复杂场景。此方案其一般应用于结构化复杂场景,例如货架,高空复杂障碍物等,参见图1所示。
2、基于边界全覆盖方法:首先教学(训练)一个边界,清洁机器人根据边界约束在包络区域内进行完全自主规划全覆盖规划,机器人根据规划路径,完成清扫结果。但是,由于清扫区域包络的环境不可确定性,在结构化场景较为复杂场景,机器人很难完全自主完成全覆盖式清扫,此方案通常应用于结构化较为简单的场景(开阔大堂,广场等),参见图2所示。
对于第1种方法而言,对环境适应性强,但是一旦教学(训练)完毕,路径将不能更改,而其中由于人为失误所造成的部分区域重复会导致效率低下,且造成清扫设备的不必要磨损。对于第2种方法而言,仅适用于结构化较为简单的场景环境,其适应能力较弱,应用场景非常有限。
发明内容
本申请的主要目的在于公开了一种机器人任务执行方法、装置、机器人及存储介质,以至少解决相关技术中全覆盖路径规划方法清洁效率低下,适应能力较弱,应用场景有限等问题。
根据本申请的一个方面,提供了一种机器人任务执行方法。
根据本申请的机器人任务执行方法包括:获取训练模式下的训练轨迹以及环境地图;结合所述环境地图和所述训练轨迹,生成机器人待执行任务的目标区域,其中,所述目标区域为所述机器人能自主完成任务的最大包络区域;控制所述机器人遍历所述目标区域,直至所述机器人完成所述待执行任务。
根据本申请的另一方面,提供了一种机器人任务执行装置。
根据本申请的机器人任务执行装置包括:获取模块,用于获取训练模式下的训练轨迹以及环境地图;生成模块,用于根据所述环境地图和所述训练轨迹,生成机器人待执行任务的目标区域,其中,所述目标区域为所述机器人能自主完成任务的最大包络区域;执行模块,用于控制机器人遍历所述目标区域,直至所述机器人完成所述待执行任务。
根据本申请的又一方面,提供了一种机器人。
根据本申请的机器人包括:包括:存储器及处理器,上述存储器,用于存储计算机执行指令;上述处理器,用于执行上述存储器存储的计算机执行指令,使得上述机器人执行如上述任一项上述的方法。
根据本申请的再一方面,提供了一种计算机可读存储介质。
根据本申请的计算机可读存储介质包括:包括:存储器及处理器,上述存储器,用于存储计算机执行指令;上述处理器,用于执行上述存储器存储的计算机执行指令,使得上述机器人执行如上述任一项上述的方法。
根据本申请,获取教学(训练)模式下的训练轨迹和环境地图,自动生成机器人待执行任务(例如,清扫,洗地等任务)的目标区域,其中,上述目标区域为上述机器人能自主完成任务的最大包络区域。控制上述机器人遍历上述目标区域,直至上述机器人完成上述待执行任务。解决了相关技术中基于人工教学方法效率低下,基于边界全覆盖方法适应能力较弱,应用场景有限等问题,可以稳定高效地在多种环境区域执行任务,并可以适用于多种应用场景。
附图说明
图1是根据相关技术的基于教学方法实现路径规划的示意图;
图2是根据相关技术的基于边界全覆盖方法实现路径规划的示意图;
图3是根据本申请实施例的机器人任务执行方法的流程图;
图4是根据本申请优选实施例的教学模式下获取环境地图和训练轨迹的示意图;
图5根据本申请优选实施例的待执行任务的目标区域的示意图;
图6是根据本申请优选实施例的禁区范围及目标区域边界的示意图;
图7是根据本申请优选实施例的划分区域的示意图;
图8是根据本申请优选实施例的机器人遍历子区域执行任务的示意图;
图9是根据本申请优选实施例的子区域中两种遍历模式的示意图;
图10是根据本申请实施例的机器人任务执行装置的结构框图。
具体实施方式
下面结合说明书附图对本申请的具体实现方式做一详细描述。
根据本申请实施例,还提供了一种机器人任务执行方法。
图3是根据本申请实施例的机器人任务执行方法的流程图。如图3所示,该机器人任务执行方法包括:
步骤S301:获取训练模式下的训练轨迹以及环境地图;
步骤S302:根据上述环境地图和上述训练轨迹,生成机器人待执行任务的目标区域,其中,上述目标区域为上述机器人能自主完成任务的最大包络区域;
步骤S303:控制上述机器人遍历上述目标区域,直至上述机器人完成上述待执行任务。
相关技术中,对于人工教学(训练)方法,由于人的主观性,导致清扫区域的大量过度清扫与漏扫,因此该方法的效率和稳定性较低。对于基于边界全覆盖方法,包含复杂障碍物的区域(如图1所示)的应用场景中,由于清扫区域包络的环境不可确定性,该方法环境适应能力较弱。采用图1所示的方案,先获取教学(训练)模式下的训练轨迹和环境地图,结合训练轨迹和环境地图自动生成机器人待执行任务(例如,清扫,洗地等任务)的目标区域,其中,上述目标区域为上述机器人能自主完成任务的最大包络区域。控制上述机器人遍历上述目标区域,直至上述机器人完成上述待执行任务。可以稳定高效地在多种环境区域执行任务,并可以适用于多种应用场景。
其中,上述环境地图可以是基于世界坐标系建立的世界地图,进一步可以是2D栅格地图或者3D栅格地图等形式。
优选地,获取训练模式下的训练轨迹以及环境地图可以进一步包括:控制机器人从初始位置开始运动时,基于所述初始位置建立世界坐标系,并基于所述世界坐标系构建世界地图;在所述机器人运动过程中,记录所述机器人的训练轨迹、并记录所述训练轨迹上的至少一个目标位置信息,以及各个所述目标位置信息分别对应的业务操作(例如,业务操作包括:转动、手臂做动作、喷水、吸尘、绕行、打开相应设备等);在所述世界地图中绑定相关信息,其中,所述相关信息包括:所述训练轨迹、所述至少一个目标位置信息、以及各个所述目标位置信息分别对应的业务操作信息,与所述世界地图进行绑定。
在机器人建图阶段,机器人基于初始位置O建立世界坐标系,并基于该世界坐标系建立世界地图。例如,控制机器人从初始位置O出发和运动(例如,使用遥控等方式控制机器人运动),根据业务功能需求对工作场景进行局部覆盖或者遍历覆盖,并运行到目标位置A,在目标位置A执行相应的业务操作,机器人在执行上述操作的过程中,通过安装在机器人不同位置的多个传感器感知外界信息,如雷达传感器、视觉传感器、红外传感器、超声波传感器、碰撞传感器等,将各个传感器数据纵向映射到2D平面填充到栅格地图中;也可根据机器人机身高度,建立3D栅格地图,在使用时对3D栅格图降维处理,将3D栅格地图压缩为2D栅格地图。将机器人建立的栅格地图记录为map1,并将机器人的训练轨迹、目标位置A及在运动过程中执行的业务操作根据机器人实时坐标记录绑定到所建地图map1中。
同理,对于如目标位置B、目标位置C等等多个目标位置,重复执行上述操作,控制机器人从目标位置A运动至目标位置B、再从目标位置B运动至目标位置C,再从目标位置C运动至其他目标位置等一系列操作,在执行上述操作的过程中,机器人基于目标场景建立环境地图(例如,上述世界地图),目标位置如A、B、C等可以是人工基于业务需求选定,目标位置的数量可以大于0,且小于或者等于业务需求数量。
在所述机器人运动过程中,可以记录所述机器人的训练轨迹,例如,将训练轨迹上的各个路径点的坐标信息记录到list1中;并记录所述训练轨迹上的至少一个目标位置信息,以及各个所述目标位置信息分别对应的业务操作,例如,将目标位置A的坐标信息、以及与目标位置A对应的业务操作(可以以业务操作代码形式等)记录到list2中。之后,将list1和list2分别与map1进行绑定。
在完成目标场景的建图,并且记录所述机器人的训练轨迹、所述训练轨迹上的至少一个目标位置信息,以及各个所述目标位置信息分别对应的业务操作之后,控制机器人回到初始位置O。
优选地,上述步骤S302中,生成机器人待执行任务的目标区域可以进一步包括:在上 述环境地图内,对上述训练轨迹进行膨胀,获取上述机器人能自主完成任务的最大包络区域。
在优选实施过程中,在所述环境地图(以栅格地图为例)内,对所述训练轨迹进行膨胀,获取所述机器人能自主完成任务的最大包络区域可以进一步包括以下处理:分别将所述训练轨迹左侧M层栅格和所述训练轨迹右侧N层栅格的栅格信息导入至新建的覆盖区域地图中,其中,所述机器人单次覆盖宽度等于M+N+1,M和N均为大于或者等于1的整数;在所述覆盖区域地图中提取覆盖区域的内包络及外包络,叠加所述内包络向内可膨胀区域,以及所述外包络向外可膨胀区域,得到所述机器人能自主完成任务的最大包络区域。
其中,上述M可以与N相等,M也可以与N不等,当M等于N时,表示沿训练轨迹向左右两侧膨胀的栅格区域对称,即向训练轨迹的左侧膨胀的栅格层数等于训练轨迹的右侧膨胀的栅格层数;例如,机器人单次覆盖宽度W,根据W=M+N+1可知,M=N=(w-1)/2。当M不等于N时,表示沿训练轨迹向左右两侧膨胀的栅格区域不对称,即向训练轨迹的左侧膨胀的栅格层数不等于训练轨迹的右侧膨胀的栅格层数。
例如,以家用小型扫地(兼拖地功能)机为例进行说明。对于家用小型扫地机通常可以使用app中虚拟方向键控制机器人运动,从而可以完成整个目标区域的清扫遍历,得到清扫目标区域的环境地图及清扫训练轨迹。如图4所示,其中(a)图为需要清扫的目标环境,其中包含复杂障碍物区域。(b)图示出了使用app控制小型扫地机完成遍历清扫的轨迹,可以看到教学(训练)轨迹完全避开了复杂障碍物区域,最终得到(c)图所示的栅格地图,以及(d)图所示的训练轨迹。
之后,基于栅格地图及教学(训练)轨迹,将训练轨迹进行膨胀,可以获取上述机器人能自主完成任务的上述最大包络区域,如图5所示。具体地,在栅格地图上,确定每一条训练轨迹需要外扩的栅格层数,例如,机器人单次覆盖宽度相当于M+N+1层栅格宽度,将训练轨迹分别向左侧扩展M层栅格,将训练轨迹分别向右侧扩展N层栅格。将外扩后区域内的栅格作为训练轨迹虚拟膨胀栅格,将训练轨迹虚拟膨胀栅格的信息导入新建的覆盖区域地图covermap中,在covermap中提取覆盖区域的内包络及外包络,叠加所述内包络向内可膨胀区域,以及所述外包络向外可膨胀区域,最终获取上述机器人能自主完成任务的最大包络区域。从图5中可以看出此最大包络区域完全覆盖了清扫区域且排除了机器人无法完全自主可完成的复杂障碍物区域。
优选地,在步骤S302的生成机器人待执行任务的目标区域之后,还可以包括以下处理:确定上述目标区域的边界,在上述目标区域的边界生成禁区围栏,其中,上述机器人在上述禁区围栏封闭的目标区域内执行任务。
在优选实施过程中,在机器人自主导航和绕障时,某些区域不需要机器人通过,例如,对于机器人执行拖地任务而言,不需要机器人通过地毯区域。例如,如图5所示的目标区域是联合清洁区域,当不需要机器人通过某些区域(即禁区范围)时,需要在目标区域的外部边界以及禁区范围内,设置禁区围栏,如图6中的阴影部分,就可以避免机器人执行任务时超出工作区域。需要说明的是,由于人工教学过程中机器人不会通过地毯、复杂区域、跌落区域等禁区,所以根据训练轨迹可确定相应的禁区范围,从而保证机器人不会通过这些区域。
优选地,步骤S301中,控制机器人遍历上述目标区域执行任务,直至上述机器人完成上述任务可以进一步包括:将上述目标区域划分为多个子区域;控制上述机器人遍历各个上述子区域执行任务,直至上述机器人在上述目标区域内完成上述待执行任务。
在优选实施过程中,当上述目标区域仍具有复杂的轮廓时,为了更利于机器人稳定高效 地执行任务(例如,清扫,拖地等),可以将复杂的轮廓转变为更简单的轮廓,例如,对目标区域进行分区,得到多个子区域。例如,对于图5所示的目标区域,可以将整个目标区域划分为5个子区域,如图7所示。
智能分区可以采用如下方法:将环境栅格地图划分为多个栅格区域,相邻栅格区域部分重叠;在每个栅格区域中查找相交的线段,根据相交的线段确定候选区域;合并落入栅格区域重叠部分的候选区域,得到子区域划分结果。
智能分区也可以采用现有技术的BCD(boustrophedon cellular decomposition)算法来实现,BCD是一种栅格地图的划分方法,对整个目标清扫区域进行自动分区,其分区结果都可以使用牛耕式路径遍历,同时也可以采用回型路径遍历。
优选地,遍历各个上述子区域执行上述待执行任务,直至上述机器人在全部目标区域内完成上述待执行任务可以进一步包括:确定上述机器人对各个上述子区域执行任务的顺序;按照确定的顺序控制上述机器人逐个遍历上述子区域,直至上述机器人在全部上述子区域内完成上述待执行任务。
将上述目标区域划分为多个子区域之后,可以控制机器人在遍历各个上述子区域执行任务,直至上述机器人在上述目标区域内完成上述待执行任务。当然,对于多个机器人而言,可以并行独立在各个子区域内遍历执行任务,从而提高任务执行效率。优选地,也可以控制机器人逐个遍历各个子区域执行任务,即,在一个子区域内遍历完成任务后,再到下一个子区域内遍历完成任务。可以先确定机器人对各个上述子区域执行任务的顺序;在按照确定的顺序控制上述机器人逐个遍历上述子区域,直至上述机器人在全部上述子区域内完成上述待执行任务。例如,先确定机器人执行任务的起始位置,然后确定该位置最临界的是哪几个子区域,然后根据与这几个子区域的最小距离来判定机器人执行任务所经过的区域顺序。也可以根据顺时针或者逆时针方向等来确定各个子区域对应的任务执行顺序。
优选地,控制上述机器人遍历各个上述子区域执行任务,直至上述机器人在上述目标区域内完成上述待执行任务还可以进一步包括以下处理:
S1:根据上述机器人的初始定位信息,确定距离上述机器人最邻近的子区域;
S2:控制上述机器人遍历上述最邻近的子区域执行任务;
S3:在上述最邻近的子区域内完成任务之后,确定上述机器人任务执行完成时结束点的定位信息,确定距离该结束点最邻近的下一个子区域;
S4:循环执行S2至S3,直至上述机器人在上述目标区域的全部子区域内完成上述待执行任务。
例如,排序方式采用最邻近原则,每清扫完成一块区域,根据机器人当前的定位,选择距离机器人最近的区块作为下一个子区域(可以是子区域的边界距离机器人当前的定位最近,也可以是子区域的角点距离机器人当前的定位最近)。
具体地,可以确定机器人当前未执行任务的子区域的边界栅格点,遍历各个边界栅格点(也包括子区域的角点),计算每个边界栅格点与上述结束点的距离,例如,上述结束点在基于世界坐标系的地图中坐标为X,一个边界栅格点在基于世界坐标系的地图中坐标为Y(i),||Y(i)-X||表示Y(i)与X之间的范数,即欧式距离,i为自然数。将距离最近的边界栅格点所属的子区域作为下一个子区域;如果距离上述结束点最近的边界栅格点有多个,可以随机选择一个边界栅格点所属的子区域作为下一个子区域,或者,按照执行任务的方向(例如,顺时针方向或者逆时针方向等),从多个满足条件的子区域中选择一个子区域作为下一个子区 域。
如图8所示,编号为1的子区域清扫完成以后机器人处于图示的结束点位置,则图8中右上侧编号为2的子区域的边界距离结束点最近,则在机器人遍历编号为1的子区域之后,控制机器人遍历上述编号为2的子区域执行任务。循环执行上述步骤,遍历完目标区域中的各个子区域从而在具有复杂轮廓的整个目标区域内完成任务。
优选地,在控制上述机器人遍历目标区域之前,还需要先判断选择遍历方式,例如,如图9中的(a)图所示,可以选择回形遍历方式,如图9中的(b)图所示,也可以选择弓形遍历方式,或者,采用混合型遍历方式(例如,图8中编号1、2、3、5的子区域采用弓形遍历方式、图9中编号4的子区域采用回形遍历方式)。
优选地,在步骤S302的生成机器人待执行任务的所述目标区域之前,还可以包括以下处理:接收指示所述机器人自主遍历所述目标区域的第一请求;载入所述世界地图以及与所界地图绑定的所述相关信息。则步骤S303中,控制所述机器人遍历所述目标区域,直至所述机器人完成所述待执行任务可以进一步包括:在所述目标区域范围内,重新执行遍历式路径规划;所述机器人自主跟踪遍历式路径规划后的路径,对于所述至少一个目标位置信息中的每一个目标位置信息,当所述机器人运动至该目标位置时,所述机器人执行与该目标位置信息对应的业务操作。
在优选实施过程中,当需要执行机器人自主遍历目标区域时,机器人接收到指示该机器人自主遍历所述目标区域的第一请求;载入对目标场景所建的世界地图(例如,2D栅格地图)map1及该世界地图上绑定的训练轨迹、至少一个目标位置信息以及各个所述目标位置信息分别对应的业务操作信息(例如,与map1绑定的list1和list2),首先在map1的基础上,将机器人建图过程中训练轨迹基于机器人自身水平尺寸(例如,机器人单次覆盖宽度等)进行膨胀后合并,形成机器人建图过程中覆盖区域地图covermap,并提取该覆盖区域地图covermap的内、外包络,叠加机器人可拓展膨胀区(例如,内包络向内可膨胀区域,所述外包络向外可膨胀区域)后,形成机器人最终的覆盖区域地图covermap。
机器人基于覆盖区域地图covermap,进行遍历式路径规划,其中,路径规划方式包括:一次性全图路径规划、实时局部路径规划。在重新执行遍历式路径规划后,得到机器人覆盖区域地图的路径规划轨迹。路径规划根据机器人运行方式可以包括:回形遍历方式、弓形遍历方式、及混合型(部分区域采用回形遍历方式、部分区域采用弓形遍历方式)。
机器人基于遍历式路径规划得到的轨迹自动驱动机器人进行运动,由于上述至少一个目标位置信息、以及各个目标位置信息分别对应的业务操作信息,均是与所述世界地图进行绑定的,对于该世界地图绑定的上述至少一个目标位置信息中的每一个目标位置信息,当所述机器人运动至该目标位置时,所述机器人执行与该目标位置的坐标信息对应的业务操作。例如,目标位置A的坐标信息对应的业务操作为转动一圈,当所述机器人运动至目标位置A时,机器人执行转动一圈的业务操作。在机器人最终实现全覆盖目标区域的遍历之后,控制机器人回到初始点。机器人在执行自主遍历全覆盖目标区域的工作过程中,不断实时更新当前的环境地图,使得环境地图反映当前场景的实际情况。
在优选实施过程中,所述记录所述机器人的训练轨迹的步骤可以进一步包括:按照所述机器人遍历路径点的先后顺序,依次记录所述机器人的训练轨迹中各个路径点的位置信息;则在所述目标区域范围内,重新执行遍历式路径规划可以进一步包括:对于所述至少一个目标位置信息对应的所有目标点,按照在所述训练轨迹上目标点之间的距离,和/或,各个所述 目标位置信息分别对应的业务操作信息之间的关联信息,分别将多个目标点划分至一组目标点集合中,得到至少一组目标点集合,并按照所述先后顺序确定所述至少一组目标点集合的每一组目标点集合中目标点的遍历顺序;在重新执行遍历式路径规划时,按照每一组目标点集合中的目标点的遍历顺序,对所述每一组目标点集合中的目标点执行点对点路径规划,或者,在重新规划的路径中保持每一组目标点集合对应的训练轨迹以及每一组目标点集合中的目标点的遍历顺序。
It should be noted that, in a concrete implementation, the business operations corresponding to some target position information form a series of operations that must be executed consecutively; if the logic of these operations is ignored, the business logic of the robot's task may be disrupted. For example, at target position 1 the robot needs to perform the operation of switching on the water-spraying device, and afterwards, at target positions 2, 3 and 4, it needs to perform the spraying operation consecutively. If the path is not planned following this business logic, the robot cannot be guaranteed to complete the task properly. Corresponding strategies can therefore be set up to ensure that the business logic of the robot's task is not disrupted.
Preferably, dividing multiple target points into a target point set according to the distances between target points along the training trajectory and/or the association information between the business operation information corresponding to the respective target position information includes:
for all target points corresponding to the at least one piece of target position information, determining on the training trajectory the distance between every two closest target points, and when the distance between every two closest points among multiple adjacent target points is smaller than a first predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a first predetermined condition;
decomposing the business functions of the robot into at least one business operation (for example, the robot's water-spraying function can be decomposed into business operations such as switching on the spraying device and performing the spraying operation); dividing associated business operations into the same business function group, building in advance a first business function library containing one or more business function groups, matching the recorded business operations corresponding to the respective target position information against the first business function library, grouping the business operations corresponding to the respective target position information according to the matching result, and determining the target points corresponding to the target position information of business operations that fall into the same business function group after grouping as target points satisfying a second predetermined condition;
delimiting the target points satisfying the first predetermined condition and/or the second predetermined condition into one target point set.
That is, three schemes are possible. Scheme 1: for all target points corresponding to the at least one piece of target position information, the multiple target points satisfying the first predetermined condition are delimited into one target point set according to the distances between target points along the training trajectory. For example, when the distances between every two closest points among the multiple adjacent target points 1, 2, 3 and 4 are all smaller than the first predetermined distance threshold, target points 1, 2, 3 and 4 are delimited into the same target point set, and the traversal order of the points in the set {1, 2, 3, 4} is determined according to the order in which the robot originally traversed them on the training trajectory. When the traversal path planning is re-executed, if that traversal order is target point 1, target point 2, target point 3, target point 4, then point-to-point path planning is performed on points 1, 2, 3 and 4 in that order, i.e., the path from point 1 to point 2 is planned first, then the path from point 2 to point 3, and then the path from point 3 to point 4; alternatively, the original training trajectory segment formed by points 1, 2, 3 and 4 is kept in the re-planned path together with its traversal order, i.e., the path still goes from point 1 to point 2, then to point 3 and finally to point 4, rather than in the reverse or any other order.
Scheme 2: the multiple target points satisfying the second predetermined condition are delimited into one target point set according to the association information between the business operation information corresponding to the respective target position information. For example, the recorded business operations 1, 2, 3 and 4 are matched against the pre-built first business function library; business operations 1, 3 and 4 have a certain association and are matched into the same business function group, so the target points corresponding to the target position information of business operations 1, 3 and 4 are determined as target points satisfying the second predetermined condition and are delimited into one target point set. When the traversal path planning is re-executed, if the traversal order of that set is target point 1, target point 3, target point 4, then point-to-point path planning is performed on points 1, 3 and 4 in that order, i.e., the path from point 1 to point 3 is planned first and then the path from point 3 to point 4; alternatively, the original training trajectory segment formed by points 1, 3 and 4 is kept in the re-planned path together with its traversal order, i.e., the path still goes from point 1 to point 3 and then to point 4, rather than in the reverse or any other order.
Scheme 3: only the multiple target points satisfying both the first predetermined condition and the second predetermined condition are delimited into the same target point set, i.e., the intersection of the target points satisfying the first predetermined condition and those satisfying the second predetermined condition is taken. It should be noted that in Scheme 1, if only the distances between target points along the training trajectory are considered and the association between business operations is ignored, the grouping may contain a certain error. Likewise, in Scheme 2, if only the association between business operations is considered and the distances between target points along the training trajectory are ignored, the grouping may also contain a certain error. If the two strategies are combined, i.e., Scheme 3 is adopted and only the target points that satisfy both the first and the second predetermined condition are delimited into one target point set, the grouping error can be reduced as far as possible.
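A minimal Python sketch of Scheme 3 follows, assuming each target point carries its position and the name of its business operation, and that the business function library is a simple mapping from operation name to function group; all names, and the use of straight-line distance in place of along-trajectory distance, are assumptions of this sketch.
    import math

    def group_targets(targets, dist_threshold, function_library):
        """Group consecutive target points by Scheme 3 (distance AND operation association).

        targets:           list of dicts {"id": ..., "pos": (x, y), "op": ...} in taught order.
        dist_threshold:    stand-in for the first predetermined distance threshold.
        function_library:  dict mapping operation name -> business function group id.
        """
        if not targets:
            return []
        groups, current = [], [targets[0]]
        for prev, cur in zip(targets, targets[1:]):
            # straight-line distance used here as a stand-in for the along-trajectory distance
            close = math.dist(prev["pos"], cur["pos"]) < dist_threshold              # first condition
            related = function_library.get(prev["op"]) == function_library.get(cur["op"])  # second condition
            if close and related:
                current.append(cur)       # keep consecutive related targets in the same set
            else:
                groups.append(current)    # otherwise start a new target point set
                current = [cur]
        groups.append(current)
        return groups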
It can thus be seen that, in the phase of controlling the robot's movement and building the environmental map, information such as the robot's training trajectory, the robot's business operations and the robot's target positions can be bound to the environmental map to form an information set. When the robot needs to autonomously traverse the target area, this information set can be loaded and the training trajectory inflated to obtain the maximum envelope area within which the robot can complete the task autonomously. Specifically, the inflated coverage information can be imported into a newly created coverage area map, the inner and outer envelopes of the covered area in that map are extracted, and the robot's inflatable zones are superimposed to obtain the final robot coverage area map; path planning is then re-executed within the coverage range of that map to obtain the optimal coverage trajectory, which avoids repeated coverage. The robot autonomously tracks the path obtained by the traversal path planning, and for each target position that has an associated business operation, when the robot moves to that target position it executes the corresponding business operation. The complete business logic of the robot's full-coverage traversal of the target area is thereby realized.
Preferably, after the training trajectory and the environmental map in the training mode are obtained, the following processing may further be included: receiving a second request instructing the robot to autonomously traverse target positions; loading the world map and the related information bound to the world map; determining the one or more target positions that the robot needs to autonomously traverse in correspondence with the second request; adding the one or more target positions and the business operations corresponding to the one or more target positions to a to-be-executed list; performing a path planning operation on the target points corresponding to the respective target positions in the to-be-executed list to obtain the motion trajectory along which the robot traverses the target positions; the robot tracks the planned motion trajectory, and when the robot moves to the one or more target positions, the robot executes the business operations corresponding to the one or more target positions.
In a preferred implementation, when the robot needs to autonomously traverse target positions, it receives the second request instructing it to do so; the robot loads the map built for the target scene (for example, the above world map) and the related information bound to the map, including the training trajectory, the at least one piece of target position information and the business operation information corresponding to each piece of target position information. The one or more target positions that the robot needs to autonomously traverse in correspondence with the second request are then determined; for example, based on business requirements, the one or more target positions to be traversed may be selected manually or autonomously by the robot. The one or more target positions and their corresponding business operations are added to the to-be-executed list, and the robot performs a path planning operation on the target points corresponding to the respective target positions in the to-be-executed list to obtain the motion trajectory for traversing the target positions; for example, the robot may perform point-to-point path planning on the target scene map according to the sequence formed by the target positions in the to-be-executed list, or perform tracking planning based on the training trajectory of the mapping phase, to obtain the robot's motion trajectory.
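As a stand-in for the point-to-point path planning over the to-be-executed list, the following Python sketch finds a shortest 4-connected path between two cells of an occupancy grid with breadth-first search; it is illustrative only, and the embodiments do not prescribe this particular planner.
    from collections import deque

    def grid_path(free, start, goal):
        """Shortest 4-connected path on an occupancy grid (free[r][c] is True for passable cells)."""
        rows, cols = len(free), len(free[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:                         # reconstruct the path back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc] and (nr, nc) not in prev:
                    prev[(nr, nc)] = (r, c)
                    queue.append((nr, nc))
        return None                                  # no passable route between start and goal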
The robot is automatically driven to move along the motion trajectory obtained by the above path planning. Because the at least one piece of target position information and the business operation information corresponding to each piece of target position information are all bound to the world map, for each piece of target position information bound to that world map, when the robot moves to that target position it executes the business operation corresponding to the coordinate information of that target position. For example, if the business operation corresponding to the coordinate information of target position B is to raise the right arm, then when the robot moves to target position B it performs the raise-right-arm operation. After the robot finally completes the traversal of the target positions, it is controlled to return to the starting point. While the robot performs the autonomous traversal of the target positions, it continuously updates the current environmental map in real time so that the map reflects the actual situation of the current scene.
Preferably, the step of recording the training trajectory of the robot may further include the following processing: recording the position information of each path point in the robot's training trajectory one by one, in the order in which the robot traverses the path points. The step of performing a path planning operation on the target points corresponding to the respective target positions in the to-be-executed list to obtain the motion trajectory along which the robot traverses the target positions may then further include the following processing: for the target points corresponding to the respective target positions in the to-be-executed list, delimiting multiple target points into a target point set according to the distances between target points along the training trajectory and/or the association information between the business operation information corresponding to the respective target positions in the to-be-executed list, obtaining at least one target point set, and determining the traversal order of the target points within each of the at least one target point set according to the above-mentioned order; when performing the path planning operation, for each of the delimited target point sets, performing point-to-point path planning on its target points according to their traversal order, or keeping, in the planned path, the training trajectory segment corresponding to each target point set and the traversal order of the target points in that set.
Preferably, for the target points corresponding to the respective target positions in the to-be-executed list, delimiting multiple target points into a target point set according to the distances between target points along the training trajectory and/or the association information between the business operation information corresponding to the respective target positions in the to-be-executed list may further include:
for the target points corresponding to the respective target positions in the to-be-executed list, determining on the training trajectory the distance between every two closest target points, and when the distance between every two closest points among multiple adjacent target points is smaller than a second predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a third predetermined condition;
decomposing the business functions of the robot into at least one business operation, dividing associated business operations into the same business function group, building in advance a second business function library containing one or more business function groups, matching the business operations corresponding to the respective target positions in the to-be-executed list against the second business function library, grouping the business operations corresponding to the respective target positions in the to-be-executed list according to the matching result, and determining the target points corresponding to the target position information of business operations that fall into the same business function group after grouping as target points satisfying a fourth predetermined condition;
delimiting the target points satisfying the third predetermined condition and/or the fourth predetermined condition into one target point set.
Likewise, in a concrete implementation, the business operations corresponding to some target position information form a series of operations that must be executed consecutively; to prevent the business logic of the robot's task from being disrupted, corresponding strategies can be set up.
For example, three schemes can be adopted:
Scheme 1: for the target points corresponding to the respective target positions in the to-be-executed list, the multiple target points satisfying the third predetermined condition are delimited into one target point set according to the distances between target points along the training trajectory;
Scheme 2: the multiple target points satisfying the fourth predetermined condition are delimited into one target point set according to the association information between the business operation information corresponding to the respective target positions in the to-be-executed list;
Scheme 3: only the multiple target points satisfying both the third predetermined condition and the fourth predetermined condition are delimited into the same target point set, i.e., the intersection of the target points satisfying the third predetermined condition and those satisfying the fourth predetermined condition is taken. It should be noted that in Scheme 1, if only the distances between target points along the training trajectory are considered and the association between business operations is ignored, the grouping may contain a certain error. Likewise, in Scheme 2, if only the association between business operations is considered and the distances between target points along the training trajectory are ignored, the grouping may also contain a certain error. If the two strategies are combined, i.e., Scheme 3 is adopted and only the target points that satisfy both the third and the fourth predetermined condition are delimited into one target point set, the grouping error can be reduced as far as possible.
According to an embodiment of the present application, a robot task execution apparatus is also provided.
Fig. 10 is a structural block diagram of the robot task execution apparatus according to an embodiment of the present application. As shown in Fig. 10, the robot task execution apparatus according to an embodiment of the present application includes: an acquisition module 10, configured to acquire the training trajectory and the environmental map in the training mode; a generation module 12, configured to generate, according to the environmental map and the training trajectory, the target area for the task to be executed by the robot, where the target area is the maximum envelope area within which the robot can complete the task autonomously; and an execution module 14, configured to control the robot to traverse the target area until the robot completes the task to be executed.
Preferably, the execution module 14 may further include: a dividing unit 140 (not shown in Fig. 10), configured to divide the target area into a plurality of sub-areas; and a control unit 142 (not shown in Fig. 10), configured to control the robot to traverse each of the sub-areas to execute the task until the robot completes the task to be executed within the target area.
It should be noted that, for preferred implementations in which the modules and units of the above robot task execution apparatus are combined with one another, reference may be made to the description of Figs. 1 to 9; the implementation and principle are the same and are not repeated here.
According to an embodiment of the present application, a robot is also provided.
The robot according to the present application includes a memory and a processor, where the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions stored in the memory, causing the robot to perform the task execution method provided in the above embodiments. Reference may be made to the description of Figs. 1 to 9; the implementation and principle are the same and are not repeated here.
According to an embodiment of the present application, a computer-readable storage medium is also provided.
The computer-readable storage medium according to the present application stores computer-executable instructions; when a processor executes the computer-executable instructions, the robot task execution method provided in the above embodiments is implemented.
The storage medium containing computer-executable instructions according to the embodiments of the present application can be used to store the computer-executable instructions of the robot task execution method provided in the foregoing embodiments. Reference may be made to the description of Figs. 1 to 9; the implementation and principle are the same and are not repeated here.
In summary, with the embodiments provided by the present application, the maximum envelope area within which the robot can complete the task autonomously (i.e., the target area) is generated based on the training path and the grid map; the target area is intelligently partitioned, the task execution order of the partitioned areas is sorted, and on this basis a traversal mode suitable for each area (for example, the boustrophedon mode or the rectangular-spiral mode) is selected automatically. On this basis, multiple scattered target areas can also be merged, and the merged target area is then split again into multiple independent partitions, realizing a part-whole-part design idea of first combining the pieces into a whole and then breaking the whole back into pieces. The robot can thereby execute tasks (for example, disinfection, sweeping and mopping) stably and efficiently in areas with diverse environments (for example, structurally complex areas), and the approach is applicable to a variety of application scenarios.
Furthermore, in the phase of controlling the robot's movement and building the environmental map, information such as the robot's training trajectory, the robot's business operations and the robot's target positions can be bound to the environmental map to form an information set. When the robot needs to autonomously traverse the target area, this information set can be loaded and, based on the above scheme of obtaining the maximum envelope area within which the robot can complete the task autonomously, path planning can be re-executed within the maximum envelope area to obtain the optimal coverage trajectory and avoid repeated coverage. In addition, when the robot needs to autonomously traverse target positions, point-to-point path planning on the scene map, or trajectory tracking planning based on the motion trajectory of the mapping phase, can be performed on the basis of the target positions and the business operation logic to obtain the robot's motion trajectory.
In addition, during the robot's autonomous movement the environmental map can also be updated in real time, and the motion trajectory can be planned in real time based on the updates of the environmental map.
The above discloses only several specific embodiments of the present application; the present application is, however, not limited thereto, and any variation conceivable to a person skilled in the art shall fall within the protection scope of the present application.

Claims (17)

  1. A robot task execution method, comprising:
    acquiring a training trajectory and an environmental map in a training mode;
    generating, by combining the environmental map and the training trajectory, a target area for a task to be executed by a robot, wherein the target area is a maximum envelope area within which the robot can complete the task autonomously;
    controlling the robot to traverse the target area until the robot completes the task to be executed.
  2. The method according to claim 1, wherein generating the target area for the task to be executed by the robot comprises: inflating, within the environmental map, the training trajectory to obtain the maximum envelope area within which the robot can complete the task autonomously.
  3. The method according to claim 2, wherein inflating, within the environmental map, the training trajectory to obtain the maximum envelope area within which the robot can complete the task autonomously comprises:
    importing grid information of M layers of grids on the left side of the training trajectory and N layers of grids on the right side of the training trajectory into a newly created coverage area map, wherein a single-pass coverage width of the robot is equal to M+N+1, and M and N are both integers greater than or equal to 1;
    extracting an inner envelope and an outer envelope of the covered area in the coverage area map, and superimposing an area into which the inner envelope can be inflated inward and an area into which the outer envelope can be inflated outward, to obtain the maximum envelope area within which the robot can complete the task autonomously.
  4. The method according to claim 1, wherein acquiring the training trajectory and the environmental map in the training mode comprises:
    when the robot is controlled to start moving from an initial position, establishing a world coordinate system based on the initial position, and constructing a world map based on the world coordinate system;
    during movement of the robot, recording the training trajectory of the robot, and recording at least one piece of target position information on the training trajectory and a business operation corresponding to each piece of the target position information;
    binding related information in the world map, wherein the related information comprises the training trajectory, the at least one piece of target position information, and business operation information corresponding to each piece of the target position information, which are bound to the world map.
  5. The method according to claim 4, wherein
    before the target area for the task to be executed by the robot is generated, the method further comprises: receiving a first request instructing the robot to autonomously traverse the target area; and loading the world map and the related information bound to the world map;
    controlling the robot to traverse the target area until the robot completes the task to be executed comprises: re-executing traversal path planning within the target area; and the robot autonomously tracking the path obtained by the traversal path planning, and, for each piece of target position information among the at least one piece of target position information, when the robot moves to that target position, the robot executing the business operation corresponding to that target position information.
  6. The method according to claim 5, wherein
    recording the training trajectory of the robot comprises: recording position information of each path point in the training trajectory of the robot one by one, in an order in which the robot traverses the path points;
    re-executing the traversal path planning within the target area comprises: for all target points corresponding to the at least one piece of target position information, dividing multiple target points into a target point set according to distances between target points along the training trajectory and/or association information between the business operation information corresponding to the respective target position information, to obtain at least one target point set, and determining a traversal order of the target points within each of the at least one target point set according to the order; and, when re-executing the traversal path planning, performing point-to-point path planning on the target points of each target point set according to the traversal order of the target points in that set, or keeping, in the re-planned path, the training trajectory corresponding to each target point set and the traversal order of the target points in that set.
  7. The method according to claim 6, wherein, for all target points corresponding to the at least one piece of target position information, dividing multiple target points into a target point set according to the distances between target points along the training trajectory and/or the association information between the business operation information corresponding to the respective target position information comprises:
    for all target points corresponding to the at least one piece of target position information, determining on the training trajectory a distance between every two closest target points, and when the distance between every two closest points among multiple adjacent target points is smaller than a first predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a first predetermined condition;
    decomposing business functions of the robot into at least one business operation, dividing associated business operations into a same business function group, building in advance a first business function library comprising one or more business function groups, matching the recorded business operations corresponding to the respective target position information against the first business function library, grouping the business operations corresponding to the respective target position information according to a matching result, and determining target points corresponding to the target position information of business operations that belong to the same business function group after grouping as target points satisfying a second predetermined condition;
    delimiting the target points satisfying the first predetermined condition and/or the second predetermined condition into one target point set.
  8. The method according to claim 4, wherein after acquiring the training trajectory and the environmental map in the training mode, the method further comprises:
    receiving a second request instructing the robot to autonomously traverse target positions;
    loading the world map and the related information bound to the world map;
    determining one or more target positions that the robot needs to autonomously traverse in correspondence with the second request;
    adding the one or more target positions and business operations corresponding to the one or more target positions to a to-be-executed list;
    performing a path planning operation on target points corresponding to the respective target positions in the to-be-executed list to obtain a motion trajectory along which the robot traverses the target positions;
    the robot tracking the planned motion trajectory, and, when the robot moves to the one or more target positions, the robot executing the business operations corresponding to the one or more target positions.
  9. The method according to claim 8, wherein
    recording the training trajectory of the robot comprises: recording position information of each path point in the training trajectory of the robot one by one, in an order in which the robot traverses the path points;
    performing the path planning operation on the target points corresponding to the respective target positions in the to-be-executed list to obtain the motion trajectory along which the robot traverses the target positions comprises: for the target points corresponding to the respective target positions in the to-be-executed list, delimiting multiple target points into a target point set according to distances between target points along the training trajectory and/or association information between the business operation information corresponding to the respective target positions in the to-be-executed list, to obtain at least one target point set, and determining a traversal order of the target points within each of the at least one target point set according to the order; and, when performing the path planning operation, for each of the delimited target point sets, performing point-to-point path planning on its target points according to the traversal order corresponding to the target points of that set, or keeping, in the planned path, the training trajectory corresponding to each target point set and the traversal order corresponding to the target points of that set.
  10. The method according to claim 9, wherein, for the target points corresponding to the respective target positions in the to-be-executed list, delimiting multiple target points into a target point set according to the distances between target points along the training trajectory and/or the association information between the business operation information corresponding to the respective target positions in the to-be-executed list comprises:
    for the target points corresponding to the respective target positions in the to-be-executed list, determining on the training trajectory a distance between every two closest target points, and when the distance between every two closest points among multiple adjacent target points is smaller than a second predetermined distance threshold, determining the multiple adjacent target points as target points satisfying a third predetermined condition;
    decomposing business functions of the robot into at least one business operation, dividing associated business operations into a same business function group, building in advance a second business function library comprising one or more business function groups, matching the business operations corresponding to the respective target positions in the to-be-executed list against the second business function library, grouping the business operations corresponding to the respective target positions in the to-be-executed list according to a matching result, and determining target points corresponding to the target position information of business operations that belong to the same business function group after grouping as target points satisfying a fourth predetermined condition;
    delimiting the target points satisfying the third predetermined condition and/or the fourth predetermined condition into one target point set.
  11. The method according to claim 1, wherein after the target area for the task to be executed by the robot is generated, the method further comprises:
    determining an outer boundary of the target area, and determining a forbidden-zone range corresponding to the task to be executed;
    setting a forbidden-zone fence on the outer boundary and within the forbidden-zone range.
  12. The method according to claim 1, wherein controlling the robot to traverse the target area to execute the task until the robot completes the task comprises:
    dividing the target area into a plurality of sub-areas;
    controlling the robot to traverse each of the sub-areas to execute the task until the robot completes the task to be executed within the target area.
  13. The method according to claim 12, wherein traversing each of the sub-areas to execute the task to be executed until the robot completes the task to be executed in the whole target area comprises:
    determining an order in which the robot executes the task in the respective sub-areas;
    controlling the robot to traverse the sub-areas one by one in the determined order until the robot completes the task to be executed in all of the sub-areas.
  14. The method according to claim 12, wherein controlling the robot to traverse each of the sub-areas to execute the task until the robot completes the task to be executed within the target area comprises:
    S1: determining, according to initial positioning information of the robot, a sub-area nearest to the robot;
    S2: controlling the robot to traverse the nearest sub-area to execute the task;
    S3: after the task in the nearest sub-area is completed, determining positioning information of an end point at which the robot finished executing the task, and determining a next sub-area nearest to the end point;
    S4: repeating S2 to S3 until the robot completes the task to be executed in all sub-areas of the target area.
  15. A robot task execution apparatus, comprising:
    an acquisition module, configured to acquire a training trajectory and an environmental map in a training mode;
    a generation module, configured to generate, according to the environmental map and the training trajectory, a target area for a task to be executed by a robot, wherein the target area is a maximum envelope area within which the robot can complete the task autonomously;
    an execution module, configured to control the robot to traverse the target area until the robot completes the task to be executed.
  16. A robot, comprising a memory and a processor, wherein
    the memory is configured to store computer-executable instructions;
    the processor is configured to execute the computer-executable instructions stored in the memory, causing the robot to perform the method according to any one of claims 1 to 14.
  17. A computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the method according to any one of claims 1 to 14 is implemented.
PCT/CN2022/098817 2021-06-18 2022-06-15 Robot task execution method and apparatus, robot, and storage medium WO2022262743A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP22824222.8A EP4357871A1 (en) 2021-06-18 2022-06-15 Robot task execution method and apparatus, robot, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110681640.3A 2021-06-18 2021-06-18 Robot task execution method and apparatus, robot, and storage medium
CN202110681640.3 2021-06-18

Publications (1)

Publication Number Publication Date
WO2022262743A1 true WO2022262743A1 (zh) 2022-12-22

Family

ID=77535299

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/098817 WO2022262743A1 (zh) 2021-06-18 2022-06-15 机器人任务执行方法、装置、机器人及存储介质

Country Status (3)

Country Link
EP (1) EP4357871A1 (zh)
CN (2) CN113359743A (zh)
WO (1) WO2022262743A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118049890A (zh) * 2024-04-15 2024-05-17 山东吉利达智能装备集团有限公司 Training score adjudication system and method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113359743A (zh) * 2021-06-18 2021-09-07 北京盈迪曼德科技有限公司 Robot task execution method and apparatus, robot, and storage medium
CN115250720A (zh) * 2022-07-12 2022-11-01 松灵机器人(深圳)有限公司 Mowing method and apparatus, mowing robot, and storage medium
CN116300976B (zh) * 2023-05-22 2023-07-21 汇智机器人科技(深圳)有限公司 Robot multi-task operation planning method and system, and application thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109358650A (zh) * 2018-12-14 2019-02-19 国网冀北电力有限公司检修分公司 Inspection path planning method and apparatus, unmanned aerial vehicle, and computer-readable storage medium
US20200110603A1 (en) * 2018-10-03 2020-04-09 Teco Electric & Machinery Co., Ltd. Expandable mobile platform
CN111562784A (zh) * 2020-04-24 2020-08-21 上海思岚科技有限公司 Disinfection method and device for a mobile disinfection robot
CN112306067A (zh) * 2020-11-13 2021-02-02 湖北工业大学 Global path planning method and system
CN112462780A (zh) * 2020-11-30 2021-03-09 深圳市杉川致行科技有限公司 Sweeping control method and apparatus, sweeping robot, and computer-readable storage medium
CN112612273A (zh) * 2020-12-21 2021-04-06 南方电网电力科技股份有限公司 Obstacle-avoidance path planning method, system, device and medium for an inspection robot
CN112904845A (zh) * 2021-01-15 2021-06-04 珠海市一微半导体有限公司 Robot stuck detection method, system and chip based on a wireless ranging sensor
CN113359743A (zh) * 2021-06-18 2021-09-07 北京盈迪曼德科技有限公司 Robot task execution method and apparatus, robot, and storage medium


Also Published As

Publication number Publication date
CN115097823A (zh) 2022-09-23
EP4357871A1 (en) 2024-04-24
CN113359743A (zh) 2021-09-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22824222; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022824222; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 18571017; Country of ref document: US)
ENP Entry into the national phase (Ref document number: 2022824222; Country of ref document: EP; Effective date: 20240118)