WO2022262743A1 - Robot task execution method and apparatus, robot, and storage medium - Google Patents
- Publication number: WO2022262743A1 (application PCT/CN2022/098817)
- Authority: WO (WIPO PCT)
- Prior art keywords: target, robot, area, task, points
- Prior art date
Classifications
- G05D1/2297—Command input data, e.g. waypoints; positional data taught by the user, e.g. paths
- G05D1/0219—Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory ensuring the processing of the whole working surface
- G05D1/2464—Arrangements for determining position or orientation using environment maps, e.g. simultaneous localisation and mapping [SLAM], using an occupancy grid
- G05D1/2295—Command input data defining restricted zones, e.g. no-flight zones or geofences
- G05D1/6482—Performing a task within a working area or space, e.g. cleaning, by dividing the whole area or space in sectors to be processed separately
- G05D2105/10—Specific applications of the controlled vehicles for cleaning, vacuuming or polishing
- G05D2107/40—Indoor domestic environment
- G05D2109/10—Land vehicles
Definitions
- The present application relates to the field of communications, and in particular to a robot task execution method and apparatus, a robot, and a storage medium.
- Cleaning robots mainly perform cleaning work in household and typical commercial scenes; their core task is to traverse and clean a target scene.
- The core capability of a cleaning robot is reflected in its path planning; in particular, full-coverage path planning reflects the robot's intelligence and its market application prospects.
- In the related art, full-coverage path planning for cleaning robots mainly adopts the following two methods:
- Teaching-based full-coverage method: the user first teaches (trains) a path that fully covers the target area, and the cleaning robot tracks the taught path exactly to complete the full-coverage cleaning process. Since the robot's path is determined by a human, the taught path can easily avoid structured, complex scenes. This solution is generally applied to structured complex scenes, such as shelves or overhead obstacles, as shown in Figure 1.
- Boundary-based full-coverage method: a boundary is first taught (trained), and the cleaning robot performs fully autonomous full-coverage planning inside the enveloped area under the boundary constraint, then cleans along the planned path. However, because the environment inside the boundary envelope is uncertain, it is difficult for the robot to achieve complete coverage in a complex structured scene. This solution is usually applied to relatively simple structured scenes (open lobbies, squares, etc.), see Figure 2.
- The first method adapts well to the environment, but once teaching is completed the path cannot be changed, and duplicated coverage of some areas caused by human error leads to low efficiency and unnecessary wear of the cleaning equipment.
- The second method is only suitable for scenes with a relatively simple structure; its adaptability is weak and its application scenarios are very limited.
- The main purpose of this application is to disclose a robot task execution method, an apparatus, a robot, and a storage medium, so as to at least solve the problems of low cleaning efficiency, weak adaptability, and limited application scenarios of the full-coverage path planning methods in the related art.
- a robot task execution method is provided.
- The robot task execution method includes: acquiring a training trajectory and an environment map in a training mode; generating, according to the environment map and the training trajectory, a target area in which the robot is to perform a task, where the target area is the maximum enveloping area in which the robot can complete the task autonomously; and controlling the robot to traverse the target area until the robot completes the task to be performed.
- a robot task performing device is provided.
- The robot task execution device includes: an acquisition module, configured to acquire a training trajectory and an environment map in a training mode; a generation module, configured to generate, according to the environment map and the training trajectory, a target area in which the robot is to perform a task, where the target area is the maximum enveloping area in which the robot can complete the task autonomously; and an execution module, configured to control the robot to traverse the target area until the robot completes the task to be executed.
- a robot is provided.
- The robot according to the present application includes a memory and a processor. The memory is used to store computer-executable instructions, and the processor is used to execute the computer-executable instructions stored in the memory, so that the robot performs any one of the above methods.
- a computer-readable storage medium is provided.
- The computer-readable storage medium stores computer-executable instructions which, when executed by a processor, cause the robot to perform any one of the above methods.
- In the present application, the training trajectory and the environment map in the teaching (training) mode are obtained, and the target area of the task to be performed by the robot (such as cleaning or floor washing) is generated automatically, where the target area is the maximum enveloping area in which the robot can complete the task autonomously.
- The robot is then controlled to traverse the target area until it completes the task to be performed. This solves the problems in the related art of low efficiency of the manual teaching method and of weak adaptability and limited application scenarios of the boundary-based full-coverage method. Tasks can be performed stably and efficiently in various environment areas, and the solution can be applied to various application scenarios.
- Fig. 1 is a schematic diagram of path planning based on the teaching method of the related art;
- Fig. 2 is a schematic diagram of path planning based on the boundary full-coverage method of the related art;
- Fig. 3 is a flowchart of the robot task execution method according to an embodiment of the present application;
- Fig. 4 is a schematic diagram of acquiring an environment map and a training trajectory in the teaching mode according to a preferred embodiment of the present application;
- Fig. 5 is a schematic diagram of the target area of the task to be executed according to a preferred embodiment of the present application;
- Fig. 6 is a schematic diagram of the restricted area and the boundary of the target area according to a preferred embodiment of the present application;
- Fig. 7 is a schematic diagram of the divided regions according to a preferred embodiment of the present application;
- Fig. 8 is a schematic diagram of the robot traversing sub-areas to perform a task according to a preferred embodiment of the present application;
- Fig. 9 is a schematic diagram of two traversal modes within a sub-area according to a preferred embodiment of the present application;
- Fig. 10 is a structural block diagram of the robot task execution device according to an embodiment of the present application.
- a robot task execution method is also provided.
- Fig. 3 is a flowchart of a robot task execution method according to an embodiment of the present application. As shown in Fig. 3, the robot task execution method includes:
- Step S301: obtain the training trajectory and the environment map in the training mode;
- Step S302: according to the environment map and the training trajectory, generate the target area of the task to be performed by the robot, where the target area is the maximum enveloping area in which the robot can complete the task autonomously;
- Step S303: control the robot to traverse the target area until the robot completes the task to be executed.
- In the scheme shown in Fig. 3, the training trajectory and the environment map are first obtained in the teaching (training) mode, and are combined to automatically generate the target area in which the robot is to perform a task (such as cleaning or floor washing), where the target area is the maximum enveloping area in which the robot can complete the task autonomously.
- The robot is then controlled to traverse the target area until it completes the task to be performed. In this way the robot can perform tasks stably and efficiently in various environment areas, and the scheme can be applied to various application scenarios.
- The aforementioned environment map may be a world map established based on a world coordinate system, and may take the form of a 2D grid map or a 3D grid map.
- Obtaining the training trajectory and the environment map in the training mode may further include: when the robot is controlled to start moving from its initial position, establishing a world coordinate system based on the initial position and constructing a world map based on that coordinate system; during the movement of the robot, recording the training trajectory of the robot, at least one piece of target position information on the training trajectory, and the business operation corresponding to each piece of target position information (for example: turning, moving an arm, spraying water, vacuuming, going around an obstacle, turning on a corresponding device, etc.); and binding the relevant information to the world map, where the relevant information includes the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information.
- As shown in Fig. 4, the robot establishes a world coordinate system based on the initial position O and builds a world map based on that coordinate system. The robot is controlled to start moving from the initial position O (for example, by remote control), covers the work scene partially or traverses it fully according to the business function requirements, runs to the target position A, and executes the corresponding task at target position A.
- The robot perceives external information through multiple sensors installed at different positions on its body, such as radar sensors, vision sensors, infrared sensors, ultrasonic sensors, and collision sensors, and projects each sensor's data vertically onto the 2D plane to fill the grid map. Alternatively, a 3D grid map can be created according to the height of the robot body and reduced in dimension when used, compressing the 3D grid map into a 2D grid map. The grid map created by the robot is recorded as map1, and the robot's training trajectory, the target position A, and the business operations performed during the movement are bound to map1 according to the robot's real-time coordinate records.
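The 3D-to-2D compression described above can be sketched as follows. This is a minimal illustration using NumPy; the grid layout and the rule that any occupied voxel in a vertical column marks the 2D cell occupied are assumptions, not the patent's exact method:

```python
import numpy as np

def flatten_to_2d(grid3d):
    """Compress a 3D occupancy grid (rows, cols, height) into a 2D grid:
    a 2D cell is occupied if any voxel in its vertical column is occupied."""
    return grid3d.any(axis=2).astype(np.int8)

# a 4x4 map with 3 height layers and one obstacle voxel at (1, 2, height 2)
g = np.zeros((4, 4, 3), dtype=np.int8)
g[1, 2, 2] = 1
flat = flatten_to_2d(g)  # obstacle survives the projection at cell (1, 2)
```

The projection is lossy by design: height information is discarded, which matches the patent's idea of using a 2D map for planning while building in 3D.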
- The robot builds an environment map (for example, the above world map) based on the target scene.
- Target positions such as A, B, and C can be selected manually based on business needs; the number of target positions can be greater than 0 and less than or equal to the number of business requirements.
- The training trajectory of the robot can be recorded, for example by recording the coordinate information of each path point on the training trajectory in list1. At least one piece of target position information on the training trajectory, together with the business operation corresponding to each piece of target position information, is also recorded; for example, the coordinate information of target position A and the business operation corresponding to target position A (in the form of a business operation code, etc.) are recorded in list2. After that, list1 and list2 are each bound to map1.
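The recorded structures might look like the following sketch. All field names and operation codes here are illustrative assumptions; the patent does not specify a storage format:

```python
# list1: way points of the training trajectory, in traversal order (world frame)
list1 = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.2)]

# list2: target positions and the business operation bound to each;
# the operation codes are purely illustrative
list2 = [
    {"pos": (1.0, 0.2), "op": "SPRAY_WATER"},
    {"pos": (2.3, 1.1), "op": "TURN_ONE_CIRCLE"},
]

# map1: the grid map built in teaching mode, with list1 and list2 bound to it
map1 = {
    "grid": [[0] * 10 for _ in range(10)],  # 10x10 occupancy grid
    "track": list1,
    "targets": list2,
}
```

Binding the trajectory and operations to the map keeps everything needed for later autonomous replay in one place, which is the property the method relies on.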
- After completing the mapping of the target scene and recording the training trajectory of the robot, the at least one piece of target position information on the training trajectory, and the business operation corresponding to each piece of target position information, the robot is controlled to return to the initial position.
- Generating the target area of the task to be performed by the robot may further include: in the environment map, expanding the training trajectory to obtain the maximum enveloping area in which the robot can complete the task autonomously.
- Specifically, the grid information of the M layers of grids on the left side of the training trajectory and the N layers of grids on the right side of the training trajectory is imported into a newly created coverage area map, where the single coverage width of the robot is equal to M + N + 1 grid cells, and M and N are both integers greater than or equal to 1.
- Then the inner envelope and the outer envelope of the coverage area are extracted from the coverage area map; the area obtained by superimposing the inward expandable area of the inner envelope and the outward expandable area of the outer envelope is the maximum enveloping area in which the robot can complete the task autonomously.
- M can be equal to N, or M can differ from N.
- When M is equal to N, the grid area expanded to the left and right sides along the training trajectory is symmetrical, that is, the number of grid layers expanded to the left of the training trajectory equals the number expanded to the right; when M is not equal to N, the expansion is asymmetrical, that is, the number of grid layers expanded to the left of the training trajectory is not equal to the number expanded to the right.
- In the following, a small household sweeping (and mopping) machine is taken as an example.
- Fig. 4(a) shows the target environment that needs to be cleaned, which contains complex obstacle areas.
- Fig. 4(b) shows the trajectory obtained by using the app to control the small sweeper to complete the traversal cleaning. It can be seen that the teaching (training) trajectory completely avoids the complex obstacle areas. The grid map shown in Fig. 4(c) and the training trajectory shown in Fig. 4(d) are finally obtained.
- Then the training trajectory is expanded to obtain the maximum enveloping area in which the robot can complete the task autonomously, as shown in Fig. 5.
- Specifically, based on the grid map, the number of grid layers that needs to be expanded on each side of the training trajectory is determined.
- The single coverage width of the robot is equivalent to the width of M + N + 1 grid layers; the training trajectory is expanded by M layers of grids to the left and by N layers of grids to the right.
- The grids in the expanded area serve as the virtual expansion grids of the training trajectory, and their information is imported into a newly created coverage area map, covermap. The inner and outer envelopes of the coverage area are extracted from covermap, and the inward expandable area of the inner envelope and the outward expandable area of the outer envelope are superimposed to finally obtain the maximum enveloping area in which the robot can complete the task autonomously. As can be seen from Fig. 5, the maximum enveloping area completely covers the cleaning area and excludes the complex obstacle areas in which the robot cannot operate fully autonomously.
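A minimal sketch of the expansion step. For simplicity it assumes "left" and "right" map to the -x/+x grid directions; in practice they are relative to the robot's heading along the track, and the envelope extraction that follows is omitted:

```python
import numpy as np

def expand_track(track_cells, shape, m, n):
    """Build a coverage area map: mark each training-track cell plus M grid
    layers on one side and N on the other (single coverage width M + N + 1)."""
    cover = np.zeros(shape, dtype=np.int8)
    for r, c in track_cells:
        lo = max(0, c - m)             # M layers to the "left"
        hi = min(shape[1] - 1, c + n)  # N layers to the "right"
        cover[r, lo:hi + 1] = 1
    return cover

# a short vertical track expanded with M=1, N=2 -> a 4-cell-wide band
covermap = expand_track([(2, 5), (3, 5)], shape=(8, 12), m=1, n=2)
```

With M = 1 and N = 2 the band is asymmetric, illustrating the M ≠ N case described above.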
- After the target area is generated, the following processing may also be performed: determining the boundary of the target area and generating a restricted-area fence at the boundary, where the robot executes tasks within the target area enclosed by the restricted-area fence.
- When the robot navigates and avoids obstacles autonomously, certain areas do not require the robot to pass through; for example, when the robot performs a floor-mopping task it does not need to pass through a carpeted area.
- The target area shown in Fig. 5 is a connected cleaning area.
- When the robot does not need to pass through certain areas (i.e., restricted areas), restricted-area fences are set up on the outer boundary of the target area and around the restricted areas, as shown in Fig. 6.
- The shaded parts prevent the robot from leaving the working area when performing tasks.
- The range of the corresponding restricted area can be determined according to the training trajectory, so as to ensure that the robot will not pass through these areas.
- Controlling the robot to traverse the target area to perform the task may further include: dividing the target area into a plurality of sub-areas; and controlling the robot to traverse each sub-area to perform the task until the robot completes the task to be performed within the target area.
- By partitioning, a complex outline can be transformed into simpler outlines; for example, the target area is partitioned to obtain multiple sub-areas. For the target area shown in Fig. 5, the entire target area may be divided into five sub-areas, as shown in Fig. 7.
- Intelligent partitioning can adopt the following method: divide the environment grid map into multiple grid areas, with adjacent grid areas partially overlapping; find the intersecting line segments in each grid area and determine candidate areas according to the intersecting line segments; and merge the candidate areas in the overlapping parts of the grid areas to obtain the sub-area division result.
- For example, partitioning may use boustrophedon cellular decomposition (BCD).
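A deliberately simplified sketch of boustrophedon cellular decomposition: sweep the grid column by column and open a new cell whenever a column's free intervals differ from those of the previous column. A real implementation handles interval splits and merges more carefully; everything here is an illustrative assumption, not the patent's algorithm:

```python
import numpy as np

def free_intervals(col):
    """Maximal runs of free (0) cells in one column, as (start, end) pairs."""
    runs, start = [], None
    for i, v in enumerate(col):
        if v == 0 and start is None:
            start = i
        elif v != 0 and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(col)))
    return runs

def bcd_cells(grid):
    """Sweep left to right; an interval that matches one in the previous
    column extends that cell, otherwise a new cell is opened."""
    cells, active = [], {}
    for x in range(grid.shape[1]):
        new_active = {}
        for iv in free_intervals(grid[:, x]):
            cid = active.get(iv)
            if cid is None:          # connectivity changed: open a new cell
                cid = len(cells)
                cells.append([])
            cells[cid].append((x, iv))
            new_active[iv] = cid
        active = new_active
    return cells

# a free 5x6 area with a 1x2 obstacle splits into 4 cells
# (left of the obstacle, above it, below it, right of it)
grid = np.zeros((5, 6), dtype=np.int8)
grid[2, 2:4] = 1
cells = bcd_cells(grid)
```

Each resulting cell has a simple outline that a single bow-shaped sweep can cover, which is exactly why the decomposition helps with complex contours.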
- Traversing each sub-area to perform the task to be executed may further include: determining the order in which the robot performs the task in each sub-area; and controlling the robot to traverse the sub-areas one by one until the robot completes the task in all sub-areas.
- In an implementation, the robot may be controlled to traverse each of the sub-areas to perform the task until the robot completes the task to be performed in the target area.
- The task can also be traversed and executed independently in each sub-area in parallel, thereby improving task execution efficiency.
- The robot can also be controlled to traverse the sub-areas one by one: after completing the task in one sub-area, it moves on to traverse the next sub-area.
- For this purpose, the order in which the robot performs the task in each sub-area can be determined first; the robot is then controlled to traverse the sub-areas one by one in the determined order until it completes the task in all sub-areas. For example, first determine the starting position of the robot, then determine which sub-areas are closest to this position, and then determine the sequence of areas according to the minimum distance to these sub-areas.
- Alternatively, the task execution order of the sub-areas may be determined in a clockwise or counterclockwise direction.
- Controlling the robot to traverse each sub-area to perform the task, until the robot completes the task to be executed in the target area, may further include the following processing:
- S1: according to the initial positioning information of the robot, determine the sub-area closest to the robot;
- S2: control the robot to traverse this nearest sub-area to perform the task;
- S3: after the current sub-area is completed, determine, according to the robot's current positioning, the nearest sub-area among those in which the task has not yet been performed;
- S4: execute S2 to S3 cyclically until the robot completes the task to be executed in all sub-areas of the target area.
- In other words, the ordering adopts the nearest-neighbour principle: every time an area is cleaned, the block closest to the robot's current position is selected as the next sub-area (closeness may be measured from the boundary of the sub-area or from a corner point of the sub-area to the robot's current position).
- Specifically, for the sub-areas in which the robot has not yet performed the task, each boundary grid point (including the corner points of the sub-area) is traversed, and the distance between each boundary grid point and the robot's end point is calculated. For example, if the coordinate of the end point in the world-coordinate-system map is X and the coordinate of a boundary grid point is Y(i), the distance between Y(i) and X is the Euclidean distance, where i is a natural number.
- When multiple sub-areas satisfy the condition, one sub-area is selected from them as the next sub-area.
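The nearest-neighbour selection can be sketched as follows; the data layout (sub-area boundaries as lists of grid points keyed by an area id) is an illustrative assumption:

```python
import math

def next_subarea(end_point, unvisited):
    """Pick the unvisited sub-area whose boundary grid point Y(i) has the
    smallest Euclidean distance to the robot's end point X."""
    best_id, best_d = None, float("inf")
    for area_id, boundary in unvisited.items():
        d = min(math.hypot(end_point[0] - y[0], end_point[1] - y[1])
                for y in boundary)
        if d < best_d:
            best_id, best_d = area_id, d
    return best_id

# end point at (5, 5); area 2's corner (6, 6) is closer than anything in area 3
areas = {
    2: [(6, 6), (6, 10)],
    3: [(0, 0), (0, 5)],
}
chosen = next_subarea((5, 5), areas)
```

Using boundary points (including corners) matches the two closeness criteria mentioned above: the minimum over all boundary points naturally covers both cases.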
- As shown in Fig. 8, after the robot traverses the sub-area numbered 1 and reaches the end point shown in the figure, the boundary of the sub-area numbered 2 on the upper right side of Fig. 8 is the closest to the end point, so the robot is controlled to traverse the sub-area numbered 2 to perform the task. The above steps are executed cyclically, and each sub-area in the target area is traversed so as to complete the task in the entire target area with a complex contour.
- Within a sub-area, the bow-shaped traversal mode can be selected, or a mixed traversal mode can be used (for example, the sub-areas numbered 1, 2, 3, and 5 in Fig. 8 adopt the bow-shaped traversal mode, while the remaining sub-area adopts the other traversal mode).
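For a rectangular sub-area, the bow-shaped traversal can be sketched as alternating left-to-right and right-to-left lanes spaced by the robot's coverage width. This is an illustrative sketch; a real planner follows the sub-area's actual contour:

```python
def bow_path(x0, y0, x1, y1, lane_width):
    """Bow-shaped (boustrophedon) waypoints covering the axis-aligned
    rectangle [x0, x1] x [y0, y1], with lanes spaced by lane_width."""
    points, y, left_to_right = [], y0, True
    while y <= y1:
        row = [(x0, y), (x1, y)]
        points.extend(row if left_to_right else row[::-1])
        left_to_right = not left_to_right  # reverse direction each lane
        y += lane_width
    return points

path = bow_path(0, 0, 4, 2, 1)  # three lanes over a 4x2 rectangle
```

Setting the lane spacing to the single coverage width (M + N + 1 grid cells in the notation above) gives full coverage without redundant overlap.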
- Before generating the target area in step S302, the following processing may be performed: receiving a first request instructing the robot to autonomously traverse the target area; and loading the world map and the relevant information bound to the world map. Then, in step S303, controlling the robot to traverse the target area until the robot completes the task to be performed may further include: re-executing traversal path planning within the range of the target area; and, for each piece of target position information in the at least one piece of target position information, when the robot moves to that target position, executing the business operation corresponding to that target position information.
- Specifically, when the robot needs to autonomously traverse the target area, it receives the first request instructing it to do so. The world map built for the target scene (for example, the 2D grid map map1) and the information bound to it, namely the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information (for example, list1 and list2 bound to map1), are loaded. On the basis of map1, the training trajectory recorded during mapping is expanded according to the robot's own lateral size (for example, the robot's single coverage width) and the expansions are merged to form the coverage area map covermap, from which the inner and outer envelopes are extracted.
- The expandable areas of the envelopes (the inner envelope expandable inward, the outer envelope expandable outward) are superimposed, and the final coverage area map covermap of the robot is formed.
- The robot then performs traversal path planning based on the coverage area map covermap.
- The path planning methods include one-time full-map path planning and real-time local path planning. After the traversal path planning is re-executed, the planned trajectory over the robot's coverage area map is obtained.
- The path planning may use a loop-shaped traversal mode, a bow-shaped traversal mode, or a mixed mode (the loop-shaped traversal mode in some areas and the bow-shaped traversal mode in others).
- The robot is then driven to move along the trajectory obtained by the traversal path planning. Since the at least one piece of target position information and the corresponding business operation information are bound to the world map, for each piece of target position information bound to the world map, when the robot moves to that target position it executes the business operation corresponding to the coordinate information of that position. For example, if the business operation bound to target position A is to turn one full circle, then when the robot reaches target position A it performs that operation. After the robot has traversed the full-coverage target area, it is controlled to return to the initial point. During the autonomous traversal, the robot continuously updates the current environment map in real time, so that the environment map reflects the actual situation of the current scene.
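The dispatch step above, executing the bound business operation on arrival at a target position, might look like this. The tolerance check and field names are illustrative assumptions:

```python
def dispatch_ops(robot_pos, targets, tol=0.1):
    """Execute (here: collect) the business operation bound to any target
    position within `tol` of the robot's current position, once each."""
    executed = []
    for t in targets:
        dx = robot_pos[0] - t["pos"][0]
        dy = robot_pos[1] - t["pos"][1]
        if (dx * dx + dy * dy) ** 0.5 <= tol and not t.get("done"):
            executed.append(t["op"])  # in a real robot: hand off to the driver
            t["done"] = True          # fire each bound operation only once
    return executed

targets = [{"pos": (1.0, 0.2), "op": "TURN_ONE_CIRCLE"}]
ops = dispatch_ops((1.0, 0.25), targets)  # within tolerance of target A
```

A position tolerance is needed because the autonomously planned trajectory will pass near, not exactly through, the taught target coordinates.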
- The step of recording the training trajectory of the robot may further include: recording the position information of each way point of the training trajectory in sequence, according to the order in which the robot traverses the way points.
- Re-executing the traversal path planning may further include: for all target points corresponding to the at least one piece of target position information, dividing multiple target points that satisfy a predetermined condition into one target point set according to the distance between target points on the training trajectory and/or the association between the business operation information corresponding to the respective target position information, thereby obtaining at least one target point set; determining, according to the recorded sequence, the traversal order of the target points within each target point set; and, when re-executing the traversal path planning, performing point-to-point path planning for the target points in each set according to that traversal order, or maintaining the training trajectory corresponding to each target point set.
- the business operations corresponding to some target positions form a series that must be executed coherently; if their logic is ignored, the business logic of the robot's task execution may be disrupted. For example, at target position 1 the robot needs to turn on the sprinkler, and then at target positions 2, 3 and 4 it needs to spray water continuously. If path planning does not respect this order, there is no guarantee that the robot can complete the task successfully. Corresponding policies can therefore be set to keep the business logic of the robot's task execution intact.
- dividing the multiple target points into a target point set includes: determining, on the training trajectory, the distance between every two closest target points; when the distances between every two closest of multiple adjacent target points are all less than a first predetermined distance threshold, determining those adjacent target points as target points satisfying the first predetermined condition;
- the robot's business functions are decomposed to obtain at least one business operation.
- for example, the robot's water-spraying function can be decomposed into business operations such as turning on the spraying device and performing the spraying operation.
- business operations with an association relationship are divided into the same business function group, and a first business function library containing one or more business function groups is pre-established. The recorded business operations corresponding to each piece of target position information are matched against the first business function library, the business operations are grouped according to the matching results, and the target points corresponding to the target positions whose operations fall in the same business function group are determined as target points satisfying the second predetermined condition;
- the target points satisfying the first predetermined condition and/or the second predetermined condition are delineated into one target point set.
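A minimal sketch of the first predetermined condition, assuming (as an approximation introduced here) that the Euclidean gap between consecutive recorded targets stands in for their distance along the training trajectory; the function name is illustrative:

```python
def group_by_distance(targets, threshold):
    """Walk targets in recorded order; chain consecutive targets whose gap is
    below `threshold` into one set (the "first predetermined condition"),
    starting a new set whenever the gap is too large."""
    groups = []
    current = [targets[0]]
    for prev, cur in zip(targets, targets[1:]):
        gap = ((cur[0] - prev[0]) ** 2 + (cur[1] - prev[1]) ** 2) ** 0.5
        if gap < threshold:
            current.append(cur)
        else:
            groups.append(current)
            current = [cur]
    groups.append(current)
    return groups
```

Close-together targets end up in one set and are later planned as a unit, preserving their recorded order.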
- Scheme 1: for all target points corresponding to the at least one piece of target position information, the target points satisfying the first predetermined condition can be selected according to the distance between target points on the training trajectory and delineated into one target point set. For example, when the distance between every two closest of the adjacent target points 1, 2, 3 and 4 is less than the first predetermined distance threshold, these four adjacent target points are delineated into the same set, and the traversal order of the target points in the set is determined by the order in which the robot originally traversed them on the training trajectory. When the traversal path planning is re-executed and the traversal order is: target point 1, target point 2, target point 3, target point 4, point-to-point path planning is executed in that order, that is, first planning the path from target point 1 to target point 2, then from target point 2 to target point 3, and so on.
- Scheme 2: the target points satisfying the second predetermined condition among all target points are delineated into one target point set. For example, the recorded business operations 1, 2, 3 and 4 are matched against the pre-established first business function library; business operations 1, 3 and 4 have an association relationship and are matched to the same business function group, so the target points corresponding to the target positions of business operations 1, 3 and 4 are determined as target points satisfying the second predetermined condition and are delineated into one target point set.
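The second predetermined condition can be sketched as a lookup against a pre-built function library, modeled here (purely as an assumption) as a plain dictionary mapping an operation to its function group:

```python
def group_by_function_library(operations, function_library):
    """Collect targets whose business operations match the same business
    function group ("second predetermined condition"). `operations` maps
    target -> operation; `function_library` maps operation -> group name."""
    groups = {}
    for target, operation in operations.items():
        group = function_library.get(operation)
        if group is not None:  # operations outside the library stay ungrouped
            groups.setdefault(group, []).append(target)
    return groups
```

With a "water-spraying" group containing the sprinkler and spray operations, targets 1, 3 and 4 of the example above would land in one set.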
- information such as the robot's training trajectory, business operations, and target positions can be bound to the environment map to form an information set.
- when the robot needs to traverse the target area autonomously, it can load the above information set and expand the training trajectory to obtain the largest envelope area in which the robot can complete the task autonomously.
- the expanded coverage area information is imported into a newly created coverage area map, the inner and outer envelopes of the coverage area are extracted from that map, and after superimposing the robot's expandable area, the final robot coverage area map is obtained; traversal then takes place within the coverage area of that map.
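A possible grid-level sketch of the expansion step: dilate the trajectory cells layer by layer into free space, stopping at obstacles. This is an assumption-laden approximation of the envelope construction (the `layers` parameter loosely plays the role of the M/N grid layers of claim 3), not the patent's exact inner/outer-envelope extraction:

```python
def expand_trajectory(grid, trajectory_cells, layers):
    """Dilate the trained trajectory cells by `layers` 4-connected grid cells
    into free space (0 = free, 1 = obstacle), approximating the maximal
    envelope area the robot can cover autonomously."""
    rows, cols = len(grid), len(grid[0])
    covered = set(trajectory_cells)
    frontier = set(trajectory_cells)
    for _ in range(layers):
        nxt = set()
        for r, c in frontier:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                        and (nr, nc) not in covered:
                    covered.add((nr, nc))
                    nxt.add((nr, nc))
        frontier = nxt
    return covered
```

One layer of dilation around a single trajectory cell yields that cell plus its four free neighbors.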
- the robot autonomously tracks the path produced by the traversal path planning and, for each target position with a bound business operation, executes that operation when it moves to the position. In this way, the complete business logic for the robot to traverse the full-coverage target area is realized.
- the following processing may also be included: receiving a second request instructing the robot to traverse target positions autonomously; loading the world map and the related information bound to it; determining the one or more target positions that the robot needs to traverse autonomously corresponding to the second service request; adding those target positions and the business operations corresponding to them to a to-be-executed list; performing path planning on the target points corresponding to each target position in the list to obtain the trajectory for the robot to traverse the target positions; and having the robot track the planned trajectory, executing the business operations corresponding to the one or more target positions when it moves to them.
- when the robot needs to traverse target positions autonomously, it receives a second request instructing it to do so; the robot loads the map built for the target scene (for example, the above world map) together with the related information bound to the map, which includes the training trajectory, at least one piece of target position information, and the business operation information corresponding to each piece. After that, the one or more target positions that the robot needs to traverse autonomously corresponding to the second service request are determined.
- the one or more target positions to traverse can be selected manually or autonomously by the robot. These positions and their corresponding business operations are added to the to-be-executed list, and the robot performs path planning on the target points corresponding to each target position in the list.
- the robot can perform point-to-point path planning on the target scene map following the order of the target positions in the to-be-executed list, or plan by tracking the training trajectory recorded during the mapping stage, to obtain its motion trajectory.
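Point-to-point planning through the to-be-executed list can be sketched by chaining shortest grid paths between consecutive targets. BFS stands in here for whatever planner the robot actually uses; all names are illustrative:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free)."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

def plan_through_list(grid, start, to_execute):
    """Chain point-to-point plans through the to-be-executed list in order."""
    full = [start]
    for target in to_execute:
        segment = bfs_path(grid, full[-1], target)
        full.extend(segment[1:])  # drop the duplicated joint point
    return full
```

Each segment starts where the previous one ended, so the robot visits the listed positions in their recorded order.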
- the robot drives itself along the motion trajectory obtained by the path planning above. Because the at least one piece of target position information, and the business operation information corresponding to each piece, are bound to the world map, for each target position bound to the map, the robot executes the business operation corresponding to that position's coordinate information when it moves to the position. For example, if the business operation corresponding to the coordinates of target position B is raising the right arm, the robot raises its right arm when it reaches target position B. After the robot has traversed the target positions, it is controlled to return to its initial point. While autonomously traversing the target positions, the robot continuously updates the current environment map in real time so that the map reflects the actual state of the scene.
- the above step of recording the robot's training trajectory may further include: recording the position information of each waypoint of the training trajectory in the order in which the robot traverses the waypoints;
- the step of performing the path planning operation to obtain the trajectory for traversing the target positions may further include: for the target points corresponding to each target position in the to-be-executed list, delineating multiple target points into target point sets according to the distance between target points on the training trajectory and/or the association information between the business operations corresponding to the respective positions in the list, obtaining at least one target point set, and determining, according to the recording order, the traversal order of the target points within each set. When performing the path planning operation, point-to-point path planning is executed on the target points of each delineated set in that traversal order, or the training trajectory corresponding to each set, together with that traversal order, is kept in the planned path.
- delineating multiple target points into target point sets according to the association information may further include: determining, on the training trajectory, the distance between every two closest target points; when the distances between every two closest of multiple adjacent target points are all less than a second predetermined distance threshold, determining those adjacent target points as target points satisfying the third predetermined condition;
- the robot's business functions are decomposed to obtain at least one business operation, operations with an association relationship are divided into the same business function group, and a second business function library containing one or more business function groups is pre-established. The business operations corresponding to each target position in the to-be-executed list are matched against the second business function library, grouped according to the matching results, and the target points corresponding to the positions whose operations fall in the same business function group are determined as target points satisfying the fourth predetermined condition;
- the target points satisfying the third predetermined condition and/or the fourth predetermined condition are delineated into one target point set.
- Scheme 2: according to the association information between the business operations corresponding to the target positions in the to-be-executed list, multiple target points satisfying the fourth predetermined condition are delineated into one target point set;
- A robot task execution device is also provided.
- Fig. 10 is a structural block diagram of a robot task execution device according to an embodiment of the present application.
- the robot task execution device includes: an acquisition module 10, configured to acquire the training trajectory and the environment map in the training mode; a generation module 12, configured to generate, from the environment map and the training trajectory, the target area of the task to be executed by the robot, the target area being the largest envelope area in which the robot can complete the task autonomously; and an execution module 14, configured to control the robot to traverse the target area until the robot completes the task to be executed.
- the execution module 14 may further include: a dividing unit 140 (not shown in Fig. 10), configured to divide the target area into multiple sub-areas; and a control unit 142 (not shown in Fig. 10), configured to control the robot to traverse each sub-area to execute the task until the robot completes the task to be executed within the target area.
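The module structure of Fig. 10 can be sketched as a thin orchestration class; the class name and the callables are placeholders introduced here, not the patent's actual implementation:

```python
class RobotTaskExecutionDevice:
    """Structural sketch of the device: acquisition, generation and
    execution modules wired together; each callable stands in for a module."""

    def __init__(self, acquire, generate, execute):
        self.acquire = acquire    # module 10: training trajectory + env map
        self.generate = generate  # module 12: maximal-envelope target area
        self.execute = execute    # module 14: traverse until task complete

    def run(self):
        trajectory, env_map = self.acquire()
        target_area = self.generate(env_map, trajectory)
        return self.execute(target_area)
```

The point of the sketch is only the data flow: the generation module consumes what the acquisition module produced, and the execution module consumes the generated target area.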
- A robot is also provided.
- the robot according to the present application includes a memory and a processor; the memory is configured to store computer-executable instructions, and the processor is configured to execute the instructions stored in the memory so that the robot performs the task execution method provided by the above embodiments.
- A computer-readable storage medium is also provided.
- the computer-readable storage medium stores computer-executable instructions; when a processor executes them, the robot task execution method provided in the above embodiments is realized.
- the storage medium containing computer-executable instructions in the embodiments of the present application can be used to store the instructions of the robot task execution method provided in the foregoing embodiments; for details, refer to the preceding figure descriptions, which are not repeated here.
- based on the training path and the grid map, the largest envelope area in which the robot can complete the task autonomously (that is, the target area) is generated; the target area is intelligently partitioned, the task execution order of the partitioned areas is sorted, and on this basis a traversal method suited to each area is selected automatically (for example, a bow-shaped traversal or a loop-shaped traversal). On this basis, multiple scattered target areas are merged, and the merged target area is then split into multiple independent partitions, realizing a merge-then-split design idea.
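For a rectangular sub-area, the bow-shaped traversal mentioned above reduces to sweeping rows while alternating direction; a minimal sketch (function name illustrative, grid-cell model assumed):

```python
def boustrophedon_cells(rows, cols):
    """Bow-shaped (boustrophedon) visiting order over a rectangular
    sub-area's grid cells: sweep each row, reversing direction on
    alternate rows so consecutive cells stay adjacent."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order
```

Adjacent cells in the returned order differ by one grid step, which is what makes the pattern suitable for full-coverage cleaning passes.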
- in this way, the robot can stably and efficiently execute tasks (such as disinfection, cleaning, and floor mopping) in various environment areas (for example, structured and complex environment areas), and the scheme can be applied to a wide range of application scenarios.
- information such as the robot's training trajectory, business operations, and target positions can be bound to the environment map to form an information set.
- when the robot needs to traverse the target area autonomously, the information set can be loaded and, based on the above scheme for obtaining the largest envelope area in which the robot can complete the task autonomously, the path planning can be re-executed within that envelope area to obtain an optimal coverage trajectory, avoiding the problem of repeated coverage.
- when the robot needs to traverse target positions autonomously, it can, based on the target positions and the business operation logic, perform point-to-point path planning on the scene map or trajectory-tracking planning based on the motion trajectory from the mapping stage, to obtain its motion trajectory.
- in addition, the environment map can be updated in real time, and the motion trajectory can be re-planned in real time based on the updated map.
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Claims (17)
- A robot task execution method, comprising: acquiring a training trajectory and an environment map in a training mode; generating, by combining the environment map and the training trajectory, a target area of a task to be executed by a robot, wherein the target area is the largest envelope area in which the robot can complete the task autonomously; and controlling the robot to traverse the target area until the robot completes the task to be executed.
- The method according to claim 1, wherein generating the target area of the task to be executed by the robot comprises: expanding the training trajectory within the environment map to obtain the largest envelope area in which the robot can complete the task autonomously.
- The method according to claim 2, wherein expanding the training trajectory within the environment map to obtain the largest envelope area in which the robot can complete the task autonomously comprises: importing the grid information of M layers of grids on the left side of the training trajectory and N layers of grids on the right side of the training trajectory into a newly created coverage area map, wherein the robot's single-pass coverage width equals M+N+1, and M and N are both integers greater than or equal to 1; and extracting the inner envelope and the outer envelope of the coverage area in the coverage area map, and superimposing the inwardly expandable area of the inner envelope and the outwardly expandable area of the outer envelope to obtain the largest envelope area in which the robot can complete the task autonomously.
- The method according to claim 1, wherein acquiring the training trajectory and the environment map in the training mode comprises: when controlling the robot to start moving from an initial position, establishing a world coordinate system based on the initial position and building a world map based on the world coordinate system; during the robot's movement, recording the training trajectory of the robot, at least one piece of target position information on the training trajectory, and the business operation corresponding to each piece of target position information; and binding related information to the world map, wherein the related information comprises the training trajectory, the at least one piece of target position information, and the business operation information corresponding to each piece of target position information.
- The method according to claim 4, further comprising, before generating the target area of the task to be executed by the robot: receiving a first request instructing the robot to traverse the target area autonomously; and loading the world map and the related information bound to the world map; wherein controlling the robot to traverse the target area until the robot completes the task to be executed comprises: re-executing traversal path planning within the target area; and having the robot autonomously track the planned path, and, for each piece of target position information in the at least one piece of target position information, executing the business operation corresponding to that target position information when the robot moves to that target position.
- The method according to claim 5, wherein recording the training trajectory of the robot comprises: recording the position information of each waypoint of the training trajectory in the order in which the robot traverses the waypoints; and re-executing traversal path planning within the target area comprises: for all target points corresponding to the at least one piece of target position information, dividing multiple target points into target point sets according to the distance between target points on the training trajectory and/or the association information between the business operation information corresponding to the respective target positions, obtaining at least one target point set, and determining, according to the recording order, the traversal order of the target points in each target point set; and, when re-executing the traversal path planning, performing point-to-point path planning on the target points of each set in that traversal order, or keeping, in the re-planned path, the training trajectory corresponding to each set and the traversal order of its target points.
- The method according to claim 6, wherein dividing multiple target points into target point sets comprises: for all target points corresponding to the at least one piece of target position information, determining on the training trajectory the distance between every two closest target points, and, when the distances between every two closest of multiple adjacent target points are all less than a first predetermined distance threshold, determining those adjacent target points as target points satisfying a first predetermined condition; decomposing the robot's business functions to obtain at least one business operation, dividing business operations with an association relationship into the same business function group, pre-establishing a first business function library containing one or more business function groups, matching the recorded business operations corresponding to each piece of target position information against the first business function library, grouping the business operations according to the matching results, and determining the target points corresponding to the target positions whose operations fall in the same business function group as target points satisfying a second predetermined condition; and delineating the target points satisfying the first predetermined condition and/or the second predetermined condition into one target point set.
- The method according to claim 4, further comprising, after acquiring the training trajectory and the environment map in the training mode: receiving a second request instructing the robot to traverse target positions autonomously; loading the world map and the related information bound to the world map; determining one or more target positions that the robot needs to traverse autonomously corresponding to the second service request; adding the one or more target positions and the business operations corresponding to them to a to-be-executed list; performing a path planning operation on the target points corresponding to each target position in the to-be-executed list to obtain the motion trajectory for the robot to traverse the target positions; and having the robot track the planned motion trajectory and execute the business operations corresponding to the one or more target positions when moving to them.
- The method according to claim 8, wherein recording the training trajectory of the robot comprises: recording the position information of each waypoint of the training trajectory in the order in which the robot traverses the waypoints; and performing the path planning operation on the target points corresponding to each target position in the to-be-executed list to obtain the motion trajectory comprises: for those target points, delineating multiple target points into target point sets according to the distance between target points on the training trajectory and/or the association information between the business operation information corresponding to the respective target positions in the to-be-executed list, obtaining at least one target point set, and determining, according to the recording order, the traversal order of the target points in each set; and, when performing the path planning operation, performing point-to-point path planning on the target points of each delineated set in their traversal order, or keeping, in the planned path, the training trajectory corresponding to each set and the traversal order of its target points.
- The method according to claim 9, wherein delineating multiple target points into target point sets comprises: for the target points corresponding to each target position in the to-be-executed list, determining on the training trajectory the distance between every two closest target points, and, when the distances between every two closest of multiple adjacent target points are all less than a second predetermined distance threshold, determining those adjacent target points as target points satisfying a third predetermined condition; decomposing the robot's business functions to obtain at least one business operation, dividing business operations with an association relationship into the same business function group, pre-establishing a second business function library containing one or more business function groups, matching the business operations corresponding to each target position in the to-be-executed list against the second business function library, grouping them according to the matching results, and determining the target points corresponding to the target positions whose operations fall in the same business function group as target points satisfying a fourth predetermined condition; and delineating the target points satisfying the third predetermined condition and/or the fourth predetermined condition into one target point set.
- The method according to claim 1, further comprising, after generating the target area of the task to be executed by the robot: determining the outer boundary of the target area and the forbidden-zone range corresponding to the task to be executed; and setting forbidden-zone fences on the outer boundary and within the forbidden-zone range.
- The method according to claim 1, wherein controlling the robot to traverse the target area to execute the task until the robot completes the task comprises: dividing the target area into multiple sub-areas; and controlling the robot to traverse each sub-area to execute the task until the robot completes the task to be executed within the target area.
- The method according to claim 12, wherein traversing each sub-area to execute the task to be executed until the robot completes it within the whole target area comprises: determining the order in which the robot executes the task in the sub-areas; and controlling the robot to traverse the sub-areas one by one in the determined order until the robot completes the task in all the sub-areas.
- The method according to claim 12, wherein controlling the robot to traverse each sub-area to execute the task until the robot completes the task within the target area comprises: S1: determining, from the robot's initial positioning information, the sub-area nearest to the robot; S2: controlling the robot to traverse that nearest sub-area to execute the task; S3: after the task in that sub-area is completed, determining the positioning information of the end point at task completion and the next sub-area nearest to that end point; and S4: repeating S2 to S3 until the robot completes the task to be executed in all sub-areas of the target area.
- A robot task execution device, comprising: an acquisition module, configured to acquire a training trajectory and an environment map in a training mode; a generation module, configured to generate, from the environment map and the training trajectory, a target area of a task to be executed by a robot, wherein the target area is the largest envelope area in which the robot can complete the task autonomously; and an execution module, configured to control the robot to traverse the target area until the robot completes the task to be executed.
- A robot, comprising a memory and a processor, wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the instructions stored in the memory so that the robot performs the method according to any one of claims 1 to 14.
- A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the method according to any one of claims 1 to 14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22824222.8A EP4357871A1 (en) | 2021-06-18 | 2022-06-15 | Robot task execution method and apparatus, robot, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110681640.3A CN113359743A (zh) | 2021-06-18 | 2021-06-18 | 机器人任务执行方法、装置、机器人及存储介质 |
CN202110681640.3 | 2021-06-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022262743A1 (zh) | 2022-12-22 |
Family
ID=77535299
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/098817 WO2022262743A1 (zh) | 2021-06-18 | 2022-06-15 | 机器人任务执行方法、装置、机器人及存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4357871A1 (zh) |
CN (2) | CN113359743A (zh) |
WO (1) | WO2022262743A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118049890A (zh) * | 2024-04-15 | 2024-05-17 | 山东吉利达智能装备集团有限公司 | 一种训练成绩裁决系统及方法 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113359743A (zh) * | 2021-06-18 | 2021-09-07 | 北京盈迪曼德科技有限公司 | 机器人任务执行方法、装置、机器人及存储介质 |
CN115250720A (zh) * | 2022-07-12 | 2022-11-01 | 松灵机器人(深圳)有限公司 | 割草方法、装置、割草机器人以及存储介质 |
CN116300976B (zh) * | 2023-05-22 | 2023-07-21 | 汇智机器人科技(深圳)有限公司 | 一种机器人多任务作业规划方法、系统及其应用 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109358650A (zh) * | 2018-12-14 | 2019-02-19 | 国网冀北电力有限公司检修分公司 | 巡检路径规划方法、装置、无人机和计算机可读存储介质 |
US20200110603A1 (en) * | 2018-10-03 | 2020-04-09 | Teco Electric & Machinery Co., Ltd. | Expandable mobile platform |
CN111562784A (zh) * | 2020-04-24 | 2020-08-21 | 上海思岚科技有限公司 | 移动消毒机器人的消毒方法及设备 |
CN112306067A (zh) * | 2020-11-13 | 2021-02-02 | 湖北工业大学 | 一种全局路径规划方法及系统 |
CN112462780A (zh) * | 2020-11-30 | 2021-03-09 | 深圳市杉川致行科技有限公司 | 扫地控制方法、装置、扫地机器人及计算机可读存储介质 |
CN112612273A (zh) * | 2020-12-21 | 2021-04-06 | 南方电网电力科技股份有限公司 | 一种巡检机器人避障路径规划方法、系统、设备和介质 |
CN112904845A (zh) * | 2021-01-15 | 2021-06-04 | 珠海市一微半导体有限公司 | 基于无线测距传感器的机器人卡住检测方法、系统及芯片 |
CN113359743A (zh) * | 2021-06-18 | 2021-09-07 | 北京盈迪曼德科技有限公司 | 机器人任务执行方法、装置、机器人及存储介质 |
- 2021-06-18: CN application CN202110681640.3 filed (published as CN113359743A, pending)
- 2022-06-15: PCT application PCT/CN2022/098817 filed (published as WO2022262743A1)
- 2022-06-15: EP application EP22824222.8 (published as EP4357871A1, pending)
- 2022-06-15: CN application CN202210674876.9 filed (published as CN115097823A, pending)
Also Published As
Publication number | Publication date |
---|---|
CN115097823A (zh) | 2022-09-23 |
EP4357871A1 (en) | 2024-04-24 |
CN113359743A (zh) | 2021-09-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22824222 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022824222 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18571017 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2022824222 Country of ref document: EP Effective date: 20240118 |