CN106774345B - Method and equipment for multi-robot cooperation - Google Patents

Method and equipment for multi-robot cooperation

Info

Publication number
CN106774345B
CN106774345B
Authority
CN
China
Prior art keywords
robot
target object
robots
cooperation
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710067320.2A
Other languages
Chinese (zh)
Other versions
CN106774345A (en)
Inventor
戴萧何
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xianruan Information Technology Co., Ltd.
Original Assignee
Shanghai Xianruan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xianruan Information Technology Co., Ltd.
Priority to CN201710067320.2A
Publication of CN106774345A
Application granted
Publication of CN106774345B
Current legal status: Active

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259: Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0263: Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic strips
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1679: Programme controls characterised by the tasks executed
    • B25J9/1682: Dual arm manipulator; Coordination of several manipulators
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0287: Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0289: Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling with means for avoiding collisions between vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Electromagnetism (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application aims to provide a method and equipment for multi-robot cooperation: a robot acquires a cooperation instruction matched with itself from a network device, and executes the corresponding multi-robot cooperation task based on that instruction. Compared with the prior art, the multiple independent robots performing a cooperation task jointly execute the corresponding multi-robot cooperation task based on the cooperation instructions acquired from the corresponding network device. According to the application, driven by the needs of a specific scene, multiple independent robots can be flexibly combined through the cooperation instructions sent by the network device, so that the combined robots can work cooperatively on tasks with a large workload or a complex division of work, which facilitates the decomposition of complex work and the optimization of overall resources.

Description

Method and equipment for multi-robot cooperation
Technical Field
The application relates to the field of computers, in particular to a technology for multi-robot cooperation.
Background
Most existing robots work independently as single units: a single robot moves on its own and carries goods on its own. Because a single robot is limited in equipment scale and functionality, the tasks it can complete are relatively simple; for tasks with a large workload or high complexity, a single robot either cannot complete them at all or completes them unsatisfactorily. For example, in transport operations, bulky objects often require several robots to carry and move them in cooperation. However, the prior art lacks a technique for effectively combining multiple independent robots to jointly perform the same task or set of tasks.
Disclosure of Invention
The application aims to provide a method and equipment for multi-robot cooperation.
According to one aspect of the application, a method for multi-robot cooperation at a robot end is provided, and the method comprises the following steps:
acquiring a cooperation instruction matched with the robot from the network equipment;
and executing the corresponding multi-robot cooperative task based on the cooperative instruction.
According to another aspect of the present application, there is also provided a method for multi-robot collaboration on a network device side, including:
providing the matched collaboration instructions to one or more robots, wherein the robots perform corresponding multi-robot collaboration tasks based on the respective collaboration instructions.
According to another aspect of the present application, there is also provided a robot performing multi-robot collaboration, including:
the first device is used for acquiring a cooperation instruction matched with the robot from the network equipment;
and the second device is used for executing the corresponding multi-robot cooperative task based on the cooperative instruction.
According to another aspect of the present application, there is also provided a network device for multi-robot collaboration, including:
and the fourth device is used for providing the matched collaboration instructions for one or more robots, wherein the robots execute corresponding multi-robot collaboration tasks based on the corresponding collaboration instructions.
According to another aspect of the present application, there is also provided a system for multi-robot collaboration, wherein the system includes the robot performing multi-robot cooperation described above and the network device for multi-robot cooperation described above.
Compared with the prior art, in the present application the multiple independent robots performing a cooperation task jointly execute the corresponding multi-robot cooperation task based on the cooperation instructions acquired from the corresponding network device. Driven by the needs of a specific scene, multiple independent robots can be flexibly combined through the cooperation instructions sent by the network device, so that the combined robots can work cooperatively on tasks with a large workload or a complex division of work, which facilitates the decomposition of complex work and the optimization of overall resources.
Further, in one implementation of the present application, the robots may implement multi-robot formation movement: the cooperating robots may, based on their matched cooperation instructions, move to a destination location or follow a target object, thereby moving in formation. This implementation flexibly and effectively supports the various cooperative tasks that require multiple robots to move in formation, such as cooperative moving and carrying tasks.
Further, in an implementation of the present application, after the robot obtains the cooperation instruction, the target object the robot is to follow is determined; the target object is then identified in the scene the robot captures in real time; finally, based on the cooperation instruction, the robot is controlled to move toward the target object along the corresponding movement path. Compared with existing robot-following techniques, the application can accurately lock onto the target object and track it effectively in a natural environment that changes in real time and contains many interfering factors. This improves the accuracy of robot following and addresses the technical problem that current robots often follow the wrong target or lose the target. At the same time, controlling each robot to move toward its target object along the corresponding path based on the cooperation instruction realizes, as a whole, the mutually coordinated formation movement of multiple robots.
Further, in an implementation manner, based on the cooperation instruction, the robot is controlled to move to the target object according to the corresponding moving path, wherein the relative position between the robot and the target object is matched with the formation state information of the multiple robots in the cooperation instruction, and the relative distance between the second robot and the first robot is included in a preset relative distance range threshold value. In the method, the queue shape of the multiple robots in the multi-robot cooperative task can be controlled through the cooperative instruction, or the relative positions of the robots are specifically controlled, so that the cooperative work matching degree between the robots is higher, and the completion efficiency of the cooperative task is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method for multi-robot collaboration at a robot side and a network device side in accordance with an aspect of the subject application;
FIG. 2 illustrates a flow diagram of a method for multi-robot collaboration at a robot end in accordance with an aspect of the subject application;
FIG. 3 illustrates a system diagram for multi-robot collaboration in accordance with an aspect of the subject application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 illustrates a flow chart of a method for multi-robot collaboration at a robot side and a network device side according to an aspect of the application.
Wherein the method comprises step S11, step S12 and step S21.
The embodiment of the application provides a method for multi-robot cooperation, which can be realized at a corresponding robot end and network device end. The robot includes various kinds of machine equipment capable of automatically executing work: equipment with a moving function, a carrying and loading function, or other functions, or equipment combining several of these functions, for example various artificial intelligence devices with moving and carrying functions. In the present application, the multiple robots performing the same cooperative task may have the same or different functions. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server, where the cloud server may be a virtual supercomputer running on a distributed system and composed of a group of loosely coupled computers, providing simple, efficient, safe, reliable computing services with scalable processing capacity. In the following, the robot is referred to as the robot 1 and the network device as the network device 2.
Specifically, in step S21, the network device 2 may provide the matched cooperation instructions to one or more robots 1, and the robots 1 execute the corresponding multi-robot cooperation tasks based on their respective instructions. Correspondingly, in step S11, each robot 1 acquires the cooperation instruction matched with itself from the network device 2. Here, the multi-robot cooperative task may be any of various tasks performed cooperatively by the multiple robots 1: for example, the robots 1 move synchronously while keeping similar distances; or several robots 1 jointly carry the same object; or several robots 1 assemble the components of one object. In one implementation, the network device 2 may match corresponding cooperation instructions to different robots 1 based on the type of cooperative task or the specific cooperative operation.
In one implementation, the cooperation instruction may include at least any one of: formation state information of the multiple robots; a speed control rule for the robot; coordinate information of a target object the robot is to follow; and other execution-related information for the robot.
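For concreteness, such a cooperation instruction could be modeled as a simple data structure like the following sketch. The patent does not specify any format, so every field name here is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class CooperationInstruction:
    """Illustrative container for the instruction fields listed above."""
    formation_state: Optional[str] = None   # e.g. "single_column", "single_row", "multi_column"
    speed_rule: Optional[dict] = None       # e.g. {"cruise": 1.0, "gap_range": (1.0, 2.0)}
    target_coordinates: Optional[Tuple[float, float]] = None  # object to follow, if any
    extra: dict = field(default_factory=dict)  # other execution-related information
```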
Specifically, take as an example a scene in which multiple robots 1 move synchronously while keeping similar distances, or jointly carry the same object. In one implementation, the network device 2 may specify, through the cooperation instruction, the formation state information each robot 1 needs to maintain while moving, for example keeping one column, one row, or multiple columns. In another implementation, the network device 2 may also control the running speed of each cooperating robot 1 through a cooperation instruction carrying a speed control rule, adjusting the distances between the robots 1 and thereby controlling the movement of the whole queue. In still another implementation, the network device 2 may provide one or more robots 1 with the coordinate information of the target object to follow, either once when the moving operation starts or in real time during the movement, depending on the setting.
Take as another example a scene in which multiple robots 1 assemble the components of one object. The cooperation instructions may then include speed control rules for moving each robot 1 to its corresponding assembly position, the coordinate information of each robot's target position, and information on the robot's assembly steps. For other cooperative tasks, the cooperation instructions are likewise adapted to the specific needs of the task.
In one implementation, the network device 2 may send the cooperation instructions to all robots 1 of a cooperative task simultaneously and uniformly; in another implementation, it may send a cooperation instruction to any one or more robots 1 at any time. The cooperation instructions of the multiple robots 1 in the same cooperative task may be identical, entirely different, or partially the same; for example, in a synchronous-movement scene where multiple robots 1 keep similar distances in a queue, the instruction of the robot 1 at the head of the queue may differ from the instructions of the other robots 1 in the queue.
Next, in step S12, the robot 1 may execute the corresponding multi-robot cooperative task based on the cooperation instruction. In one implementation, the robots 1 need not communicate with each other directly to carry out the cooperative task; instead, the network device 2 controls the one or more cooperating robots 1 in real time through the cooperation instructions, and each robot 1 executes its instruction to realize the task. In one implementation, the network device 2 may issue only the instructions necessary for the robots 1 to cooperate with each other, while operations that require no coordination are performed by each robot 1 independently. For example, in a scene where multiple robots 1 move synchronously at similar distances or jointly carry the same object, the network device 2 may control the overall formation keeping and the running speed of the queue through the cooperation instructions, while the specific following operations of each robot 1, such as determining and identifying the object to follow, may be set and performed by each robot 1 itself.
In the present application, multiple independent robots 1 performing a cooperative task may jointly execute the corresponding multi-robot cooperative task based on the cooperation instructions acquired from the corresponding network device 2. According to the requirements of a specific scene, the independent robots can be flexibly combined through the cooperation instructions sent by the network device 2, so that the combined robots can work cooperatively on tasks with a large workload or a complex division of work, which facilitates the decomposition of complex work and the optimization of overall resources.
In one implementation of step S12, the robot 1 may be controlled, based on the cooperation instruction, to move to the destination location or the target object along the corresponding movement path. The multi-robot cooperative task of the present application may be one that requires multiple robots to move in formation, for example several robots 1 moving synchronously at similar distances, or several robots 1 jointly carrying the same object. Specifically, in one implementation, the robot 1 may be controlled, based on the cooperation instruction, to move to a destination location along the corresponding path: for example, one or more robots 1 at the head of the queue may have no specific target object but instead a destination location to reach. In another implementation, the robot 1 may be controlled, based on the cooperation instruction, to move toward a target object along the corresponding path: for example, one or more robots 1 at the head of the queue may have a tracked object, such as a moving person or thing, while a robot 1 not at the head of the queue follows a target robot, which may be the closest robot in front of it or another robot preset or determined by the cooperation instruction.
In this implementation, the robots 1 may be used to realize multi-robot formation movement: the cooperating robots 1 may, based on their matched cooperation instructions, move to a destination location or follow a target object, thereby moving in formation. This flexibly and effectively supports the various cooperative tasks that must be realized through the formation movement of multiple robots, such as cooperative moving and carrying tasks.
Further, FIG. 2 illustrates a flow diagram of a method for multi-robot collaboration at a robot end in accordance with an aspect of the subject application. The method comprises steps S11 and S12, and step S12 further comprises steps S121, S122 and S123.
Specifically, in step S121, the robot 1 may determine the target object it is to follow. In one implementation, the target object is a target robot, and the robot and its corresponding target robot jointly carry the same transport object; in this case the cooperative task corresponds to a cooperative moving and carrying task. The robot 1 needs to determine the target object it will follow at the start of the cooperative task.
In one implementation of step S121, when the robot 1 is set to following mode, it may identify a corresponding matching object from the surrounding information it captures in real time and take that matching object as the target object to follow. The following mode of the robot 1 may be initiated by a preset trigger operation. Once following mode begins, the robot 1 captures surrounding information in real time; in one implementation, raw data of the surroundings is acquired by one or more sensing devices on the robot 1 and may take the form of images, pictures, or point clouds. The robot 1 then detects, from the raw data, objects of the class that is to be followed; one or more objects in the environment may belong to this class. A classifier is trained in advance by machine learning: feature information extracted from scan data of that object class is fed to the classifier, and objects of the class are then detected in the environment information by comparison. Since there are often several objects of the class, the matching object is the one selected from them to serve as the target object.
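The detection step above might be sketched as follows, assuming scan data has already been reduced to fixed-length feature vectors; the scikit-learn classifier and the hypothetical extract_features helper stand in for whatever classifier and feature pipeline an implementation actually trains.

```python
import numpy as np
from sklearn.svm import SVC

def train_follow_class_detector(feature_vectors, labels):
    """Offline step: fit a classifier on labeled scan-data features (1 = followable class)."""
    clf = SVC(probability=True)
    clf.fit(np.asarray(feature_vectors), np.asarray(labels))
    return clf

def detect_followable_objects(clf, scanned_objects, extract_features, threshold=0.5):
    """Online step: keep the scanned objects the classifier assigns to the followed class."""
    detected = []
    for obj in scanned_objects:
        features = extract_features(obj).reshape(1, -1)   # hypothetical feature extraction
        if clf.predict_proba(features)[0, 1] >= threshold:  # column 1 = class label 1
            detected.append(obj)
    return detected
```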
Further, in one implementation, the matching object may include, but is not limited to, at least any one of: the object closest to the robot 1 among its surroundings; the object closest to the robot 1 in front of it; the object closest to the robot 1 directly in front of it; an object in the surroundings that matches the feature information of the object to be followed; the object in the surroundings that best matches that feature information; or, among several surrounding objects matching the feature information of the object to be followed, the one closest to the robot. In one implementation, the object feature information may include, but is not limited to, one or more of the position information, motion state information, and body feature information of the object to be followed.
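One of these selection rules, best feature match with proximity as tie-breaker, could look like the following sketch; the candidate representation and the feature_distance function are assumptions.

```python
import math

def pick_matching_object(robot_xy, candidates, feature_distance=None):
    """Select the target from detected candidates of the followed class.

    `candidates` is a list of (position, features) pairs. With no feature
    template yet, fall back to the nearest-object rule; otherwise rank by
    feature similarity first and use distance to the robot to break ties.
    """
    def euclid(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    if feature_distance is None:
        return min(candidates, key=lambda c: euclid(robot_xy, c[0]))
    return min(candidates, key=lambda c: (feature_distance(c[1]), euclid(robot_xy, c[0])))
```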
Further, in one implementation of step S121, the robot 1 may determine the coordinate information of the target object to follow based on the cooperation instruction. The robot 1 then acquires its surrounding environment information in real time, where its distance to the coordinates is less than or equal to a predetermined distance threshold; it then identifies a corresponding matching object from the surrounding information and takes it as the target object to follow. The coordinate information may be absolute or relative. The robot 1 obtains the surrounding environment information by scanning; if its distance to the coordinates is within the predetermined threshold, a matching object consistent with the coordinate information can be identified from the environment information and set as the target object.
Further, in one implementation, if, when the robot 1 obtains the cooperation instruction, the distance between its position and the position of the object to follow is greater than the predetermined distance threshold, the application provides the following solution: while the distance between the robot 1 and the coordinates exceeds the threshold, the robot 1 is controlled to move toward the coordinates so as to reduce that distance; during the movement, the robot's surrounding environment information is acquired in real time, and once the distance is less than or equal to the threshold, a corresponding matching object can be identified from the surroundings and taken as the target object the robot 1 is to follow, as in the sketch below.
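A minimal sketch of this approach-then-acquire behavior, assuming a hypothetical robot interface; none of these method names come from the patent.

```python
def acquire_follow_target(robot, goal_xy, distance_threshold):
    """Move toward the instructed coordinates, then identify the target.

    `robot` is a hypothetical interface: distance_to() gives the distance to
    a coordinate, move_towards() takes one motion step toward it, scan()
    returns the current surroundings, and identify_match() picks the object
    consistent with the coordinates out of a scan.
    """
    surroundings = robot.scan()
    while robot.distance_to(goal_xy) > distance_threshold:
        robot.move_towards(goal_xy)      # reduce the gap first
        surroundings = robot.scan()      # keep observing while closing in
    return robot.identify_match(surroundings, goal_xy)
```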
Next, in step S122, the robot 1 may identify the target object in the scene it captures in real time. Because every object in the environment keeps changing state while the robot 1 moves, the robot 1 must repeat the identification of the target object again and again against the environment as it changes in real time. In one implementation, the robot 1 may periodically scan the surroundings to obtain real-time environment data, detect in that data all objects belonging to the same class as the target object, and finally identify the matching target object from the detection results of one scanning period or several consecutive periods.
specifically, in one implementation manner, in step S122, the robot 1 may scan and acquire the ambient environment information of the robot 1 in real time; then, one or more observed objects matching the object feature information of the target object may be detected from the ambient environment information, where the object feature information of the one or more observed objects determined by the current environmental information scan may be similar matched to the stored object feature information of the target object since the target object determined by the latest target object recognition operation and the corresponding object feature information thereof have been stored, for example, in the form of a historical observation record, where the object feature information of the observed object or the target object determined by the current environmental information scan may include, but is not limited to, any one of the following: location information of the object; motion state information of the object; body characteristic information of the object and the like, wherein the position information refers to the position of the object at the corresponding scanning moment; the motion state information comprises motion information such as motion direction, speed and the like; the body characteristic information refers to the appearance characteristics of the object body, including shape, size, color information and the like; furthermore, the robot 1 may identify the target object from one or more of the observation objects, for example, an observation object satisfying a certain matching degree may be estimated as the target object.
Further, in one implementation, identifying the target object from the one or more observed objects may include: determining association information between each of the one or more observed objects corresponding to the robot 1 and a historical observation record, where the one or more observed objects include the target object and the historical observation record contains object-related information for one or more historically observed objects; the robot 1 then identifies the target object among the observed objects based on this association information.
Specifically, each time the robot 1 determines the target object by repeating the identification operation against the changing environment, the target object and its feature information may be recorded in the historical observation record; the other observed objects determined in the same pass, and their feature information, may be matched and recorded as well. When the current identification operation is performed, each of the currently acquired observed objects can be data-associated with the historical observation records. In one implementation, data association means matching each currently observed object against each stored historical record; the result of this association is the association information. For example, suppose the current scanning cycle yields N observed objects in the environment while the robot has stored historical records for M objects, where M and N may or may not be equal, and the N objects may overlap with the objects behind the M records. Data association then matches the N observed objects one by one against the M historical records, producing a matching degree for each pair; the overall result is an N-row, M-column matrix whose elements are the matching degrees, and this matrix is the association information. The observed objects include the target object. In one implementation, the matching may be feature matching over one or more kinds of object feature information. The target object is then identified from the association information: once the matching-degree matrix is obtained, a comprehensive analysis selects the association with the highest overall matching degree, which yields the target object.
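Choosing the association with the highest overall matching degree over the N x M matrix is a classic assignment problem. The patent names no algorithm, so the Hungarian solver used in this sketch is an assumption, as is the match_degree similarity function.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(observations, history_records, match_degree):
    """Build the N x M matching-degree matrix and pick the globally best association.

    `match_degree(obs, rec)` is a hypothetical similarity in [0, 1] combining
    position, motion state, and body features. Returns a mapping from
    observation index to historical record index.
    """
    if not observations or not history_records:
        return {}
    degree = np.array([[match_degree(obs, rec) for rec in history_records]
                       for obs in observations])
    rows, cols = linear_sum_assignment(degree, maximize=True)
    return dict(zip(rows.tolist(), cols.tolist()))
```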
In one implementation, the method further includes step S13 (not shown). In step S13, the robot 1 may update the historical observation record based on the one or more observed objects, where the updated objects in the record include the target object identified among them. The observed objects corresponding to the robot 1 change continuously with the environment. In one implementation, if a new observed object appears, a corresponding observation record is added; if an existing observed object disappears, its observation record is deleted; and if an existing observed object is still present, the relevant information in its record is updated.
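The add/drop/refresh bookkeeping described above could be sketched as follows, assuming integer record ids and the observation-to-record mapping produced by the association step; all names are illustrative.

```python
def update_history(history, matched, observations):
    """Update the historical observation records for one scan cycle.

    `history` maps record id (int) -> stored object information; `matched`
    maps observation index -> record id. Matched records are refreshed,
    unmatched records (disappeared objects) are dropped, and unmatched
    observations (new objects) get fresh records.
    """
    still_seen = set(matched.values())
    for rec_id in [r for r in history if r not in still_seen]:
        del history[rec_id]                      # object left the scene
    next_id = max(history, default=-1) + 1
    for i, obs in enumerate(observations):
        if i in matched:
            history[matched[i]] = obs            # refresh an existing record
        else:
            history[next_id] = obs               # a new object appeared
            next_id += 1
    return history
```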
Next, in step S123, the robot 1 may be controlled, based on the cooperation instruction, to move toward the target object along the corresponding movement path. Specifically, the robot 1 may determine its movement path to the target object and then be controlled to move along that path. The determination of the movement path and the control of the movement may both be performed based on the cooperation instruction of the network device 2, or only one of them may be.
In one implementation, the robot 1 may be controlled, based on the cooperation instruction, to move toward the target object along the corresponding movement path, wherein the formation state between the robot and the target object matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the robot and the target object falls within a preset relative distance range. The network device 2 may provide, through the cooperation instruction, the formation state information each robot 1 needs to maintain while moving, for example keeping one column, one row, or multiple columns; in one implementation, these formation states are realized by setting parameters such as the robot 1's movement path and motion state. In still another implementation, the network device 2 may control the running speed of each cooperating robot 1 through a cooperation instruction carrying a speed control rule, adjusting the distances between the robots 1 and thereby controlling the movement of the whole queue. In this way, the queue shape of the multiple robots in the cooperative task, or specifically the robots' relative positions, can be controlled through the cooperation instructions, so that the robots 1 coordinate more closely and the cooperative task is completed more efficiently.
In one implementation, the step S123 may include a step S1231 (not shown) and a step S1232 (not shown). Specifically, in step S1231, the robot 1 may determine a moving path of the robot 1 to the target object based on the cooperation instruction; in step S1232, the robot 1 may control the robot 1 to move along the movement path based on the cooperation instruction.
Further, in step S1231, the robot 1 may acquire obstacle information from its surrounding environment information; next, the target coordinates of the robot 1 are determined based on the position information of the identified target object; then, based on the cooperation instruction, the robot's movement path to the target object is determined by combining the target coordinates and the obstacle information, where the cooperation instruction includes multi-robot formation state information.
Specifically, the robot 1 first determines the obstacle information between itself and the target object, where obstacles are all objects in the environment other than the target object; they therefore include both static obstacles, such as walls and pillars when tracking indoors, and moving obstacles, such as observed objects that are not the target object. Next, the current position information of the target object, for example the position recorded in the corresponding historical observation record, is set as the target coordinates of the robot 1. Finally, the robot's movement path to the target object is determined, based on the cooperation instruction, from the distribution of obstacles and the target coordinates. In practice the path from one location to another is not unique, so the movement path is not determined uniquely either; rather, the most suitable path is selected from several candidates. In a multi-robot cooperative task, the independent motions of the robots must be considered jointly. Here the cooperation instruction the network device 2 provides to each robot 1 includes multi-robot formation state information indicating the movement formation of each cooperating robot 1, for example keeping one row, one column, or multiple rows; the robot's movement path to the target object is then planned under this formation state information. For example, if the robots 1 advance side by side in a row, the available width along the path must be considered, and candidate paths of limited width are excluded, as in the sketch below. In one implementation, the cooperation instruction carrying the formation state information may be received by the robot 1 before it starts moving, or provided in real time as the scene changes during the movement.
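A sketch of the final selection step, filtering candidate paths by the corridor width the instructed formation needs and then taking the shortest survivor; candidate-path generation is out of scope here, and all attribute and helper names are assumptions.

```python
def choose_path(candidate_paths, formation, required_width):
    """Pick the most suitable path under a formation-width constraint.

    `candidate_paths` are hypothetical objects with `length` and `min_width`
    attributes; `required_width(formation)` returns the corridor width the
    formation needs (wider for a row than for a single column).
    """
    wide_enough = [p for p in candidate_paths
                   if p.min_width >= required_width(formation)]
    if not wide_enough:
        raise RuntimeError("no candidate path can accommodate the formation")
    return min(wide_enough, key=lambda p: p.length)
```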
Further, in step S1232, the robot 1 may determine its moving speed based on the cooperation instruction, where the cooperation instruction includes a speed control rule; the robot 1 is then controlled to move along the movement path at that speed, with the relative distance between the robot 1 and the target object kept within a preset relative distance range through the speed control. Specifically, when a multi-robot formation moves, the relative positions of specific robots 1 must be considered in addition to the formation itself. For example, in a cooperative moving and carrying task where the robots 1 move in single file and the carried object is N meters long, to ensure that every robot shares the load at the same time, the relative position of two adjacent robots 1 cannot be arbitrary: the distance between them must stay within a certain range. Here the moving speed of the robot 1 may be determined by the speed control rule in the cooperation instruction, so that while the robot 1 moves along its path at that speed, it maintains the preset distance range to the target robot it follows (which may itself be another robot 1).
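In the spirit of the speed control rule just described, a proportional sketch that keeps the gap to the followed robot inside a preset range; the range, gains, and limits are illustrative values, not from the patent.

```python
def follow_speed(gap, gap_range=(1.0, 2.0), cruise=1.0, gain=0.8, v_max=1.5):
    """Return a forward speed that steers the following gap back into range."""
    lo, hi = gap_range
    if gap > hi:                                    # falling behind: speed up
        return min(cruise + gain * (gap - hi), v_max)
    if gap < lo:                                    # too close: slow down
        return max(cruise - gain * (lo - gap), 0.0)
    return cruise                                   # inside the range: hold cruise
```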
Further, in one implementation, determining the moving speed of the robot 1 based on the cooperation instruction, where the instruction includes a speed control rule, includes: determining the moving speed of the robot 1 based on the speed control rule, where the moving speed includes a forward speed and/or a steering speed. The movement of the robot 1 is constrained by the kinematics and dynamics of the robot body, and the robot's size must also be considered for collision avoidance. When the robot 1 is controlled to move along the movement path, its speed must be controlled while keeping its movement direction within the path. Preferably, the moving speed of the robot 1 is decomposed into two components, a forward speed and a steering speed: the forward speed is the speed component along the direction the robot 1 faces, and the steering speed is the component perpendicular to the forward speed.
On this basis, a further implementation is as follows: when the distance between the robot 1 and the target object is greater than or equal to a distance threshold, both the forward speed and the steering speed are planned and controlled; when the distance is below the threshold, that is, when the robot approaches the target object, only the movement direction, i.e. the steering speed, needs fine adjustment.
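The two-regime rule (plan both speed components when far from the target, fine-tune only the steering when close) might look like this sketch; the pose convention and gains are assumptions.

```python
import math

def plan_velocity(pose_xy, heading, target_xy, dist_threshold,
                  current_forward, v_max=1.2, k_turn=1.5):
    """Return (forward, steering) commands for one control step.

    `heading` is the robot's orientation in radians. Far from the target,
    both components are planned; near it, the forward speed is kept and
    only the heading is trimmed.
    """
    dx, dy = target_xy[0] - pose_xy[0], target_xy[1] - pose_xy[1]
    distance = math.hypot(dx, dy)
    error = math.atan2(dy, dx) - heading
    error = math.atan2(math.sin(error), math.cos(error))   # wrap to [-pi, pi]
    steering = k_turn * error
    if distance >= dist_threshold:
        forward = min(v_max, distance)       # far: plan forward speed too
    else:
        forward = current_forward            # near: keep speed, trim direction only
    return forward, steering
```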
In the application, after the robot 1 obtains the cooperation instruction, the target object the robot 1 is to follow is determined; the target object is identified in the scene the robot captures in real time; and the robot 1 is then controlled, based on the cooperation instruction, to move toward the target object along the corresponding movement path. Compared with existing robot-following techniques, the application can accurately lock onto the target object and track it effectively in a natural environment that changes in real time and contains many interfering factors, improving the accuracy of robot following and addressing the technical problem that current robots often follow the wrong target or lose the target. At the same time, controlling the robots to move toward their target objects along the corresponding paths based on the cooperation instructions realizes, as a whole, the mutually coordinated formation movement of multiple robots.
In one implementation, in step S21, the network device 2 may provide a first cooperation instruction to a first robot, which, based on that instruction, is controlled to move to the target object or destination location along the corresponding movement path; the network device 2 then provides a second cooperation instruction to a second robot, which, based on that instruction, is controlled to follow the first robot along the corresponding movement path. Further, in one implementation, the formation state between the second robot and the first robot matches the multi-robot formation state information in the cooperation instructions, and the relative distance between the second robot and the first robot falls within a preset relative distance range. Here the first robot and the second robot each correspond to different robots 1, and in one implementation the same multi-robot cooperative task may be executed cooperatively by one or more first robots together with one or more second robots. The first and second cooperation instructions may be the same or different.
FIG. 3 illustrates a system diagram for multi-robot collaboration in accordance with an aspect of the subject application. Wherein the system comprises a robot 1 and a network device 2.
Wherein the robot 1 comprises a first device 31 and a second device 32, and the network device 2 comprises a fourth device 41.
The embodiment of the application provides a system for multi-robot cooperation. The robot includes various kinds of machine equipment capable of automatically executing work: equipment with a moving function, a carrying and loading function, or other functions, or equipment combining several of these functions, for example various artificial intelligence devices with moving and carrying functions. In the present application, the multiple robots performing the same cooperative task may have the same or different functions. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud server, where the cloud server may be a virtual supercomputer running on a distributed system and composed of a group of loosely coupled computers, providing simple, efficient, safe, reliable computing services with scalable processing capacity. In the following, the robot is referred to as the robot 1 and the network device as the network device 2.
In particular, the fourth device 41 may provide the matched cooperation instructions to one or more robots 1, and the robots 1 execute the corresponding multi-robot cooperation tasks based on their respective instructions. Correspondingly, the first device 31 acquires the cooperation instruction matched with the robot itself from the network device 2. Here, the multi-robot cooperative task may be any of various tasks performed cooperatively by the multiple robots 1: for example, the robots 1 move synchronously while keeping similar distances; or several robots 1 jointly carry the same object; or several robots 1 assemble the components of one object. In one implementation, the network device 2 may match corresponding cooperation instructions to different robots 1 based on the type of cooperative task or the specific cooperative operation.
In one implementation, the cooperation instruction may include at least any one of: formation state information of the multiple robots; a speed control rule for the robot; coordinate information of a target object the robot is to follow; and other execution-related information for the robot.
Specifically, take as an example a scene in which multiple robots 1 move synchronously while keeping similar distances, or jointly carry the same object. In one implementation, the network device 2 may specify, through the cooperation instruction, the formation state information each robot 1 needs to maintain while moving, for example keeping one column, one row, or multiple columns. In another implementation, the network device 2 may also control the running speed of each cooperating robot 1 through a cooperation instruction carrying a speed control rule, adjusting the distances between the robots 1 and thereby controlling the movement of the whole queue. In still another implementation, the network device 2 may provide one or more robots 1 with the coordinate information of the target object to follow, either once when the moving operation starts or in real time during the movement, depending on the setting.
Take as another example a scene in which multiple robots 1 assemble the components of one object. The cooperation instructions may then include speed control rules for moving each robot 1 to its corresponding assembly position, the coordinate information of each robot's target position, and information on the robot's assembly steps. For other cooperative tasks, the cooperation instructions are likewise adapted to the specific needs of the task.
In one implementation, the fourth device 41 may send the cooperation instructions to all robots 1 of a cooperative task simultaneously and uniformly; in another implementation, it may send a cooperation instruction to any one or more robots 1 at any time. The cooperation instructions of the multiple robots 1 in the same cooperative task may be identical, entirely different, or partially the same; for example, in a synchronous-movement scene where multiple robots 1 keep similar distances in a queue, the instruction of the robot 1 at the head of the queue may differ from the instructions of the other robots 1 in the queue.
The second device 32 may then execute the corresponding multi-robot collaboration task based on the cooperation instruction. In one implementation, the robots 1 need not communicate with each other directly to carry out the cooperative task; instead, the network device 2 controls the one or more cooperating robots 1 in real time through the cooperation instructions, and each robot 1 executes its instruction to realize the task. In one implementation, the network device 2 may issue only the instructions necessary for the robots 1 to cooperate with each other, while operations that require no coordination are performed by each robot 1 independently. For example, in a scene where multiple robots 1 move synchronously at similar distances or jointly carry the same object, the network device 2 may control the overall formation keeping and the running speed of the queue through the cooperation instructions, while the specific following operations of each robot 1, such as determining and identifying the object to follow, may be set and performed by each robot 1 itself.
In the present application, multiple independent robots 1 performing a cooperative task may jointly execute the corresponding multi-robot cooperative task based on the cooperation instructions acquired from the corresponding network device 2. According to the requirements of a specific scene, the independent robots can be flexibly combined through the cooperation instructions sent by the network device 2, so that the combined robots can work cooperatively on tasks with a large workload or a complex division of work, which facilitates the decomposition of complex work and the optimization of overall resources.
In one implementation, the second device 32 may control the robot 1, based on the cooperation instruction, to move to the destination location or the target object along the corresponding movement path. The multi-robot cooperative task of the present application may be one that requires multiple robots to move in formation, for example several robots 1 moving synchronously at similar distances, or several robots 1 jointly carrying the same object. Specifically, in one implementation, the robot 1 may be controlled, based on the cooperation instruction, to move to a destination location along the corresponding path: for example, one or more robots 1 at the head of the queue may have no specific target object but instead a destination location to reach. In another implementation, the robot 1 may be controlled, based on the cooperation instruction, to move toward a target object along the corresponding path: for example, one or more robots 1 at the head of the queue may have a tracked object, such as a moving person or thing, while a robot 1 not at the head of the queue follows a target robot, which may be the closest robot in front of it or another robot preset or determined by the cooperation instruction.
In this implementation, the robots 1 may be used to realize multi-robot formation movement: the cooperating robots 1 may, based on their matched cooperation instructions, move to a destination location or follow a target object, thereby moving in formation. This flexibly and effectively supports the various cooperative tasks that must be realized through the formation movement of multiple robots, such as cooperative moving and carrying tasks.
Further, in one implementation, the second device 32 includes a first unit (not shown), a second unit (not shown), and a third unit (not shown).
In particular, the first unit may determine the target object the robot 1 is to follow. In one implementation, the target object is a target robot, and the robot and its corresponding target robot jointly carry the same transport object; in this case the cooperative task corresponds to a cooperative moving and carrying task. The robot 1 needs to determine the target object it will follow at the start of the cooperative task.
In one implementation, when the robot 1 is set to following mode, the first unit may identify a corresponding matching object from the surrounding information the robot 1 captures in real time and take that matching object as the target object to follow. The following mode of the robot 1 may be initiated by a preset trigger operation. Once following mode begins, the robot 1 captures surrounding information in real time; in one implementation, raw data of the surroundings is acquired by one or more sensing devices on the robot 1 and may take the form of images, pictures, or point clouds. The robot 1 then detects, from the raw data, objects of the class that is to be followed; one or more objects in the environment may belong to this class. A classifier is trained in advance by machine learning: feature information extracted from scan data of that object class is fed to the classifier, and objects of the class are then detected in the environment information by comparison. Since there are often several objects of the class, the matching object is the one selected from them to serve as the target object.
Further, in one implementation, the matching object may include, but is not limited to, at least any one of: the object closest to the robot 1 among its surroundings; the object closest to the robot 1 in front of it; the object closest to the robot 1 directly in front of it; an object around the robot 1 that matches the object feature information of the object to be followed; the object around the robot 1 that best matches the object feature information of the object to be followed; or, among several surrounding objects that match the object feature information of the object to be followed, the one closest to the robot 1. In one implementation, the object feature information may include, but is not limited to, one or more of the position information, motion state information, and body feature information of the object to be followed.
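These selection rules can be combined in code. The sketch below is a hypothetical helper, not the patent's method: it prefers candidates in front of the robot, then (when stored feature information is available) those that best match it, and finally the nearest; the dictionary keys and the cut-off of three best matches are assumptions.

```python
import numpy as np

def pick_matching_object(candidates, robot_pos, robot_heading, target_feat=None):
    """candidates: list of dicts with 'pos' and 'feat' numpy vectors.
    Implements one combination of the rules above: in-front first, then
    best feature match, then nearest to the robot."""
    if not candidates:
        return None
    def in_front(c):
        # Positive projection onto the heading vector means "ahead".
        return float(np.dot(c['pos'] - robot_pos, robot_heading)) > 0.0
    pool = [c for c in candidates if in_front(c)] or candidates
    if target_feat is not None:
        # Prefer the few candidates whose features best match the stored
        # object feature information of the object to be followed.
        pool = sorted(pool, key=lambda c: np.linalg.norm(c['feat'] - target_feat))[:3]
    return min(pool, key=lambda c: np.linalg.norm(c['pos'] - robot_pos))
```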
Further, in one implementation, the first unit may determine, based on the cooperation instruction, the coordinate information of the target object to be followed. The robot 1 then acquires its surrounding environment information in real time, where the distance between the robot 1 and the position indicated by the coordinate information is less than or equal to a predetermined distance threshold; the robot 1 then identifies a corresponding matching object from the surrounding environment information and takes it as the target object to be followed. Here, the coordinate information may be absolute or relative coordinate information. The robot 1 obtains the surrounding environment information by scanning; if the distance between the robot 1 and the indicated position is already less than or equal to the predetermined distance threshold, a matching object that matches the coordinate information can be identified from the environment information and set as the target object.
Further, in one implementation, if the robot 1 obtains the cooperation instruction while the distance between its position and that of the object to be followed is greater than the predetermined distance threshold, the present application also provides a solution: when the distance between the robot 1 and the indicated position is greater than the predetermined distance threshold, the robot 1 is controlled to move toward that position so as to reduce the distance; during the movement, the surrounding environment information of the robot 1 is acquired in real time until the distance is less than or equal to the predetermined distance threshold, at which point a corresponding matching object can be identified from the surrounding environment information and taken as the target object to be followed by the robot 1.
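A compact sketch of this approach-then-identify behaviour follows; the robot interface (position(), step_towards(), scan()) and the identify_fn callback are assumed interfaces for illustration, not defined by the patent.

```python
import numpy as np

def acquire_target(robot, goal_xy, dist_threshold, identify_fn):
    """Close the gap to the instructed coordinates first, then identify the
    matching object once within the predetermined distance threshold.
    identify_fn picks the matching object from a scan (e.g. the
    pick_matching_object sketch above)."""
    while np.linalg.norm(robot.position() - goal_xy) > dist_threshold:
        robot.step_towards(goal_xy)       # reduce the distance to the coordinates
    return identify_fn(robot.scan())      # identify within sensing range
```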
Then, the second unit may identify the target object from the scene captured by the robot 1 in real time. Since each object in the environment may also be changing state while the robot 1 moves, the robot 1 needs to repeat the target identification operation against the environment as it changes in real time. In one implementation, the robot 1 may periodically scan the surrounding environment to obtain real-time environment data, detect all objects belonging to the same class as the target object from that data, and finally identify the matching target object from the detection results of one scanning period or of several consecutive periods.
Specifically, in one implementation, the second unit may scan and acquire the surrounding environment information of the robot 1 in real time. From this information, it may then detect one or more observed objects that match the object feature information of the target object. Since the target object determined by the most recent identification operation, together with its object feature information, has been stored (for example, in the form of a historical observation record), the object feature information of the observed objects found in the current scan can be matched for similarity against the stored object feature information of the target object. Here, the object feature information of an observed object or of the target object may include, but is not limited to, any of the following: the position information of the object, i.e., its position at the corresponding scanning moment; the motion state information of the object, such as its direction of motion and speed; and the body feature information of the object, i.e., the appearance of the object body, including its shape, size, color, and the like. Finally, the robot 1 may identify the target object from among the one or more observed objects; for example, an observed object that reaches a certain degree of matching may be taken as the target object.
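For illustration, a similarity test over the three feature groups named above (position, motion state, body features) might look like the following sketch; the weights, dictionary keys, and distance threshold are assumptions, not values from the patent.

```python
import numpy as np

def feature_distance(obs, record, w_pos=1.0, w_vel=0.5, w_body=0.5):
    """Weighted distance between an observed object and a stored record,
    over position, motion state, and body (shape/size/color) features."""
    return (w_pos  * np.linalg.norm(obs['pos']  - record['pos'])
          + w_vel  * np.linalg.norm(obs['vel']  - record['vel'])
          + w_body * np.linalg.norm(obs['body'] - record['body']))

def detect_observed_objects(scan_objects, target_record, max_dist=2.0):
    """Keep only scan objects similar enough to the stored target record."""
    return [o for o in scan_objects
            if feature_distance(o, target_record) < max_dist]
```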
Further, in one implementation, identifying the target object from the one or more observed objects may include: determining association information between each of the one or more observed objects corresponding to the robot 1 and the historical observation record, wherein the one or more observed objects include the target object, and the historical observation record contains object-related information of one or more historically observed objects; the robot 1 then identifies the target object from the one or more observed objects according to this association information.
Specifically, whenever the robot 1 determines the target object in one round of this repeated identification, the target object and its object feature information may be written into the historical observation record; likewise, the other observed objects determined in the same round, together with their object feature information, may be recorded. When the current identification operation is performed, data association may then be carried out between each currently acquired observed object and the historical observation record. In one implementation, data association means matching each of the currently acquired observed objects against each stored record in the historical observation record; the result of this association is the association information. For example, suppose the current scanning cycle yields N observed objects in the environment, and the robot has previously stored historical records of M objects, where M and N may or may not be equal, and where the N current objects and the M recorded objects may overlap. Data association then means matching each of the N observed objects against each of the M historical records one by one, yielding a matching degree for every pair; the overall result is an N-by-M matrix whose elements are these matching degrees, and this matrix is the association information. The observed objects include the target object. In one implementation, the matching may be a feature match over one or more items of object feature information. The target object is then identified from the association information: once the matching-degree matrix is obtained, a comprehensive analysis selects the association with the highest overall matching degree, thereby determining the target object.
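As a sketch of this data-association step, the matching-degree matrix can be built from pairwise similarities and the highest-scoring overall association selected, here with the Hungarian algorithm; the patent names no particular solver, and the position-only similarity below is a simplification (a fuller version would use all the feature groups above).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(observations, history_records, min_sim=0.5):
    """Build the N x M matching-degree matrix and choose the association
    with the highest overall matching degree. Each entry lies in (0, 1]."""
    n, m = len(observations), len(history_records)
    sim = np.zeros((n, m))
    for i, obs in enumerate(observations):
        for j, rec in enumerate(history_records):
            # Similarity from position distance only; a simplification.
            sim[i, j] = 1.0 / (1.0 + np.linalg.norm(obs['pos'] - rec['pos']))
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    pairs = [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_sim]
    return sim, pairs    # sim is the association information (the matrix)
```

A greedy nearest-match loop would also satisfy the description; the Hungarian solver is used here only because it directly realizes "the association with the highest overall matching degree".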
In one implementation, the robot 1 further comprises a third device (not shown), and the robot 1 may update the historical observation record according to the one or more observed objects, wherein the objects in the updated record include the target object identified from the one or more observed objects. The observed objects corresponding to the robot 1 change continually with the environment. In one implementation, if a new observed object appears, a corresponding observation record is added; if an existing observed object disappears, its observation record is deleted; and if an existing observed object persists, the relevant information in its record is updated.
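The three update rules above (add new, delete vanished, refresh persisting) can be sketched as follows; the record layout, the identifiers, and the timestamp convention are assumptions for illustration.

```python
def update_history(history, associations, observations, now):
    """history: dict record_id -> record dict; associations: dict mapping
    observation index -> record_id (e.g. from the association step above)."""
    # Refresh records whose object was observed again in this cycle.
    for obs_idx, rec_id in associations.items():
        history[rec_id] = {**observations[obs_idx], 'last_seen': now}
    # Add records for newly appeared objects.
    for i, obs in enumerate(observations):
        if i not in associations:
            history[f'obj_{now}_{i}'] = {**obs, 'last_seen': now}
    # Delete records whose object has disappeared.
    for rec_id in [r for r, rec in history.items() if rec['last_seen'] != now]:
        del history[rec_id]
```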
Then, the third unit may control the robot to move to the target object along the corresponding movement path based on the cooperation instruction. Specifically, the robot 1 may determine its movement path to the target object, and then be controlled to move along that path. Both the determination of the movement path and the control of the movement may be performed based on the cooperation instruction of the network device 2, or only one of the two may be.
In one implementation, the third unit may control the robot to move to the target object according to the corresponding moving path based on the cooperation instruction, wherein the formation state between the robot and the target object matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the robot and the target object falls within a preset relative distance range threshold. The network device 2 may, through the cooperation instruction, provide the formation state information that each robot 1 needs to maintain while moving, for example keeping a single column, a single row, or several columns; in one implementation, these formation states may be realized by setting parameters of the robot 1 such as its movement path and motion state. In yet another implementation, the network device 2 may further control the running speed of each cooperating robot 1 through a speed control rule in the cooperation instruction, adjusting the distances between the robots 1 and thereby controlling the movement of the whole queue. Here, the queue shape of the robots in a multi-robot cooperative task, or more specifically the relative positions of the robots with respect to each other, may be governed by the cooperation instructions. This raises the degree of coordination of the cooperative operations among the robots 1 and improves the efficiency with which the cooperative task is completed.
In one implementation, the third unit may include a first subunit (not shown) and a second subunit (not shown). Specifically, the first subunit may determine, based on the cooperation instruction, the movement path of the robot 1 to the target object; the second subunit may control the robot 1 to move along the movement path based on the cooperation instruction.
Further, the first subunit may acquire obstacle information from the surrounding environment information of the robot; next, it may determine the target coordinates of the robot 1 based on the identified position information of the target object; then, based on the cooperation instruction, it may determine the moving path of the robot to the target object by combining the target coordinates and the obstacle information, wherein the cooperation instruction comprises the multi-robot formation state information.
Specifically, the first subunit first determines the obstacle information between the robot body and the target object. Obstacles here are all objects in the environment other than the target object; they therefore include both static obstacles, such as walls and pillars when tracking indoors, and moving obstacles, such as observed objects that are not the target object. Next, the current position information of the target object, for example the position recorded in its historical observation record, is taken as the target coordinates of the robot 1. Finally, based on the cooperation instruction, the moving path of the robot to the target object is determined from the distribution of the obstacles and the target coordinates. In practice, since the path from one location to another is not unique, the movement path determined for the robot is not unique either; rather, the most suitable path is selected from several candidates. In a multi-robot cooperative task, the independent motion of each robot must be considered alongside the mutual cooperation. The cooperation instruction provided by the network device 2 to each robot 1 therefore includes the multi-robot formation state information, indicating the movement formation of each cooperating robot 1, for example keeping one column, one row, or several columns. The movement path of the robot to the target object is then planned subject to this formation state information; for example, if the robots 1 advance side by side in a row, the available width along the movement path must be considered, and candidate paths of limited width are excluded. In one implementation, the cooperation instruction containing the formation state information may be received by the robot 1 before it starts moving, or may be provided to it in real time as the scene changes during movement.
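A simplified sketch of this width-constrained path selection is given below; the path representation (a list of 2-D waypoints), the clearance test, and the shortest-path tie-break are illustrative assumptions, not the patent's planner.

```python
import numpy as np

def plan_path(candidate_paths, obstacles, formation_width):
    """Discard candidate paths too narrow for the formation, then pick the
    shortest remaining one. Paths and obstacles are 2-D point sequences."""
    def clearance(path):
        # Minimum distance from any waypoint to any obstacle point.
        if not obstacles:
            return float('inf')
        return min(float(np.linalg.norm(np.asarray(p) - np.asarray(o)))
                   for p in path for o in obstacles)
    def length(path):
        pts = np.asarray(path, dtype=float)
        return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
    wide_enough = [p for p in candidate_paths
                   if 2.0 * clearance(p) >= formation_width]
    return min(wide_enough, key=length) if wide_enough else None
```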
Further, the second subunit may determine the moving speed of the robot 1 based on the cooperation instruction, wherein the cooperation instruction includes a speed control rule; the robot 1 is then controlled to move along the movement path at that speed, the moving speed being used to keep the relative distance between the robot 1 and the target object within the preset relative distance range threshold. Specifically, when multiple robots move in cooperative formation, the relative positions of individual robots 1 must be considered in addition to the formation itself. For example, in a cooperative moving and carrying task in which the robots 1 move in single file carrying an object N meters long, each robot must bear its share of the load at the same time, so the relative position of two adjacent robots 1 is not arbitrary: the distance between them must be kept within a certain range. Here, the moving speed of the robot 1 may be determined by the speed control rule in the cooperation instruction, so that the robot 1, moving along its path at that speed, maintains the preset distance range to the target robot it follows (which may itself be another robot 1).
Further, in one implementation, determining the moving speed of the robot 1 based on the cooperation instruction, where the cooperation instruction includes a speed control rule, comprises: determining the moving speed of the robot 1 based on the speed control rule, wherein the moving speed includes a forward speed and/or a steering speed. Here, the movement of the robot 1 is constrained by the kinematics and dynamics of the robot body, and the size of the robot 1 must also be considered for collision avoidance. When the robot 1 is controlled to move along the movement path, its moving speed must be controlled while keeping its direction of movement from deviating from the path. It is therefore preferable to split the moving speed of the robot 1 into two components, a forward speed and a steering speed: the forward speed is the speed component in the direction the robot 1 faces, and the steering speed is the speed component perpendicular to it.
On this basis, a further implementation is as follows: when the distance between the robot 1 and the target object is greater than or equal to a distance threshold, the forward speed and the steering speed are planned and controlled simultaneously; when the distance is smaller than the threshold, i.e., the robot is close to the target object, only the direction of movement, i.e., the steering speed, needs fine adjustment.
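Combining the forward/steering split with this two-regime rule, a speed-command sketch might read as below; the gains, the follow range, and the 2-D vector representation are assumptions, not values from the patent.

```python
import numpy as np

def speed_command(robot_pos, robot_heading, target_pos, current_forward,
                  dist_threshold, follow_range=(1.0, 2.0),
                  k_fwd=0.8, k_steer=1.5):
    """Far from the target: plan forward and steering speed together, keeping
    the relative distance inside follow_range. Near the target: leave the
    forward speed unchanged and fine-tune the steering only."""
    to_target = target_pos - robot_pos
    dist = float(np.linalg.norm(to_target))
    # Signed heading error from the 2-D cross and dot products.
    cross_z = robot_heading[0] * to_target[1] - robot_heading[1] * to_target[0]
    heading_err = float(np.arctan2(cross_z, np.dot(robot_heading, to_target)))
    steer = k_steer * heading_err
    if dist < dist_threshold:
        return current_forward, steer          # near: adjust direction only
    lo, hi = follow_range
    # Speed up when lagging behind the middle of the range, slow when close.
    forward = max(k_fwd * (dist - 0.5 * (lo + hi)), 0.0)
    return forward, steer
```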
In the present application, after the robot 1 obtains a cooperation instruction, the target object to be followed by the robot 1 is determined; the target object is identified from the scene captured by the robot in real time; and the robot 1 is then controlled, based on the cooperation instruction, to move to the target object along the corresponding moving path. Compared with existing robot-following techniques, the present application can accurately lock onto the target object in a natural environment that changes in real time and contains many interfering factors, and can track it effectively; this improves the accuracy of robot following and alleviates the technical problem that existing robots frequently follow the wrong target or lose it. At the same time, controlling each robot to move to its target object along the corresponding moving path based on the cooperation instruction realizes, as a whole, the mutually coordinated formation movement of multiple robots.
In one implementation, the fourth device 41 of the network device 2 may provide a first cooperation instruction to a first robot, wherein the first robot, based on the first cooperation instruction, controls itself to move to the target object or the destination location along the corresponding movement path; a second cooperation instruction is then provided to a second robot, wherein the second robot, based on the second cooperation instruction, controls itself to follow the first robot along the corresponding movement path. Further, in one implementation, the formation state between the second robot and the first robot matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the second robot and the first robot falls within a preset relative distance range threshold. Here, the first robot and the second robot may each correspond to different robots 1, and in one implementation the same multi-robot cooperation task may be executed cooperatively by one or more first robots and one or more second robots. In one implementation, the first and second cooperation instructions may be the same or different.
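As a final non-limiting sketch, the network device's dispatch of first and second cooperation instructions could look like the following; network.send() and the instruction fields are assumed, and a simple chain (each follower follows the robot ahead of it, assuming at least one leader) is used only for illustration.

```python
def dispatch_instructions(network, leader_ids, follower_ids,
                          destination, formation_info, speed_rule):
    """Send a first instruction to leader robots (move to the destination)
    and a second instruction to followers (follow the robot ahead)."""
    for rid in leader_ids:
        network.send(rid, {'kind': 'first', 'goal': destination,
                           'formation': formation_info, 'speed_rule': speed_rule})
    for idx, rid in enumerate(follower_ids):
        ahead = leader_ids[-1] if idx == 0 else follower_ids[idx - 1]
        network.send(rid, {'kind': 'second', 'follow': ahead,
                           'formation': formation_info, 'speed_rule': speed_rule})
```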
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (19)

1. A method for multi-robot cooperation at a robot end, wherein the method comprises:
acquiring a cooperation instruction matched with the robot from the network device;
based on the cooperation instruction, executing a corresponding multi-robot cooperation task, comprising: controlling the robot to move to a target position or a target object according to a corresponding moving path based on the cooperation instruction, which specifically includes: determining a target object to be followed by the robot; identifying the target object from a scene captured by the robot in real time; and, based on the cooperation instruction, controlling the robot to move to a target object according to a corresponding moving path, wherein the target object comprises a target robot, and the robot and the corresponding target robot carry the same conveying object;
based on the cooperation instruction, controlling the robot to move to the target object according to a corresponding moving path according to any one of a relative position between the robot and each robot in the multiple robots in the cooperation instruction, a formation state between the robot and the target object and a corresponding running speed so as to adjust the distance between the multiple robots, wherein the formation state between the robot and the target object is matched with the formation state information of the multiple robots in the cooperation instruction, and the relative distance between the robot and the target object is contained in a preset relative distance range threshold value;
wherein the formation state between the robot and the target object is determined by a movement path of the robot and/or a motion state of the robot.
2. The method of claim 1, wherein the controlling the robot to move to the target object according to the corresponding moving path based on the cooperation instruction comprises:
determining a moving path of the robot to the target object based on the cooperation instruction;
and controlling the robot to move according to the moving path based on the cooperation instruction.
3. The method of claim 2, wherein the determining a moving path of the robot to the target object based on the cooperation instruction comprises:
acquiring obstacle information from the surrounding environment information of the robot;
determining target coordinates of the robot based on the identified position information of the target object;
and determining a moving path of the robot to the target object based on the cooperation instruction in combination with the target coordinates and the obstacle information, wherein the cooperation instruction comprises multi-robot formation state information.
4. The method of claim 3, wherein the controlling the robot to move according to the moving path based on the cooperation instruction comprises:
determining a movement speed of the robot based on the cooperation instruction, wherein the cooperation instruction comprises a speed control rule;
and controlling the robot to move according to the moving path based on the moving speed, wherein the relative distance between the robot and the target object is controlled to be included in a preset relative distance range threshold value through the moving speed.
5. The method of claim 4, wherein the determining a movement speed of the robot based on the cooperation instruction, wherein the cooperation instruction includes a speed control rule, comprises:
determining a movement speed of the robot based on the speed control rule, wherein the movement speed comprises a forward speed and/or a steering speed.
6. The method of claim 5, wherein the speed control rule comprises:
controlling the forward speed and the steering speed of the robot simultaneously when the distance of the target object from the robot is greater than or equal to a distance threshold;
controlling the steering speed of the robot when the distance of the target object from the robot is less than a distance threshold.
7. The method of any of claims 1 or 3 to 6, wherein the determining a target object to be followed by the robot comprises:
identifying a corresponding matching object from surrounding information captured by the robot in real time when the robot is set to a following mode;
and taking the matching object as a target object to be followed by the robot.
8. The method of any of claims 1 or 3 to 6, wherein the determining a target object to be followed by the robot comprises:
determining coordinate information of a target object to be followed based on the cooperation instruction;
acquiring surrounding environment information of the robot in real time, wherein the distance between the robot and the coordinate information is smaller than or equal to a preset distance threshold;
and identifying a corresponding matching object from the surrounding environment information, and taking the matching object as a target object to be followed by the robot.
9. The method of claim 8, wherein the acquiring surrounding environment information of the robot in real time, wherein the distance between the robot and the coordinate information is less than or equal to a predetermined distance threshold, comprises:
when the distance between the robot and the coordinate information is larger than a preset distance threshold value, controlling the robot to move towards the coordinate information;
and acquiring surrounding environment information of the robot in real time, wherein the distance between the robot and the coordinate information is less than or equal to a preset distance threshold value.
10. The method of claim 1, wherein the identifying the target object from the scene captured by the robot in real time comprises:
scanning in real time to obtain the surrounding environment information of the robot;
detecting one or more observed objects matching object feature information of the target object from the surrounding environment information;
the target object is identified from one or more of the observed objects.
11. The method of claim 10, wherein the identifying the target object from the one or more observed objects comprises:
determining association information of each observation object in one or more observation objects corresponding to the robot and a historical observation record, wherein the one or more observation objects comprise the target object, and the historical observation record comprises object-related information of one or more historical observation objects;
and identifying the target object from one or more observation objects according to the association information of the observation objects and the historical observation records.
12. The method of claim 11, wherein the method further comprises:
updating the historical observation record based on the one or more observed objects, wherein the objects in the updated historical observation record include the target object identified from the one or more observed objects.
13. A method for multi-robot cooperation at a network device end, wherein the method comprises the following steps:
providing matched cooperation instructions to one or more robots, wherein the robots execute corresponding multi-robot cooperation tasks based on the corresponding cooperation instructions, and, based on the cooperation instructions, the robots are controlled to move to target objects according to corresponding movement paths according to any one of the relative positions between the robots and each robot in the multiple robots in the cooperation instructions, the formation state between the robots and the target objects, and the corresponding running speeds, so as to adjust the distances between the multiple robots, wherein the formation state between the robots and the target objects is matched with the formation state information of the multiple robots in the cooperation instructions, and the relative distances between the robots and the target objects are contained in a preset relative distance range threshold; wherein the formation state between the robot and the target object is determined by a movement path of the robot and/or a motion state of the robot;
wherein the cooperation instruction comprises multi-robot formation state information of the robot;
wherein the providing the matched cooperation instructions to the one or more robots, wherein the robots execute the corresponding multi-robot cooperation tasks based on the corresponding cooperation instructions, comprises: providing the same or different cooperation instructions to a plurality of robots respectively, wherein each robot executes the corresponding multi-robot cooperation task based on its respective cooperation instruction.
14. The method of claim 13, wherein the cooperation instruction comprises at least any one of:
a speed control rule of the robot;
coordinate information of a target object to be followed by the robot;
other execution related information of the robot.
15. The method of claim 14, wherein the providing the matched cooperation instructions to the one or more robots, wherein the robots perform corresponding multi-robot cooperation tasks based on the respective cooperation instructions, comprises:
providing a first cooperation instruction to a first robot, wherein the first robot controls the first robot to move to a target object or a destination position according to a corresponding movement path based on the first cooperation instruction;
and providing a second cooperation instruction to a second robot, wherein the second robot controls the second robot to move along a corresponding movement path to follow the first robot based on the second cooperation instruction.
16. The method of claim 15, wherein the formation state between the second robot and the first robot matches the multi-robot formation state information in the cooperation instruction, and the relative distance between the second robot and the first robot is contained within a preset relative distance range threshold.
17. A robot that performs multi-robot cooperation, wherein the robot comprises:
the first device is used for acquiring a cooperation instruction matched with the robot from the network device;
the second device is used for executing the corresponding multi-robot cooperation task based on the cooperation instruction, including: controlling the robot to move to a target position or a target object according to a corresponding moving path based on the cooperation instruction, which specifically includes: determining a target object to be followed by the robot; identifying the target object from a scene captured by the robot in real time; and, based on the cooperation instruction, controlling the robot to move to a target object according to a corresponding moving path, wherein the target object comprises a target robot, and the robot and the corresponding target robot carry the same conveying object; the second device is further used for controlling the robot, based on the cooperation instruction, to move to the target object according to a corresponding moving path so as to adjust the distance between the multiple robots, according to any one of the relative position between the robot and each robot in the multiple robots in the cooperation instruction, the formation state between the robot and the target object, and the corresponding running speed, wherein the formation state between the robot and the target object is matched with the formation state information of the multiple robots in the cooperation instruction, and the relative distance between the robot and the target object is contained in a preset relative distance range threshold;
wherein the formation state between the robot and the target object is determined by a movement path of the robot and/or a motion state of the robot.
18. A network device that performs multi-robot cooperation, wherein the device comprises:
a fourth device, configured to provide matched cooperation instructions to one or more robots, wherein the robots execute corresponding multi-robot cooperation tasks based on the corresponding cooperation instructions, and, based on the cooperation instructions, the robots are controlled to move to the target objects according to corresponding movement paths so as to adjust the distances between the multiple robots, according to any one of the relative positions between the robots and each of the multiple robots in the cooperation instructions, the formation state between the robots and the target objects, and the corresponding running speeds, wherein the formation state between the robots and the target objects matches the formation state information of the multiple robots in the cooperation instructions, and the relative distances between the robots and the target objects are contained in a preset relative distance range threshold; wherein the formation state between the robot and the target object is determined by a movement path of the robot and/or a motion state of the robot;
wherein the cooperation instruction comprises multi-robot formation state information of the robot;
wherein the providing the matched cooperation instructions to the one or more robots, wherein the robots execute the corresponding multi-robot cooperation tasks based on the corresponding cooperation instructions, comprises: providing the same or different cooperation instructions to a plurality of robots respectively, wherein each robot executes the corresponding multi-robot cooperation task based on its respective cooperation instruction.
19. A system for multi-robot cooperation, wherein the system comprises:
the robot of claim 17 and the network device of claim 18.
CN201710067320.2A 2017-02-07 2017-02-07 Method and equipment for multi-robot cooperation Active CN106774345B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710067320.2A CN106774345B (en) 2017-02-07 2017-02-07 Method and equipment for multi-robot cooperation

Publications (2)

Publication Number Publication Date
CN106774345A CN106774345A (en) 2017-05-31
CN106774345B true CN106774345B (en) 2020-10-30

Family

ID=58956308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710067320.2A Active CN106774345B (en) 2017-02-07 2017-02-07 Method and equipment for multi-robot cooperation

Country Status (1)

Country Link
CN (1) CN106774345B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7136426B2 (en) * 2017-09-25 2022-09-13 日本電産シンポ株式会社 Management device and mobile system
CN109683556B (en) * 2017-10-18 2021-02-09 苏州宝时得电动工具有限公司 Cooperative work control method and device for self-moving equipment and storage medium
JP6965785B2 (en) * 2018-02-15 2021-11-10 オムロン株式会社 Control system, slave device control unit, control method and program
JP6781183B2 (en) * 2018-03-26 2020-11-04 ファナック株式会社 Control device and machine learning device
CN108428059B (en) * 2018-03-27 2021-07-16 昆明理工大学 Pipeline detection robot queue forming and evolution method
CN108527367B (en) * 2018-03-28 2021-11-19 华南理工大学 Description method of multi-robot cooperative work task
CN108873913A (en) * 2018-08-22 2018-11-23 深圳乐动机器人有限公司 From mobile device work compound control method, device, storage medium and system
CN109740464B (en) * 2018-12-21 2021-01-26 北京智行者科技有限公司 Target identification following method
CN109765889A (en) * 2018-12-31 2019-05-17 深圳市越疆科技有限公司 A kind of monitoring method of robot, device and intelligent terminal
CN109676611B (en) * 2019-01-25 2021-05-25 北京猎户星空科技有限公司 Multi-robot cooperative service method, device, control equipment and system
CN111766854A (en) * 2019-03-27 2020-10-13 杭州海康机器人技术有限公司 Control system and control method for AGV cooperative transportation
CN109947105B (en) * 2019-03-27 2022-11-25 科大智能机器人技术有限公司 Speed regulating method and speed regulating device of automatic tractor
CN110347159B (en) * 2019-07-12 2022-03-08 苏州融萃特种机器人有限公司 Mobile robot multi-machine cooperation method and system
CN112775957B (en) * 2019-11-08 2022-06-14 珠海一微半导体股份有限公司 Control method of working robot, working robot system and chip
CN112540605A (en) * 2020-03-31 2021-03-23 深圳优地科技有限公司 Multi-robot cooperation clearance method, server, robot and storage medium
CN111443642A (en) * 2020-04-24 2020-07-24 深圳国信泰富科技有限公司 Cooperative control system and method for robot
CN111612312B (en) * 2020-04-29 2023-12-22 深圳优地科技有限公司 Robot distribution method, robot, terminal device, and storage medium
CN112396653B (en) * 2020-10-31 2022-10-18 清华大学 Target scene oriented robot operation strategy generation method
CN112873206A (en) * 2021-01-22 2021-06-01 中国铁建重工集团股份有限公司 Multi-task automatic distribution mechanical arm control system and operation trolley
CN113771033A (en) * 2021-09-13 2021-12-10 中冶赛迪技术研究中心有限公司 Multi-robot site integrated control system, method, device and medium
CN114019912B (en) * 2021-10-15 2024-02-27 上海电机学院 Group robot motion planning control method and system
CN114296460B (en) * 2021-12-30 2023-12-15 杭州海康机器人股份有限公司 Collaborative handling method and device, readable storage medium and electronic equipment
CN114227699A (en) * 2022-02-10 2022-03-25 乐聚(深圳)机器人技术有限公司 Robot motion adjustment method, robot motion adjustment device, and storage medium
CN115097816B (en) * 2022-05-20 2023-05-23 深圳市大族机器人有限公司 Modularized multi-robot cooperative control method
CN115218904A (en) * 2022-06-13 2022-10-21 深圳市优必选科技股份有限公司 Following navigation method, device, computer readable storage medium and mobile device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103608741A (en) * 2011-06-13 2014-02-26 微软公司 Tracking and following of moving objects by a mobile robot
CN104950887A (en) * 2015-06-19 2015-09-30 重庆大学 Transportation device based on robot vision system and independent tracking system
CN105425791A (en) * 2015-11-06 2016-03-23 武汉理工大学 Swarm robot control system and method based on visual positioning
CN106155065A (en) * 2016-09-28 2016-11-23 上海仙知机器人科技有限公司 A kind of robot follower method and the equipment followed for robot

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662377B (en) * 2012-05-17 2014-04-02 哈尔滨工业大学 Formation system and formation method of multi-mobile robot based on wireless sensor network
CN103901889A (en) * 2014-03-27 2014-07-02 浙江大学 Multi-robot formation control path tracking method based on Bluetooth communications
CN105527960A (en) * 2015-12-18 2016-04-27 燕山大学 Mobile robot formation control method based on leader-follow
CN106094835B (en) * 2016-08-01 2019-02-12 西北工业大学 The dynamic formation control method of front-wheel drive vehicle type mobile robot

Also Published As

Publication number Publication date
CN106774345A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN106774345B (en) Method and equipment for multi-robot cooperation
Huang et al. ClusterVO: Clustering moving instances and estimating visual odometry for self and surroundings
Carrio et al. Onboard detection and localization of drones using depth maps
US10748061B2 (en) Simultaneous localization and mapping with reinforcement learning
Boudjit et al. Human detection based on deep learning YOLO-v2 for real-time UAV applications
US11475591B2 (en) Hybrid metric-topological camera-based localization
US11111785B2 (en) Method and device for acquiring three-dimensional coordinates of ore based on mining process
CN109213202A (en) Cargo arrangement method, device, equipment and storage medium based on optical servo
Asadi et al. Automated object manipulation using vision-based mobile robotic system for construction applications
Vemprala et al. Monocular vision based collaborative localization for micro aerial vehicle swarms
CN111604898A (en) Livestock retrieval method, robot, terminal equipment and storage medium
da Silva et al. Mapping and navigation for indoor robots under ROS: An experimental analysis
Zhao et al. Visual odometry-A review of approaches
Wei et al. An approach to navigation for the humanoid robot nao in domestic environments
Wei et al. Overview of visual slam for mobile robots
Sanchez-Matilla et al. Motion prediction for first-person vision multi-object tracking
Muravyev et al. Evaluation of RGB-D SLAM in large indoor environments
Liu et al. A method of simultaneous location and mapping based on RGB-D cameras
Zhang Deep learning applications in simultaneous localization and mapping
Szendy et al. Simultaneous localization and mapping with TurtleBotII
Vithalani et al. Autonomous navigation using monocular ORB SLAM2
Katzourakis et al. Vision aided navigation for unmanned helicopters
Zhong et al. Deep learning based strategy for eye-to-hand robotic tracking and grabbing
Qian et al. An improved ORB-SLAM2 in dynamic scene with instance segmentation
Chen et al. Map Updating Revisited for Navigation Map: A mathematical way to perform map updating for autonomous mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200702

Address after: 200131 2nd floor, building 13, No. 27, Xinjinqiao Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: Shanghai xianruan Information Technology Co., Ltd

Address before: 201203, Shanghai, Pudong New Area, China (Shanghai) free trade test area, No. 301, Xia Xia Road, room 22

Applicant before: SHANGHAI SEER ROBOTICS TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant