CN114924513B - Multi-robot cooperative control system and method - Google Patents

Multi-robot cooperative control system and method

Info

Publication number
CN114924513B
CN114924513B
Authority
CN
China
Prior art keywords
robot
task
preset
acquiring
early warning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210637246.4A
Other languages
Chinese (zh)
Other versions
CN114924513A (en)
Inventor
边锡
陈甲成
吴超
杨亚东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongdi Robot Yancheng Co ltd
Original Assignee
Zhongdi Robot Yancheng Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongdi Robot Yancheng Co ltd filed Critical Zhongdi Robot Yancheng Co ltd
Priority to CN202210637246.4A priority Critical patent/CN114924513B/en
Publication of CN114924513A publication Critical patent/CN114924513A/en
Application granted granted Critical
Publication of CN114924513B publication Critical patent/CN114924513B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/25 Pc structure of the system
    • G05B2219/25257 Microcontroller
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a multi-robot cooperative control system and method. The system comprises: an acquisition module for acquiring a target task; a determining module for determining a suitable task division based on the target task and a preset neural network model; and a control module for cooperatively controlling a plurality of first robots based on the task division so as to execute the target task. With the multi-robot cooperative control system and method, no worker is required to cooperatively control the robots, which reduces labor cost; in addition, the introduction of the neural network model effectively avoids problems such as inefficient task execution caused by ambiguous division of labor among the robots under manual cooperative control.

Description

Multi-robot cooperative control system and method
Technical Field
The invention relates to the technical field of robots, in particular to a multi-robot cooperative control system and method.
Background
Currently, some tasks to be performed by robots must be executed jointly by multiple robots because of tight schedules and/or the complexity of the tasks, for example: cleaning, loading, and unloading large numbers of workpieces. When multiple robots execute a task together, they must be cooperatively controlled; this cooperative control is generally performed by workers, which incurs high labor cost.
Thus, a solution is needed.
Disclosure of Invention
The invention provides a multi-robot cooperative control system and method that require no worker to cooperatively control the robots, reducing labor cost; in addition, by introducing a neural network model, problems such as inefficient task execution caused by ambiguous division of labor among the robots under manual cooperative control are effectively avoided.
The invention provides a multi-robot cooperative control system, which comprises:
the acquisition module is used for acquiring a target task;
the determining module is used for determining a suitable task division based on the target task and a preset neural network model;
and the control module is used for cooperatively controlling the plurality of first robots based on the task division so as to execute the target task.
Preferably, the acquiring module acquires the target task, including:
acquiring a preset task collection library;
performing feature extraction on tasks in a task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factors and a preset multi-robot execution task identification library, determining whether the corresponding task needs to be executed by multiple robots;
if yes, taking the corresponding task as the target task.
Preferably, the multi-robot cooperative control system further comprises:
and the correction module is used for acquiring the execution conditions of the first robots while the plurality of first robots are cooperatively controlled, correcting the task division based on the execution conditions, and performing relay cooperative control on the first robots based on the corrected task division.
Preferably, the correction module obtains the execution condition of the first robot, including:
acquiring a current first position of a first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining an execution condition of the first robot based on the first image;
and/or,
performing a timed status query on the first robot;
acquiring the execution condition returned by the first robot after it receives the status query;
and/or,
acquiring the execution condition of the first robot uploaded by at least one condition recorder.
Preferably, the correction module corrects task division based on execution conditions, including:
sequentially traversing a plurality of first task items in the task division;
on each traversal, determining the target execution condition corresponding to the traversed first task item from the execution conditions;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the task type of the traversed first task item to obtain a plurality of second feature values;
constructing an execution situation description factor based on the second characteristic value;
determining at least one first defect item in the target execution condition based on the execution condition description factor and a preset execution condition defect recognition library;
after the traversal is finished, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting task division based on a first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset to-be-collocated correction strategies corresponding to each first defect item;
performing collocation selection on the to-be-collocated correction strategies to obtain second optimal correction strategies corresponding one-to-one to the first defect items;
correcting task division based on a second optimal correction strategy;
wherein the correction module performing collocation selection on the to-be-collocated correction strategies comprises:
randomly selecting a second optimal correction strategy corresponding to each first defect item, and collecting them to obtain a correction strategy set;
based on a preset third feature extraction template, performing feature extraction on a second optimal correction strategy in the correction strategy set to obtain a plurality of third feature values;
constructing collocation description factors based on the third characteristic values;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition library;
and selecting the second optimal correction strategies in the correction strategy set corresponding to the maximum collocation suitability as the second optimal correction strategies corresponding one-to-one to the first defect items.
Preferably, the multi-robot cooperative control system further comprises:
the early warning module is used for performing safety monitoring and early warning on the job site where the first robots operate;
wherein the early warning module performing safety monitoring and early warning on the job site comprises:
acquiring a second image of the job site by at least one second image acquisition device corresponding to the job site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a field description factor based on the fourth characteristic value;
determining at least one risk event within the job site based on the site description factor and a preset risk event identification library, the risk event comprising: a risk type, at least one worker creating the risk, a second position of the worker, and at least one second robot among the first robots affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the worker based on the early warning information;
wherein the early warning module reminding the worker based on the early warning information comprises:
acquiring a noise value of the job site;
if the noise value is smaller than a preset noise threshold, controlling at least one playback device in the job site to output the early warning information;
otherwise, determining a movement state of the worker based on the second images within a preset first time period beforehand, wherein the movement state comprises: static and dynamic;
when the movement state is static, acquiring the local movement routes, within a preset second time period to follow, of the third robots, i.e. the first robots other than the second robot;
when the distance between at least one first point on a local movement route and the second position of the worker is smaller than or equal to a preset distance threshold, taking the first point corresponding to the minimum distance as a second point, and the third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point, controlling a first display device of the fourth robot to display the early warning information;
determining a first face orientation of the worker's face based on the current second image;
dynamically adjusting a first display orientation of the first display device so that a first included angle between the first face orientation and the first display orientation continuously falls within a preset first included-angle range, until the fourth robot finishes traversing the corresponding local movement route;
when the movement state is dynamic, acquiring the identity ID of the worker;
generating viewing guidance information according to the identity ID, based on a preset viewing guidance information generation template;
continuously determining a third position and a second face orientation of the worker's face based on the latest second image;
acquiring a second display orientation of at least one second display device within a preset range around the third position;
and if a second included angle between the second face orientation and the second display orientation falls within a preset second included-angle range, controlling the corresponding second display device to output the viewing guidance information and the early warning information in sequence.
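The alert-delivery branch above (noise threshold, then static or dynamic movement state, then the included-angle check between face orientation and display orientation) can be sketched as follows. This is a minimal illustration: the threshold value, the angle range, and all names are hypothetical stand-ins, not taken from the patent.

```python
import math

NOISE_THRESHOLD_DB = 70.0  # hypothetical preset noise threshold

def choose_alert_channel(noise_db, movement_state):
    """Quiet site: audio playback. Noisy site: a display on the approaching
    robot (worker static) or a nearby second display (worker moving)."""
    if noise_db < NOISE_THRESHOLD_DB:
        return "audio_playback"
    return "robot_display" if movement_state == "static" else "nearby_display"

def included_angle_deg(u, v):
    """Included angle between two planar direction vectors, in degrees."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_a = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def display_visible(face_dir, display_dir, angle_range=(150.0, 180.0)):
    """A display counts as facing the worker when the included angle between
    face orientation and display orientation falls within the preset range
    (a display pointing back at the face opposes it, so the range sits near
    180 degrees in this sketch)."""
    lo, hi = angle_range
    return lo <= included_angle_deg(face_dir, display_dir) <= hi
```

The same `included_angle_deg` check serves both branches: the static branch keeps the robot display's orientation inside the range, and the dynamic branch selects a nearby second display whose orientation already satisfies it.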
The invention provides a multi-robot cooperative control method, which comprises the following steps:
step 1: acquiring a target task;
step 2: determining a suitable task division based on the target task and a preset neural network model;
step 3: cooperatively controlling the plurality of first robots based on the task division so as to execute the target task.
Preferably, step 1: obtaining a target task, including:
acquiring a preset task collection library;
performing feature extraction on tasks in a task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factors and a preset multi-robot execution task identification library, determining whether the corresponding task needs to be executed by multiple robots;
if yes, taking the corresponding task as the target task.
Preferably, the multi-robot cooperative control method further includes:
when the plurality of first robots are cooperatively controlled, acquiring the execution conditions of the first robots, correcting the task division based on the execution conditions, and performing relay cooperative control on the first robots based on the corrected task division.
Preferably, the acquiring the execution condition of the first robot includes:
acquiring a current first position of a first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining an execution condition of the first robot based on the first image;
and/or,
performing a timed status query on the first robot;
acquiring the execution condition returned by the first robot after it receives the status query;
and/or,
acquiring the execution condition of the first robot uploaded by at least one condition recorder.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of a multi-robot cooperative control system in accordance with an embodiment of the present invention;
fig. 2 is a flowchart of a multi-robot cooperative control method according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The invention provides a multi-robot cooperative control system, as shown in fig. 1, comprising:
the acquisition module 1 is used for acquiring a target task;
the determining module 2 is used for determining a suitable task division based on the target task and a preset neural network model;
and the control module 3 is used for cooperatively controlling the plurality of first robots based on the task division so as to execute the target task.
The working principle and the beneficial effects of the technical scheme are as follows:
the target task is a task that requires a plurality of robots to commonly execute. The method comprises the steps of introducing a preset neural network model, wherein the neural network model is an artificial intelligent network model for training and converging based on a large number of records of robot task division according to task content. Based on the neural network model, task division is determined according to the target task. And based on task division, performing cooperative control on the plurality of first robots. The robot is not required to be cooperatively controlled by staff, so that the labor cost is reduced, and in addition, the neural network model is introduced, so that the problems that the task execution is not efficient and the like due to the fact that the robot is not definitely divided by manpower due to manual cooperative control are effectively avoided.
The invention provides a multi-robot cooperative control system, wherein the acquisition module 1 acquiring the target task comprises:
acquiring a preset task collection library;
performing feature extraction on tasks in a task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factors and a preset multi-robot execution task identification library, determining whether the corresponding task needs to be executed by multiple robots;
if yes, taking the corresponding task as the target task.
The working principle and the beneficial effects of the technical scheme are as follows:
the task collection library is stored with collected tasks to be executed by the robot, wherein the tasks are issued by a plurality of required personnel. A preset first feature extraction template is introduced, and the first feature extraction template may be, for example: extraction task hope completion time, etc. Extracting a first feature value of a task in the task collection library based on the first feature extraction template, where the first feature value may be, for example: the task is expected to be completed in time, the amount of work to be carried, and the like. Based on the first characteristic value, a task description factor is constructed, the task description factor can be a description vector, and the description vector and the vector construction belong to the category of the prior art and are not repeated. A preset multi-robot execution task identification library is introduced, and a large number of task description factors of tasks which are required to be jointly executed by the multi-robots are stored in the multi-robot execution task identification library. Based on the multi-robot execution task identification library, determining whether the task needs to be executed by multiple robots, if so, taking the corresponding task as a target task, and completing acquisition. The screening timeliness of tasks which are required to be jointly executed by multiple robots is improved, manual screening is not needed, and labor cost is reduced.
The invention provides a multi-robot cooperative control system, which further comprises:
and the correction module is used for acquiring the execution condition of the first robots when the plurality of first robots are cooperatively controlled, correcting the task division based on the execution condition, and carrying out relay cooperative control on the first robots based on the corrected task division.
The working principle and the beneficial effects of the technical scheme are as follows:
when a plurality of first robots jointly execute a target task, an unreasonable job division situation may occur, for example: when executing work piece cleaning task, the work piece that the input of the cleaning line of cleaning machine needs to carry out the material loading increases, because of the cleaning needs time, and the output of cleaning line need not to carry out the work piece of unloading temporarily, leads to distributing to the robot of output and is waiting for the output to export to wash the completion work piece. Therefore, the execution condition of the first robot is acquired, and the task division is corrected based on the execution condition, for example: and adjusting the robots positioned at the output end of the cleaning line to go to the input end to execute the feeding task, and carrying out relay cooperative control on the first robot based on the corrected task division. The rationality of cooperative control is fully ensured, and the execution efficiency of the tasks jointly executed by a plurality of robots is greatly improved.
The invention provides a multi-robot cooperative control system, wherein the correction module acquiring the execution condition of the first robot comprises:
acquiring a current first position of a first robot;
acquiring a first image of a first robot by at least one first image acquisition device corresponding to a first location;
determining an execution condition of the first robot based on the first image;
and/or,
performing a timed status query on the first robot;
acquiring the execution condition returned by the first robot after it receives the status query;
and/or,
acquiring the execution condition of the first robot uploaded by at least one condition recorder.
The working principle and the beneficial effects of the technical scheme are as follows:
there are three ways to obtain the execution of the first robot: first, a first image of a first robot is acquired, and based on the first image, an execution condition is determined, for example: based on an image recognition technology, recognizing actions and the like which are being executed by the robot, and determining task execution progress based on the actions, wherein a first image acquisition device corresponding to a first position is an image acquisition device with a shooting range containing the first position; secondly, the first robot is subjected to timing condition inquiry, a condition inquiry instruction is sent, and the first robot returns to the execution condition after receiving the condition inquiry instruction; third, the real-time recording and uploading is performed by the on-site situation recorder. The accuracy and timeliness of the treatment condition acquisition are greatly improved.
The invention provides a multi-robot cooperative control system, wherein the correction module correcting the task division based on the execution conditions comprises:
sequentially traversing a plurality of first task items in the task division;
on each traversal, determining the target execution condition corresponding to the traversed first task item from the execution conditions;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the task type of the traversed first task item to obtain a plurality of second feature values;
constructing an execution situation description factor based on the second characteristic value;
determining at least one first defect item in the target execution condition based on the execution condition description factor and a preset execution condition defect recognition library;
after the traversal is finished, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting task division based on a first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset to-be-collocated correction strategies corresponding to each first defect item;
performing collocation selection on the to-be-collocated correction strategies to obtain second optimal correction strategies corresponding one-to-one to the first defect items;
correcting task division based on a second optimal correction strategy;
wherein the correction module performing collocation selection on the to-be-collocated correction strategies comprises:
randomly selecting a second optimal correction strategy corresponding to each first defect item, and collecting them to obtain a correction strategy set;
based on a preset third feature extraction template, performing feature extraction on a second optimal correction strategy in the correction strategy set to obtain a plurality of third feature values;
constructing collocation description factors based on the third characteristic values;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition library;
and selecting the second optimal correction strategies in the correction strategy set corresponding to the maximum collocation suitability as the second optimal correction strategies corresponding one-to-one to the first defect items.
The working principle and the beneficial effects of the technical scheme are as follows:
the task division comprises a plurality of first task items which are required to be executed by the first robots. A preset second feature extraction template corresponding to the task type of the first task item is introduced, for example: the task type is feeding and discharging of the cleaning line, and the second feature extraction template is the number of workpieces which need to be fed by the cleaning machine and the number of workpieces which need to be discharged by the cleaning machine. Based on the second feature extraction template, extracting a plurality of second feature values of the target execution condition corresponding to the first task item in the execution condition, where the second feature values may be, for example: the number of workpieces to be fed by the cleaning machine, the number of workpieces to be discharged by the extraction cleaning machine, and the like. Based on the second feature value, an execution condition description factor is constructed, and the execution condition description factor can be a description vector, and the description vector and the vector construction belong to the category of the prior art and are not repeated. A preset execution situation defect identification library is introduced, and the execution situation description factors of the execution situations of defects existing when a plurality of robots execute tasks are stored in the execution situation defect identification library. At least one first defect entry in the target execution case is determined based on the execution case defect recognition library. The efficiency and the accuracy of processing condition defect identification are improved.
There may be one or more first defect items. When there is a single first defect item, the corresponding preset first optimal correction strategy is acquired, for example: if the first defect item is that the number of workpieces to be unloaded from the cleaning machine is 0 yet robots are still assigned to the unloading task, the first optimal correction strategy is to redirect the robots at the output end of the cleaning line to the input end to assist the other robots with loading. The task division is then corrected based on the first optimal correction strategy. When there are multiple first defect items, however, correcting them sequentially, each with its own individually optimal strategy, may produce conflicts and a poor overall correction effect, for example: the first defect items are, respectively, insufficient capacity of the robots executing the loading task (5% short), too many robots executing the unloading task (10% surplus), and too few robots executing the task of transferring workpieces to the output end of the cleaning line (5% short); when determining the correction strategies, collocation is needed, lest solving one defect item leave other defect items imperfectly solved or put the correction strategies in conflict with one another. Therefore, a preset third feature extraction template is introduced, which may be, for example: extracting the number of robots redeployed. Based on the third feature extraction template, third feature values of the second optimal correction strategies in a correction strategy set are extracted; a third feature value may be, for example, the number of robots reassigned from the loading task to another task.
A collocation description factor is constructed from the third feature values; it may be a description vector, and description vectors and vector construction belong to the prior art and are not repeated here. A preset collocation suitability identification library is introduced, which stores the collocation description factors formed when different correction strategies are combined, together with the corresponding suitabilities. Based on this library, the collocation suitability of each correction strategy set is determined, and the second optimal correction strategies in the set with the maximum collocation suitability are selected as the second optimal correction strategies corresponding one-to-one to the first defect items. This fully ensures the accuracy and rationality of the task-division correction. In general, when robots execute a task, multiple defects often occur simultaneously in the execution, so this approach has high applicability.
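The collocation selection described above can be sketched as follows. The claim forms candidate strategy sets by random selection; this sketch enumerates all sets exhaustively, which is equivalent for small candidate lists. The defect items, candidate strategies, and the suitability rule are hypothetical stand-ins for the preset recognition library.

```python
import itertools

# Hypothetical to-be-collocated correction strategies per defect item.
CANDIDATES = {
    "understaffed_loading":  ["move_one_from_unload", "move_two_from_unload"],
    "overstaffed_unloading": ["idle_one_unloader", "reassign_one_unloader"],
}

def collocation_suitability(strategy_set):
    # Toy scoring rule standing in for the suitability recognition library:
    # strategies that move or reassign robots pair well together.
    return sum(1 for s in strategy_set if s.startswith(("move", "reassign")))

def select_collocation(candidates):
    """Form every strategy set (one strategy per defect item), score each
    set's collocation suitability, and keep the set with the maximum."""
    best_set, best_score = None, float("-inf")
    for combo in itertools.product(*candidates.values()):
        score = collocation_suitability(combo)
        if score > best_score:
            best_set, best_score = combo, score
    return dict(zip(candidates, best_set))
```

Scoring whole sets rather than items is the point of the scheme: it avoids picking a strategy that is locally best but conflicts with the strategies chosen for the other defect items.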
The invention provides a multi-robot cooperative control system, which further comprises:
the early warning module is used for performing safety monitoring and early warning on the job site where the first robots operate;
wherein the early warning module performing safety monitoring and early warning on the job site comprises:
acquiring a second image of the job site by at least one second image acquisition device corresponding to the job site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a site description factor based on the fourth characteristic values;
determining at least one risk event within the job site based on the site description factor and a preset risk event identification library, the risk event comprising: the risk type, at least one operator creating a risk, a second location of the operator and at least one second robot of the first robots affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding operators based on the early warning information;
wherein the early warning module reminding the operator based on the early warning information includes:
acquiring a noise value of an operation site;
if the noise value is smaller than a preset noise threshold value, controlling at least one play device in the operation site to output early warning information;
otherwise, determining a movement state of the operator based on the second images within a preceding preset first time period, wherein the movement state comprises: static and dynamic;
when the movement state is static, acquiring a local moving route, within a following preset second time period, of a third robot, namely a first robot other than the second robot;
when the distance between at least one first point position and the first position on the local moving route is smaller than or equal to a preset distance threshold, taking the first point position corresponding to the minimum distance as a second point position, and taking a third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point position, controlling a first display device of the fourth robot to display the early warning information;
determining a first facial orientation of the face of the worker based on the current second image;
dynamically adjusting a first display orientation of the first display device, so that a first included angle between the first face orientation and the first display orientation continuously falls within a preset first included angle range until the fourth robot runs out of a corresponding local moving route;
when the moving state is dynamic, acquiring the identity ID of the operator;
based on a preset viewing guide information generation template, generating viewing guide information according to the identity ID;
continuously determining a third position and a second face orientation of the face of the operator based on the latest second image;
acquiring a second display orientation of at least one second display device within a preset range around the third position;
and if a second included angle between the second face orientation and the second display orientation falls within a preset second included angle range, controlling the corresponding second display device to output the viewing guide information and the early warning information in sequence.
The working principle and the beneficial effects of the technical scheme are as follows:
in addition, safety accidents may occur at the operation site while the first robots are working, for example: a robot carrying articles collides with a reversing transport truck. Therefore, safety monitoring and early warning of the operation site are required.
A preset fourth feature extraction template is introduced; the fourth feature extraction template may be, for example: extract the number of trucks moving within the job site. Based on the fourth feature extraction template, a plurality of fourth feature values are extracted from the second image of the job site; the fourth feature values may be, for example: the distances between the robots and the trucks, the number of trucks in motion, the ratio of the area occupied by the trucks and robots to the total site area, and so on. Based on the fourth feature values, a site description factor is constructed; the site description factor may be a description vector (description vectors and vector construction are prior art and are not repeated here). A preset risk event recognition library is introduced, which stores the site description factors corresponding to different risk events. Based on the risk event recognition library, at least one risk event within the job site is identified; a risk event comprises: the risk type (for example, a collision between a robot and a truck), at least one operator creating the risk (for example, the truck driver), a second position of the operator, and at least one second robot among the first robots affected by the risk (a robot that may collide with the truck). Preset early warning information corresponding to the risk type is introduced, for example: "Caution while reversing! A robot is working behind you!" The operator is reminded based on the early warning information. This greatly improves safety at the robot operation site and also improves the stability with which the robots jointly complete the target task.
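The matching of a site description factor against the recognition library can be illustrated as a nearest-vector lookup. This is a sketch under the assumption that the library pairs stored description vectors with risk events; the vectors and event names below are invented for illustration only.

```python
import math

# Hypothetical risk event recognition library: each entry pairs a
# stored site description vector with the risk event it represents.
RISK_EVENT_LIBRARY = [
    ([1.0, 2.0, 0.3], "robot-truck collision"),
    ([0.0, 0.0, 0.1], "no risk"),
]

def identify_risk(description_vector):
    """Return the risk event whose stored description vector lies
    closest (by Euclidean distance) to the observed one."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    entry = min(RISK_EVENT_LIBRARY, key=lambda e: dist(e[0], description_vector))
    return entry[1]
```

A deployed system would likely use a proper classifier or similarity threshold instead of an unconditional nearest-neighbour match, so that genuinely novel situations are not forced onto a stored event.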
When reminding the operator, two cases are distinguished. First, the site noise is low: the early warning information is output directly through a playback device (for example, a loudspeaker). Second, the site noise is high: reminding the operator is then again divided into two cases. In the first, the operator is static, for example moving within or standing in a fixed area (say, remotely operating a mobile hoist from one spot); a suitable fourth robot that is about to pass by is selected to remind the operator. In the second, the operator is dynamic, for example driving a truck; a suitable second display device is selected to remind the operator. This fully ensures the applicability of the reminding.
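The two-level case analysis above can be sketched as a simple decision function. The noise threshold and channel names are hypothetical placeholders, not values from the invention.

```python
NOISE_THRESHOLD = 70.0  # hypothetical preset noise threshold (dB)

def remind_operator(noise_value, movement_state):
    """Select the reminding channel: audio playback on a quiet site,
    the passing (fourth) robot's display for a static operator, or a
    nearby second display device for a dynamic operator."""
    if noise_value < NOISE_THRESHOLD:
        return "playback_device"   # quiet site: output warning audibly
    if movement_state == "static":
        return "robot_display"     # noisy site, static operator
    return "nearby_display"        # noisy site, dynamic operator
```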
In addition, when selecting a suitable fourth robot that is about to pass by, the minimum distance, the first face orientation, the first display orientation, the first included angle and the preset first included angle range are respectively introduced, which fully ensures the suitability of the selected fourth robot. When calculating the first included angle, both orientations can be converted into direction vectors and the angle between the direction vectors computed; vector conversion and vector angle calculation are prior art. For example, the angle between two vectors is

\[ \theta = \arccos\left( \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert \mathbf{a} \rVert \, \lVert \mathbf{b} \rVert} \right), \]

where \(\theta\) is the included angle and \(\mathbf{a}\), \(\mathbf{b}\) are the two direction vectors. The preset first included angle range may be, for example, 150-180 degrees. When selecting the second display device, the second face orientation, the second display orientation, the second included angle and the second included angle range are respectively introduced; the second included angle is obtained in the same way as the first included angle, and the second included angle range may be 90-180 degrees. This fully ensures the suitability of the selected second display device.
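The included-angle test can be sketched as follows; this is standard vector math, and the 150-180 degree default follows the example range given above.

```python
import math

def included_angle_deg(u, v):
    """Included angle between two direction vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def within_first_angle_range(face_dir, display_dir, lo=150.0, hi=180.0):
    """True when the face/display included angle falls inside the
    preset first included angle range (150-180 degrees here)."""
    return lo <= included_angle_deg(face_dir, display_dir) <= hi
```

An angle near 180 degrees means the operator's face and the display roughly oppose each other, i.e. the display is in front of the operator's line of sight.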
The invention provides a multi-robot cooperative control method, as shown in fig. 2, comprising the following steps:
step 1: acquiring a target task;
step 2: determining proper task division based on a target task and a preset neural network model;
step 3: based on task division, the first robots are cooperatively controlled to execute the target task.
In the multi-robot cooperative control method provided by the invention, step 1, acquiring a target task, includes:
acquiring a preset task collection library;
performing feature extraction on tasks in a task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factor and a preset multi-robot execution task identification library, determining whether the corresponding task needs to be executed by multiple robots;
if yes, the corresponding task is taken as a target task.
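The steps above (first feature extraction, task description factor, multi-robot identification library) might be sketched as follows; the task fields, feature template, and library contents are hypothetical assumptions for illustration.

```python
# Hypothetical multi-robot identification library, keyed by the first
# feature values (task type, load class) -> needs multiple robots?
MULTI_ROBOT_LIBRARY = {
    ("carry", "heavy"): True,
    ("inspect", "light"): False,
}

def first_feature_values(task):
    """Preset first feature extraction template (assumed fields)."""
    return (task["type"], task["load"])

def collect_target_tasks(task_collection):
    """Keep only the tasks the identification library marks as
    requiring multiple robots; these become the target tasks."""
    return [task for task in task_collection
            if MULTI_ROBOT_LIBRARY.get(first_feature_values(task), False)]
```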
The invention provides a multi-robot cooperative control method, which further comprises the following steps:
when the first robots are cooperatively controlled, the execution conditions of the first robots are acquired, the task division is corrected based on the execution conditions, and the relay cooperative control is performed on the first robots based on the corrected task division.
In the multi-robot cooperative control method provided by the invention, acquiring the execution condition of the first robot includes:
acquiring a current first position of a first robot;
acquiring a first image of a first robot by at least one first image acquisition device corresponding to a first location;
determining an execution condition of the first robot based on the first image;
and/or,
the first robot is subjected to timing condition inquiry;
acquiring the execution condition of the first robot replied after receiving the condition query;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recorder.
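The three "and/or" sources of the execution condition could be merged, for instance, like this; the sketch and its field names are assumptions, not part of the invention.

```python
def merge_execution_conditions(image_based=None, query_reply=None, recorder=None):
    """Combine the three optional sources described above: image
    analysis, timed status query, and condition recorders. Earlier
    sources take precedence; later ones fill in missing fields."""
    merged = {}
    for source in (image_based, query_reply, recorder):
        if source:
            for key, value in source.items():
                merged.setdefault(key, value)
    return merged
```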
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (5)

1. A multi-robot cooperative control system, comprising:
the acquisition module is used for acquiring a target task;
the determining module is used for determining proper task division based on the target task and a preset neural network model;
the control module is used for carrying out cooperative control on a plurality of first robots based on the task division so as to execute the target task;
the correction module is used for acquiring the execution condition of the first robots when the first robots are cooperatively controlled, correcting the task division based on the execution condition, and carrying out relay cooperative control on the first robots based on the corrected task division;
the correction module obtains the execution condition of the first robot, including:
acquiring a current first position of the first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining an execution condition of the first robot based on the first image;
and/or,
performing timing condition query on the first robot;
acquiring the execution condition of the first robot replied after receiving the condition query;
and/or,
acquiring the execution condition of the first robot uploaded by at least one condition recorder;
the early warning module is used for carrying out safety monitoring and early warning on the operation site of the first robot operation;
the early warning module is used for carrying out safety monitoring and early warning on the operation site of the first robot operation, and comprises the following steps:
acquiring a second image of the job site by at least one second image acquisition device corresponding to the job site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a site description factor based on the fourth characteristic values;
determining at least one risk event within the job site based on the site description factor and a preset risk event identification library, the risk event comprising: a risk type, at least one operator creating a risk, a second location of the operator, and at least one second robot of the first robots affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the operator based on the early warning information;
the early warning module reminds the operator based on the early warning information, and the early warning module comprises:
acquiring a noise value of the operation site;
if the noise value is smaller than a preset noise threshold value, controlling at least one playing device in the operation site to output the early warning information;
otherwise, determining a movement state of the operator based on the second images within a preceding preset first time period, wherein the movement state comprises: static and dynamic;
when the movement state is static, acquiring a local moving route, within a following preset second time period, of a third robot, namely a first robot other than the second robot;
when the distance between at least one first point position and the first position on the local moving route is smaller than or equal to a preset distance threshold, taking the first point position corresponding to the minimum distance as a second point position, and taking the third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point location, controlling a first display device of the first robot to display the early warning information;
determining a first facial orientation of the face of the worker based on the current second image;
dynamically adjusting a first display orientation of the first display device, so that a first included angle between the first face orientation and the first display orientation continuously falls within a preset first included angle range until the fourth robot runs out of the corresponding local moving route;
when the moving state is dynamic, acquiring the identity ID of the operator;
based on a preset viewing guide information generation template, generating viewing guide information according to the identity ID;
continuously determining a third position and a second face orientation of the face of the operator based on the latest second image;
acquiring a second display orientation of at least one second display device within a preset range around the third position;
and if a second included angle between the second face orientation and the second display orientation falls within a preset second included angle range, controlling the corresponding second display device to output the viewing guide information and the early warning information in sequence.
2. The multi-robot cooperative control system of claim 1, wherein the acquisition module acquires the target task, comprising:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factor and a preset multi-robot execution task identification library, determining whether the task needs to be executed by multiple robots;
if yes, the corresponding task is taken as a target task.
3. The multi-robot cooperative control system of claim 1, wherein the correction module corrects the task division based on the execution condition, comprising:
traversing a plurality of first task items in the task division in sequence;
each time of traversing, determining a target execution condition corresponding to the traversed first task item from the execution conditions;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the task type of the traversed first task item to obtain a plurality of second feature values;
constructing an execution situation describing factor based on the second characteristic value;
determining at least one first defect item in the target execution situation based on the execution situation description factor and a preset execution situation defect recognition library;
after the traversing is finished, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting the task division based on the first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset to-be-collocated correction strategies corresponding to each first defect item;
performing matching selection on the to-be-collocated correction strategies to obtain second optimal correction strategies corresponding one-to-one to the first defect items;
correcting the task division based on the second optimal correction strategy;
the correction module performing matching selection on the to-be-collocated correction strategies comprises:
randomly selecting one second optimal correction strategy corresponding to each first defect item, and summarizing to obtain a correction strategy set;
performing feature extraction on the second optimal correction strategy in the correction strategy set based on a preset third feature extraction template to obtain a plurality of third feature values;
constructing collocation description factors based on the third characteristic values;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition library;
and selecting the second optimal correction strategy in the correction strategy set corresponding to the maximum collocation suitability as a second optimal correction strategy corresponding to the first defect item one by one.
4. A multi-robot cooperative control method, comprising:
step 1: acquiring a target task;
step 2: determining proper task division based on the target task and a preset neural network model;
step 3: based on the task division, performing cooperative control on a plurality of first robots to execute the target task;
when a plurality of first robots are cooperatively controlled, acquiring the execution condition of the first robots, correcting the task division based on the execution condition, and performing relay cooperative control on the first robots based on the corrected task division;
acquiring the execution condition of the first robot, including:
acquiring a current first position of the first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining an execution condition of the first robot based on the first image;
and/or,
performing timing condition query on the first robot;
acquiring the execution condition of the first robot replied after receiving the condition query;
and/or,
acquiring the execution condition of the first robot uploaded by at least one condition recorder;
the method further comprises the steps of:
safety monitoring and early warning are carried out on the operation site of the first robot operation;
wherein the safety monitoring and early warning on the operation site where the first robot operates comprises the following steps:
acquiring a second image of the job site by at least one second image acquisition device corresponding to the job site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a site description factor based on the fourth characteristic values;
determining at least one risk event within the job site based on the site description factor and a preset risk event identification library, the risk event comprising: a risk type, at least one operator creating a risk, a second location of the operator, and at least one second robot of the first robots affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the operator based on the early warning information;
wherein the reminding of the operator based on the early warning information comprises:
acquiring a noise value of the operation site;
if the noise value is smaller than a preset noise threshold value, controlling at least one playing device in the operation site to output the early warning information;
otherwise, determining a movement state of the operator based on the second images within a preceding preset first time period, wherein the movement state comprises: static and dynamic;
when the movement state is static, acquiring a local moving route, within a following preset second time period, of a third robot, namely a first robot other than the second robot;
when the distance between at least one first point position and the first position on the local moving route is smaller than or equal to a preset distance threshold, taking the first point position corresponding to the minimum distance as a second point position, and taking the third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point location, controlling a first display device of the first robot to display the early warning information;
determining a first facial orientation of the face of the worker based on the current second image;
dynamically adjusting a first display orientation of the first display device, so that a first included angle between the first face orientation and the first display orientation continuously falls within a preset first included angle range until the fourth robot runs out of the corresponding local moving route;
when the moving state is dynamic, acquiring the identity ID of the operator;
based on a preset viewing guide information generation template, generating viewing guide information according to the identity ID;
continuously determining a third position and a second face orientation of the face of the operator based on the latest second image;
acquiring a second display orientation of at least one second display device within a preset range around the third position;
and if a second included angle between the second face orientation and the second display orientation falls within a preset second included angle range, controlling the corresponding second display device to output the viewing guide information and the early warning information in sequence.
5. The multi-robot cooperative control method as set forth in claim 4, wherein said step 1: obtaining a target task, including:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
based on the task description factor and a preset multi-robot execution task identification library, determining whether the task needs to be executed by multiple robots;
if yes, the corresponding task is taken as a target task.
CN202210637246.4A 2022-06-07 2022-06-07 Multi-robot cooperative control system and method Active CN114924513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210637246.4A CN114924513B (en) 2022-06-07 2022-06-07 Multi-robot cooperative control system and method


Publications (2)

Publication Number Publication Date
CN114924513A CN114924513A (en) 2022-08-19
CN114924513B true CN114924513B (en) 2023-06-06

Family

ID=82813075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210637246.4A Active CN114924513B (en) 2022-06-07 2022-06-07 Multi-robot cooperative control system and method


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392505B (en) * 2022-08-29 2023-07-18 智迪机器人技术(盐城)有限公司 Abnormality processing system and method for auto-parts automatic installation robot
CN117590816B (en) * 2023-12-14 2024-05-17 湖南比邻星科技有限公司 Multi-robot cooperative control system and method based on Internet of things

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0899285A (en) * 1994-10-03 1996-04-16 Meidensha Corp Collision preventing device for robot
WO2015114089A1 (en) * 2014-01-30 2015-08-06 Kuka Systems Gmbh Safety device and safety process

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8552886B2 (en) * 2010-11-24 2013-10-08 Bcs Business Consulting Services Pte Ltd. Crash warning system for motor vehicles
JP6645142B2 (en) * 2015-11-30 2020-02-12 株式会社デンソーウェーブ Robot safety system
US20200058387A1 (en) * 2017-03-31 2020-02-20 Ikkiworks Pty Limited Methods and systems for a companion robot
WO2019058694A1 (en) * 2017-09-20 2019-03-28 ソニー株式会社 Control device, control method, and control system
CN108416488B (en) * 2017-12-21 2022-05-03 中南大学 Dynamic task-oriented multi-intelligent-robot task allocation method
CN110209485B (en) * 2019-06-05 2020-06-02 青岛海通胜行智能科技有限公司 Dynamic avoidance method for multiple robots during cooperative operation
CN110717684B (en) * 2019-10-15 2022-06-17 西安工程大学 Task allocation method based on task allocation coordination strategy and particle swarm optimization
CN113001536B (en) * 2019-12-20 2022-08-23 中国科学院沈阳计算技术研究所有限公司 Anti-collision detection method and device for multiple cooperative robots
CN111708361B (en) * 2020-05-19 2023-09-08 上海有个机器人有限公司 Multi-robot collision prediction method and device
CN111798097B (en) * 2020-06-06 2024-04-09 浙江科钛机器人股份有限公司 Autonomous mobile robot task allocation processing method based on market mechanism
DE102021201918A1 (en) * 2020-10-07 2022-04-07 Robert Bosch Gesellschaft mit beschränkter Haftung Device and method for controlling one or more robots
CN112561227A (en) * 2020-10-26 2021-03-26 南京集新萃信息科技有限公司 Multi-robot cooperation method and system based on recurrent neural network
CN112883792A (en) * 2021-01-19 2021-06-01 武汉海默机器人有限公司 Robot active safety protection method and system based on visual depth analysis
CN113936209A (en) * 2021-09-03 2022-01-14 深圳云天励飞技术股份有限公司 Cooperative operation method of patrol robot and related equipment
CN114179104B (en) * 2021-12-13 2022-07-08 盐城工学院 Picking robot control method and system based on visual identification



Similar Documents

Publication Publication Date Title
CN114924513B (en) Multi-robot cooperative control system and method
CN108995226B (en) Intelligent sheet metal part production system capable of automatically pasting sponge
CN108469786A (en) Extensive intelligent storage distribution radio frequency
CN114862226B (en) Warehouse logistics intelligent scheduling and loading and unloading management method and system
CN109649913A (en) Intelligent warehousing system
CN114911238A (en) Unmanned mine car cooperative control method and system
CN112249573B (en) System and method for classified and centralized warehousing and storing of products with multiple specifications
CN109107903B (en) Automatic sorting method and system
CN114078162B (en) Truss sorting method and system for workpiece after steel plate cutting
CN115108259A (en) Composite transportation equipment control method, device, equipment and storage medium
CN114819887A (en) Unmanned intelligent manufacturing system
CN112935772B (en) Method and device for screwing screw by vision-guided robot, storage medium and equipment
CN111891611B (en) Intelligent warehousing distribution system, method and device and readable storage medium
CN111459114B (en) Method and device for feeding steel coil of hot rolling leveling unit
CN116588573B (en) Bulk cargo grabbing control method and system of intelligent warehouse lifting system
CN116703104A (en) Material box robot order picking method and device based on decision-making big model
CN107309702A (en) A kind of AGV Job Scheduling methods of lathe streamline
CN113578756B (en) Warehouse-in and warehouse-out control method for SMT materials
CN116061213A (en) Feeding and discharging method, system, equipment and storage medium of composite robot
CN112173530A (en) Automatic material taking system and method thereof
CN114888816A (en) Control system and method of intelligent feeding and discharging robot
CN113401553A (en) Method and equipment for moving bin
WO2023155722A1 (en) Warehousing system and control method therefor
DE102023107092A1 (en) OPTIMIZED TASK GENERATION AND PLANNING OF DRIVERLESS TRANSPORT CARTS USING OVERHEAD SENSOR SYSTEMS
CN108237533B (en) Robot self-adaptive object positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant