CN114924513A - Multi-robot cooperative control system and method


Info

Publication number: CN114924513A
Authority: CN (China)
Prior art keywords: robot, task, preset, acquiring, cooperative control
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN202210637246.4A
Other languages: Chinese (zh)
Other versions: CN114924513B (en)
Inventors: 边锡, 陈甲成, 吴超, 杨亚东
Original and current assignee: Zhongdi Robot Yancheng Co ltd
Filing: application filed by Zhongdi Robot Yancheng Co ltd; priority to CN202210637246.4A
Publications: CN114924513A (application); CN114924513B (grant)

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems electric
    • G05B19/04: Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042: Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423: Input/output
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/25: Pc structure of the system
    • G05B2219/25257: Microcontroller
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a multi-robot cooperative control system and method, the system comprising: an acquisition module for acquiring a target task; a determining module for determining a suitable task division based on the target task and a preset neural network model; and a control module for cooperatively controlling a plurality of first robots, based on the task division, to execute the target task. The multi-robot cooperative control system and method require no workers to cooperatively control the robots, which reduces labor cost; in addition, by introducing the neural network model, they effectively avoid problems such as inefficient task execution caused by the unclear division of labor that manual cooperative control can produce.

Description

Multi-robot cooperative control system and method
Technical Field
The invention relates to the technical field of robots, in particular to a multi-robot cooperative control system and a multi-robot cooperative control method.
Background
Currently, some tasks to be executed by robots require multiple robots to work together, because a single robot cannot finish them quickly enough and/or the tasks are complex, for example: cleaning, loading, and unloading large batches of workpieces. When multiple robots execute a task together, they must be controlled cooperatively. At present this cooperative control is generally performed by workers, which is costly; moreover, manual cooperative control can leave the robots' division of labor unclear, making task execution inefficient.
Therefore, a solution is needed.
Disclosure of Invention
The invention provides a multi-robot cooperative control system and method that require no workers to cooperatively control the robots, reducing labor cost; in addition, by introducing a neural network model, they effectively avoid problems such as inefficient task execution caused by an unclear division of labor under manual cooperative control.
The invention provides a multi-robot cooperative control system, comprising:
the acquisition module is used for acquiring a target task;
the determining module is used for determining proper task division based on the target task and a preset neural network model;
and the control module is used for performing cooperative control on the plurality of first robots based on task division to execute the target task.
Preferably, the acquisition module acquires the target task, including:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining, based on the task description factor and a preset multi-robot task execution recognition library, whether the corresponding task needs to be executed by multiple robots;
and if so, taking the corresponding task as the target task.
Preferably, the multi-robot cooperative control system further includes:
and the correction module is used for acquiring the execution condition of the first robots while they are cooperatively controlled, correcting the task division based on the execution condition, and performing relay cooperative control on the first robots based on the corrected task division.
Preferably, the correction module acquires the execution condition of the first robot, and includes:
acquiring a current first position of a first robot;
acquiring a first image of the first robot through at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
querying the condition of the first robot at regular intervals;
acquiring the execution condition that the first robot replies with after receiving the condition query;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
Preferably, the correction module corrects the task division based on the execution condition, and includes:
sequentially traversing a plurality of first task items in task division;
during each traversal, determining a target execution condition corresponding to the traversed first task item from the execution conditions;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the task type of the traversed first task item to obtain a plurality of second feature values;
constructing an execution condition description factor based on the second characteristic value;
determining at least one first defect item in the target execution condition based on the execution condition description factor and a preset execution condition defect identification library;
after traversing, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting the task division based on the first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset correction strategies to be collocated corresponding to the first defect item;
matching and selecting the correction strategy to be matched to obtain a second optimal correction strategy which is in one-to-one correspondence with the first defect item;
based on the second optimal correction strategy, correcting the task division;
wherein the correction module performs the matched selection of the correction strategies to be collocated as follows:
randomly selecting a second optimal correction strategy corresponding to each first defect item, and summarizing to obtain a correction strategy set;
based on a preset third feature extraction template, performing feature extraction on a second optimal correction strategy in the correction strategy set to obtain a plurality of third feature values;
constructing a collocation description factor based on the third characteristic value;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition library;
and selecting a second optimal correction strategy in the correction strategy set corresponding to the maximum matching suitability as a second optimal correction strategy corresponding to the corresponding first defect items one by one.
Preferably, the multi-robot cooperative control system further includes:
the early warning module is used for performing safety monitoring and early warning on the operation site where the first robots operate;
wherein the early warning module performs safety monitoring and early warning on the operation site as follows:
acquiring a second image of the operation site through at least one second image acquisition device corresponding to the operation site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a field description factor based on the fourth characteristic value;
determining at least one risk event in the job site based on the site description factor and a preset risk event identification library, wherein the risk event comprises: a risk type, at least one worker who generates the risk, a second position of the worker, and at least one second robot among the first robots that is affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the operating personnel based on the early warning information;
wherein the early warning module reminds the workers based on the early warning information as follows:
acquiring a noise value of an operation site;
if the noise value is smaller than the preset noise threshold value, controlling at least one playing device in the operation site to output early warning information;
otherwise, determining the moving state of the worker based on the second images within a preset first time period, wherein the moving state is either static or dynamic;
when the moving state is static, acquiring the local movement routes, within a preset second time period, of the third robots (the first robots other than the second robots);
when at least one first point exists on a local movement route whose distance to the first position is smaller than or equal to a preset distance threshold, taking the first point corresponding to the minimum distance as the second point and the third robot corresponding to the minimum distance as the fourth robot;
when the fourth robot is about to arrive at the second point, controlling a first display device of the first robot to display the early warning information;
determining a first face orientation of the worker's face based on the current second image;
dynamically adjusting a first display orientation of the first display device so that a first included angle between the first face orientation and the first display orientation stays within a preset first included-angle range until the fourth robot finishes traversing the corresponding local movement route;
when the moving state is dynamic, acquiring the identity ID of the worker;
generating viewing guide information from the identity ID based on a preset viewing-guide-information generation template;
continuously determining a third position and a second face orientation of the worker's face based on the latest second image;
acquiring a second display orientation of at least one second display device within a preset range around the third position;
and if a second included angle between the second face orientation and the second display orientation falls within a preset second included-angle range, controlling the corresponding second display device to output the viewing guide information and the early warning information in succession.
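The alert-routing logic in the claim above (audio when the site is quiet enough, otherwise a display gated on the worker's face orientation) can be sketched as follows. All thresholds, angle ranges, and function names are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

NOISE_THRESHOLD_DB = 70.0        # assumed preset noise threshold
ANGLE_RANGE_DEG = (0.0, 30.0)    # assumed preset included-angle range


@dataclass
class AlertDecision:
    channel: str   # "speaker", "display", or "display-pending"
    reason: str


def route_alert(noise_db: float, worker_moving: bool,
                face_display_angle_deg: float) -> AlertDecision:
    """Choose how to deliver the early warning, following the claimed logic:
    a quiet site gets audio output; a noisy site gets a visual warning,
    gated on whether the worker's face is oriented toward a display."""
    if noise_db < NOISE_THRESHOLD_DB:
        return AlertDecision("speaker", "site quiet enough for audio")
    lo, hi = ANGLE_RANGE_DEG
    if lo <= face_display_angle_deg <= hi:
        which = ("second display (worker moving)" if worker_moving
                 else "robot display (worker static)")
        return AlertDecision("display", which)
    return AlertDecision("display-pending",
                         "waiting for face to turn toward a display")
```

Here the included angle is treated as already measured from the latest site image; in the claim it is continuously re-estimated as the face orientation changes.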
The invention provides a multi-robot cooperative control method, which comprises the following steps:
step 1: acquiring a target task;
step 2: determining proper task division based on the target task and a preset neural network model;
and step 3: and performing cooperative control on the plurality of first robots based on task division to execute the target task.
Preferably, step 1: acquiring a target task, comprising:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining, based on the task description factor and a preset multi-robot task execution recognition library, whether the corresponding task needs to be executed by multiple robots;
and if so, taking the corresponding task as the target task.
Preferably, the multi-robot cooperative control method further includes:
when the first robots are cooperatively controlled, the execution condition of the first robots is acquired, the task division is corrected based on the execution condition, and relay cooperative control is performed on the first robots based on the corrected task division.
Preferably, the acquiring the execution situation of the first robot includes:
acquiring a current first position of a first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
querying the condition of the first robot at regular intervals;
acquiring the execution condition that the first robot replies with after receiving the condition query;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a multi-robot cooperative control system according to an embodiment of the present invention;
fig. 2 is a flowchart of a multi-robot cooperative control method according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The present invention provides a multi-robot cooperative control system, as shown in fig. 1, including:
the acquisition module 1 is used for acquiring a target task;
the determining module 2 is used for determining proper task division based on the target task and a preset neural network model;
and the control module 3 is used for cooperatively controlling the plurality of first robots based on task division to execute the target tasks.
The working principle and the beneficial effects of the technical scheme are as follows:
the target task is a task that requires a plurality of robots to perform together. And introducing a preset neural network model, wherein the neural network model is an artificial intelligent network model for carrying out training convergence on the basis of a large amount of manual work according to the task content and the work division record of the robot task. And determining task division according to the target task based on the neural network model. And performing cooperative control on the plurality of first robots based on task division. In addition, a neural network model is introduced, and the problem that task execution is not efficient due to the fact that division of labor of the robot is unclear caused by manual cooperative control is effectively avoided.
In the multi-robot cooperative control system provided by the invention, the acquisition module 1 acquires the target task, including:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining, based on the task description factor and a preset multi-robot task execution recognition library, whether the corresponding task needs to be executed by multiple robots;
and if so, taking the corresponding task as the target task.
The working principle and the beneficial effects of the technical scheme are as follows:
The preset task collection library stores collected tasks, issued by a number of requesters, that are to be executed by robots. A preset first feature extraction template is introduced; the first feature extraction template may, for example, extract the desired completion time of the task. Based on the first feature extraction template, first feature values of the tasks in the task collection library are extracted; a first feature value may be, for example, the desired completion time or the quantity of workpieces to be transported. Based on the first feature values, a task description factor is constructed; the task description factor may be a description vector (description vectors and vector construction are prior art and are not described in detail). A preset multi-robot task execution recognition library is introduced, which stores the task description factors of a large number of tasks that must be executed by multiple robots together. Based on this library, it is determined whether a task needs to be executed by multiple robots; if so, the corresponding task is taken as the target task, completing the acquisition. This improves the timeliness of screening tasks that require multiple robots, eliminates manual screening, and reduces labor cost.
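The screening described above might be sketched as follows; the feature template (completion hours, workpiece count), the recognition-library contents, and the cosine-similarity matching rule are all assumptions made for illustration:

```python
import math

# Hypothetical "first feature extraction template" output:
# (desired completion hours, workpieces to transfer) per task.
TASK_POOL = {
    "clean_batch_A": (2.0, 400.0),
    "inspect_shelf": (8.0, 5.0),
}

# Hypothetical multi-robot task execution recognition library:
# description vectors of tasks known to need several robots.
MULTI_ROBOT_LIBRARY = [(2.5, 380.0), (1.5, 500.0)]
MATCH_THRESHOLD = 0.99  # cosine similarity, assumed preset


def cosine(a, b):
    """Cosine similarity between two description vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def select_target_tasks():
    """Keep only the tasks whose description factor matches the library."""
    targets = []
    for name, factor in TASK_POOL.items():
        if any(cosine(factor, ref) >= MATCH_THRESHOLD
               for ref in MULTI_ROBOT_LIBRARY):
            targets.append(name)
    return targets
```

With these numbers, only `clean_batch_A` is close enough to a library entry to be treated as a multi-robot target task.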
The invention provides a multi-robot cooperative control system, further comprising:
and the correction module is used for acquiring the execution condition of the first robots when the first robots are cooperatively controlled, correcting the task division based on the execution condition, and performing relay cooperative control on the first robots based on the corrected task division.
The working principle and the beneficial effects of the technical scheme are as follows:
When a plurality of first robots jointly execute a target task, the division of labor may become unreasonable, for example: during a workpiece cleaning task, many workpieces may queue for loading at the input end of the cleaning line while, because cleaning takes time, the output end temporarily has no workpieces to unload, so the robots assigned to the output end stand idle waiting for cleaned workpieces. The execution condition of the first robots is therefore acquired and the task division is corrected based on it, for example: the robots at the output end of the cleaning line are reassigned to the input end to perform loading, and relay cooperative control is performed on the first robots based on the corrected task division. This fully guarantees the rationality of the cooperative control and greatly improves the efficiency with which multiple robots jointly execute a task.
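The cleaning-line example above can be expressed as a minimal correction rule; the robot names, station labels, and rebalancing condition are invented for illustration:

```python
def correct_division(division, queue_in, queue_out):
    """division: robot id -> "input" or "output" station assignment.
    If the output queue is empty while the input queue is backed up,
    move the output-side robots to the input end (the corrected division
    then drives relay cooperative control)."""
    corrected = dict(division)
    if queue_out == 0 and queue_in > 0:
        for robot, station in corrected.items():
            if station == "output":
                corrected[robot] = "input"
    return corrected


before = {"r1": "input", "r2": "output", "r3": "output"}
after = correct_division(before, queue_in=40, queue_out=0)
# after reassigns r2 and r3 to the input end
```

A production system would of course use richer state than two queue lengths, but the shape of the correction (detect an idle station, reassign its robots) is the same.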
In the multi-robot cooperative control system provided by the invention, the correction module acquires the execution condition of the first robot, including:
acquiring a current first position of a first robot;
acquiring a first image of the first robot through at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
querying the condition of the first robot at regular intervals;
acquiring the execution condition that the first robot replies with after receiving the condition query;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
The working principle and the beneficial effects of the technical scheme are as follows:
There are three ways to obtain the execution condition of the first robot. First, a first image of the first robot is acquired, and the execution condition is determined from it, for example: image recognition identifies the action the robot is performing, and the task execution progress is determined from that action; here, the first image acquisition device corresponding to the first position is the image acquisition device whose shooting range covers the first position. Second, the condition of the first robot is queried at regular intervals: a condition query command is sent, and the first robot replies with its execution condition after receiving it. Third, on-site condition recording personnel record and upload the execution condition in real time. Together these greatly improve the accuracy and timeliness with which the execution condition is acquired.
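The second acquisition route (timed status queries) could look like the sketch below; the robot stub and reply format are assumptions, with one polling round shown in place of a real timer loop:

```python
class RobotStub:
    """Stands in for a first robot that answers condition queries."""

    def __init__(self, robot_id, progress):
        self.robot_id = robot_id
        self.progress = progress  # fraction of its task item completed

    def on_status_query(self):
        # The robot replies with its execution condition on request.
        return {"robot": self.robot_id, "progress": self.progress}


def poll_fleet(robots):
    """One polling round: send the query to every robot, collect replies."""
    return [r.on_status_query() for r in robots]


fleet = [RobotStub("r1", 0.4), RobotStub("r2", 1.0)]
report = poll_fleet(fleet)
```

In a deployed system the poll would run on a schedule and the replies would feed straight into the correction module.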
In the multi-robot cooperative control system provided by the invention, the correction module corrects the task division based on the execution condition, including:
sequentially traversing a plurality of first task items in task division;
during each traversal, determining a target execution condition corresponding to the traversed first task item from the execution conditions;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the task type of the traversed first task item to obtain a plurality of second feature values;
constructing an execution condition description factor based on the second characteristic value;
determining at least one first defect item in the target execution condition based on the execution condition description factor and a preset execution condition defect identification library;
after traversing, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting the task division based on the first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset correction strategies to be collocated corresponding to the first defect item;
matching and selecting the correction strategy to be matched to obtain a second optimal correction strategy corresponding to the first defect item one by one;
based on the second optimal correction strategy, correcting the task division;
wherein the correction module performs the matched selection of the correction strategies to be collocated as follows:
randomly selecting a second optimal correction strategy corresponding to each first defect item, and summarizing to obtain a correction strategy set;
based on a preset third feature extraction template, performing feature extraction on a second optimal correction strategy in the correction strategy set to obtain a plurality of third feature values;
constructing a collocation description factor based on the third characteristic value;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition base;
and selecting a second optimal correction strategy in the correction strategy set corresponding to the maximum matching suitability as a second optimal correction strategy corresponding to the corresponding first defect items one by one.
The working principle and the beneficial effects of the technical scheme are as follows:
The task division comprises a plurality of first task items to be executed, under a division of labor, by the plurality of first robots. A preset second feature extraction template corresponding to the task type of a first task item is introduced, for example: if the task type is loading and unloading of the cleaning line, the second feature extraction template extracts the number of workpieces to be loaded into, and unloaded from, the cleaning machine. Based on the second feature extraction template, a plurality of second feature values of the target execution condition corresponding to the first task item are extracted; a second feature value may be, for example, the number of workpieces waiting to be loaded into the cleaning machine or the number waiting to be unloaded from it. Based on the second feature values, an execution condition description factor is constructed; it may be a description vector (description vectors and vector construction are prior art and are not described in detail). A preset execution-condition defect identification library is introduced, which stores the execution condition description factors of defective execution conditions observed when multiple robots execute tasks. Based on this library, at least one first defect item in the target execution condition is determined. This improves the efficiency and accuracy of execution-condition defect identification.
There may be one or more first defect items. When there is exactly one, the corresponding preset first optimal correction strategy is obtained, for example: if the first defect item is that the cleaning machine currently has zero workpieces to unload yet robots are still assigned to unloading, the first optimal correction strategy is to move the robots at the output end of the cleaning line to the input end to assist the other robots with loading; the task division is then corrected based on this strategy. When there are several first defect items, however, correcting them one by one with each item's own optimal strategy may create conflicts and a poor overall result, for example: if the defect items are insufficient loading capacity (5%), too many robots unloading (10%), and too few robots transferring workpieces to the output end of the cleaning line (5%), the correction strategies must be collocated so that solving one defect item does not leave the others unsolved or put the strategies in conflict with one another. A preset third feature extraction template is therefore introduced; it may, for example, extract the number of robots to be redeployed.
Based on the third feature extraction template, third feature values of the correction strategies in a correction strategy set are extracted; a third feature value may be, for example, the number of robots moved from unloading tasks to loading tasks. Based on the third feature values, a collocation description factor is constructed; it may be a description vector (description vectors and vector construction are prior art and are not described in detail). A preset collocation suitability recognition library is introduced, which stores the collocation description factors formed when different correction strategies are collocated, together with the corresponding suitability. Based on this library, the collocation suitability of each correction strategy set is determined, and the strategies in the set with the maximum collocation suitability are selected as the second optimal correction strategies corresponding one-to-one to the first defect items. This fully guarantees the accuracy and rationality of the task division correction. Since a robot's execution condition often exhibits several defects at once while it executes a task, the applicability is high.
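The matched selection could be sketched as below; the defect items, candidate strategies, and suitability scores are invented, and the sketch enumerates all combinations instead of drawing random sets, since the candidate space here is tiny:

```python
import itertools

# Hypothetical correction strategies to be collocated, per defect item.
CANDIDATES = {
    "input_understaffed": ["move_1_from_output", "move_2_from_output"],
    "output_overstaffed": ["idle_1_robot", "move_1_to_transfer"],
}

# Hypothetical collocation-suitability recognition library:
# strategy set -> suitability score (unknown sets score 0).
SUITABILITY = {
    frozenset({"move_1_from_output", "idle_1_robot"}): 0.6,
    frozenset({"move_1_from_output", "move_1_to_transfer"}): 0.7,
    frozenset({"move_2_from_output", "idle_1_robot"}): 0.9,
    frozenset({"move_2_from_output", "move_1_to_transfer"}): 0.5,
}


def best_collocation(candidates):
    """Pick one strategy per defect item so that the resulting set has
    the maximum collocation suitability."""
    defects = list(candidates)
    best_set, best_score = None, -1.0
    for combo in itertools.product(*(candidates[d] for d in defects)):
        score = SUITABILITY.get(frozenset(combo), 0.0)
        if score > best_score:
            best_set, best_score = dict(zip(defects, combo)), score
    return best_set, best_score


best, score = best_collocation(CANDIDATES)
# best maps each defect item to its second optimal correction strategy
```

Exhaustive enumeration mirrors the claim's "select the set with the maximum collocation suitability"; with many defect items, random sampling of sets, as the claim describes, would bound the search cost.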
The invention provides a multi-robot cooperative control system, which further comprises:
the early warning module is used for performing safety monitoring and early warning on the operation site where the first robots operate;
wherein the early warning module performs safety monitoring and early warning on the operation site as follows:
acquiring a second image of the operation site through at least one second image acquisition device corresponding to the operation site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a field description factor based on the fourth characteristic value;
determining at least one risk event in the operation site based on the site description factor and a preset risk event identification library, wherein the risk event comprises: a risk type, at least one operator who generates the risk, a second position of the operator, and at least one second robot, among the first robots, that is affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the operating personnel based on the early warning information;
wherein the early warning module reminding the operator based on the early warning information comprises:
acquiring a noise value of an operation field;
if the noise value is smaller than a preset noise threshold value, controlling at least one playing device in the operation site to output early warning information;
otherwise, determining the moving state of the operator based on the second images within a preset first time before the current moment, wherein the moving state comprises: static and dynamic;
when the moving state is static, acquiring a local moving route, within a preset second time to come, of a third robot, namely a first robot other than the second robot;
when at least one first point location exists on the local moving route whose distance to the first position is smaller than or equal to a preset distance threshold, taking the first point location corresponding to the minimum distance as a second point location, and taking the third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point location, controlling the first display equipment of the first robot to display early warning information;
determining a first face orientation of the face of the worker based on the current second image;
dynamically adjusting a first display orientation of the first display device, so that a first included angle between the orientation of the first face and the first display orientation continuously falls within a preset first included angle range until the fourth robot finishes driving the corresponding local movement route;
when the moving state is dynamic, acquiring the identity ID of an operator;
generating viewing guide information according to the identity ID, based on a preset viewing-guide-information generation template;
continuously determining a third position of the face and a second face orientation of the operator based on the latest second image;
acquiring a second display orientation of at least one second display device in a preset range around a third position;
and if a second included angle between the second face orientation and the second display orientation is within a preset second included angle range, controlling the corresponding second display device to output the viewing guide information and the early warning information in succession.
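The reminder-channel decision and the fourth-robot selection in the steps above can be sketched as follows. The threshold values, the route format, and the function names are assumptions for illustration, not values fixed by the invention:

```python
import math

NOISE_THRESHOLD_DB = 70.0    # assumed preset noise threshold
DISTANCE_THRESHOLD_M = 5.0   # assumed preset distance threshold

def choose_reminder_channel(noise_db, moving_state):
    """Low noise -> output the warning through a playing device; otherwise
    pick a display channel depending on whether the operator is static."""
    if noise_db < NOISE_THRESHOLD_DB:
        return "playing_device"
    return "robot_display" if moving_state == "static" else "nearby_display"

def pick_fourth_robot(worker_pos, local_routes):
    """local_routes: {robot_id: [(x, y), ...]}, the local moving routes of the
    third robots within the coming preset second time. Returns the (robot,
    point) pair with the minimum distance to the operator among point
    locations within the distance threshold, or None if none qualifies."""
    best, best_d = None, DISTANCE_THRESHOLD_M
    for robot_id, points in local_routes.items():
        for point in points:
            d = math.dist(worker_pos, point)
            if d <= best_d:
                best_d, best = d, (robot_id, point)
    return best

routes = {"r1": [(0.0, 10.0), (0.0, 6.0)], "r2": [(3.0, 0.0), (8.0, 0.0)]}
channel = choose_reminder_channel(85.0, "static")   # noisy site, static worker
fourth = pick_fourth_robot((0.0, 0.0), routes)      # r2 passes closest
```

Only the point within the threshold and closest to the operator is kept, matching the "minimum distance" selection of the second point location and fourth robot.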
The working principle and the beneficial effects of the technical scheme are as follows:
In addition, while the first robots are working, safety accidents may occur on the operation site, for example: a robot transporting goods collides with a goods vehicle that is backing up. Therefore, safety monitoring and early warning of the operation site are required.
A preset fourth feature extraction template is introduced; the fourth feature extraction template may, for example, extract the number of trucks moving within the operation site. Based on the fourth feature extraction template, a plurality of fourth feature values of the second image of the operation site are extracted; the fourth feature values may be, for example: the distance between a robot and a truck, the number of moving trucks, the ratio of the area occupied by trucks and robots to the total site area, and so on. A site description factor is constructed based on the fourth feature values; it may be a description vector (descriptors and vector construction belong to the prior art and are not detailed here). A preset risk event identification library is introduced, in which the site description factors corresponding to different risk events are stored. Based on the risk event identification library, at least one risk event within the operation site is determined; the risk event comprises: the risk type (e.g. a robot colliding with a truck), at least one operator who generates the risk (e.g. the truck driver), the second position of the operator, and at least one second robot among the first robots that is affected by the risk (the robot that may collide with the truck). Preset early warning information corresponding to the risk type is introduced, for example: "Caution while reversing! A robot is working behind you!". The operator is reminded based on the early warning information. This greatly improves the safety of the operation site and also the stability with which the robots jointly complete the target task.
When reminding the operator, two situations are distinguished. In the first, the site noise is low, and the early warning information can be output directly through a playing device (such as a loudspeaker). In the second, the site noise is high; reminding the operator then again splits into two cases. In the first case the operator is static, for example moving or standing within a fixed area (e.g. remotely controlling a mobile hoist from one position); a suitable fourth robot about to pass by is selected to remind the operator. In the second case the operator is dynamic, for example driving a truck; a suitable second display device is selected to remind the operator. This fully guarantees the applicability of the reminder.
In addition, when selecting the suitable fourth robot about to pass by, the minimum distance, the first face orientation, the first display orientation, the first included angle and the preset first included angle range are respectively introduced, fully guaranteeing the suitability of the selected fourth robot. When calculating the first included angle, the two orientations can be converted into direction vectors and the included angle between the direction vectors calculated; vector conversion and vector-angle calculation belong to the prior art. For example, the included angle between two vectors is calculated as:
θ = arccos( (A · B) / (|A| |B|) )
where θ is the included angle and A and B are the two direction vectors. The preset first included angle range may be, for example, 150 to 180 degrees. When selecting a suitable second display device, the second face orientation, the second display orientation, the second included angle and the second included angle range are respectively introduced; the second included angle is calculated in the same way as the first, and the second included angle range may be 90 to 180 degrees, fully guaranteeing the suitability of the selected second display device.
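The included-angle calculation described above can be written directly from the arccos formula; the example orientations below are arbitrary illustrations:

```python
import math

def vector_angle_deg(a, b):
    """theta = arccos((A . B) / (|A| |B|)), returned in degrees."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.hypot(*a) * math.hypot(*b)
    return math.degrees(math.acos(dot / norms))

# An operator facing a display head-on gives opposed direction vectors,
# i.e. an included angle of 180 degrees, inside the preset 150-180 range.
theta = vector_angle_deg((1.0, 0.0), (-1.0, 0.0))
in_range = 150.0 <= theta <= 180.0
```

In practice the dot product should be clamped to [-1, 1] before `acos` to guard against floating-point drift when the vectors are nearly parallel.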
The invention provides a multi-robot cooperative control method, as shown in fig. 2, comprising the following steps:
step 1: acquiring a target task;
step 2: determining proper task division based on the target task and a preset neural network model;
and step 3: and performing cooperative control on the plurality of first robots based on task division to execute the target task.
The invention provides a multi-robot cooperative control method, which comprises the following steps: acquiring a target task, comprising:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining that the corresponding task needs to be executed by multiple robots based on the task description factor and a preset task execution identification library of multiple robots;
and if so, taking the corresponding task as the target task.
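The target-task screening steps above can be sketched as follows. The feature functions, thresholds, and the shape of the identification rule are hypothetical stand-ins for the preset first feature extraction template and the multi-robot task execution identification library:

```python
# Hypothetical first feature extraction template: each entry turns a task
# record into one first feature value.
FEATURE_TEMPLATE = {
    "workpiece_count": lambda task: task["workpieces"],
    "station_count": lambda task: len(task["stations"]),
}

def task_description_factor(task):
    """Build the task description factor (here a plain feature list)."""
    return [extract(task) for extract in FEATURE_TEMPLATE.values()]

def needs_multiple_robots(factor, min_workpieces=50, min_stations=2):
    """Stand-in for the identification library: a simple threshold rule
    deciding whether the task must be executed by multiple robots."""
    workpieces, stations = factor
    return workpieces >= min_workpieces and stations >= min_stations

task_library = [
    {"name": "clean_line", "workpieces": 120,
     "stations": ["load", "clean", "unload"]},
    {"name": "single_pick", "workpieces": 10, "stations": ["load"]},
]
targets = [t for t in task_library
           if needs_multiple_robots(task_description_factor(t))]
```

Only tasks that the rule flags as multi-robot become target tasks; single-robot tasks stay in the collection library for ordinary dispatch.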
The invention provides a multi-robot cooperative control method, which further comprises the following steps:
when the plurality of first robots are cooperatively controlled, the execution condition of the first robots is acquired, the task division is corrected based on the execution condition, and the first robots are cooperatively controlled in relay based on the corrected task division.
The invention provides a multi-robot cooperative control method, which is used for acquiring the execution condition of a first robot and comprises the following steps:
acquiring a current first position of a first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
periodically inquiring the first robot about its condition;
acquiring an execution condition replied after the first robot receives the condition inquiry;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
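The three acquisition channels (image-based, query-based, and human-reported), which the text joins with "and/or", might be merged as in this sketch; the callables and status strings are illustrative assumptions:

```python
def get_execution_status(robot_id, camera=None, query=None, reports=None):
    """Collect the execution condition of a first robot from whichever of
    the three channels is available; any subset may be absent."""
    status = {}
    if camera is not None:
        status["from_image"] = camera(robot_id)        # vision-based estimate
    if query is not None:
        status["from_query"] = query(robot_id)         # robot's own reply
    if reports is not None and robot_id in reports:
        status["from_recorder"] = reports[robot_id]    # human upload
    return status

status = get_execution_status(
    "robot_1",
    camera=lambda rid: "loading",
    query=lambda rid: "loading",
    reports={"robot_1": "loading normally"},
)
```

Keeping the channels as optional callables mirrors the claim structure: any one of the three suffices, and agreement between channels can later be used to cross-check the reported condition.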
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A multi-robot cooperative control system, comprising:
the acquisition module is used for acquiring a target task;
the determining module is used for determining proper task division based on the target task and a preset neural network model;
and the control module is used for cooperatively controlling the plurality of first robots based on the task division to execute the target task.
2. The multi-robot cooperative control system according to claim 1, wherein the acquiring module acquires a target task, comprising:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining that the corresponding task needs to be executed by multiple robots based on the task description factor and a preset task execution identification library of multiple robots;
and if so, taking the corresponding task as a target task.
3. The multi-robot cooperative control system according to claim 1, further comprising:
and the correcting module is used for acquiring the execution condition of the first robots when performing cooperative control on the plurality of first robots, correcting the task division based on the execution condition, and performing relay cooperative control on the first robots based on the corrected task division.
4. The multi-robot cooperative control system according to claim 3, wherein the obtaining of the execution of the first robot by the revision module comprises:
acquiring a current first position of the first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
periodically inquiring the first robot about its condition;
acquiring the execution condition replied after the first robot receives the condition inquiry;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
5. The multi-robot cooperative control system of claim 3 wherein the revising module revises the division of tasks based on the performance, comprising:
sequentially traversing a plurality of first task items in the task division;
determining a target execution condition corresponding to the traversed first task item from the execution conditions in each traversal;
performing feature extraction on the target execution condition based on a preset second feature extraction template corresponding to the traversed task type of the first task item to obtain a plurality of second feature values;
constructing an execution condition description factor based on the second characteristic value;
determining at least one first defect item in the target execution condition based on the execution condition description factor and a preset execution condition defect identification library;
after traversing, counting the total number of the first defect items;
when the total number is one, acquiring a preset first optimal correction strategy corresponding to the first defect item;
correcting the task division based on the first optimal correction strategy;
when the total number is not one, acquiring a plurality of preset correction strategies to be collocated corresponding to the first defect item;
matching and selecting the correction strategies to be matched to obtain second optimal correction strategies corresponding to the first defect items one by one;
correcting the task division based on the second optimal correction strategies;
wherein the revising module performing matching selection on the correction strategies to be matched comprises:
randomly selecting one second optimal correction strategy corresponding to each first defect item, and summarizing to obtain a correction strategy set;
based on a preset third feature extraction template, performing feature extraction on the second optimal correction strategy in the correction strategy set to obtain a plurality of third feature values;
constructing a collocation description factor based on the third characteristic value;
determining the collocation suitability of the correction strategy set based on the collocation description factor and a preset collocation suitability recognition base;
and selecting the second optimal correction strategy in the correction strategy set corresponding to the maximum matching suitability as the second optimal correction strategy corresponding to the first defect items one by one.
6. The multi-robot cooperative control system as claimed in claim 1, further comprising:
the early warning module is used for carrying out safety monitoring and early warning on the operation site of the first robot operation;
wherein the early warning module performing safety monitoring and early warning on the operation site where the first robots operate comprises:
acquiring a second image of the operation site through at least one second image acquisition device corresponding to the operation site;
performing feature extraction on the second image based on a preset fourth feature extraction template to obtain a plurality of fourth feature values;
constructing a field description factor based on the fourth characteristic value;
determining at least one risk event in the job site based on the site description factor and a preset risk event identification library, wherein the risk event comprises: a risk type, at least one worker who generates a risk, a second location of the worker, and at least one second robot of the first robots affected by the risk;
acquiring preset early warning information corresponding to the risk type;
reminding the operating personnel based on the early warning information;
wherein the early warning module reminding the operator based on the early warning information comprises:
acquiring a noise value of the operation field;
if the noise value is smaller than a preset noise threshold value, controlling at least one playing device in the operation site to output the early warning information;
otherwise, determining the moving state of the operator based on the second image within the first preset time, wherein the moving state comprises: static and dynamic;
when the moving state is static, acquiring a local moving route of a third robot except the second robot in the first robot in a next preset second time;
when at least one first point exists on the local moving route, the distance between the first point and the first position is smaller than or equal to a preset distance threshold value, taking the first point corresponding to the minimum distance as a second point, and simultaneously taking the third robot corresponding to the minimum distance as a fourth robot;
when the fourth robot is about to reach the second point location, controlling a first display device of the first robot to display the early warning information;
determining a first face orientation of the face of the worker based on the current second image;
dynamically adjusting a first display orientation of the first display device, so that a first included angle between the first face orientation and the first display orientation continuously falls within a preset first included angle range, until the fourth robot finishes travelling the corresponding local moving route;
when the moving state is dynamic, acquiring the identity ID of the operator;
generating viewing guide information according to the identity ID, based on a preset viewing-guide-information generation template;
continuously determining a third position and a second face orientation of the face of the worker based on the latest second image;
acquiring a second display orientation of at least one second display device in a preset range around a third position;
and if a second included angle between the orientation of the second face and the orientation of the second display is within a preset second included angle range, controlling the second display equipment to output the viewing guide information and the early warning information successively.
7. A multi-robot cooperative control method is characterized by comprising the following steps:
step 1: acquiring a target task;
step 2: determining appropriate task division based on the target task and a preset neural network model;
and step 3: and performing cooperative control on the plurality of first robots based on the task division to execute the target task.
8. The multi-robot cooperative control method according to claim 7, wherein said step 1: acquiring a target task, comprising:
acquiring a preset task collection library;
performing feature extraction on the tasks in the task collection library based on a preset first feature extraction template to obtain a plurality of first feature values;
constructing a task description factor based on the first characteristic value;
determining that the corresponding task needs to be executed by multiple robots based on the task description factor and a preset task execution identification library of multiple robots;
and if so, taking the corresponding task as a target task.
9. The multi-robot cooperative control method as claimed in claim 7, further comprising:
when the first robots are cooperatively controlled, the execution condition of the first robots is obtained, the task division is corrected based on the execution condition, and the first robots are cooperatively controlled in relay based on the corrected task division.
10. The multi-robot cooperative control method according to claim 7, wherein acquiring the execution situation of the first robot comprises:
acquiring a current first position of the first robot;
acquiring a first image of the first robot by at least one first image acquisition device corresponding to the first position;
determining the execution condition of the first robot based on the first image;
and/or,
periodically inquiring the first robot about its condition;
acquiring the execution condition replied after the first robot receives the condition inquiry;
and/or,
and acquiring the execution condition of the first robot uploaded by at least one condition recording person.
CN202210637246.4A 2022-06-07 2022-06-07 Multi-robot cooperative control system and method Active CN114924513B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210637246.4A CN114924513B (en) 2022-06-07 2022-06-07 Multi-robot cooperative control system and method


Publications (2)

Publication Number Publication Date
CN114924513A true CN114924513A (en) 2022-08-19
CN114924513B CN114924513B (en) 2023-06-06

Family

ID=82813075


Country Status (1)

Country Link
CN (1) CN114924513B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392505A (en) * 2022-08-29 2022-11-25 智迪机器人技术(盐城)有限公司 Abnormity processing system and method for automatic installation robot of automobile parts
CN117590816A (en) * 2023-12-14 2024-02-23 湖南比邻星科技有限公司 Multi-robot cooperative control system and method based on Internet of things

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0899285A (en) * 1994-10-03 1996-04-16 Meidensha Corp Collision preventing device for robot
US20120126997A1 (en) * 2010-11-24 2012-05-24 Philippe Bensoussan Crash warning system for motor vehicles
WO2015114089A1 (en) * 2014-01-30 2015-08-06 Kuka Systems Gmbh Safety device and safety process
JP2017100206A (en) * 2015-11-30 2017-06-08 株式会社デンソーウェーブ Robot safety system
CN108416488A (en) * 2017-12-21 2018-08-17 中南大学 A kind of more intelligent robot method for allocating tasks towards dynamic task
CN110209485A (en) * 2019-06-05 2019-09-06 青岛海通胜行智能科技有限公司 The dynamic preventing collision method of multirobot when a kind of work compound
CN110717684A (en) * 2019-10-15 2020-01-21 西安工程大学 Task allocation method based on task allocation coordination strategy and particle swarm optimization
US20200058387A1 (en) * 2017-03-31 2020-02-20 Ikkiworks Pty Limited Methods and systems for a companion robot
US20200282549A1 (en) * 2017-09-20 2020-09-10 Sony Corporation Control device, control method, and control system
CN111708361A (en) * 2020-05-19 2020-09-25 上海有个机器人有限公司 Multi-robot collision prediction method and device
CN111798097A (en) * 2020-06-06 2020-10-20 浙江科钛机器人股份有限公司 Autonomous mobile robot task allocation processing method based on market mechanism
CN112561227A (en) * 2020-10-26 2021-03-26 南京集新萃信息科技有限公司 Multi-robot cooperation method and system based on recurrent neural network
CN112883792A (en) * 2021-01-19 2021-06-01 武汉海默机器人有限公司 Robot active safety protection method and system based on visual depth analysis
CN113001536A (en) * 2019-12-20 2021-06-22 中国科学院沈阳计算技术研究所有限公司 Anti-collision detection method and device for multiple cooperative robots
CN113936209A (en) * 2021-09-03 2022-01-14 深圳云天励飞技术股份有限公司 Cooperative operation method of patrol robot and related equipment
CN114179104A (en) * 2021-12-13 2022-03-15 盐城工学院 Picking robot control method and system based on visual identification
CN114290326A (en) * 2020-10-07 2022-04-08 罗伯特·博世有限公司 Apparatus and method for controlling one or more robots

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yao Xiang; Xu Pingping; Wang Huajun: "A robot collision avoidance scheme based on depth image detection", Control Engineering of China *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115392505A (en) * 2022-08-29 2022-11-25 智迪机器人技术(盐城)有限公司 Abnormity processing system and method for automatic installation robot of automobile parts
CN115392505B (en) * 2022-08-29 2023-07-18 智迪机器人技术(盐城)有限公司 Abnormality processing system and method for auto-parts automatic installation robot
CN117590816A (en) * 2023-12-14 2024-02-23 湖南比邻星科技有限公司 Multi-robot cooperative control system and method based on Internet of things
CN117590816B (en) * 2023-12-14 2024-05-17 湖南比邻星科技有限公司 Multi-robot cooperative control system and method based on Internet of things

Also Published As

Publication number Publication date
CN114924513B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN114924513A (en) Multi-robot cooperative control system and method
CN109264275B (en) Robot-based intelligent warehouse management method and device and storage medium
CN109013949B (en) Intelligent sheet metal part production system capable of automatically pasting sponge
CN113428547A (en) Goods-to-person holographic image sorting workstation and operation method
CN113954072A (en) Vision-guided wooden door workpiece intelligent identification and positioning system and method
CN115892823A (en) Material storage and matching inspection integrated system and method
CN111391691B (en) Vision-based target alignment method, system, and computer-readable storage medium
CN116214531B (en) Path planning method and device for industrial robot
CN115254631B (en) Flexible production field intelligent sorting method and sorting system
CN114511167A (en) Material handling equipment scheduling method, device and system and storage medium
CN114888816B (en) Control system and method for intelligent loading and unloading robot
CN116081154A (en) Goods shelf correction device and method of AGV intelligent transfer robot and storage medium
CN110918492A (en) Cargo sorting system and method
CN116061213A (en) Feeding and discharging method, system, equipment and storage medium of composite robot
CN111459114B (en) Method and device for feeding steel coil of hot rolling leveling unit
JP2002269192A (en) Physical distribution optimizing system
CN115744027A (en) Goods management method and device, goods management system and computer readable storage medium
CN111736540B (en) Goods sorting control method and device, electronic equipment and storage medium
CN112365204A (en) Storage and transportation management method and management system of chemical products and electronic equipment
CN112551195A (en) Method for butting vehicle and boarding bridge and platform management system
CN219916366U (en) Management system of stock position logistics
CN116588573B (en) Bulk cargo grabbing control method and system of intelligent warehouse lifting system
CN112132493B (en) Order picking regulation and control method and robot
CN117834836A (en) Material grabbing system, method, computing equipment and medium
CN108237533B (en) Robot self-adaptive object positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant