CN117420760A - Multi-mode control algorithm fusion method suitable for autonomous cooperation of robot - Google Patents
Multi-mode control algorithm fusion method suitable for autonomous cooperation of robot
- Publication number
- CN117420760A CN117420760A CN202311579211.0A CN202311579211A CN117420760A CN 117420760 A CN117420760 A CN 117420760A CN 202311579211 A CN202311579211 A CN 202311579211A CN 117420760 A CN117420760 A CN 117420760A
- Authority
- CN
- China
- Prior art keywords
- task
- algorithm
- matching
- steps
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G05B13/042—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators in which a parameter or coefficient is automatically adjusted to optimise the performance
Abstract
The invention discloses a multi-mode control algorithm fusion method suitable for autonomous cooperation of a robot. An algorithm layer is set up to form a large algorithm library that provides algorithm support for every aspect of the robot's work. A task layer translates a work task given by a human into a task set executable by the robot through big data and AI classification methods, imports the target task into a task analysis method that matches similar tasks for connection, and splits the task, through built-in reasoning and analysis algorithms, into N specific robot task points or steps. A matching layer deploys a dedicated matching algorithm and matching platform and connects a matching interface to complete the matching of tasks to algorithms. A decision layer provides a decision on the matching result. By fusing a learning algorithm responsible for control common sense, decision-making and reasoning with a model-based algorithm, the method achieves efficient, high-precision work and satisfies the robot's requirements for both high precision and flexible operation.
Description
Technical Field
The invention relates to the field of algorithms, in particular to a multi-mode control algorithm fusion method suitable for autonomous cooperation of robots.
Background
A learning algorithm mainly provides a robot with decision-making, task decomposition and common-sense understanding; it can self-adjust in different environments to complete different work tasks, i.e., it improves flexible working capability. However, it is poorly suited to robot planning and control tasks with real-time and accuracy requirements: although a learning algorithm generalizes well and can determine a working state through training, achieving high precision with it is difficult and training is costly. A pattern-recognition algorithm, in contrast, offers high accuracy but must operate on fixed patterns, which makes flexible recognition difficult.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multi-mode control algorithm fusion method suitable for autonomous cooperation of a robot, which can effectively solve the problems described in the background art.
The technical scheme adopted for solving the technical problems is as follows:
the multi-mode control algorithm fusion method suitable for autonomous cooperation of the robot comprises the following steps:
setting an algorithm layer, adopting an open architecture, and accessing the algorithm technologies of other organizations or institutions through SDKs, APIs, licensing, purchase and cooperation, or accessing autonomously developed algorithm modules, to form a large algorithm library that provides algorithm support for every aspect of the robot's work;
setting a task layer, which is divided into a task database, a task analysis system and task points; translating a work task given by a human into a task set executable by the robot through big data and AI classification methods, importing the target task into a task analysis method that matches similar tasks for connection, and splitting the task set, through built-in reasoning and analysis algorithms, into N specific robot task points or steps that can be executed independently;
setting a matching layer, deploying a dedicated matching algorithm and matching platform, and connecting a matching interface to complete the matching of tasks to algorithms;
setting a decision layer to provide decisions on the matching results; the specific steps are as follows:
step (1): developing special algorithms suitable for fusion into the robot's multi-mode control algorithm;
step (2): setting a special task layer to analyze and extract the key points of a task when the algorithm is applied, and obtaining the final task adaptation algorithm through multi-factor decisions at the decision layer;
step (3): solving the problem of flexible work of the device through the learning algorithm, then extracting the task points that do not meet high-precision calculation requirements and adapting corresponding algorithms at the algorithm matching layer, to obtain a multi-mode fusion algorithm combining the learning algorithm and the adaptively matched algorithms;
step (4): splitting the decision of how the robot should execute a task into two parts: a learning algorithm trained on big data outputs the available high-level motion instructions, which represent what the robot can do in the current environment; meanwhile, the robot's common multi-modal data are solidified into executable steps; the two parts are then combined by means of a value function to jointly decide which instruction is selected for actual execution.
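Step (4) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the candidate instruction lists, the feasibility scores, and the value-function weights are all hypothetical placeholders.

```python
# Sketch of step (4): combine high-level instructions proposed by a learned
# policy with solidified executable steps via a value function.
# All names and weights below are illustrative assumptions.

def learned_policy(observation):
    """Stand-in for a big-data-trained model proposing high-level motion
    instructions with confidence scores (what the robot could do here)."""
    return [("move_to_target", 0.8), ("grasp_object", 0.6)]

def solidified_steps(observation):
    """Stand-in for multi-modal sensor data solidified into executable
    steps, each with a feasibility score."""
    return {"move_to_target": 0.9, "grasp_object": 0.3, "release": 0.7}

def value(confidence, feasibility, w_policy=0.5, w_exec=0.5):
    """Value function combining both parts (equal weights are assumed)."""
    return w_policy * confidence + w_exec * feasibility

def select_instruction(observation):
    """Jointly decide which instruction to actually execute."""
    feasible = solidified_steps(observation)
    candidates = [
        (instr, value(conf, feasible.get(instr, 0.0)))
        for instr, conf in learned_policy(observation)
    ]
    # Pick the instruction with the highest combined value.
    return max(candidates, key=lambda c: c[1])[0]

print(select_instruction(observation={}))
```

With the placeholder scores above, `move_to_target` wins because both its policy confidence and its execution feasibility are high.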
In one preferred scheme, the task layer setting comprises the following steps:
step S1: firstly, establishing a task database, wherein different items are established in the database according to the type of work required to be completed by a robot;
step S2: importing target works into a task database according to classification;
step S3: starting the task analysis system, which has a built-in learning algorithm dedicated to task analysis and is responsible for analyzing or translating the task given by a human into N key points or steps for the robot to execute specifically; the specific method is as follows:
step S31: the deployed task analysis algorithm first executes a training task: taking human work tasks and their analyzed execution key points or steps as samples, the algorithm is repeatedly trained, iterated and improved;
step S32: executing the analysis, i.e., analyzing or translating the task into N points or steps for the robot to execute specifically;
step S33: if the analysis result does not meet the requirement, returning to step S32; otherwise executing step S35;
step S34: if the requirement is still not met after M executions of step S33, applying for manual intervention to generate a result that meets the requirement, then executing step S35, and at the same time feeding the result back to step S31 to supplement or iterate over the algorithm's gaps;
step S35: outputting the analysis result and ending the analysis;
step S4: checking the obtained N execution task points or steps; if any cannot be executed independently by the robot, returning to step S3 for further analysis or splitting; otherwise executing step S5;
step S5: finishing the analysis task and outputting the N task points or steps as the final result; the N task points or steps then complete algorithm matching through the matching layer.
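The retry loop of steps S31–S35 can be sketched as below. The `analyze` and `meets_requirement` functions are hypothetical stand-ins for the learned task-analysis algorithm and its requirement check; `M` is the assumed retry limit before manual intervention.

```python
# Sketch of the task-analysis loop (steps S32-S35): retry the analysis up
# to M times, then fall back to manual intervention (step S34).

M = 3  # assumed retry limit

def analyze(task):
    """Stand-in for the learned task-analysis algorithm (step S32):
    splits a human task into robot-executable points/steps."""
    return [f"{task}: point {i}" for i in range(1, 4)]

def meets_requirement(points):
    """Stand-in for the requirement check in step S33."""
    return len(points) > 0

def manual_intervention(task):
    """Step S34: a human produces a result that meets the requirement.
    (The result would also be fed back to step S31 to patch the algorithm.)"""
    return [f"{task}: manually split step"]

def run_task_analysis(task):
    for _ in range(M):
        points = analyze(task)          # step S32
        if meets_requirement(points):   # step S33
            return points               # step S35: output and finish
    return manual_intervention(task)    # step S34 after M failed attempts

print(run_task_analysis("tighten screw"))
```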
In one preferred embodiment, the setting of the matching layer includes the following steps:
step S11: matching is needed to be executed for N times, and algorithm support is provided for task execution;
step S12: firstly, single task key points/steps are sent to a matching platform;
step S13: starting a matching interface connection algorithm layer and independently executing task points or steps;
step S14: a big data system built in the matching platform is used for recommending a proper algorithm for singly executing task points or steps:
step S15: starting a matching algorithm, calculating a matching result, and executing step S16 if the matching result meets the requirement; otherwise, executing the step S13;
step S16: and after the matching task is completed, providing a decision for the matching result through the decision layer.
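The matching loop (steps S11–S16) can be sketched as follows. The algorithm library contents, the recommender, and the scoring rule are all illustrative assumptions, not the patent's actual matching platform.

```python
# Sketch of the matching layer: for each of the N task points, a
# (hypothetical) recommender proposes algorithms until one meets the
# requirement; the result is then handed to the decision layer.

ALGORITHM_LIBRARY = {            # assumed contents of the algorithm layer
    "move": ["PRM path planner", "RRT planner"],
    "grasp": ["force-control grasp"],
}

def recommend(task_point):
    """Stand-in for the big-data recommendation in step S14."""
    for algorithm in ALGORITHM_LIBRARY.get(task_point, []):
        yield algorithm

def match_score(task_point, algorithm):
    """Stand-in matching computation (step S15)."""
    return 1.0 if "planner" in algorithm or "grasp" in algorithm else 0.0

def match_task_points(task_points, threshold=0.5):
    results = {}
    for point in task_points:                       # executed N times (S11)
        for algorithm in recommend(point):          # steps S13/S14
            if match_score(point, algorithm) >= threshold:  # step S15
                results[point] = algorithm          # step S16: to decision layer
                break
    return results

print(match_task_points(["move", "grasp"]))
```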
In one of the preferred schemes, the setting of the decision layer comprises the following steps:
when the decision layer provides a decision on the matching result: if the matching result of the matching layer meets the task requirement, the decision is "agree" and the result is output; if the matching result does not meet the task requirement, the decision is "disagree" and the process returns to the matching layer for re-matching.
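The agree/disagree logic reduces to a small function; a minimal sketch, assuming a caller that re-runs the matching layer on "disagree" and a placeholder requirement check:

```python
# Minimal sketch of the decision layer: "agree" outputs the matching result,
# "disagree" sends the task back to the matching layer for re-matching.

def decide(matching_result, meets_task_requirement):
    """Return ('agree', result) or ('disagree', None)."""
    if meets_task_requirement(matching_result):
        return ("agree", matching_result)
    return ("disagree", None)  # caller re-runs the matching layer

verdict, result = decide({"move": "PRM path planner"}, lambda r: bool(r))
print(verdict)
```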
In one of the preferred embodiments, the step (1) further includes the steps of:
the learning algorithms comprise support vector machines, Bayesian linear analysis algorithms, and deep learning and transfer learning algorithms.
In one of the preferred embodiments, the step (2) further includes the following steps:
according to the relevance of the task key points, a dedicated matching-degree big data analysis algorithm is designed to find out the optimal and suboptimal algorithms that meet the requirements.
In one of the preferred embodiments, the step (2) further includes the following steps:
if necessary, an algorithm engineer manually adds multiple algorithms, performs weight analysis over them, and evaluates whether fusing the multiple algorithms yields a better matching degree.
In one of the preferred embodiments, the step (2) further includes the following steps:
the decision factors include matching degree, cost, and efficiency.
Compared with the prior art, the invention has the beneficial effects that:
the multi-mode control algorithm fusion method suitable for autonomous cooperation of the robot provided by the invention fuses the control learning algorithm and the model algorithm which are responsible for control common sense, decision and reasoning to realize high-efficiency and high-precision work, and meets the requirements of high-precision and flexible work of the robot.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
This embodiment provides a multi-mode control algorithm fusion method suitable for autonomous cooperation of a robot, which combines a pattern recognition algorithm and a learning algorithm: the learning algorithm solves the problem of flexible robot work, the task points that do not meet high-precision calculation requirements are extracted, and corresponding algorithms are adapted at the algorithm matching layer, yielding a multi-mode fusion algorithm combining the learning algorithm and the adaptively matched algorithms.
In this embodiment, the algorithm work mainly develops algorithms suitable for flexible robot operation, such as support vector machines and Bayesian linear analysis (BLDA), as well as deep learning and transfer learning algorithms, and places them in a dedicated algorithm layer. When an algorithm is applied, a dedicated task layer analyzes and extracts the key points of the task. According to the relevance of the task key points, a dedicated matching-degree big data analysis algorithm is designed to find the optimal and suboptimal algorithms that meet the requirements; if necessary, an algorithm engineer manually adds multiple algorithms, performs weight analysis over them, and evaluates whether fusing them yields a better matching degree. At the decision layer, the final task adaptation algorithm is obtained by weighing factors such as matching degree, cost and efficiency.
The algorithm design logic of the robot in this embodiment is as follows: the decision of how the robot should execute a task is split into two parts. A learning algorithm trained on big data outputs the available high-level motion instructions, which represent what the robot can do in the current environment. Meanwhile, the robot's common multi-modal data (mainly from its sensors, such as images, robot state, and scene environment information) are solidified into executable steps, such as positioning instructions, motion instructions, and pick-and-place instructions. The two are then combined by means of a value function to jointly decide which instruction to select for actual execution. In this way, the robot can better generalize its original capabilities to new scenarios.
As shown in fig. 1, the task "how do you turn the screw into the screw hole?" is input to the robotic system. Learning-based prior intervention is performed: the task is decomposed into the working sequence (steps) most suitable for the robot, and the task target of each stage step is established by combining machine vision data:
1. pick up the screw (task target: the X, Y, Z spatial coordinate position of the screw);
2. move the screw (task target: the screw center point and the X, Y, Z spatial coordinate position of the object profile);
3. find the screw hole (task target: the X, Y, Z spatial coordinate position of the screw hole);
4. tighten the screw (task target: the tightening data of the screw, e.g., 26 turns indicates the fully tightened state).
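The decomposition above can be encoded as a simple list of task points. The `TaskPoint` structure and the coordinate values are assumed for illustration; only the four actions and the 26-turn tightening threshold come from the description.

```python
# Hypothetical encoding of the screw-driving task decomposition.
from dataclasses import dataclass

@dataclass
class TaskPoint:
    action: str
    target: dict  # task target data (coordinates, thresholds, ...)

screw_task = [
    TaskPoint("pick_up_screw",   {"screw_xyz": (0.10, 0.20, 0.05)}),   # assumed coords
    TaskPoint("move_screw",      {"center_xyz": (0.30, 0.20, 0.05)}),  # assumed coords
    TaskPoint("find_screw_hole", {"hole_xyz": (0.30, 0.25, 0.00)}),    # assumed coords
    TaskPoint("tighten_screw",   {"turns_to_tight": 26}),              # from the description
]

for step_number, point in enumerate(screw_task, start=1):
    print(step_number, point.action)
```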
Then, dedicated algorithms are inserted at the specific steps. For example, in the second step of moving the screw, a dedicated path-planning algorithm is started: the robot's observation space (generally a three-dimensional space plus the object to be operated) is converted into a 3D value map, and then a mature path-search algorithm (such as a Probabilistic RoadMap, PRM) searches the 3D value map and generates a usable robot motion path. With the available path, trajectory planning is then performed and the robot's movements are controlled.
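The PRM step can be sketched as below: sample collision-free points from a 3D value map, connect near neighbors into a roadmap, and search it with Dijkstra. The grid, sampling count, connection radius and the toy obstacle are all illustrative assumptions; a production planner would also check edge segments for collisions.

```python
# Hedged sketch of PRM path search over a 3D value map.
import heapq
import math
import random

def build_prm(value_map, n_samples=200, connect_radius=0.3, rng=random.Random(0)):
    """Sample traversable 3D points (value > 0) and connect near neighbors."""
    nodes = []
    while len(nodes) < n_samples:
        p = (rng.random(), rng.random(), rng.random())
        if value_map(p) > 0:            # keep only traversable samples
            nodes.append(p)
    edges = {i: [] for i in range(len(nodes))}
    for i, a in enumerate(nodes):
        for j, b in enumerate(nodes):
            if i != j and math.dist(a, b) <= connect_radius:
                edges[i].append((j, math.dist(a, b)))
    return nodes, edges

def shortest_path(nodes, edges, start_idx, goal_idx):
    """Dijkstra search over the roadmap; returns a point list or None."""
    dist, prev, heap = {start_idx: 0.0}, {}, [(0.0, start_idx)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal_idx:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal_idx not in dist:
        return None
    path, node = [], goal_idx
    while node != start_idx:
        path.append(nodes[node])
        node = prev[node]
    path.append(nodes[start_idx])
    return path[::-1]

# Toy value map: free space everywhere except a slab around z = 0.5.
free = lambda p: 0.0 if 0.45 < p[2] < 0.55 else 1.0
nodes, edges = build_prm(free)
path = shortest_path(nodes, edges, 0, len(nodes) - 1)
print(path is not None)
```

With a seeded generator the roadmap is deterministic; 200 samples with a 0.3 connection radius comfortably connect the unit cube around the thin obstacle slab.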
In the above embodiment, an algorithm layer is set up with an open architecture, accessing the algorithm technologies of other organizations or institutions through SDKs, APIs, licensing, purchase and cooperation, or accessing autonomously developed algorithm modules, to form a large algorithm library that provides algorithm support for every aspect of the robot's work. A task layer is set up and divided into a task database, a task analysis system and task points; it translates a work task given by a human into a task set executable by the robot through big data and AI classification methods, imports the target task into a task analysis method that matches similar tasks for connection, and splits the task set, through built-in reasoning and analysis algorithms, into N specific robot task points or steps that can be executed independently. A matching layer is set up, deploying a dedicated matching algorithm and matching platform and connecting a matching interface to complete the matching of tasks to algorithms. A decision layer is set up to provide decisions on the matching results. The specific steps are as follows:
step (1) developing a special algorithm suitable for fusion of a multi-mode control algorithm of the robot; the learning algorithm comprises a support vector machine, a Bayesian linear analysis algorithm and a deep learning and migration learning algorithm.
Step (2): setting a special task layer to analyze and extract the key points of a task when the algorithm is applied, and obtaining the final task adaptation algorithm through multi-factor decisions at the decision layer. According to the relevance of the task key points, a dedicated matching-degree big data analysis algorithm is designed to find out the optimal and suboptimal algorithms that meet the requirements. If necessary, an algorithm engineer manually adds multiple algorithms, performs weight analysis over them, and evaluates whether fusing them yields a better matching degree. The decision factors include matching degree, cost, and efficiency.
Step (3): solving the problem of flexible work of the device through the learning algorithm, then extracting the task points that do not meet high-precision calculation requirements and adapting corresponding algorithms at the algorithm matching layer, to obtain a multi-mode fusion algorithm combining the learning algorithm and the adaptively matched algorithms.
Step (4): splitting the decision of how the robot should execute a task into two parts: a learning algorithm trained on big data outputs the available high-level motion instructions, which represent what the robot can do in the current environment; meanwhile, the robot's common multi-modal data are solidified into executable steps; the two parts are then combined by means of a value function to jointly decide which instruction is selected for actual execution.
The task layer setting comprises the following steps:
step S1: firstly, establishing a task database, wherein different items are established in the database according to the type of work required to be completed by a robot;
step S2: importing target works into a task database according to classification;
step S3: starting the task analysis system, which has a built-in learning algorithm dedicated to task analysis and is responsible for analyzing or translating the task given by a human into N key points or steps for the robot to execute specifically; the specific method is as follows:
step S31: the deployed task analysis algorithm first executes a training task: taking human work tasks and their analyzed execution key points or steps as samples, the algorithm is repeatedly trained, iterated and improved;
step S32: executing the analysis, i.e., analyzing or translating the task into N points or steps for the robot to execute specifically;
step S33: if the analysis result does not meet the requirement, returning to step S32; otherwise executing step S35;
step S34: if the requirement is still not met after M executions of step S33, applying for manual intervention to generate a result that meets the requirement, then executing step S35, and at the same time feeding the result back to step S31 to supplement or iterate over the algorithm's gaps;
step S35: outputting the analysis result and ending the analysis;
step S4: checking the obtained N execution task points or steps; if any cannot be executed independently by the robot, returning to step S3 for further analysis or splitting; otherwise executing step S5;
step S5: finishing the analysis task and outputting the N task points or steps as the final result; the N task points or steps then complete algorithm matching through the matching layer.
The setting of the matching layer comprises the following steps:
step S11: matching is needed to be executed for N times, and algorithm support is provided for task execution;
step S12: firstly, single task key points/steps are sent to a matching platform;
step S13: starting a matching interface connection algorithm layer and independently executing task points or steps;
step S14: a big data system built in the matching platform is used for recommending a proper algorithm for singly executing task points or steps:
step S15: starting a matching algorithm, calculating a matching result, and executing step S16 if the matching result meets the requirement; otherwise, executing the step S13;
step S16: and after the matching task is completed, providing a decision for the matching result through the decision layer.
The setting of the decision layer comprises the following steps:
when the decision layer provides a decision on the matching result: if the matching result of the matching layer meets the task requirement, the decision is "agree" and the result is output; if the matching result does not meet the task requirement, the decision is "disagree" and the process returns to the matching layer for re-matching.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (8)
1. A multi-mode control algorithm fusion method suitable for autonomous cooperation of a robot, characterized by comprising the following steps:
setting an algorithm layer, adopting an open architecture, and accessing the algorithm technologies of other organizations or institutions through SDKs, APIs, licensing, purchase and cooperation, or accessing autonomously developed algorithm modules, to form a large algorithm library that provides algorithm support for every aspect of the robot's work;
setting a task layer, which is divided into a task database, a task analysis system and task points; translating a work task given by a human into a task set executable by the robot through big data and AI classification methods, importing the target task into a task analysis method that matches similar tasks for connection, and splitting the task set, through built-in reasoning and analysis algorithms, into N specific robot task points or steps that can be executed independently;
setting a matching layer, deploying a dedicated matching algorithm and matching platform, and connecting a matching interface to complete the matching of tasks to algorithms;
setting a decision layer to provide decisions on the matching results; the specific steps are as follows:
step (1): developing special algorithms suitable for fusion into the robot's multi-mode control algorithm;
step (2): setting a special task layer to analyze and extract the key points of a task when the algorithm is applied, and obtaining the final task adaptation algorithm through multi-factor decisions at the decision layer;
step (3): solving the problem of flexible work of the device through the learning algorithm, then extracting the task points that do not meet high-precision calculation requirements and adapting corresponding algorithms at the algorithm matching layer, to obtain a multi-mode fusion algorithm combining the learning algorithm and the adaptively matched algorithms;
step (4): splitting the decision of how the robot should execute a task into two parts: a learning algorithm trained on big data outputs the available high-level motion instructions, which represent what the robot can do in the current environment; meanwhile, the robot's common multi-modal data are solidified into executable steps; the two parts are then combined by means of a value function to jointly decide which instruction is selected for actual execution.
2. The multi-modal control algorithm fusion method suitable for autonomous cooperation of a robot according to claim 1, wherein the task layer setting comprises the steps of:
step S1: firstly, establishing a task database, wherein different items are established in the database according to the type of work required to be completed by a robot;
step S2: importing target works into a task database according to classification;
step S3: starting the task analysis system, which has a built-in learning algorithm dedicated to task analysis and is responsible for analyzing or translating the task given by a human into N key points or steps for the robot to execute specifically; the specific method is as follows:
step S31: the deployed task analysis algorithm first executes a training task: taking human work tasks and their analyzed execution key points or steps as samples, the algorithm is repeatedly trained, iterated and improved;
step S32: executing the analysis, i.e., analyzing or translating the task into N points or steps for the robot to execute specifically;
step S33: if the analysis result does not meet the requirement, returning to step S32; otherwise executing step S35;
step S34: if the requirement is still not met after M executions of step S33, applying for manual intervention to generate a result that meets the requirement, then executing step S35, and at the same time feeding the result back to step S31 to supplement or iterate over the algorithm's gaps;
step S35: outputting the analysis result and ending the analysis;
step S4: checking the obtained N execution task points or steps; if any cannot be executed independently by the robot, returning to step S3 for further analysis or splitting; otherwise executing step S5;
step S5: finishing the analysis task and outputting the N task points or steps as the final result; the N task points or steps then complete algorithm matching through the matching layer.
3. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1 or 2, wherein the setting of the matching layer comprises the steps of:
step S11: matching is needed to be executed for N times, and algorithm support is provided for task execution;
step S12: firstly, single task key points/steps are sent to a matching platform;
step S13: starting a matching interface connection algorithm layer and independently executing task points or steps;
step S14: a big data system built in the matching platform is used for recommending a proper algorithm for singly executing task points or steps:
step S15: starting a matching algorithm, calculating a matching result, and executing step S16 if the matching result meets the requirement; otherwise, executing the step S13;
step S16: and after the matching task is completed, providing a decision for the matching result through the decision layer.
4. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1, wherein the setting of the decision layer comprises the steps of:
when the decision layer provides a decision on the matching result: if the matching result of the matching layer meets the task requirement, the decision is "agree" and the result is output; if the matching result does not meet the task requirement, the decision is "disagree" and the process returns to the matching layer for re-matching.
5. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1, wherein: the step (1) further comprises the steps of:
the learning algorithms comprise support vector machines, Bayesian linear analysis algorithms, and deep learning and transfer learning algorithms.
6. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1, wherein: the step (2) further comprises the following steps:
according to the relevance of the task key points, a dedicated matching-degree big data analysis algorithm is designed to find out the optimal and suboptimal algorithms that meet the requirements.
7. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1, wherein: the step (2) further comprises the following steps:
if necessary, an algorithm engineer manually adds multiple algorithms, performs a weight analysis of them, and evaluates whether fusing the multiple algorithms yields a better matching degree.
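One way such a fusion check could work is weighted voting over the candidate algorithms' predictions, comparing the fused matching degree (accuracy, in this toy setting) against the best single algorithm. This is a sketch under assumptions: the voting scheme, weights, and data are illustrative, not from the patent.

```python
# Sketch of claim 7: fuse several algorithms by weighted voting and check
# whether the fused predictor beats every single algorithm's matching
# degree. Predictions, labels, and weights are toy data.
def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

def weighted_vote(pred_sets, weights):
    fused = []
    for sample in zip(*pred_sets):          # one column of predictions per sample
        tally = {}
        for label, w in zip(sample, weights):
            tally[label] = tally.get(label, 0.0) + w
        fused.append(max(tally, key=tally.get))
    return fused

truth   = [1, 0, 1, 1, 0]
preds_a = [1, 0, 1, 0, 0]   # algorithm A: wrong on sample 4
preds_b = [1, 0, 0, 1, 0]   # algorithm B: wrong on sample 3
preds_c = [0, 0, 1, 1, 0]   # algorithm C: wrong on sample 1
fused = weighted_vote([preds_a, preds_b, preds_c], [1.0, 1.0, 1.0])
best_single = max(accuracy(p, truth) for p in (preds_a, preds_b, preds_c))
print(accuracy(fused, truth), best_single)  # fusion improves the matching degree
```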
8. The multi-modal control algorithm fusion method suitable for autonomous cooperation of robots according to claim 1, wherein: the step (2) further comprises the following steps:
the analysis criteria include matching degree, cost, and efficiency.
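The three criteria of claim 8 can be combined into a single comparable score. The weighting scheme and the cost normalization below are assumptions for illustration; the patent does not specify how the criteria are combined.

```python
# Sketch of claim 8's criteria: combine matching degree, cost, and
# efficiency into one score. Weights and cost inversion are assumptions.
def evaluate(matching, cost, efficiency, weights=(0.5, 0.25, 0.25)):
    # Lower cost is better, so invert it into the [0, 1] range.
    return weights[0] * matching + weights[1] * (1.0 - cost) + weights[2] * efficiency

# A high-matching, moderate-cost candidate vs. a cheaper, weaker match.
print(evaluate(0.9, 0.2, 0.8))  # 0.85
print(evaluate(0.7, 0.1, 0.9))  # 0.80
```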
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311579211.0A CN117420760A (en) | 2023-11-24 | 2023-11-24 | Multi-mode control algorithm fusion method suitable for autonomous cooperation of robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117420760A true CN117420760A (en) | 2024-01-19 |
Family
ID=89524959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311579211.0A Pending CN117420760A (en) | 2023-11-24 | 2023-11-24 | Multi-mode control algorithm fusion method suitable for autonomous cooperation of robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117420760A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106094813A (en) * | 2016-05-26 | 2016-11-09 | 华南理工大学 | Model-dependent humanoid robot gait control method based on reinforcement learning
CN108399427A (en) * | 2018-02-09 | 2018-08-14 | 华南理工大学 | Natural interactive method based on multimodal information fusion |
CN108406767A (en) * | 2018-02-13 | 2018-08-17 | 华南理工大学 | Robot autonomous learning method towards man-machine collaboration |
CN110245874A (en) * | 2019-03-27 | 2019-09-17 | 中国海洋大学 | Decision fusion method based on machine learning and knowledge reasoning
CN111444954A (en) * | 2020-03-24 | 2020-07-24 | 广东省智能制造研究所 | Robot autonomous assembly method based on multi-mode perception and learning |
CN111813870A (en) * | 2020-06-01 | 2020-10-23 | 武汉大学 | Machine learning algorithm resource sharing method and system based on unified description expression |
CN114170454A (en) * | 2021-11-04 | 2022-03-11 | 同济大学 | Intelligent voxel action learning method based on joint grouping strategy |
WO2022094746A1 (en) * | 2020-11-03 | 2022-05-12 | 北京洛必德科技有限公司 | Multi-robot multi-task collaborative working method, and server |
CN116442219A (en) * | 2023-03-24 | 2023-07-18 | 东莞市新佰人机器人科技有限责任公司 | Intelligent robot control system and method |
CN116976306A (en) * | 2023-08-01 | 2023-10-31 | 珠海市卓轩科技有限公司 | Multi-model collaboration method based on large-scale language model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Labbé et al. | Monte-carlo tree search for efficient visually guided rearrangement planning | |
Peralta et al. | Next-best view policy for 3d reconstruction | |
Hsu et al. | A knowledge-based engineering system for assembly sequence planning | |
CN112109079A (en) | Method and system for robot maneuver planning | |
Wang et al. | Perception of demonstration for automatic programing of robotic assembly: framework, algorithm, and validation | |
CN115605326A (en) | Method for controlling a robot and robot controller | |
CN111753696A (en) | Method for sensing scene information, simulation device and robot | |
Zhang et al. | Automatic assembly simulation of product in virtual environment based on interaction feature pair | |
Sakane et al. | Illumination setup planning for a hand-eye system based on an environmental model | |
Bonato et al. | Ultra-low power deep learning-based monocular relative localization onboard nano-quadrotors | |
Rajvanshi et al. | Saynav: Grounding large language models for dynamic planning to navigation in new environments | |
Li et al. | Dynamic scene graph for mutual-cognition generation in proactive human-robot collaboration | |
Wang et al. | Simulation and deep learning on point clouds for robot grasping | |
Zhang et al. | A posture detection method for augmented reality–aided assembly based on YOLO-6D | |
Bonsignorio et al. | Deep learning and machine learning in robotics [from the guest editors] | |
CN117420760A (en) | Multi-mode control algorithm fusion method suitable for autonomous cooperation of robot | |
Liu et al. | A human-robot collaboration framework based on human motion prediction and task model in virtual environment | |
Wang et al. | An environment state perception method based on knowledge representation in dual-arm robot assembly tasks | |
Liu et al. | An augmented reality-assisted interaction approach using deep reinforcement learning and cloud-edge orchestration for user-friendly robot teaching | |
Liu et al. | RealDex: Towards Human-like Grasping for Robotic Dexterous Hand | |
Chang et al. | An implementation of reinforcement learning in assembly path planning based on 3D point clouds | |
Hwang et al. | Primitive object grasping for finger motion synthesis | |
Wu | Investigation of different observation and action spaces for reinforcement learning on reaching tasks | |
Zhu et al. | Multi-level Reasoning for Robotic Assembly: From Sequence Inference to Contact Selection | |
Bai et al. | Strategy with machine learning models for precise assembly using programming by demonstration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||