CN115213885A - Robot skill generation method, device and medium, cloud server and robot control system - Google Patents


Info

Publication number
CN115213885A
Authority
CN
China
Prior art keywords
skill
robot
skills
atomic
task
Prior art date
Legal status
Granted
Application number
CN202110729756.XA
Other languages
Chinese (zh)
Other versions
CN115213885B (en)
Inventor
何博文
王斌
黄晓庆
马世奎
Current Assignee
Cloudminds Beijing Technologies Co Ltd
Original Assignee
Cloudminds Beijing Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Beijing Technologies Co Ltd
Priority to CN202110729756.XA (granted as CN115213885B)
Priority to PCT/CN2021/136891 (published as WO2023273178A1)
Publication of CN115213885A
Application granted
Publication of CN115213885B
Legal status: Active


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The disclosure relates to a robot skill generation method, a device, a medium, a cloud server and a robot control system. The method comprises the following steps: splitting a preset robot task into a plurality of subtasks; determining, from a skill blueprint, a plurality of target atomic skills corresponding to each subtask, wherein the skill blueprint comprises skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities; for each subtask, generating a molecular skill for completing the subtask by combining the plurality of target atomic skills corresponding to the subtask; and generating a cellular skill for completing the robot task by combining the molecular skills. Because each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities, atomic skills can be reused when generating molecular skills, which improves the efficiency of robot skill development.

Description

Robot skill generation method, device and medium, cloud server and robot control system
Technical Field
The disclosure relates to the technical field of robot control, in particular to a robot skill generation method, a device, a medium, a cloud server and a robot control system.
Background
Robot skill control endows a robot with human-like operation skills, so that the robot can autonomously plan and generate control instructions for complex environments and tasks; it is one of the foundations of robot intelligence. In the related art, at a robot client, a target skill is determined from a candidate skill list corresponding to a target skill package, a target skill instance corresponding to the target skill is created, and target skill configuration is performed on the target skill instance.
Disclosure of Invention
The present disclosure aims to provide a robot skill generation method, device, medium, cloud server and robot control system, so as to solve the problem of low efficiency in robot skill development.
In order to achieve the above object, a first aspect of the present disclosure provides a robot skill generating method, including:
splitting a preset robot task into a plurality of subtasks;
determining a plurality of target atomic skills corresponding to each subtask from a skill blueprint, wherein the skill blueprint comprises skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling the action ability of the robot;
for each subtask, generating a molecular skill for completing the subtask by combining the plurality of target atomic skills corresponding to the subtask;
generating cellular skills for completing the robotic task by combining each of the molecular skills.
Optionally, the splitting a preset robot task into multiple subtasks includes:
determining a plurality of functional components required by the robot to execute the robot task and actions required by each functional component;
and performing action splitting on the robot task according to the functional components to obtain a plurality of subtasks which are in one-to-one correspondence with the functional components, wherein each subtask is used for completing the action to be executed by the corresponding functional component.
Optionally, the action to be performed by each of the functional components includes a plurality of sub-actions, and, for each of the subtasks, generating a molecular skill for completing the subtask by combining the plurality of target atomic skills corresponding to the subtask includes:
for each subtask, determining a calling sequence of each target atomic skill in a plurality of target atomic skills corresponding to the subtask according to an execution sequence of each sub-action of the functional component corresponding to the subtask;
and combining the plurality of target atomic skills according to the calling sequence to generate the molecular skills for completing the subtasks, wherein the plurality of sub-actions correspond to the plurality of target atomic skills one to one.
Optionally, the generating cellular skills for completing the robotic task by combining each of the molecular skills comprises:
determining a combined order between the molecular skills according to an execution order between the functional components;
combining each of the molecular skills according to the combined order to generate a cellular skill for completing the robotic task.
Optionally, the skill blueprint includes skill sets in one-to-one correspondence with the task types of tasks that the robot can complete, and the cellular skills, molecular skills and atomic skills corresponding to each task type are stored hierarchically in the corresponding skill set.
Optionally, the method further comprises:
for new sub-actions expected to be completed by the robot, respectively calling an action control model of the robot through a plurality of skill parameter samples;
evaluating the matching degree of each sub-action actually completed by the robot and the new sub-action expected to be completed by the robot;
and constructing a new minimum skill unit in the skill blueprint according to the skill parameter sample corresponding to the highest matching degree.
In a second aspect of the present disclosure, there is provided a robot skill generating apparatus, the apparatus comprising:
the splitting module is used for splitting a preset robot task into a plurality of subtasks;
a determining module, configured to determine a plurality of target atomic skills corresponding to each of the subtasks from a skill blueprint, where the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each of the atomic skills is a minimum skill unit obtained by decoupling an action capability of the robot;
the combination module is used for generating molecular skills for completing the subtasks by combining the target atomic skills corresponding to the subtasks aiming at each subtask;
a generating module for generating cellular skills for completing the robotic task by combining each of the molecular skills.
Optionally, the splitting module is configured to determine a plurality of functional components required by the robot to perform the robot task, and an action required to be performed by each of the functional components;
and performing action splitting on the robot task according to the functional components to obtain a plurality of subtasks which are in one-to-one correspondence with the functional components, wherein each subtask is used for completing the action to be executed by the corresponding functional component.
Optionally, the combination module is configured to, for each sub-task, determine, according to an execution order of each sub-action of the functional component corresponding to the sub-task, a calling order of each target atomic skill in a plurality of target atomic skills corresponding to the sub-task, when the action to be executed by each functional component includes a plurality of sub-actions;
and combining the plurality of target atomic skills according to the calling sequence to generate the molecular skills for completing the subtasks, wherein the plurality of sub-actions correspond to the plurality of target atomic skills one to one.
Optionally, the generating module is configured to determine a combination order between the molecular skills according to an execution order between the functional components;
combining each of the molecular skills according to the combined order to generate a cellular skill for completing the robotic task.
Optionally, the skill blueprint includes skill sets in one-to-one correspondence with the task types of tasks that the robot can complete, and the cellular skills, molecular skills, and atomic skills corresponding to each task type are stored hierarchically in the corresponding skill set.
Optionally, the generating module is further configured to, for a new sub-action that the robot is expected to complete, respectively invoke an action control model of the robot through multiple skill parameter samples;
evaluating the matching degree of each sub-action actually completed by the robot and the new sub-action expected to be completed by the robot;
and constructing a new minimum skill unit in the skill blueprint according to the skill parameter sample corresponding to the highest matching degree.
In a third aspect of the disclosure, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of the first aspect.
The fourth aspect of the present disclosure provides a cloud server, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of the first aspect.
In a fifth aspect of the present disclosure, a robot control system is provided, where the robot control system includes a robot central controller, and the cloud server of the fourth aspect communicatively connected to the robot central controller;
the robot central controller is used for uploading the acquired image information to the cloud server;
the cloud server is used for, after receiving the image information uploaded by the robot central controller, determining a target robot task from preset robot tasks according to the image information, and sending the cellular skill corresponding to the target robot task to the robot central controller;
the robot central controller is further configured to, in response to receiving the cellular skill sent by the cloud server, complete the target robot task by calling the skill parameters of each atomic skill in the cellular skill.
Optionally, the cloud server is configured to:
inputting the image information into a visual recognition model to obtain a target object output by the visual recognition model;
calculating a confidence between the target object and a task target object in each preset robot task;
and determining a target robot task from the preset robot tasks according to the confidence.
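The confidence-based selection in the three steps above can be sketched as follows. This is a hedged illustration: the function names and the token-overlap similarity standing in for the "confidence" are assumptions, since the disclosure does not fix a concrete metric or model.

```python
# Hypothetical sketch of the cloud server's task selection: compare the object
# returned by the visual recognition model against each preset task's target
# object, and pick the task with the highest confidence.

def select_target_task(recognized_object: str, preset_tasks: dict) -> str:
    """preset_tasks maps task name -> task target object description."""
    def confidence(a: str, b: str) -> float:
        # Toy stand-in for a learned similarity: token overlap ratio.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    return max(preset_tasks,
               key=lambda task: confidence(recognized_object, preset_tasks[task]))

tasks = {"grasp_cup": "water cup", "handshake": "human hand"}
print(select_target_task("red water cup", tasks))  # grasp_cup
```

In a real deployment the similarity would come from the visual recognition model itself (e.g. class scores) rather than string overlap; the argmax-over-confidence structure is the part taken from the text.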
Through the above technical solution, at least the following technical effects can be achieved:
A preset robot task is split into a plurality of subtasks; a plurality of target atomic skills corresponding to each subtask are determined from a skill blueprint, wherein the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities; for each subtask, a molecular skill for completing the subtask is generated by combining the plurality of target atomic skills corresponding to the subtask; and a cellular skill for completing the robot task is generated by combining the molecular skills. Because each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities, atomic skills can be reused when generating molecular skills, and this reuse improves the efficiency of robot skill development.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a robot skill generation method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an implementation of step S11 in fig. 1 according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating an implementation of step S13 in fig. 1 according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating one implementation of step S14 according to an exemplary embodiment.
FIG. 5 is a flow diagram illustrating the construction of a new minimum skill unit in accordance with an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating skills and textual description of a task for a robot to grasp an item, according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating skills and textual description of a robotic handshake task according to an exemplary embodiment.
Fig. 8 is a schematic diagram illustrating the skills and textual description of a task of a robot grasping an item while moving, according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating a robot skill generating apparatus in accordance with an exemplary embodiment.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of the embodiments of the disclosure refers to the accompanying drawings. It should be understood that the detailed description and specific examples are intended to illustrate and explain the present disclosure only, not to limit it.
It should be noted that in the present disclosure, the terms "S11", "S51", and the like in the description and claims and drawings are used for distinguishing method steps and are not necessarily to be construed as describing a particular order of execution.
The following provides a detailed description of embodiments of the present disclosure.
Fig. 1 is a flowchart illustrating a robot skill generation method according to an exemplary embodiment. The method may be applied to a robot terminal, for example a robot central controller, or to a cloud server; the embodiment of the present disclosure takes application to a cloud server as an example. As shown in fig. 1, the robot skill generation method includes the following steps.
In step S11, a preset robot task is split into a plurality of subtasks.
In step S12, a plurality of target atomic skills corresponding to each subtask are determined from a skill blueprint, where the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling an action capability possessed by the robot.
In one embodiment, the plurality of target atomic skills corresponding to each subtask may be determined from the skill blueprint based on a manual selection operation by a developer. For example, the skill blueprint shows the name of each atomic skill and a textual description of its function, so that the developer can select according to the textual descriptions; the atomic skills selected by the developer are determined to be the target atomic skills.
Since each minimum skill unit is obtained by decoupling the robot's action capabilities, the minimum skill units are mutually independent and share no common API (Application Programming Interface).
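As an illustration of such a decoupled minimum skill unit, each atomic skill can be represented as a self-contained record of its name, its blueprint description, and its own skill parameters. This is a hedged sketch; the class and field names below are assumptions, not the disclosure's data model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AtomicSkill:
    """A minimum skill unit: self-contained, sharing no API with other units."""
    name: str                                         # e.g. "index_finger_bend"
    description: str                                  # shown to developers in the skill blueprint
    skill_params: dict = field(default_factory=dict)  # parameters passed to the control model

    def invoke(self) -> str:
        # In a real system this would send skill_params to the robot's motion
        # controller; here we just report the call.
        return f"invoke {self.name} with {self.skill_params}"

raise_arm = AtomicSkill("raise_upper_arm", "raise the upper arm", {"angle_deg": 45})
print(raise_arm.invoke())
```

Because each unit carries everything it needs, two atomic skills can be developed, stored, and reused independently, which is the property the reuse argument below relies on.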
In step S13, for each subtask, a molecular skill for completing the subtask is generated by combining a plurality of target atomic skills corresponding to the subtask.
When a molecular skill includes more than one atomic skill, different execution orders among the atomic skills produce molecular skills that complete different subtasks. Therefore, for each subtask, the plurality of target atomic skills corresponding to the subtask are combined, for example based on a manual selection operation by the developer, to generate the molecular skill for completing the subtask.
It should be noted that different molecular skills may include the same atomic skill, i.e., the same atomic skill may be invoked when invoking different molecular skills.
In step S14, cellular skills for completing the robot task are generated by combining the respective molecular skills.
In an embodiment of the present disclosure, the skill blueprint includes skill sets corresponding to task types of tasks that the robot can complete one to one, and cellular skills, molecular skills, and atomic skills corresponding to the task types are hierarchically stored in each skill set.
Similarly, a cellular skill may include one or more molecular skills. That is, each preset robot task corresponds to one cellular skill, each cellular skill may include one or more molecular skills, and each molecular skill may include one or more atomic skills.
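The three-level hierarchy just described can be sketched as nested ordered collections. The class names and the list-based representation are assumptions for illustration only.

```python
# A molecular skill is an ordered list of atomic skills; a cellular skill is an
# ordered list of molecular skills. Running a cellular skill flattens the
# hierarchy into one atomic-skill call sequence.

class MolecularSkill:
    def __init__(self, name, atomic_skills):
        self.name = name
        self.atomic_skills = list(atomic_skills)  # call order matters

    def run(self):
        return list(self.atomic_skills)           # atomic skill names, in call order

class CellularSkill:
    def __init__(self, name, molecular_skills):
        self.name = name
        self.molecular_skills = list(molecular_skills)

    def run(self):
        steps = []
        for molecular in self.molecular_skills:   # molecular skills in combination order
            steps.extend(molecular.run())
        return steps

grip = MolecularSkill("grip", ["index_finger_bend", "middle_finger_bend"])
release = MolecularSkill("release", ["index_finger_straighten", "middle_finger_straighten"])
grab = CellularSkill("grab_item", [grip, release])
print(grab.run())
```

Note that `grip` and `release` could be reused unchanged inside any other cellular skill, which is the reuse property the disclosure attributes to this hierarchy.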
By adopting the above technical solution, the preset robot task is split into a plurality of subtasks; a plurality of target atomic skills corresponding to each subtask are determined from the skill blueprint, wherein the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities; for each subtask, a molecular skill for completing the subtask is generated by combining the plurality of target atomic skills corresponding to the subtask; and a cellular skill for completing the robot task is generated by combining the molecular skills. Because each atomic skill is a minimum skill unit obtained by decoupling the robot's action capabilities, atomic skills can be reused when generating molecular skills, and this reuse improves the efficiency of robot skill development.
On the basis of the above embodiment, fig. 2 is a flowchart illustrating an implementation of step S11 in fig. 1 according to an exemplary embodiment; in step S11, splitting the preset robot task into multiple subtasks includes the following steps.
In step S111, a plurality of functional components required by the robot to perform a preset robot task and an action to be performed by each functional component are determined.
For example, for a preset robot task, the plurality of functional components required are determined to include an upper arm, a forearm and a plurality of fingers, and the action to be executed by each of them is determined. Illustratively, the actions to be performed by the upper arm include raising and lowering, the actions to be performed by the forearm include extending and retracting, and the actions to be performed by each finger include bending and straightening.
In step S112, the robot task is divided according to the actions of the plurality of functional components, so as to obtain a plurality of subtasks corresponding to the plurality of functional components one to one, and each subtask is used to complete the action to be executed by the corresponding functional component.
It should be noted that, within the same robot task, the same functional component may execute one or more actions. When the same functional component needs to execute multiple actions, adjacent actions may be regarded as the same subtask, whereas non-adjacent actions must be placed in different subtasks.
For example, if the upper arm needs to perform a raising action first, and, only after the forearm performs a retracting action, the upper arm needs to perform a lowering action, then the raising and lowering of the upper arm must be regarded as different subtasks.
As another example, suppose a preset robot task requires the robot to lower its head and raise it again after 5 seconds. Both lowering and raising the head are performed by the same functional component, the robot's neck, and since the head-lowering and head-raising actions are adjacent, they may be regarded as the same subtask.
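The adjacency rule in the examples above can be sketched as grouping consecutive actions by functional component: adjacent actions of the same component merge into one subtask, while non-adjacent actions of the same component land in separate subtasks. The function name and the `(component, action)` pair encoding are assumptions for illustration.

```python
from itertools import groupby

def split_into_subtasks(action_sequence):
    """action_sequence: list of (functional_component, action) pairs, in
    execution order. Returns one (component, actions) subtask per run of
    consecutive actions belonging to the same functional component."""
    return [(component, [action for _, action in group])
            for component, group in groupby(action_sequence, key=lambda pair: pair[0])]

actions = [("upper_arm", "raise"), ("forearm", "extend"), ("forearm", "retract"),
           ("upper_arm", "lower")]
print(split_into_subtasks(actions))
```

Here the forearm's adjacent extend/retract actions become a single subtask, while the upper arm's raise and lower, separated by the forearm's actions, become two distinct subtasks, matching the upper-arm example above.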
Based on the foregoing embodiment, the action to be performed by each functional component includes a plurality of sub-actions. Fig. 3 is a flowchart illustrating an implementation of step S13 in fig. 1 according to an exemplary embodiment; in step S13, for each subtask, generating a molecular skill for completing the subtask by combining the plurality of target atomic skills corresponding to the subtask includes the following steps.
In step S131, for each subtask, a calling order of each of the plurality of target atomic skills corresponding to the subtask is determined based on an execution order of each of the sub actions of the functional block corresponding to the subtask.
Continuing the embodiment of fig. 2, suppose a subtask requires multiple fingers, where the finger actions correspond one to one to the sub-actions. The order of the actions of each finger determines the execution order of the sub-actions, from which the calling order of each target atomic skill can be determined.
For example, for a subtask that uses the index finger and the middle finger, if it is determined that the index finger performs its bending motion before the middle finger does, then the target atomic skill "index finger bend" is called before the target atomic skill "middle finger bend".
In step S132, a plurality of target atomic skills are combined according to the calling order, and a molecular skill for completing the subtask is generated, where a plurality of sub actions correspond to the plurality of target atomic skills one to one.
The plurality of target atomic skills are combined in the calling order to generate the molecular skill for completing the subtask.
On the basis of the above embodiments, fig. 4 shows an implementation of step S14, in which a cellular skill for completing the robot task is generated by combining the molecular skills; it includes the following steps.
In step S141, the order of combination between the respective molecular skills is determined according to the order of execution between the respective functional parts.
In step S142, the molecular skills are combined according to the combination order to generate a cellular skill for completing the robot task.
Continuing the embodiment of fig. 2, the execution order among the functional components is: the upper arm performs the raising action; after the upper arm has been raised, the forearm performs the extending action; after the forearm has been extended, the fingers perform the bending action; after the fingers have bent, the forearm performs the retracting action; after the forearm has been retracted, the upper arm performs the lowering action; and after the upper arm has been lowered, the fingers perform the straightening action.
Accordingly, the combination order between the molecular skills is determined to be: upper arm raising, forearm extending, finger bending, forearm retracting, upper arm lowering, and finger straightening. The molecular skills are combined in this order to generate the cellular skill for completing the robot task.
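The combination order worked out above can be written down directly as ordered data; executing the cellular skill then means running the molecular skills strictly in that order. The molecular-skill names are assumptions for illustration.

```python
# The grasp task's cellular skill as an ordered list of molecular skills,
# following the execution order of the functional components.
GRASP_CELLULAR_SKILL = [
    "upper_arm_raise",
    "forearm_extend",
    "fingers_bend",
    "forearm_retract",
    "upper_arm_lower",
    "fingers_straighten",
]

def execute(cellular_skill):
    # Execute molecular skills strictly in the combined order; here we just
    # render the sequence instead of driving real hardware.
    return " -> ".join(cellular_skill)

print(execute(GRASP_CELLULAR_SKILL))
```

Representing the combination as plain ordered data keeps each molecular skill reusable: a different cellular skill is just a different list over the same names.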
Based on the above embodiments, fig. 5 is a flowchart illustrating a method of building a new minimum skill unit according to an exemplary embodiment, as shown in fig. 5, the method includes the following steps.
In step S51, for a new sub-action that the robot is expected to complete, the action control model of the robot is called through the multiple skill parameter samples, respectively.
For example, if the robot is expected to complete a new sub-action of raising its hand by 15 degrees, the robot's hand-raising action control model is invoked with each of a plurality of skill parameter samples.
In step S52, the matching degree of each sub-action actually completed by the robot and a new sub-action expected to be completed by the robot is evaluated.
For example, a developer may manually score each sub-action actually completed, and the matching degree with the new sub-action expected of the robot is then determined from the manual scores.
In step S53, a new minimum skill unit is constructed in the skill blueprint according to the skill parameter sample corresponding to the highest matching degree.
Constructing a new minimum skill unit in the skill blueprint yields a new atomic skill. The new atomic skill is stored in the skill blueprint and can be used to generate new molecular skills, so the reusability of atomic skills improves the efficiency of generating the robot's molecular and cellular skills, and thereby further improves the efficiency of robot skill development.
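Steps S51 to S53 can be sketched as a search over skill parameter samples: call the action control model with each sample, score the resulting sub-action against the expected one, and keep the best sample as the new minimum skill unit. This is a hedged sketch; the control model and the scoring function below are toy stand-ins (in the disclosure, the matching degree may come from a developer's manual scoring).

```python
def build_new_atomic_skill(param_samples, control_model, score, blueprint):
    """Call the robot's action control model with each skill-parameter sample,
    score how well the actually-completed sub-action matches the expected one,
    and store the best sample as a new minimum skill unit in the blueprint."""
    best = max(param_samples, key=lambda params: score(control_model(params)))
    blueprint.append({"skill_params": best})
    return best

# Toy example: we want a 15-degree hand raise; the stand-in "control model"
# just echoes the commanded angle as the executed angle.
samples = [{"angle_deg": a} for a in (5, 14, 30)]
target = 15
best = build_new_atomic_skill(
    samples,
    control_model=lambda params: params["angle_deg"],  # pretend executed angle
    score=lambda actual: -abs(actual - target),        # higher = closer match
    blueprint=[],
)
print(best)  # {'angle_deg': 14}
```

The structure (sample, execute, score, keep the argmax) is the part taken from steps S51–S53; any real system would replace the lambdas with the robot's actual control model and evaluation procedure.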
The robot skill generation method in the present disclosure is explained below by way of a more detailed example.
In one example, a skill generation method for a robot grasping a static item is described. Referring to fig. 6, which shows the atomic skills and textual description of a task of a robot grasping an item, the task is split into a visual sub-action and an action sub-action according to the plurality of functional components required by the preset grasping task.
Further, the target atomic skill "item identification" corresponding to the visual sub-action is determined from the skill blueprint; it is used by the robot to obtain the 3D position of the item. The plurality of target atomic skills corresponding to the action sub-action are also determined from the skill blueprint, comprising: head control (head down), hand raising, holding, and head control (head up). Head control (head down) calls the skill parameters of that atomic skill to control the robot to lower its head; hand raising calls its skill parameters to control the robot to raise its hand to a specified 3D position; holding calls its skill parameters to control the robot to close its 5 fingers to hold the item; and head control (head up) calls its skill parameters to control the robot to raise its head.
Further, for the action sub-action, the plurality of target atomic skills are combined according to the calling order, and a timing sequence as shown in fig. 6 is determined according to the execution order of the visual sub-action and the action sub-action, finally obtaining the control skill of the preset grasping task.
Here, the cellular skill is not shown, since there is only one molecular skill in the grasping task.
In another example, a robot shaking hands with a person is taken as an example. Referring to fig. 7, which shows the skills and textual description of the robot handshake task, the handshake task is divided into visual sub-actions, a moving sub-action, a dialogue sub-action, action sub-actions, and a basic sub-action according to the plurality of functional components required by the preset handshake task.
It should be noted that, since the arm movement includes multiple non-adjacent sub-actions, it is split into three action sub-actions: a first action sub-action of bending down and extending the arm, a second action sub-action of holding and shaking, and a third action sub-action of releasing the hand and retracting the arm. These are therefore assigned different classifications.
Further, the target atomic skills corresponding to the first visual sub-action are determined from the skill blueprint, comprising face recognition, face detection, face attribute detection and counterpart human body detection, wherein face detection detects face orientation, face attribute detection detects gender and age, and counterpart human body detection detects the orientation of and distance to the counterpart's body. The target atomic skills corresponding to the moving sub-action are determined from the skill blueprint, comprising social distance calculation and obstacle-avoiding walking: the social distance is calculated so as to be maintained, and obstacle-avoiding walking avoids obstacles. The target atomic skill corresponding to the dialogue sub-action is determined from the skill blueprint, comprising a greeting for a personalized salutation, e.g., "Good afternoon, sir/madam/kids". The target atomic skill corresponding to the first action sub-action, arm bending and extending, is determined from the skill blueprint; its skill parameters are called to control the robot to bend down and extend its arm. The target atomic skill corresponding to the second visual sub-action, counterpart hand detection, is also determined from the skill blueprint. Thus, during actual control, if the counterpart's hand is detected, the robot extends its arm; if the counterpart's hand is not detected, the robot does not extend its arm, but instead calls the dialogue atomic skill and broadcasts the corresponding greeting.
The target atomic skills corresponding to the second action sub-action are determined from the skill blueprint, including holding and shaking: holding calls the skill parameters of the atomic skill to control the robot's five fingers to bend into a handshake gesture, and shaking calls the skill parameters of the atomic skill to perform gravity compensation so as to control the robot to shake hands in the handshake gesture.
Furthermore, according to the execution order of the classified sub-actions, the timing arrangement shown in the figure is performed and the execution order of each target atomic skill is determined, thereby obtaining the control skill for the preset handshake task.
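The handshake decomposition above can be sketched as data: each classified sub-action maps, in execution order, to its target atomic skills, and the molecular skill is their ordered concatenation. All names and the data model below are illustrative assumptions for demonstration, not the patent's concrete implementation.

```python
# Illustrative sketch of the handshake task decomposition described above.
# Skill names are assumptions; the patent prescribes no concrete data model.

# Each classified sub-action maps, in execution order, to its target atomic skills.
HANDSHAKE_SUBACTIONS = [
    ("visual_1", ["face_recognition", "face_detection",
                  "face_attribute_detection", "human_body_detection"]),
    ("movement", ["compute_social_distance", "obstacle_avoidance_walking"]),
    ("dialog",   ["greeting"]),
    ("action_1", ["bend_and_extend_arm"]),
    ("visual_2", ["detect_opposite_hand"]),
    ("action_2", ["hold", "shake"]),
    ("action_3", ["release_and_retract_arm"]),
]

def build_molecular_skill(subactions):
    """Flatten the classified sub-actions into one ordered call sequence
    (the molecular skill) by concatenating their atomic skills."""
    return [skill for _, skills in subactions for skill in skills]

handshake_calls = build_molecular_skill(HANDSHAKE_SUBACTIONS)
```

The resulting call sequence starts with perception and ends with retracting the arm, matching the timing arrangement described above.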
The above embodiments all concern the robot completing a task statically in a standing position; an embodiment in which the robot moves to grasp an article is described below. Referring to the skill and text description schematic diagram of fig. 8 for the task of the robot grasping an article while moving, the grasping task is divided, according to the plurality of preset functional components required by the task, into a grasping subtask, a navigation subtask, and an approaching subtask.
Further, the grasping subtask is divided into a visual sub-action and an action sub-action. The target atomic skill corresponding to the visual sub-action is determined from the skill blueprint, namely item identification, which is used to obtain the 3D position of the article. The target atomic skills corresponding to the action sub-action are determined from the skill blueprint, including head control (head down), raising the hand, holding, and head control (head up): head control (head down) calls the skill parameters of the atomic skill to control the robot to lower its head; raising the hand calls the skill parameters of the atomic skill to control the robot's hand to move to the specified 3D position; holding calls the skill parameters of the atomic skill to control the robot to close its five fingers and hold the article; and head control (head up) calls the skill parameters of the atomic skill to control the robot to raise its head. The grasping molecular skill is thereby obtained.
Further, for the navigation subtask, the corresponding target atomic skill is determined from the skill blueprint, namely obstacle avoidance walking, which calls the skill parameters of the atomic skill to control the robot to avoid obstacles and navigate to the front of the table on which the target article is located. The navigation molecular skill is thereby obtained.
For the approaching subtask, the corresponding target atomic skills are determined from the skill blueprint, including distance calculation and approaching movement: the distance calculation is used to calculate the distance to the table, and the approaching movement calls the skill parameters of the atomic skill to control the robot to approach the table to a position from which the article can be grasped. The approaching molecular skill is thereby obtained.
Further, the navigation molecular skill, the approaching molecular skill, and the grasping molecular skill are combined in that order to obtain the cellular skill of moving to grasp an article.
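The combination step above can be sketched as follows: a cellular skill is the ordered concatenation of its molecular skills. The skill names below are assumptions for demonstration.

```python
# Illustrative sketch: combine the navigation, approaching and grasping
# molecular skills, in that execution order, into the cellular skill of
# moving to grasp an article. Names are assumptions for demonstration.

navigation_molecular = ["obstacle_avoidance_walking"]
approaching_molecular = ["compute_distance", "approach_movement"]
grasping_molecular = ["item_identification", "head_down",
                      "raise_hand", "hold", "head_up"]

def build_cellular_skill(*molecular_skills):
    """A cellular skill is the ordered concatenation of its molecular skills."""
    return [atomic for molecular in molecular_skills for atomic in molecular]

move_to_grasp = build_cellular_skill(
    navigation_molecular, approaching_molecular, grasping_molecular)
```

Because each molecular skill is an independent, reusable unit, swapping the navigation molecular skill for a different one (e.g., for a different map) leaves the rest of the cellular skill unchanged.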
Based on the same inventive concept, the disclosure also provides a robot skill generation device, which is applied to a cloud server and used for executing the steps of the robot skill generation method provided by the embodiment. Fig. 9 is a block diagram illustrating a robot skill generating apparatus 100 according to an exemplary embodiment, the apparatus 100 including, as shown in fig. 9: a splitting module 110, a determining module 120, a combining module 130, and a generating module 140.
The splitting module 110 is configured to split a preset robot task into multiple subtasks;
a determining module 120, configured to determine a plurality of target atomic skills corresponding to each of the subtasks from a skill blueprint, where the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each of the atomic skills is a minimum skill unit obtained by decoupling an action capability of the robot;
a combination module 130, configured to, for each subtask, generate a molecular skill for completing the subtask by combining the target atomic skills corresponding to the subtask;
a generating module 140 for generating cellular skills for completing the robotic task by combining each of the molecular skills.
Since the atomic skills are the minimum skill units obtained by decoupling the robot's action capabilities, the device can reuse atomic skills when generating molecular skills, and this reuse improves the efficiency of robot skill development.
Optionally, the splitting module 110 is configured to determine a plurality of functional components required by the robot to perform the robot task, and an action to be performed by each of the functional components;
and performing action splitting on the robot task according to the functional components to obtain a plurality of subtasks which are in one-to-one correspondence with the functional components, wherein each subtask is used for completing the action to be executed by the corresponding functional component.
Optionally, the combining module 130 is configured to, for each sub-task, determine, according to an execution order of each sub-action of the functional component corresponding to the sub-task, a calling order of each target atomic skill in a plurality of target atomic skills corresponding to the sub-task when the action to be executed by each functional component includes a plurality of sub-actions;
and combining the plurality of target atomic skills according to the calling sequence to generate the molecular skills for completing the subtasks, wherein the plurality of sub actions correspond to the plurality of target atomic skills one to one.
Optionally, the generating module 140 is configured to determine a combination order between the molecular skills according to an execution order between the functional components;
combining each of the molecular skills according to the combined order to generate a cellular skill for completing the robotic task.
Optionally, the skill blueprint includes skill sets in one-to-one correspondence with the task types of tasks that the robot can complete, and each skill set hierarchically stores the cellular skills, molecular skills, and atomic skills corresponding to its task type.
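The hierarchical storage described above can be sketched as a nested mapping: one skill set per task type, each holding cellular, molecular, and atomic skills. The concrete schema and parameter names are assumptions, not the patent's actual format.

```python
# Illustrative sketch of the skill blueprint's hierarchical storage: one
# skill set per task type, each level of the hierarchy stored under its
# own key. Schema and values are assumptions for demonstration.

skill_blueprint = {
    "grasp_item": {                     # skill set, keyed by task type
        "cellular": ["move_to_grasp"],
        "molecular": ["navigate", "approach", "grasp"],
        "atomic": {                     # atomic skill -> skill parameters
            "hold": {"finger_force": 2.0, "finger_angle": 60},
            "head_down": {"pitch_deg": -30},
        },
    },
    "handshake": {
        "cellular": ["handshake"],
        "molecular": ["perceive", "move", "shake"],
        "atomic": {"shake": {"gravity_compensation": True}},
    },
}

def lookup_skill_params(blueprint, task_type, atomic_skill):
    """Resolve an atomic skill's parameters from the blueprint hierarchy."""
    return blueprint[task_type]["atomic"][atomic_skill]
```

A lookup first selects the skill set by task type and then descends to the atomic level, which keeps atomic skills shared and reusable within a task type.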
Optionally, the generating module 140 is further configured to, for a new sub-action that the robot is expected to complete, respectively invoke an action control model of the robot through a plurality of skill parameter samples;
evaluating the matching degree of each sub-action actually completed by the robot and the new sub-action expected to be completed by the robot;
and constructing a new minimum skill unit in the skill blueprint according to the skill parameter sample corresponding to the highest matching degree.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that, for convenience and brevity of description, the embodiments described in the specification are all preferred embodiments, and the parts involved are not necessarily essential to the present disclosure. For example, the combining module 130 and the generating module 140 may be implemented as independent devices or as the same device; the present disclosure is not limited thereto.
The disclosed embodiments also provide a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the robot skill generating method of any of the preceding claims.
The embodiment of the present disclosure further provides a cloud server, including:
a memory having a computer program stored thereon;
a processor for executing a computer program in a memory to implement the steps of the robot skill generation method of any preceding claim.
The embodiment of the disclosure also provides a robot control system, which comprises a robot central controller and a cloud server in communication connection with the robot central controller;
the robot central controller can be in communication connection with the cloud server through a mobile communication network such as 4G/5G, and can also be in communication connection with the cloud server through wireless communication modes such as WIFI and Bluetooth.
The robot central controller is used for uploading the acquired image information to the cloud server;
and the cloud server is used for determining a target robot task from preset robot tasks according to the image information after receiving the image information uploaded by the central robot controller, and sending the cell skills corresponding to the target robot task to the central robot controller.
On the basis of the above embodiment, the cloud server is configured to:
inputting the image information into a visual recognition model to obtain a target object output by the visual recognition model;
calculating the confidence coefficient between the target object and a task target object in a preset robot task;
and determining a target robot task from preset robot tasks according to the confidence coefficient.
In the present disclosure, the architecture of the visual recognition model is preferably a deep convolutional neural network architecture from among deep learning models, but may be another deep learning model architecture; the present disclosure is not particularly limited in this respect.
It is worth explaining that deep learning refers to machine learning based on artificial neural networks. Unlike traditional machine learning, a deep learning model needs to be trained with a large number of manually labeled samples in order to obtain a model with high accuracy. A deep learning model usually comprises a multilayer artificial neural network, and the fitting capability of the network can be improved by increasing its depth and breadth, thereby obtaining a deep learning model with better robustness. The essence of a deep learning model is to learn a mapping f(x) = y; for example, if x is an input image of an apple of some color and size, then y is the output text description of that apple. In the application scenario of the present disclosure, the input x of the visual recognition model is an image captured by the robot through its image sensor, and the output y is the name of the target object recognized in the image.
And the robot central controller is also used for responding to the received cell skills sent by the cloud server and calling skill parameters of all atomic skills in the cell skills to complete the target robot task.
In this embodiment, the skill parameters of the atomic skills may be stored locally in the robot, the cloud server sends the cell skills to the robot central controller, and the robot central controller invokes the skill parameters of each atomic skill in the cell skills according to the cell skills. Thus, the data volume of cell skills sent by the cloud server to the central robot controller can be reduced.
The skill parameters of the atomic skills can also be stored in the cloud server, the cell skills sent by the cloud server include the cell skills and the corresponding skill parameters of the atomic skills, and the robot may not locally store the skill parameters of the atomic skills. In this way, the developer can update the atomic skills, such as updating parameters, newly adding the minimum skill unit, and the robot is not required to be updated through traversal.
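The two storage strategies above can be sketched as a parameter-resolution step on the robot: use the local store when the parameters are held on the robot, otherwise use the parameters bundled with the cell skill sent by the cloud server. The interface and names below are assumptions for demonstration.

```python
# Illustrative sketch of the two storage strategies described above.
# Names and the fallback interface are assumptions for demonstration.

LOCAL_PARAMS = {"hold": {"finger_force": 2.0}}  # stored locally on the robot

def resolve_params(atomic_skill, bundled_params=None):
    """Resolve an atomic skill's parameters: from the robot-local store
    when present, otherwise from the parameters bundled with the cell
    skill sent by the cloud server."""
    if atomic_skill in LOCAL_PARAMS:
        return LOCAL_PARAMS[atomic_skill]
    if bundled_params and atomic_skill in bundled_params:
        return bundled_params[atomic_skill]
    raise KeyError(f"no parameters for atomic skill {atomic_skill!r}")

# Cloud-bundle mode: the cell skill carries parameters for new atomic
# skills, so the robot needs no local update when the blueprint changes.
bundle = {"shake": {"gravity_compensation": True}}
```

Local storage minimizes the data volume sent per cell skill; cloud bundling lets developers update parameters or add minimum skill units without updating every robot.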
The robot skill control of the present disclosure is described below in more detail by way of examples.
For example, take the robot statically grasping an article, described with reference to the skill and text description schematic diagram of fig. 6 for the task of the robot grasping an article: the robot's image sensor acquires image information of the article, for example image information of an apple on a desktop, and uploads it to the cloud server through the robot central controller. The cloud server inputs the image information into the visual recognition model to obtain the name of the article output by the model, for example "green apple", and can also acquire the spatial position of the article, thereby determining the target atomic skills.
Further, the cloud server calculates the confidence between the green apple and the task target object in each preset robot task; for example, the confidence between the green apple and the task target "apple" in a preset first robot task is calculated as 90%, and the confidence between the green apple and the task target "watermelon" in a preset second robot task as 20%. The preset robot task with the highest confidence is determined as the target robot task.
Further, the cloud server determines the preset first robot task as the target robot task and sends the cell skill for grasping the green apple to the robot central controller. In response to receiving this cell skill, the robot central controller completes the target robot task by calling the skill parameters of each atomic skill in the cell skill: it first calls the skill parameters of the head control (head down) atomic skill, so that the robot's image sensor collects relevant environment data in the way human eyes observe an article, and then calls the skill parameters of the raise-hand atomic skill so that the robot's palm reaches the article.
Further, after the robot's palm reaches the article, the robot central controller calls the skill parameters of the holding atomic skill, which include the force and angle of finger contraction. Finally, it calls the skill parameters of the head control (head up) atomic skill to raise the robot's head back to its initial position, completing the green-apple grasping task. It will be appreciated that different angles and heights of raising the robot's hand correspond to different atomic skills, and that different finger contraction forces and angles when holding an article also correspond to different atomic skills.
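The execution sequence above can be sketched as the robot central controller invoking each atomic skill of the received cell skill, in order, with its skill parameters. The controller interface and parameter values are assumptions for demonstration.

```python
# Illustrative sketch of the robot central controller executing a received
# cell skill for grasping the green apple. Interface and parameter values
# are assumptions for demonstration.

GRASP_GREEN_APPLE = [
    ("head_down", {"pitch_deg": -30}),          # observe the article
    ("raise_hand", {"target": "item_3d_pos"}),  # move palm to the article
    ("hold", {"finger_force": 2.0, "finger_angle": 60}),
    ("head_up", {"pitch_deg": 0}),              # return head to initial pose
]

class CentralController:
    def __init__(self):
        self.log = []

    def call_atomic_skill(self, name, params):
        # A real controller would drive actuators with `params`;
        # here we just record the invocation order.
        self.log.append(name)

    def execute_cell_skill(self, cell_skill):
        for name, params in cell_skill:
            self.call_atomic_skill(name, params)

ctrl = CentralController()
ctrl.execute_cell_skill(GRASP_GREEN_APPLE)
```

The controller itself stays generic: a different cell skill from the cloud server reuses the same invocation loop with a different ordered list of atomic skills.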
Fig. 10 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a cloud server for performing the steps of the robot skill generating method and performing the operations performed by the cloud server in the robot control system. Referring to fig. 10, electronic device 1900 includes a processor 1922, which can be one or more in number, and memory 1932 for storing computer programs executable by processor 1922. The computer program stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the robot skill generation method steps described above and to perform the operations performed by the cloud-side server in the robot control system described above.
Additionally, electronic device 1900 may also include a power component 1926 and a communication component 1950; the power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable wired or wireless communication of the electronic device 1900. In addition, the electronic device 1900 may also include an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, e.g., Windows Server™, Mac OS X™, Unix™, Linux™, and so on.
In another exemplary embodiment, a computer readable storage medium including program instructions is further provided, and the program instructions, when executed by a processor, implement the steps of the robot skill generation method described above and perform the operations performed by the cloud-end server in the robot control system described above. For example, the computer readable storage medium may be the above-mentioned memory 1932 including program instructions executable by the processor 1922 of the electronic device 1900 to perform the above-mentioned robot skill generating method steps and perform the operations performed by the cloud server in the robot control system.
In another exemplary embodiment, a computer program product is also provided, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the robot skill generating method described above and performing operations performed by a cloud-side server in the robot control system described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within its technical idea, and these simple modifications all fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the various possible combinations are not further described in the present disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (11)

1. A robot skill generation method, comprising:
splitting a preset robot task into a plurality of subtasks;
determining a plurality of target atomic skills corresponding to each subtask from a skill blueprint, wherein the skill blueprint comprises skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling action capacity of the robot;
for each subtask, generating a molecular skill for completing the subtask by combining the plurality of target atomic skills corresponding to the subtask;
generating cellular skills for completing the robotic task by combining each of the molecular skills.
2. The method of claim 1, wherein the splitting of the pre-defined robotic task into a plurality of subtasks comprises:
determining a plurality of functional components required by the robot to execute the robot task and actions required by each functional component;
and performing action splitting on the robot task according to the functional components to obtain a plurality of subtasks which are in one-to-one correspondence with the functional components, wherein each subtask is used for completing the action to be executed by the corresponding functional component.
3. The method of claim 2, wherein the action to be performed by each of the functional components comprises a plurality of sub-actions, and wherein generating, for each of the sub-tasks, a molecular skill for completing the sub-task by combining the plurality of target atomic skills corresponding to the sub-task comprises:
for each subtask, determining a calling sequence of each target atomic skill in a plurality of target atomic skills corresponding to the subtask according to an execution sequence of each sub-action of the functional component corresponding to the subtask;
and combining the plurality of target atomic skills according to the calling sequence to generate the molecular skills for completing the subtasks, wherein the plurality of sub-actions correspond to the plurality of target atomic skills one to one.
4. The method of claim 2, wherein said generating cellular skills for completing said robotic task by combining each of said molecular skills comprises:
determining a combination order between the molecular skills according to an execution order between the functional parts;
combining each of the molecular skills according to the combined order to generate a cellular skill for completing the robotic task.
5. The method of claim 1, wherein the skill blueprint comprises skill sets corresponding to task types of tasks that the robot can complete, and each skill set is hierarchically stored with cellular skills, molecular skills, and atomic skills corresponding to the task type.
6. The method according to any one of claims 1-5, further comprising:
for new sub-actions expected to be completed by the robot, respectively calling an action control model of the robot through a plurality of skill parameter samples;
evaluating the matching degree of each actually completed sub-action of the robot and the new sub-action expected to be completed by the robot;
and constructing a new minimum skill unit in the skill blueprint according to the skill parameter sample corresponding to the highest matching degree.
7. A robotic skill generating apparatus, the apparatus comprising:
the splitting module is used for splitting a preset robot task into a plurality of subtasks;
a determining module, configured to determine a plurality of target atomic skills corresponding to each of the subtasks, from a skill blueprint, where the skill blueprint includes skill parameters of a plurality of atomic skills of the robot, and each atomic skill is a minimum skill unit obtained by decoupling an action capability possessed by the robot;
the combination module is used for generating molecular skills for completing the subtasks by combining the target atomic skills corresponding to the subtasks aiming at each subtask;
a generating module for generating cellular skills for completing the robotic task by combining each of the molecular skills.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
9. A cloud server, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 6.
10. A robotic control system comprising a robotic central controller, the cloud server of claim 9 communicatively connected to the robotic central controller;
the robot central controller is used for uploading the acquired image information to the cloud server;
the cloud server is used for determining a target robot task from preset robot tasks according to the image information after receiving the image information uploaded by the central robot controller, and sending a cell skill corresponding to the target robot task to the central robot controller;
the robot central controller is further configured to, in response to the received cell skills sent by the cloud server, complete the target robot task by calling skill parameters of each atomic skill in the cell skills.
11. The robot control system of claim 10, wherein the cloud server is configured to:
inputting the image information into a visual recognition model to obtain a target object output by the visual recognition model;
calculating the confidence coefficient between the target object and a task target object in the preset robot task;
and determining a target robot task from the preset robot tasks according to the confidence.
CN202110729756.XA 2021-06-29 2021-06-29 Robot skill generation method, device and medium, cloud server and robot control system Active CN115213885B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110729756.XA CN115213885B (en) 2021-06-29 2021-06-29 Robot skill generation method, device and medium, cloud server and robot control system
PCT/CN2021/136891 WO2023273178A1 (en) 2021-06-29 2021-12-09 Method and apparatus for generating robot skills, and medium, cloud server and robot control system.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110729756.XA CN115213885B (en) 2021-06-29 2021-06-29 Robot skill generation method, device and medium, cloud server and robot control system

Publications (2)

Publication Number Publication Date
CN115213885A true CN115213885A (en) 2022-10-21
CN115213885B CN115213885B (en) 2023-04-07

Family

ID=83606559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110729756.XA Active CN115213885B (en) 2021-06-29 2021-06-29 Robot skill generation method, device and medium, cloud server and robot control system

Country Status (2)

Country Link
CN (1) CN115213885B (en)
WO (1) WO2023273178A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101633166A (en) * 2009-07-13 2010-01-27 哈尔滨工业大学深圳研究生院 Restructurable industrial robot
CN109408800A (en) * 2018-08-23 2019-03-01 优视科技(中国)有限公司 Talk with robot system and associative skills configuration method
US20190111568A1 (en) * 2017-10-13 2019-04-18 International Business Machines Corporation Robotic Chef
CN111737492A (en) * 2020-06-23 2020-10-02 安徽大学 Autonomous robot task planning method based on knowledge graph technology
CN112809689A (en) * 2021-02-26 2021-05-18 同济大学 Language-guidance-based mechanical arm action element simulation learning method and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9146546B2 (en) * 2012-06-04 2015-09-29 Brain Corporation Systems and apparatus for implementing task-specific learning using spiking neurons
CN104113731A (en) * 2014-07-15 2014-10-22 大连大学 Remote wireless video monitoring system based on cloud service of Internet of things
CN108333941A (en) * 2018-02-13 2018-07-27 华南理工大学 A kind of robot cooperated learning method of cloud based on mixing enhancing intelligence
GB2584727B (en) * 2019-06-14 2024-02-28 Vision Semantics Ltd Optimised machine learning
CN110364049B (en) * 2019-07-17 2021-03-30 石虹 Professional skill training auxiliary teaching system with automatic deviation degree feedback data closed-loop deviation rectification control and auxiliary teaching method
CN110989382A (en) * 2019-12-06 2020-04-10 河南师范大学 Multifunctional cloud service home robot
CN111424380B (en) * 2020-03-31 2021-04-30 山东大学 Robot sewing system and method based on skill learning and generalization
CN111975769A (en) * 2020-07-16 2020-11-24 华南理工大学 Mobile robot obstacle avoidance method based on meta-learning
CN112882769B (en) * 2021-02-10 2022-12-23 南京苏宁软件技术有限公司 Skill pack data processing method, skill pack data processing device, computer equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101633166A (en) * 2009-07-13 2010-01-27 哈尔滨工业大学深圳研究生院 Restructurable industrial robot
US20190111568A1 (en) * 2017-10-13 2019-04-18 International Business Machines Corporation Robotic Chef
CN109408800A (en) * 2018-08-23 2019-03-01 优视科技(中国)有限公司 Talk with robot system and associative skills configuration method
CN111737492A (en) * 2020-06-23 2020-10-02 安徽大学 Autonomous robot task planning method based on knowledge graph technology
CN112809689A (en) * 2021-02-26 2021-05-18 同济大学 Language-guidance-based mechanical arm action element simulation learning method and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄敏: "家庭智能空间下机器人服务自主认知和技能发育", 《中国硕士学位论文全文数据库 信息科技辑》 *

Also Published As

Publication number Publication date
WO2023273178A1 (en) 2023-01-05
CN115213885B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
JP6921151B2 (en) Deep machine learning methods and equipment for robot grip
CN108873768B (en) Task execution system and method, learning device and method, and recording medium
JP6586243B2 (en) Deep machine learning method and apparatus for robot gripping
KR101945772B1 (en) Methods and systems for generating instructions for a robotic system to carry out a task
US11413748B2 (en) System and method of direct teaching a robot
US11559902B2 (en) Robot system and control method of the same
US11654552B2 (en) Backup control based continuous training of robots
EP3494513B1 (en) Selectively downloading targeted object recognition modules
US20210394362A1 (en) Information processing device, control method, and program
EP3585569B1 (en) Systems, apparatus, and methods for robotic learning and execution of skills
EP2014425A1 (en) Method and device for controlling a robot
US20190390396A1 (en) Robot and clothes folding apparatus including the same
JP2021534988A (en) Systems, equipment, and methods for robot learning and execution of skills
JP2019206041A (en) Robot control device, system, information processing method and program
WO2021258023A1 (en) Robotic intervention systems
US20220314432A1 (en) Information processing system, information processing method, and nonvolatile storage medium capable of being read by computer that stores information processing program
JP7452657B2 (en) Control device, control method and program
CN115213885B (en) Robot skill generation method, device and medium, cloud server and robot control system
Muztoba et al. Instinctive assistive indoor navigation using distributed intelligence
JP2019197441A (en) Learning device, learning method, and learning program
KR20230100101A (en) Robot control system and method for robot setting and robot control using the same
CN117881506A (en) Robot mission planning
EP4389367A1 (en) Holding mode determination device for robot, holding mode determination method, and robot control system
WO2022180788A1 (en) Limiting condition learning device, limiting condition learning method, and storage medium
US20230351197A1 (en) Learning active tactile perception through belief-space control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant