CN117021073A - Robot control method and device, electronic equipment and readable storage medium - Google Patents

Robot control method and device, electronic equipment and readable storage medium

Info

Publication number
CN117021073A
Authority
CN
China
Prior art keywords
action
executed
instruction
target
robot
Prior art date
Legal status
Pending
Application number
CN202310873460.4A
Other languages
Chinese (zh)
Inventor
黄琦
刘思彦
胡志鹏
范长杰
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202310873460.4A
Publication of CN117021073A
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a control method and apparatus for a robot, an electronic device, and a computer-readable storage medium. The method includes: determining the actions to be executed that are required to complete a task to be executed; determining, from preset instruction templates, a target instruction template matching the action to be executed, wherein an instruction template includes the configuration variables on which execution of the corresponding action depends; in response to a parameter configuration instruction for a target configuration variable in the target instruction template, performing parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed; and executing the instantiation instructions according to the arrangement sequence of the actions to be executed, so as to control the robot to execute the task to be executed. The scheme provided by the application can improve the reusability of programs and reduce the difficulty of programming the programs that control the movement of the robot.

Description

Robot control method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of robots, and in particular, to a method and apparatus for controlling a robot, an electronic device, and a computer readable storage medium.
Background
With the development of robotics, robots are replacing manual labor in more and more industries; for example, robots are used in the logistics field for sorting and distribution, and in factories for material transportation. The advent of robots has reduced labor costs and enabled automation, making labor-intensive work more efficient.
At present, the control of a robot is mostly realized by a program: the motion trajectory of the robot is programmed so as to control the robot to perform the corresponding motion. As uncertain factors in application scenarios increase, the robot may correspond to a plurality of different motion trajectories in practical applications; for example, the robot may need to transport objects from different starting positions to different delivery positions. Once the motion trajectory of the robot needs to be changed, the program often has to be redesigned for the new trajectory, so the reusability of the program is poor. As the motion trajectories of the robot in an application scenario become more and more complex, programming becomes increasingly difficult, a great deal of time and effort is required to maintain the program, and the user's experience when using the robot suffers.
Disclosure of Invention
The application provides a control method and apparatus for a robot, an electronic device, and a computer-readable storage medium, which can improve the reusability of programs and reduce the difficulty of programming the programs that control the movement of the robot. The specific scheme is as follows:
in a first aspect, an embodiment of the present application provides a method for controlling a robot, including:
determining the actions to be executed that are required to complete a task to be executed;
determining, from preset instruction templates, a target instruction template matching the action to be executed, wherein an instruction template includes the configuration variables on which execution of the corresponding action depends;
in response to a parameter configuration instruction for a target configuration variable in the target instruction template, performing parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
and executing the instantiation instructions according to the arrangement sequence of the actions to be executed, so as to control the robot to execute the task to be executed.
In a second aspect, an embodiment of the present application provides a control device for a robot, including:
a first determining unit, configured to determine the actions to be executed that are required to complete a task to be executed;
a second determining unit, configured to determine, from preset instruction templates, a target instruction template matching the action to be executed, wherein an instruction template includes the configuration variables on which execution of the corresponding action depends;
a parameter configuration unit, configured to, in response to a parameter configuration instruction for a target configuration variable in the target instruction template, perform parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
and an instruction execution unit, configured to execute the instantiation instructions according to the arrangement sequence of the actions to be executed, so as to control the robot to execute the task to be executed.
In a third aspect, the present application also provides an electronic device, including:
a processor; and
a memory for storing a data processing program, wherein after the electronic device is powered on, the processor executes the program to perform the method according to the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a data processing program for execution by a processor to perform the method of the first aspect.
Compared with the prior art, the application has the following advantages:
According to the control method of the robot provided by the application, the actions to be executed that are required to complete a task to be executed are determined; a target instruction template matching the action to be executed is determined from preset instruction templates, wherein an instruction template includes the configuration variables on which execution of the corresponding action depends; in response to a parameter configuration instruction for a target configuration variable in the target instruction template, parameter configuration is performed on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed; and the instantiation instructions are executed according to the arrangement sequence of the actions to be executed, so as to control the robot to execute the task to be executed.
Because every task executed by the robot is composed of executed actions, a task to be executed of any complexity can be composed of a plurality of ordered actions to be executed, for example the process action 1→action 2→action 3→action 4→action 5. Instruction templates are preset in the application, so a target instruction template that matches each action to be executed and contains the configuration variables on which execution of the corresponding action depends can be determined from the preset instruction templates, and parameter configuration can be performed on the target configuration variables in the determined target instruction template to obtain the instantiation instruction corresponding to the action to be executed. Since the task to be executed is composed of the actions to be executed, the instructions for controlling the robot to execute the task to be executed are composed of the instantiation instructions corresponding to those actions to be executed.
Therefore, the control method of the robot provided by the application can quickly obtain the instantiation instruction corresponding to an action to be executed by performing parameter configuration on the instruction template corresponding to that action, and quickly execute the instantiation instructions according to the arrangement sequence of the actions to be executed. The method can thus improve the reuse rate of programs, reduce programming difficulty, and save the labor and time costs consumed by programming.
Drawings
Fig. 1 is a schematic diagram of a control system of a robot according to an embodiment of the present application;
Fig. 2 is a flowchart of a control method of a robot according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a front-end page in an example of a control method of a robot according to an embodiment of the present application;
Fig. 4 is a schematic diagram of another front-end page in the control method of the robot according to the embodiment of the present application;
Fig. 5 is a schematic diagram of shelves in an application scenario of a control method of a robot according to an embodiment of the present application;
Fig. 6 is a block diagram showing an example of a control device for a robot according to an embodiment of the present application;
Fig. 7 is a block diagram illustrating an example of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
It should be noted that the terms "first," "second," "third," and the like in the claims, description, and drawings of the present application are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. The data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and their variants are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In labor-intensive industries, a great deal of manpower is required to carry out the work, which brings a correspondingly high labor cost. Therefore, more and more labor-intensive industries use robots instead of manpower; the adoption of robots saves labor cost and also realizes efficient, automated work.
However, in many industries the work is not fixed, so the motion trajectory of the robot needs to be designed according to the actual situation. For example, in the logistics field, the robot may need to move article A from position 1 to position 2, possibly passing several obstacles a, b, c, and so on along the way, and to move article B from position 3 to position 4, possibly passing obstacles c, d, e, and so on. As the elements involved in the work increase in the application scenario, the motion trajectories of the robot become more and more complex, the design of the corresponding programs for controlling the movement of the robot becomes more and more complicated, and each motion trajectory needs its own control program, which increases the programming difficulty in robot control.
For the above reasons, in order to improve the reusability of the program and reduce the programming difficulty of the program for controlling the movement of the robot, the first embodiment of the present application provides a control method of the robot, where the method is applied to an electronic device, and the electronic device may be a desktop computer, a notebook computer, a mobile phone, a tablet computer, a server, a terminal device, or other electronic devices capable of running the control method of the robot.
Before describing the control method of the robot provided by the embodiment of the present application, application of the control method of the robot provided by the embodiment of the present application in a control system of the robot is described below through fig. 1.
As shown in fig. 1, a schematic diagram of a control system of a robot according to an embodiment of the present application is provided, and a control system 100 of the robot includes a terminal device 10 and a robot 20. The terminal device 10 and the robot 20 are connected by a network. Fig. 1 includes the following steps S101 to S105.
Step S101: at the terminal device 10, in response to a selection instruction for a target instruction template, acquiring the selected target instruction template;
Step S102: at the terminal device 10, in response to a parameter configuration instruction for the target instruction template, filling configuration parameters into the corresponding configuration variables to obtain instantiation instructions;
Step S103: in response to a sequence arrangement instruction for the instantiation instructions, sequentially arranging the instantiation instructions according to the arrangement sequence indicated by the sequence arrangement instruction to obtain a continuous action instruction set;
Step S104: transmitting the continuous action instruction set;
Step S105: receiving and executing the continuous action instruction set to control the robot to perform the task.
For a detailed explanation of steps S101 to S105, reference may be made to the following detailed description of the control method of the robot provided in the embodiments of the present application.
As shown in fig. 2, a flowchart of a control method of a robot according to an embodiment of the present application includes the following steps S201 to S204.
Step S201: and determining the action to be performed required for completing the task to be performed.
In this step, the task to be executed is the task that the robot is to execute. The robot in the present application may be, for example, a robot for sorting goods in the logistics field, a robot for transporting materials in a factory, and the like. In the embodiments of the present application, a robot for sorting goods is taken as an example, and accordingly the task to be executed may include at least one of moving goods from one location to another, sorting a plurality of goods in batches, charging after sorting is completed, and the like.
In practical applications, the robot usually loads and unloads goods through mechanical arms, and one robot corresponds to one or more mechanical arms. When the robot has a plurality of mechanical arms, the arm used for gripping goods and the arm used for unloading goods may be the same or different. In practice, corresponding functions can be preset for the mechanical arms, so that an arm supporting the required function can be selected for different requirements.
Every task executed by the robot in the application scenario can be decomposed into single actions, each single action being one action executed by the robot, so a plurality of single actions can be combined into a complete task. Therefore, the task to be executed in the application can be decomposed into at least one action to be executed, and the more complex the task to be executed, the more actions to be executed it is likely to contain. Specifically, the actions to be executed that are required to complete the task to be executed can be determined according to the actual scenario in which the robot executes the task.
As shown in the example of tasks to be executed and actions to be executed provided in Table 1, if the task to be executed by the robot is to transfer goods 1 from shelf a to shelf b and return to the charging point, then the seven actions to be executed by the robot are, respectively: move from the current position to shelf a, load goods 1, move from shelf a to shelf b, lift the mechanical arm, unload goods 1, lower the mechanical arm, and move from shelf b to the charging point.
TABLE 1 Example table of a task to be executed and its actions to be executed

Task to be executed: transfer goods 1 from shelf a to shelf b and return to the charging point
Action 1: move from the current position to shelf a
Action 2: load goods 1
Action 3: move from shelf a to shelf b
Action 4: lift the mechanical arm
Action 5: unload goods 1
Action 6: lower the mechanical arm
Action 7: move from shelf b to the charging point
In practical applications, every task executed by the robot in the application scenario can be split into single actions; that is, the single actions performed by the robot during the execution of a certain task, combined in a certain sequence, constitute the complete execution process of that task.
The aim of this step is to decompose the task to be executed into specific actions to be executed. By decomposing a task of arbitrary complexity into individual actions to be executed, a foundation is laid for simplifying complex tasks.
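As a small illustration of such a decomposition, the task from Table 1 could be represented as an ordered list of actions to be executed; the field names in this sketch are assumptions made for the example and do not reflect an actual data format of the application.

```python
# The task "transfer goods 1 from shelf a to shelf b and return to the charging point",
# decomposed into its ordered actions to be executed.
TASK_TO_BE_EXECUTED = [
    {"order": 1, "action_type": "move",      "end_position": "shelf_a"},
    {"order": 2, "action_type": "load",      "goods": "goods_1"},
    {"order": 3, "action_type": "move",      "end_position": "shelf_b"},
    {"order": 4, "action_type": "lift_arm"},
    {"order": 5, "action_type": "unload",    "goods": "goods_1"},
    {"order": 6, "action_type": "lower_arm"},
    {"order": 7, "action_type": "move",      "end_position": "charging_point"},
]
```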
Step S202: and determining a target instruction template matched with the action to be executed from preset instruction templates.
Wherein an instruction template includes the configuration variables on which execution of the corresponding action depends.
Because the robot executes tasks automatically under program control, each action to be executed corresponding to the task to be executed is controlled by a program, and each single action in the application can be preset with a corresponding instruction template, which is an instruction whose configuration variables have not yet been configured with parameters. Accordingly, in step S202, a target instruction template matching each action to be executed can be determined, according to the actions to be executed determined in step S201, from the preset instruction templates. Each action to be executed corresponds to one of the instruction templates, and the instruction templates can be distinguished by different instruction identifiers or different instruction names; the application is not particularly limited in this respect.
Because the configuration variables in each instruction template are the variables on which execution of the corresponding action depends, the target configuration variables in the target instruction template are the variables on which execution of the corresponding action to be executed depends. For example, for a moving action, the configuration variables in the instruction template may be the position coordinates of the movement start point and the movement end point.
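For illustration only, the sketch below shows one possible way to represent such an instruction template as a data structure; the class name, field names, and the Python representation are assumptions made for this example and are not the concrete format used by the application.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class InstructionTemplate:
    """A preset instruction template for one single action.

    Configuration variables are left unfilled (None) until parameter
    configuration turns the template into an instantiation instruction.
    """
    action_type: str                                  # e.g. "move", "load", "unload", "lift_arm"
    config_variables: Dict[str, Any] = field(default_factory=dict)

# A hypothetical template for a moving action: the configuration variables
# are the coordinates of the movement start point and end point.
MOVE_TEMPLATE = InstructionTemplate(
    action_type="move",
    config_variables={"start_position": None, "end_position": None},
)
```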
Step S203: in response to a parameter configuration instruction for a target configuration variable in the target instruction template, performing parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed.
Because the configuration variables in an instruction template have not been configured with parameters, in order to obtain an instruction capable of controlling the robot to execute the action to be executed, parameter configuration can be performed on the determined target instruction template matching the action to be executed, obtaining an instantiation instruction and thereby instantiating the instruction.
Table 2 shows an example of parameter configuration, provided by an embodiment of the present application, when instantiating instructions for moving actions; the actions to be executed whose instructions are instantiated in Table 2 are action 1, action 3 and action 7 in Table 1.
TABLE 2 Example table of parameter configuration for instruction instantiation of moving actions

Action 1: start position = current position, end position = shelf a
Action 3: start position = shelf a, end position = shelf b
Action 7: start position = shelf b, end position = charging point
In this way, parameter configuration can be performed on the target instruction template corresponding to each action to be executed required to complete the task to be executed, the instantiation instruction corresponding to each target instruction template is obtained, and the robot can be controlled to execute the corresponding actions to be executed based on the obtained instantiation instructions.
Step S204: and executing the instantiation instruction according to the arrangement sequence of the actions to be executed so as to control the robot to execute the tasks to be executed.
The aim of this step is to control the robot to perform the task to be performed according to the instantiation instructions. Because the actions to be executed required for completing the tasks to be executed have a certain arrangement sequence, the robot can complete the tasks to be executed by executing each action to be executed according to the arrangement sequence. Therefore, each instantiation instruction also has a certain execution sequence, and the execution sequence of each instantiation instruction is the arrangement sequence of the corresponding actions to be executed.
In this step, the corresponding instantiation instructions may be executed one by one according to the arrangement sequence of the actions to be executed to control the robot to execute the task to be executed. Alternatively, the instantiation instructions may first be arranged according to the arrangement sequence of the actions to be executed to obtain an ordered instantiation instruction set, and this ordered set is then executed to control the robot to execute the actions to be executed in sequence, thereby accomplishing the task to be executed.
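As a rough illustration of this step, the following sketch arranges instantiation instructions into an ordered set and dispatches them one after another; the instruction representation and the `send_to_robot` callback are hypothetical and only stand in for whatever transport the actual system uses.

```python
from typing import Callable, Dict, List

def build_instruction_set(instantiated: List[Dict]) -> List[Dict]:
    """Arrange instantiation instructions by the arrangement sequence
    of their actions to be executed (assumed to be stored as 'order')."""
    return sorted(instantiated, key=lambda ins: ins["order"])

def execute_task(instruction_set: List[Dict],
                 send_to_robot: Callable[[Dict], bool]) -> bool:
    """Execute the ordered instruction set; stop if any instruction fails."""
    for instruction in instruction_set:
        if not send_to_robot(instruction):
            return False          # the robot could not complete this action
    return True                   # the task to be executed is complete
```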
According to the control method of the robot provided by the application, the actions to be executed that are required to complete a task to be executed are determined; a target instruction template matching the action to be executed is determined from preset instruction templates, wherein an instruction template includes the configuration variables on which execution of the corresponding action depends; in response to a parameter configuration instruction for a target configuration variable in the target instruction template, parameter configuration is performed on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed; and the instantiation instructions are executed according to the arrangement sequence of the actions to be executed, so as to control the robot to execute the task to be executed.
Because every task executed by the robot is composed of executed actions, a task to be executed of any complexity can be composed of a plurality of ordered actions to be executed, for example the process action 1→action 2→action 3→action 4→action 5. Instruction templates are preset in the application, so a target instruction template that matches each action to be executed and contains the configuration variables on which execution of the corresponding action depends can be determined from the preset instruction templates, and parameter configuration can be performed on the target configuration variables in the determined target instruction template to obtain the instantiation instruction corresponding to the action to be executed. Since the task to be executed is composed of the actions to be executed, the instructions for controlling the robot to execute the task to be executed are composed of the instantiation instructions corresponding to those actions to be executed.
Therefore, the control method of the robot provided by the application can quickly obtain the instantiation instruction corresponding to an action to be executed by performing parameter configuration on the instruction template corresponding to that action, and quickly execute the instantiation instructions according to the arrangement sequence of the actions to be executed. The method can thus improve the reuse rate of programs, reduce programming difficulty, and save the labor and time costs consumed by programming.
Optionally, before step S201, the control method of the robot provided by the present application may further include the following steps:
determining the single actions of the robot in the scenario;
modeling each single action according to the action position at which the single action is executed, to obtain an instruction template corresponding to the single action.
Accordingly, the configuration variables include the action positions for executing the corresponding actions, and the "parameter configuration for the target configuration variables" in step S203 may be implemented by the following steps:
determining a target action position for executing the action to be executed;
and carrying out parameter configuration on the target configuration variable according to the target action position.
In practical applications, if two actions are of the same action type, for example action A and action B are moving actions with different start and end points, say action A is moving from position a with coordinates (1, 2) to position b with coordinates (3, 5), and action B is moving from position c with coordinates (0, 1) to position d with coordinates (6, 8), then since both actions of the robot are moving actions, the control instructions corresponding to action A and action B may differ only in the configuration parameters of the configuration variables.
In order to improve the programming efficiency of the programs that control the robot to execute tasks, the application can provide one instruction template for single actions of the same type, and obtain the instantiation instruction controlling the actual action in the actual scenario by configuring different configuration parameters for the configuration variables in that template. Because one task executed by the robot corresponds to a plurality of single actions, and the actions corresponding to two different tasks may contain the same single action, the corresponding instruction template can be used to program that single action, so that instruction templates can be reused and programming speed is improved.
If the actions to be executed in table 1 are classified according to the action types, action 1, action 3 and action 7 are moving actions, action 2 is loading action, action 4 is lifting action of the mechanical arm, action 5 is unloading action, and action 6 is lowering action of the mechanical arm. Therefore, the target instruction templates of action 1, action 3, and action 7 in table 1 are the same instruction template.
The application designs a corresponding instruction template for each possible single action of the robot in the application scene, wherein the single action comprises one or more of the following single actions: moving, lifting the mechanical arm, lowering the mechanical arm, loading, unloading, avoiding obstacles, lifting the chassis, lowering the chassis, charging, and the like.
Specifically, a single action can be modeled according to the action position at which the single action is executed, obtaining an instruction template that includes configuration variables such as the action position for executing the corresponding action. In combination with the application scenario, the action position for executing a single action can be understood to include at least the position at which execution of the single action starts and the position at which execution stops; for example, the action positions corresponding to a moving action are the movement start position and the movement end position, and the action positions corresponding to unloading are the position at which unloading starts and the position at which unloading ends.
In general, for a single action during which the position of the robot does not change, the position at which the single action starts and the position at which it stops are the same, and the action position for executing the single action is simply its execution position, that is, the action position corresponds to one position. For a single action during which the position of the robot changes, the position at which the single action starts and the position at which it stops are not the same, that is, the action position for executing the single action corresponds to at least two positions.
When the target instruction template is subjected to instruction instantiation, a target action position for executing the action to be executed can be determined according to an actual scene, and parameter configuration is performed on a target configuration variable according to the target action position.
In this way, modeling is first performed for each single action that the robot may perform and the corresponding action positions, to obtain the instruction template corresponding to each possible single action. When facing a task to be executed, parameter configuration of the action positions can then be quickly carried out, according to the actual scenario, on the instruction templates corresponding to the actions to be executed required to complete the task, so that the control instructions corresponding to the task to be executed are obtained efficiently. Setting instruction templates improves the reuse rate of programs, further improves programming efficiency, and saves programming time.
An instruction template can contain an instruction and configuration variables with annotation information. Based on these, a user can quickly determine from the annotation information what a configuration variable describes, which makes the templates easy to write and modify and improves the maintenance efficiency of the instruction templates.
In order to uniformly manage the instruction templates, each instruction template can be stored in an instruction template library, so that for each single action, a corresponding instruction template exists in the instruction template library. And the unified instruction template library is arranged, so that a user can conveniently and intensively maintain each instruction template, and the maintenance efficiency of instructions is improved.
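As a rough sketch, such an instruction template library could be a simple mapping from single-action names to templates whose configuration variables are still unfilled; the action names and field names below are assumptions made for this illustration.

```python
from typing import Any, Dict

# Hypothetical instruction template library: one entry per single action,
# mapping the action type to the unfilled configuration variables it needs.
TEMPLATE_LIBRARY: Dict[str, Dict[str, Any]] = {
    "move":      {"start_position": None, "end_position": None},
    "load":      {"shelf_id": None, "arm_id": None},
    "unload":    {"shelf_id": None, "arm_id": None},
    "lift_arm":  {"arm_id": None},
    "lower_arm": {"arm_id": None},
    "charge":    {"charging_point": None},
}

def get_target_template(action_type: str) -> Dict[str, Any]:
    """Look up the target instruction template matching an action to be executed."""
    # Return a copy so that filling parameters does not modify the library itself.
    return dict(TEMPLATE_LIBRARY[action_type])
```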
It should be noted that, for different tasks, a corresponding single action may also depend on different preconditions; for example, if the robot needs to perform the task of unloading onto shelf B, the single action "lifting the mechanical arm" may depend on "the robot is already located at shelf B", that is, the corresponding action is executed only when the precondition is satisfied. Thus, the configuration variables further include a precondition judgment function for executing the corresponding action, that is, a precondition (also referred to as a constraint) for the corresponding action; whether the corresponding action can be executed can be determined by judging, through the judgment function, whether the precondition is satisfied.
Accordingly, the "parameter configuration for the target configuration variable" in step S203 may include the following steps:
determining a target pre-condition judging function for executing the action to be executed;
and carrying out parameter configuration on the target configuration variable according to the target precondition judging function.
The precondition judgment function can be customized by the user according to the actual scenario. For example, when the robot performs a more complex task, the precondition judgment function may be that the battery level is not less than 50%; when the robot executes a picking task, the precondition judgment function may be that the mechanical arm used for picking is in an idle state; and when the robot performs the task of unloading at a certain location, the precondition judgment function may be that the robot has arrived at that location.
The precondition judgment function is used to judge whether the robot satisfies the basic conditions for executing the corresponding action; when the precondition judgment function is not satisfied, the corresponding instantiation instruction cannot be executed and the corresponding action cannot be completed. By setting precondition judgment functions, when an action fails during execution of the instantiation instructions, the cause of the failure can be analyzed and located.
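As an informal sketch, a precondition judgment function might be represented as a predicate over the robot's current state; the state fields and threshold values below are assumptions made only for illustration.

```python
from typing import Callable, Dict, List

RobotState = Dict[str, object]   # e.g. {"battery": 0.8, "position": "shelf_b", "arm_idle": True}
Precondition = Callable[[RobotState], bool]

def battery_at_least(threshold: float) -> Precondition:
    """Precondition: the battery level is not less than the given threshold."""
    return lambda state: state["battery"] >= threshold

def robot_at(position: str) -> Precondition:
    """Precondition: the robot has arrived at the given action position."""
    return lambda state: state["position"] == position

def can_execute(preconditions: List[Precondition], state: RobotState) -> bool:
    """An instantiation instruction may run only if all its preconditions hold."""
    return all(check(state) for check in preconditions)
```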
Optionally, the precondition judging function includes a space state judging function and/or an accessory state judging function.
Thus, the step of "determining the target precondition judging function when executing the action to be executed" may be realized by the steps of:
determining a space state judgment function according to the execution position, among the target action positions, at which the action to be executed starts; and/or,
determining an accessory state judgment function according to the target identifier corresponding to the accessory used for executing the action to be executed.
The space state judgment function is used to judge whether the robot has reached the execution position, among the action positions, at which execution of the corresponding action starts; the accessory state judgment function is used to judge whether the accessories of the robot used for executing the corresponding action are in a state in which that action can be executed.
For the space state judgment function: after the target action position of the action to be executed is determined, the space state judgment function can be determined automatically according to the execution position at which the action to be executed starts. For example, if action 3 in Table 1 is to move from shelf a to shelf b, the corresponding space state judgment function may be whether the robot has reached shelf a; if action 4 in Table 1 is to lift the mechanical arm, the corresponding space state judgment function may be whether the robot has reached shelf b, where the lifting of the mechanical arm is executed.
The accessory state judgment function may be whether the accessory used is in an available state. If the robot has only one accessory, such as a single mechanical arm, that accessory is the one used for executing the action to be executed, so the accessory state judgment function is determined solely from the identification of that accessory. If the robot has a plurality of accessories, the accessory state judgment function can be set according to the identification of each accessory; when every accessory is in a usable state, the accessory actually used is necessarily usable. Likewise, the accessory state judgment function may be generated automatically.
In the field of logistics robots, the accessories of the robot may include, but are not limited to, at least one of the following: a picking mechanical arm, an unloading mechanical arm, a chassis, and a storage rack.
In this way, when designing an instruction template, the single action can be modeled according to the space state judgment function and/or the accessory state judgment function among the precondition judgment functions. For example, for the unloading template, the conditions of whether the unloading position x has been reached and whether the mechanical arm y used for unloading is available can be set, so that at instruction instantiation the user can fill in the parameters for position x and mechanical arm y according to the actual situation, or they can be determined and filled automatically from the parameters already filled in.
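For illustration, one possible way to derive these two judgment functions automatically from the filled-in parameters is sketched below, reusing the hypothetical `Precondition` and `robot_at` helpers from the earlier sketch; the parameter names `position_x` and `arm_y` mirror the unloading example above and are assumptions.

```python
from typing import Dict, List

def build_preconditions(params: Dict[str, str]) -> List[Precondition]:
    """Derive the precondition judgment functions for an unloading instruction
    from its configured parameters (position_x: unloading position,
    arm_y: identifier of the arm used for unloading)."""
    checks: List[Precondition] = []
    if "position_x" in params:
        # Space state judgment: has the robot reached the unloading position?
        checks.append(robot_at(params["position_x"]))
    if "arm_y" in params:
        # Accessory state judgment: is the unloading arm available?
        arm = params["arm_y"]
        checks.append(lambda state, arm=arm: state.get("available_arms", {}).get(arm, False))
    return checks
```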
Table 3 shows an example of the precondition judgment function corresponding to each action to be executed in Table 1.
TABLE 3 Example table of the precondition judgment function for each action to be executed
Optionally, in the case where the robot includes a plurality of accessories, the configuration variables include the identification of the accessory of the robot used to perform the corresponding action. Accordingly, the "parameter configuration for the target configuration variable" in step S203 may further include the following steps:
determining a target accessory used for executing the action to be executed in the plurality of accessories;
and carrying out parameter configuration on the target configuration variable according to the accessory identification of the target accessory.
In this way, when designing an instruction template, a configuration variable can be set for the mechanical arm used to execute the corresponding action. For example, for the unloading template, the condition of whether the mechanical arm y used for unloading is available can be set, so that at instruction instantiation the user can fill the identification of the arm actually used into the variable y according to the actual situation. Thus, when the robot includes a plurality of accessories, the instruction template matching an action can be parameter-filled according to the accessory used to execute that action in the actual scenario, and the user does not need to write instructions specifying which accessory the action uses, which further improves the programming efficiency of robot control.
In practical application, the user can arrange the sequence corresponding to the instantiation instruction of the corresponding action according to the practical scene. Therefore, optionally, before step S204, the control method of the robot provided by the embodiment of the present application may further include the following steps:
and responding to a drag instruction of the front page for the component corresponding to the instantiation instruction, and acquiring the arrangement sequence of actions to be executed indicated by the drag instruction.
Specifically, a workflow engine can be deployed in the application, and the user arranges the action instructions for the single actions in sequence on the front-end page of the workflow engine according to actual requirements. For example, for the task in Table 1, in which the robot transfers goods 1 from shelf a to shelf b and returns to the charging point, the user may order the seven actions executed in this process on the front-end page; the resulting arrangement, as shown in Table 4, is: move from the current position to shelf a, load goods 1, move from shelf a to shelf b, lift the mechanical arm, unload goods 1, lower the mechanical arm, and move from shelf b to the charging point.
TABLE 4 Example table of the arrangement sequence of the actions to be executed

1. Move from the current position to shelf a
2. Load goods 1
3. Move from shelf a to shelf b
4. Lift the mechanical arm
5. Unload goods 1
6. Lower the mechanical arm
7. Move from shelf b to the charging point
A workflow is an abstract, generalized description of the flow of work and the business rules between its operation steps. A workflow engine is a set of tools that drives workflows: because a workflow is essentially an abstraction of a business process, different classes of business processes form different workflows, and different workflow engines are used to concretely define and implement different classes of workflows.
The front-end page corresponding to the workflow engine is equivalent to a low-code platform. A low-code platform is a development platform that can quickly generate applications with no code or only a small amount of code: through visual front-end development, developers of different experience levels can create web pages and mobile applications through a graphical user interface, using drag-and-drop components and model-driven logic. Therefore, the user can orchestrate the flow of the instantiation instructions on the front-end page corresponding to the workflow engine by dragging components.
Specifically, on the front-end page corresponding to the workflow engine, each instantiation instruction is abstracted into a component and the transitions between instantiation instructions are abstracted into a flow. Visually arranging the components corresponding to the instantiation instructions enables rapid orchestration of their flow, reduces the difficulty of orchestrating the flow of multiple instantiation instructions, saves cost, and improves orchestration efficiency.
The instantiation of the instruction templates can be realized by deploying the instruction template engine. The instruction template engine is used for filling the configuration parameters into the corresponding configuration variables to obtain the corresponding instantiation instructions.
In a possible implementation, the instruction template engine page and the workflow engine page correspond to the same front-end page. The user can fill in the configuration parameters of the instruction template of a single action through this front-end page to obtain an instantiated action instruction, and the same front-end page is used to arrange the execution order of the filled instantiated action instructions, obtaining a continuous action instruction set arranged according to the action execution order in the actual scenario.
Fig. 3 is a schematic diagram of a front-end page in an example of the control method of the robot according to an embodiment of the present application. The front-end page 1 may include an instruction template library module, a parameter configuration area module, and an execution order setting module, where the instruction template library module may be obtained by modeling and packaging the instruction template library. The user may trigger a selection instruction for an instruction template displayed in the instruction template library module, so that the selected instruction template is displayed in the parameter configuration area module; the user may then perform parameter configuration on the selected instruction template according to the actual scenario, so that the instruction template engine can fill the configuration variables in the selected template with parameters to obtain the corresponding instantiation instruction. Afterwards, in the execution order setting module, the user can drag the instantiation instructions according to actual requirements to set the execution order, so that the workflow engine can sort the instantiation instructions according to the execution order set by the user to obtain a continuous action instruction set.
The instruction template engine may be nested in the workflow engine, or the workflow engine may call the instruction template engine through a call interface; the application is not particularly limited in this respect.
In another possible implementation, the instruction template engine page and the workflow engine page correspond to different front-end pages. The user fills in the configuration parameters of the instruction template of a single action through the front-end page corresponding to the instruction template engine to obtain an instantiated action instruction, and the front-end page corresponding to the workflow engine arranges the execution order of the filled instantiated action instructions to obtain a continuous action instruction set arranged according to the action execution order in the actual scenario. In this case, the two front-end pages may communicate with each other, and the front-end page corresponding to the instruction template engine may send the filled instantiated action instructions to the front-end page corresponding to the workflow engine.
Fig. 4 is a schematic diagram of another front-end page in the control method of the robot according to the embodiment of the present application, including an interface (a) and an interface (b). The front-end page 2 illustrated by the interface (a) is a front-end page corresponding to the instruction template engine page, the front-end page 3 illustrated by the interface (b) is a front-end page corresponding to the workflow engine page, the front-end page 2 may include an instruction template library module and a parameter configuration area, and a user may perform parameter filling on each configuration variable in the parameter configuration area. The front page 3 may include an instantiation instruction module and an execution order setting module. The details of the front page 2 and the front page 3 may refer to the description of the front page 1 in fig. 3, and will not be described here.
Optionally, when one action is executed only after another action is completed, the precondition judgment function of the later action may be determined from the earlier action. Specifically, before the step of determining the target precondition judgment function for executing the action to be executed, the control method of the robot provided by the embodiments of the present application may further include the following step:
the order in which the actions are to be performed is determined.
The step of determining the target precondition judgment function for executing the action to be executed may then be realized by the following steps:
for the actions to be executed other than the first action to be executed, determining whether a first action among the actions to be executed is in a completed state as the target precondition judgment function corresponding to a second action among those other actions, wherein the second action is triggered after the first action is completed;
and determining the target precondition judgment function corresponding to the first action to be executed according to the execution position of that first action to be executed.
It can be understood that the dependency relationship between the actions to be executed in the task to be executed can be obtained from their arrangement sequence. For example, from the sequence of the seven actions to be executed in Table 4, "move from the current position to shelf a → load goods 1 → move from shelf a to shelf b → lift the mechanical arm → unload goods 1 → lower the mechanical arm → move from shelf b to the charging point", the dependencies are: triggering "load goods 1" depends on "move from the current position to shelf a" having been completed; triggering "move from shelf a to shelf b" depends on "load goods 1" having been completed; triggering "lift the mechanical arm" depends on "move from shelf a to shelf b" having been completed; triggering "unload goods 1" depends on "lift the mechanical arm" having been completed; triggering "lower the mechanical arm" depends on "unload goods 1" having been completed; and triggering "move from shelf b to the charging point" depends on "lower the mechanical arm" having been completed.
In this way, the precondition judgment function of each action to be executed can be determined according to its position in the arrangement sequence, so that the precondition judgment functions are set more efficiently.
In practical applications, the state of the robot can be updated each time a single action is completed during execution, including the states of all the mechanical arms of the robot, the battery level of the robot, the position of the robot, and so on. The precondition judgment function may involve judging at least one of the state of a mechanical arm, the battery level, and the position of the robot.
For the actions to be executed other than the first action to be executed, the state of the robot at the completion of each instantiation instruction is acquired while the instantiation instructions are executed according to the arrangement sequence of the actions to be executed. Whether the target precondition judgment function corresponding to a second action is satisfied is then judged according to the space state and/or accessory state of the robot when the instantiation instruction of the first action is completed, so that the robot is controlled to continue executing the task to be executed only when the target precondition judgment function is satisfied; wherein the second action is triggered after the first action is completed.
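A simplified sketch of this chaining idea is given below: each instruction after the first takes "the previous instruction has completed" as its precondition, while the first instruction's precondition is derived from its execution position. The data layout is an assumption made for illustration only.

```python
from typing import Dict, List

def attach_preconditions(ordered_instructions: List[Dict]) -> List[Dict]:
    """Attach a precondition to each instantiation instruction based on the
    arrangement sequence: the first depends on its execution position, every
    later one depends on completion of the action before it."""
    for index, instruction in enumerate(ordered_instructions):
        if index == 0:
            instruction["precondition"] = {"robot_at": instruction.get("start_position")}
        else:
            previous = ordered_instructions[index - 1]
            instruction["precondition"] = {"completed": previous["action_type"]}
    return ordered_instructions
```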
Since the programming language controlling the robot to perform tasks may be C or C++, in order to adapt to different computer programming languages and to reduce programming difficulty, the present application may use a domain-specific language (DSL) to design the instruction templates.
Accordingly, the instantiation instruction is an instantiation instruction of a DSL type, and the "execute instantiation instruction according to the arrangement sequence of actions to be performed" in step S204 may be implemented as follows:
sequentially arranging the instantiation instructions according to the arrangement sequence of the actions to be executed to obtain a continuous action instruction set;
and parsing the continuous action instruction set to obtain computer-language code, and executing that code.
A DSL (domain-specific language) is a language tailored to a particular domain, such as a specific application, service, or framework; it is written using only the concepts and features relevant to that domain and is parsed only within that domain. Thus, a DSL is a syntax for describing, within a specific field, how a program is generated, its flow structure, and its decision logic. Compared with general computer programming languages, DSLs are easier to understand and can be edited without general programming knowledge. By tailoring a specific grammar to a specific program area, a DSL provides unique productivity and can help users grasp certain usages more quickly. For example, HTML is the DSL used to describe web page hypertext markup, CSS is the DSL used to describe page styles, and SQL is the DSL used to create and query relational databases.
Compared with general computer programming languages, DSLs have the ability to provide structured information. For example, a regular expression only specifies a string pattern, and the regex engine determines whether the current string matches that pattern. Likewise, an SQL statement is not executed by itself: the input SQL statement is processed by the database, which reads the useful information from it and then returns the expected result of the query. General-purpose programming languages, by contrast, are designed to solve problems more abstract than those of a specific domain, and their focus is not limited to one domain. Thus, a DSL has no notion of computation and execution of its own; it does not need to represent computation directly and, when used, only needs to declare rules, facts, and the hierarchies and relationships between certain elements.
Because the flow structure and the judgment-logic grammar of a DSL are easier to understand, it can be edited and understood without general programming knowledge. Thus, by generating computer-language code from DSL-type instruction templates, users do not need to directly write code with strict syntax and logic, which reduces programming difficulty and lowers the requirements on users.
Illustratively, the following is a DSL-type instruction template for the single action of unloading performed by the robot:
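A minimal sketch of what such an unloading template could contain is shown below; for readability it is written here as a Python data structure rather than the application's actual DSL syntax, the field names are assumptions, and "$" marks an unfilled placeholder as in the description that follows.

```python
# Illustrative only: an assumed rendering of the unloading instruction template.
UNLOAD_TEMPLATE = {
    "action_type": "unload",                  # action type enumeration: load / unload / charge / move ...
    "shelf_system_id": "$shelf_system_id",    # system id of the shelf to unload onto ($ marks a placeholder)
    "arm_id": "$arm_id",                      # identifier of the mechanical arm used for unloading
    "preconditions": {
        "robot_at": "$shelf_system_id",       # space state judgment: the robot has reached the shelf
        "arm_available": "$arm_id",           # accessory state judgment: the unloading arm is available
    },
}
```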
the action type enumeration may include any one of loading (load), unloading (unload), charging (charge), moving (move), and the like, for example.
The system id of a shelf is also called the shelf identifier; it can be understood as the shelf's identity card number within the application scenario. Each shelf in the scenario corresponds to a unique identifier, so shelves and their system ids are in one-to-one correspondence. The system id may be a string of characters or a vector; the application is not particularly limited in this respect.
In the unload instruction template of the above example, the instruction template engine may populate actual parameters at each $ placeholder in response to a parameter configuration instruction for the instruction template, and output the parameter-filled result.
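As a minimal sketch of such a filling step, Python's standard string.Template class (which also uses $ placeholders) can substitute concrete parameters into the hypothetical template sketched above; all parameter values here are made up for illustration.

from string import Template

def instantiate(template_text, params):
    # Fill the $ placeholders of a DSL instruction template with concrete parameters.
    return Template(template_text).substitute(params)

# UNLOAD_TEMPLATE is the illustrative template sketched above; the values are illustrative.
unload_instruction = instantiate(UNLOAD_TEMPLATE, {
    "shelf_id": "SHELF-0008",
    "position": "(12.5, 3.0)",
    "layer": "3",
    "precondition": "robot_at((12.5, 3.0))",
    "prompt": "unloading at shelf 8 completed",
})
print(unload_instruction)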
Therefore, "parameter configuration of the target configuration variables of the target instruction template" in step S203 is essentially achieved by:
determining configuration parameters of a target configuration variable according to scene information in the task execution process to be executed;
and filling the target configuration variables of the target instruction templates through the configuration parameters.
Combined with the application scenario, the scene information in the execution process of the task to be executed can be understood as, for example: for a picking action, the location of the shelf from which the goods are to be picked.
It will be appreciated that this can be determined by setting coordinates for each location in the robot's working scene, so that each location point in the scene corresponds to a unique coordinate. In practical applications, a plurality of shelves may correspond to one position, or a plurality of lockers may correspond to one position; in this application, a plurality of shelves corresponding to one position is taken as an example, and the application is not limited thereto.
As shown in fig. 5, the embodiment of the application provides a schematic diagram of shelves in an application scenario of the control method of a robot. Shelves 1 to 5 are located at the same position, shelves 6 to 10 are located at the same position, and shelves 11 to 15 are located at the same position; that is, the coordinates corresponding to shelves 1 to 5 are the same, the coordinates corresponding to shelves 6 to 10 are the same, and the coordinates corresponding to shelves 11 to 15 are the same. The goods to be picked by the robot are located on shelf 8, which is the shelf on layer 3 at the coordinate shared by shelves 6 to 10. When the configuration variables in the instruction template are parameter-configured, the coordinate corresponding to shelf 8 and the layer number of shelf 8 at that position can be configured to obtain an instantiated picking instruction, so that the robot can pick goods from shelf 8 according to the instantiated picking instruction.
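A minimal sketch of how configuration parameters for a picking instruction could be derived from such scene information follows; the coordinate values and the layer-numbering rule are assumptions made only for this illustration.

# Illustrative scene data: shelves 1-15 grouped into three positions, as in the Fig. 5 example.
SHELF_COORDINATES = {
    **{shelf: (0.0, 0.0) for shelf in range(1, 6)},     # shelves 1-5 share one position
    **{shelf: (5.0, 0.0) for shelf in range(6, 11)},    # shelves 6-10 share one position
    **{shelf: (10.0, 0.0) for shelf in range(11, 16)},  # shelves 11-15 share one position
}
# Assumed layer numbering: the five shelves at one position occupy layers 1 to 5.
SHELF_LAYERS = {shelf: (shelf - 1) % 5 + 1 for shelf in range(1, 16)}

def pick_parameters(shelf):
    # Derive the configuration parameters of a picking action from the scene information.
    return {"position": SHELF_COORDINATES[shelf], "layer": SHELF_LAYERS[shelf]}

print(pick_parameters(8))  # {'position': (5.0, 0.0), 'layer': 3}, matching shelf 8 on layer 3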
In a specific embodiment, after the user finishes dragging the instantiation instructions on the front-end page, the back end generates a DSL-type data structure corresponding to a continuous action instruction set with a certain execution order. Because the DSL-type data structure does not conform to the conventions of a computer program language, directly controlling the robot with the DSL-type continuous action instruction set would cause compilation errors. Therefore, to enable the code to be compiled and run normally, in practical applications the DSL-type continuous action instruction set needs to be parsed to generate code of a computer-program-language type.
The code that controls the robot to perform actions is of a type that a computer can recognize, including but not limited to Python, Java, C, and C++. To enable the robot to "understand" the continuous action instruction set, the DSL-type continuous action instruction set may be parsed and code of a computer programming language type generated. Specifically, the inheritance, dependency, and association relationships among the modules in the DSL-type continuous action instruction set can be analyzed, and code conforming to the corresponding syntax rules can be generated from the analysis result.
In a specific embodiment, the abstract syntax tree (Abstract Syntax Tree, abbreviated as AST) may be used to parse the continuous action instruction set of the DSL type, where the abstract syntax tree may represent the abstract syntax structure of the continuous action instruction set of the DSL type in a tree form, and each node on the tree represents a structure in the continuous action instruction set of the DSL type. Other tools may be used in the present application to parse a DSL-type continuous action instruction set, which is not particularly limited by the present application.
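The patent does not fix a concrete parsing implementation. As a rough sketch, assuming the instantiated DSL instructions have already been loaded into simple Python dictionaries, a generator could walk the ordered instruction set and emit computer-program-language code (Python here); robot.move and robot.unload are hypothetical runtime calls, not an API defined by the patent.

# Assumed shape of an instantiated DSL instruction: {"action": ..., "params": {...}}.
CONTINUOUS_INSTRUCTION_SET = [
    {"action": "move",   "params": {"to": (5.0, 0.0)}},
    {"action": "unload", "params": {"shelf": 8, "layer": 3}},
]

def generate_code(instruction_set):
    # Translate the ordered DSL instruction set into Python source for the robot runtime.
    lines = []
    for instruction in instruction_set:
        args = ", ".join(f"{k}={v!r}" for k, v in instruction["params"].items())
        lines.append(f"robot.{instruction['action']}({args})")  # hypothetical robot API
    return "\n".join(lines)

print(generate_code(CONTINUOUS_INSTRUCTION_SET))
# robot.move(to=(5.0, 0.0))
# robot.unload(shelf=8, layer=3)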
By the above technical means, all the actions to be executed required for completing the task to be executed can be determined, the corresponding instantiation instructions can be obtained from the instruction templates corresponding to the actions to be executed, and the instantiation instructions can be arranged according to the execution order of the actions to be executed to obtain a continuous action instruction set. The DSL-type continuous action instruction set is then converted into code that a computer can recognize, and the code controls the robot to execute the corresponding task to be executed.
Therefore, the instantiation instructions obtained from the instruction templates can be assembled into a continuous action instruction set like building blocks, so that task execution by the robot under different scene requirements can be satisfied and diversified requirements can be supported more efficiently. In addition, this approach reduces programming difficulty, improves the reuse rate of the program, and saves programming time.
In the above example, the post-processing may be understood as prompt information that describes the robot's completion of an action; the prompt information is triggered when the robot completes the corresponding action.
In an alternative embodiment, the prompt information may be set in the instruction template so that it is triggered when the robot completes the corresponding action. For example, the instruction template of the movement action may be provided with the prompt "movement action completed", which is triggered when the robot completes the movement from shelf a to shelf b; the instruction template of the unloading action may be provided with the prompt "unloading action completed", which the robot triggers when it finishes unloading at shelf b.
In another alternative embodiment, the prompt information may be customized by the user, or a finer-grained prompt may be generated from the action position that has already been filled in. The configuration variables therefore further include the prompt information, and "parameter configuration of the target configuration variable" in step S203 may further include the following steps:
determining information containing the target action position and the completion of the action to be executed as target prompt information; and carrying out parameter configuration on the target configuration variable according to the target prompt information.
For example, for an unloading action performed by the robot at position a, after the action is completed, the voice "the unloading at position a is completed" may be broadcast, or the message "the unloading at position a is completed" may be fed back to the user's terminal. Thus, the user does not need to pay attention to the robot's state in real time and can know which action the robot has performed simply from the prompt information triggered by the executed action. In addition, when there are many robots in the application scene, the user can quickly grasp which action each robot is currently executing through such specific prompt information.
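A minimal sketch of how such a completion prompt could be composed from the filled-in action position is shown below; the function names and the notification channel are assumptions made only for this illustration.

def make_prompt(action, position):
    # Compose the completion prompt from the action type and its target position.
    return f"the {action} at {position} is completed"

def on_action_completed(action, position, notify):
    # Trigger the prompt when the robot reports that the action has finished;
    # notify stands in for the actual channel (voice broadcast, message to the user's terminal, etc.).
    notify(make_prompt(action, position))

# Example: feed the prompt back to the user's terminal (here simply printed).
on_action_completed("unloading", "position a", print)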
In addition, the instruction template in the control method of the robot provided by the embodiment of the application may be provided with at least one kind of feedback information among the robot's battery level, the robot's abnormal state, and the time at which the corresponding action is completed. The feedback information is generated from the battery level, abnormal state, and completion-time information obtained in real time and is sent to the user.
In addition, in the application, the sequence-arranged continuous action instruction set may be issued to the robot that needs to execute the task to be executed; the robot locally parses it, generates the code, and runs the code, and is thereby controlled to execute the corresponding task to be executed. Therefore, when a large number of robots in the application scene execute different tasks to be executed, having each robot locally parse its own continuous instruction set avoids data congestion and improves the execution efficiency of the tasks to be executed.
Corresponding to the control method of the robot provided in the first embodiment of the present application, the second embodiment of the present application further provides a control device of the robot, as shown in fig. 6, the control device 600 of the robot includes:
a first determining unit 601, configured to determine an action to be performed required for completing a task to be performed;
a second determining unit 602, configured to determine a target instruction template that matches the action to be performed from preset instruction templates, where the instruction template includes a configuration variable on which the corresponding action is performed;
a parameter configuration unit 603, configured to perform parameter configuration on a target configuration variable in the target instruction template in response to a parameter configuration instruction for the target configuration variable, so as to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
the instruction execution unit 604 is configured to execute the instantiation instruction according to the order of the actions to be performed, so as to control the robot to execute the task to be performed.
Optionally, the control device 600 of the robot further includes a modeling unit, where the modeling unit is configured to: determine a single action of the robot in a scene; and model the single action according to the action position at which the single action is executed, to obtain an instruction template corresponding to the single action;
The parameter configuration unit 603 is further configured to: determining a target action position for executing the action to be executed; and carrying out parameter configuration on the target configuration variable according to the target action position.
Optionally, in case the configuration variable further includes a precondition judging function when executing the corresponding action, the parameter configuration unit 603 is further configured to: determining a target pre-condition judging function for executing the action to be executed; and carrying out parameter configuration on the target configuration variable according to the target pre-condition judging function.
Optionally, when the precondition judging function includes a space state judging function for judging whether the robot reaches an execution position where the corresponding action starts to be executed in the action position and/or an accessory state judging function for judging whether an accessory in the robot that executes the corresponding action is in a state where the corresponding action can be executed, the parameter configuration unit 603 is specifically configured to: determining a space state judging function according to the execution position of the action to be executed in the target action position; and/or determining a fitting state judging function according to the target identifier corresponding to the fitting for executing the action to be executed.
Optionally, the second determining unit 602 is further configured to: determining the arrangement sequence of the actions to be executed;
the parameter configuration unit 603 is specifically configured to: determining whether the first action in the actions to be executed is in a completion state or not as a target pre-condition judging function corresponding to the second action in the other actions aiming at other actions except the first action in the actions to be executed; wherein the second action is triggered after the first action is completed; and determining a target pre-condition corresponding to the first action to be executed according to the execution position of the first action to be executed.
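As a rough illustration of how such precondition judging functions could chain consecutive actions (the Action class, its fields, and the chaining helper are assumptions made for this sketch, not the patent's data model):

from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    precondition: Callable[[], bool]  # target precondition judging function
    done: bool = False

def chain_preconditions(actions):
    # Give every action after the first a precondition that checks whether
    # the previous action is already in the completed state.
    for prev, nxt in zip(actions, actions[1:]):
        nxt.precondition = (lambda p=prev: p.done)

# The first action's precondition would check the execution position; here it is a placeholder.
move = Action("move", precondition=lambda: True)
unload = Action("unload", precondition=lambda: False)
chain_preconditions([move, unload])
move.done = True
print(unload.precondition())  # True once the move action has been completed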
Optionally, the control device 600 of the robot further includes an acquisition unit, where the acquisition unit is configured to: and responding to a drag instruction of the front page for the component corresponding to the instantiation instruction, and acquiring the arrangement sequence of the actions to be executed, which is indicated by the drag instruction.
Optionally, in the case that the instruction template is a DSL type instruction template and the instantiation instruction is a DSL type instantiation instruction, the instruction execution unit 604 is specifically configured to: sequentially arranging the instantiation instructions according to the arrangement sequence of the actions to be executed to obtain a continuous action instruction set; and analyzing the continuous action instruction set to obtain computer language codes, and executing the computer language codes.
Optionally, in the case that the configuration variable further includes prompt information triggered when the robot completes the corresponding action, the parameter configuration unit 603 is further specifically configured to: determining information containing the target action position and completing the action to be executed as target prompt information; and carrying out parameter configuration on the target configuration variables according to the target prompt information.
Optionally, the robot comprises a plurality of accessories; in the case that the configuration variables include the identifications corresponding to the accessories of the robot used for executing the corresponding actions, the parameter configuration unit 603 is specifically configured to: determining a target accessory used for executing the action to be executed in the plurality of accessories; and carrying out parameter configuration on the target configuration variable according to the accessory identification of the target accessory.
Optionally, the parameter configuration unit 603 is further specifically configured to: determining configuration parameters of the target configuration variables according to scene information in the execution process of the task to be executed; and filling the target configuration variables of the target instruction templates through the configuration parameters.
Corresponding to the control method of the robot provided in the first embodiment of the present application, the third embodiment of the present application further provides an electronic device for implementing the control method of the robot. As shown in fig. 7, the electronic device 700 includes: a processor 701; and a memory 702 for storing a program of the control method of the robot. After the device is powered on and the program of the control method of the robot is run by the processor, the following steps are performed:
Determining actions to be executed required for completing tasks to be executed;
determining a target instruction template matched with the action to be executed from preset instruction templates, wherein the instruction template comprises configuration variables on which the corresponding action is executed;
responding to a parameter configuration instruction aiming at a target configuration variable in the target instruction template, and carrying out parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
and executing the instantiation instruction according to the arrangement sequence of the actions to be executed so as to control the robot to execute the tasks to be executed.
A fourth embodiment of the present application provides a computer-readable storage medium storing a program of a control method of a robot; when the program is run by a processor, the following steps are performed:
determining actions to be executed required for completing tasks to be executed;
determining a target instruction template matched with the action to be executed from preset instruction templates, wherein the instruction template comprises configuration variables on which the corresponding action is executed;
Responding to a parameter configuration instruction aiming at a target configuration variable in the target instruction template, and carrying out parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
and executing the instantiation instruction according to the arrangement sequence of the actions to be executed so as to control the robot to execute the tasks to be executed.
It should be noted that, for the detailed descriptions of the apparatus, the electronic device, and the computer readable storage medium provided in the second embodiment, the third embodiment, and the fourth embodiment of the present application, reference may be made to the related descriptions of the first embodiment of the present application, and the detailed descriptions are omitted here.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
1. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage media, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
2. It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (14)

1. A method of controlling a robot, the method comprising:
determining actions to be executed required for completing tasks to be executed;
determining a target instruction template matched with the action to be executed from preset instruction templates, wherein the instruction template comprises configuration variables on which the corresponding action is executed;
responding to a parameter configuration instruction aiming at a target configuration variable in the target instruction template, and carrying out parameter configuration on the target configuration variable to obtain an instantiation instruction for controlling the robot to execute the action to be executed;
and executing the instantiation instruction according to the arrangement sequence of the actions to be executed so as to control the robot to execute the tasks to be executed.
2. The method of claim 1, wherein prior to the determining the action to be performed required to complete the task to be performed, the method further comprises:
Determining a single action of the robot in a scene;
modeling the single action according to the action position of executing the single action to obtain an instruction template corresponding to the single action;
the configuration variables include action positions at which the corresponding actions are performed; the parameter configuration of the target configuration variable comprises the following steps:
determining a target action position for executing the action to be executed;
and carrying out parameter configuration on the target configuration variable according to the target action position.
3. The method of claim 2, wherein the configuration variables further comprise a precondition determination function when performing the corresponding action;
the parameter configuration of the target configuration variable further includes:
determining a target pre-condition judging function for executing the action to be executed;
and carrying out parameter configuration on the target configuration variable according to the target pre-condition judging function.
4. A method according to claim 3, characterized in that the precondition judging function comprises a spatial state judging function for judging whether the robot arrives at an execution position in the action position at which execution of the corresponding action is started, and/or an accessory state judging function for judging whether an accessory of the robot executing the corresponding action is in a state in which the corresponding action is executable;
The determining the target precondition judging function for executing the action to be executed comprises the following steps:
determining a space state judging function according to the execution position of the action to be executed in the target action position; and/or the number of the groups of groups,
and determining an accessory state judging function according to the target identifier corresponding to the accessory executing the action to be executed.
5. A method according to claim 3, wherein prior to said determining to perform the target precondition judging function of the action to be performed, the method further comprises:
determining the arrangement sequence of the actions to be executed;
the determining the target precondition judging function for executing the action to be executed comprises the following steps:
determining whether the first action in the actions to be executed is in a completion state or not as a target pre-condition judging function corresponding to the second action in the other actions aiming at other actions except the first action in the actions to be executed; wherein the second action is triggered after the first action is completed;
and determining a target pre-condition corresponding to the first action to be executed according to the execution position of the first action to be executed.
6. The method of claim 1, wherein prior to said executing the instantiation instructions in the order of the actions to be performed, the method further comprises:
and responding to a drag instruction of the front page for the component corresponding to the instantiation instruction, and acquiring the arrangement sequence of the actions to be executed, which is indicated by the drag instruction.
7. The method of claim 1, wherein the instruction template is a DSL-type instruction template, the instantiation instruction is a DSL-type instantiation instruction, and the executing the instantiation instruction in the order of the actions to be performed comprises:
sequentially arranging the instantiation instructions according to the arrangement sequence of the actions to be executed to obtain a continuous action instruction set;
and analyzing the continuous action instruction set to obtain computer language codes, and executing the computer language codes.
8. A method according to claim 3, characterized in that the instruction templates are provided with prompt information describing the completion of the corresponding actions by the robot, which prompt information is triggered when the robot completes the corresponding actions.
9. The method of claim 8, wherein the configuration variables further include the hint information, and the parameter configuring the target configuration variables further includes:
determining information containing the target action position and completing the action to be executed as target prompt information;
and carrying out parameter configuration on the target configuration variables according to the target prompt information.
10. The method of claim 1, wherein the robot comprises a plurality of accessories; the configuration variables comprise identifications corresponding to accessories of the robot used for executing the corresponding actions;
the parameter configuration of the target configuration variable comprises the following steps:
determining a target accessory used for executing the action to be executed in the plurality of accessories;
and carrying out parameter configuration on the target configuration variable according to the accessory identification of the target accessory.
11. The method of claim 1, wherein parameter configuring the target configuration variables of the target instruction template comprises:
determining configuration parameters of the target configuration variables according to scene information in the execution process of the task to be executed;
And filling the target configuration variables of the target instruction templates through the configuration parameters.
12. A control device for a robot, the device comprising:
the first determining unit is used for determining actions to be executed required by completing tasks to be executed;
the second determining unit is used for determining a target instruction template matched with the action to be executed from preset instruction templates, wherein the instruction template comprises configuration variables on which the corresponding action is executed;
the parameter configuration unit is used for responding to a parameter configuration instruction aiming at a target configuration variable in the target instruction template, carrying out parameter configuration on the target configuration variable and obtaining an instantiation instruction for controlling the robot to execute the action to be executed;
and the instruction execution unit is used for executing the instantiation instruction according to the arrangement sequence of the actions to be executed so as to control the robot to execute the tasks to be executed.
13. An electronic device, comprising:
a processor; and
a memory for storing a data processing program, the electronic device being powered on and executing the program by the processor, for performing the method of any of claims 1-11.
14. A computer readable storage medium, characterized in that a data processing program is stored, which program is run by a processor, performing the method according to any of claims 1-11.
CN202310873460.4A 2023-07-17 2023-07-17 Robot control method and device, electronic equipment and readable storage medium Pending CN117021073A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310873460.4A CN117021073A (en) 2023-07-17 2023-07-17 Robot control method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310873460.4A CN117021073A (en) 2023-07-17 2023-07-17 Robot control method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN117021073A true CN117021073A (en) 2023-11-10

Family

ID=88643861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310873460.4A Pending CN117021073A (en) 2023-07-17 2023-07-17 Robot control method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN117021073A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116560640A (en) * 2023-07-05 2023-08-08 深圳墨影科技有限公司 Visual editing system and method based on robot design system



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination