CN116560640A - Visual editing system and method based on robot design system - Google Patents

Visual editing system and method based on robot design system

Info

Publication number
CN116560640A
CN116560640A (application CN202310812694.8A)
Authority
CN
China
Prior art keywords
target
action
determining
configuration
robot
Prior art date
Legal status
Granted
Application number
CN202310812694.8A
Other languages
Chinese (zh)
Other versions
CN116560640B (en)
Inventor
杨一鸣 (Yang Yiming)
刁忍 (Diao Ren)
刘权 (Liu Quan)
陈鹏 (Chen Peng)
Current Assignee
Shenzhen Mo Ying Technology Co ltd
Original Assignee
Shenzhen Mo Ying Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mo Ying Technology Co ltd filed Critical Shenzhen Mo Ying Technology Co ltd
Priority to CN202310812694.8A priority Critical patent/CN116560640B/en
Publication of CN116560640A publication Critical patent/CN116560640A/en
Application granted granted Critical
Publication of CN116560640B publication Critical patent/CN116560640B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/34Graphical or visual programming
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Numerical Control (AREA)

Abstract

The invention provides a visual editing system and method based on a robot design system. The system comprises: an action type determining module, used for analyzing the business project to be executed by the robot, determining the target action set of the robot when executing the business project, and extracting the action identifiers of all target actions in the target action set; a code matching module, used for determining the category of each target action based on its action identifier and matching a target low code block from a preset low code library corresponding to that category based on the action characteristics of the target action; and a visual editing module, used for determining the project requirements of the business project, determining the collaborative logic among the target actions based on the project requirements, and performing collaborative flow configuration and action parameter configuration on the target low code blocks in a visual editing interface based on the collaborative logic. The system ensures the convenience of designing each target action of the robot, improves the accuracy and reliability of robot action editing, and safeguards the robot's operation and working effect.

Description

Visual editing system and method based on robot design system
Technical Field
The invention relates to the technical field of electric digital data processing, in particular to a visual editing system and method based on a robot design system.
Background
At present, with the continuous development of science and technology, more and more intelligent devices appear in people's work and life and provide great convenience; the robot is one such device. Robots can replace people in performing dangerous and tedious fixed actions, freeing up productivity while saving cost.
However, at present a robot can only execute its work tasks with a single, fixed set of actions, and once its internal control program has been designed, that program is difficult to change.
Firstly, because the operation of the robot is controlled by a code program, personnel not engaged in robot research and development can hardly adjust the robot's control codes accurately and effectively according to environmental requirements; secondly, the control codes are large in quantity and complicated, so changing them is cumbersome; and finally, the current composition of the control codes cannot be understood intuitively and effectively, so the position to be adjusted is difficult to locate. This greatly reduces the accuracy and convenience of editing the robot's control codes and at the same time greatly weakens the robot's working and operating effect.
Therefore, in order to overcome the above-mentioned drawbacks, the present invention provides a visual editing system and method based on a robot design system.
Disclosure of Invention
The invention provides a visual editing system and a visual editing method based on a robot design system, which analyze the business project to be executed by the robot to accurately and effectively confirm the target action set the robot requires, effectively match the target low code blocks corresponding to the respective target actions according to the target action set, and finally perform collaborative flow configuration and action parameter configuration on the target low code blocks in a visual editing interface, thereby ensuring the standardization and convenience of designing each target action of the robot, improving the accuracy and reliability of editing the robot's executed actions, and safeguarding the robot's operation and working effect.
The invention provides a visual editing system based on a robot design system, which comprises:
the action type determining module is used for analyzing the service items to be executed by the robot, determining a target action set of the robot when executing the service items, and extracting action identifiers of all target actions in the target action set;
The code matching module is used for determining the category of the target action based on the action identifier and matching the target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
the visual editing module is used for determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and carrying out cooperative flow configuration and action parameter configuration on the target low-code blocks on the visual editing interface based on the cooperative logic.
Preferably, in the visual editing system based on a robot design system, the action type determining module includes:
the project acquisition unit, used for acquiring the service project to be executed by the robot, extracting the project configuration information of the service project, and determining the service branches contained in the service project based on the project configuration information, wherein the number of service branches is at least two;
the behavior determining unit is used for determining the execution steps of the robots contained in the corresponding service branches based on the service states which are required to be achieved by each service branch, and determining the action behaviors corresponding to each execution step of the robots based on the service states;
the action summarizing unit is used for determining a first execution sequence of each business branch and a second execution sequence of the robot execution steps contained in each business branch based on the development logic of the business project, summarizing action behaviors in each business branch based on the first execution sequence and the second execution sequence, and obtaining a target action set of the robot when executing the business project.
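The aggregation performed by the action summarizing unit can be sketched as follows. All names and data structures here are illustrative assumptions, not part of the claimed system: the business branches are ordered by the first execution sequence, and the robot execution steps within each branch by the second execution sequence.

```python
def build_target_action_set(branches):
    """branches: list of dicts with 'order' (first execution sequence) and
    'steps' (list of (step_order, action_behavior) tuples)."""
    actions = []
    # First execution sequence: order the business branches themselves.
    for branch in sorted(branches, key=lambda b: b["order"]):
        # Second execution sequence: order the robot steps within the branch.
        for _, behavior in sorted(branch["steps"], key=lambda s: s[0]):
            actions.append(behavior)
    return actions

branches = [
    {"order": 2, "steps": [(1, "rotate_body"), (2, "move_forward")]},
    {"order": 1, "steps": [(1, "open_gripper"), (2, "grip_item")]},
]
print(build_target_action_set(branches))
# → ['open_gripper', 'grip_item', 'rotate_body', 'move_forward']
```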
Preferably, in the visual editing system based on a robot design system, the action type determining module further includes:
the motion acquisition unit is used for acquiring the obtained target motion set of the robot, respectively determining an execution device main body corresponding to each target motion in the target motion set, and determining the motion characteristic of each execution device main body based on the project requirement of the business project;
the identification determining unit is used for matching the motion characteristics with the reference motion characteristics corresponding to each motion identification in the preset motion identification library and determining the motion identification of each target motion based on the matching result;
and the association unit is used for carrying out association binding on the action identifier and the target action to complete extraction and determination of the action identifier of the target action.
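The identifier-matching step above can be sketched in a few lines. The library contents, field names, and nearest-match rule (smallest numeric distance over shared fields) are all hypothetical stand-ins for the preset action identifier library and its reference motion characteristics:

```python
# Hypothetical preset action identifier library: identifier → reference motion characteristics.
IDENTIFIER_LIBRARY = {
    "ACT_ROTATE": {"angle_deg": 90.0, "distance_mm": 0.0},
    "ACT_TRANSLATE": {"angle_deg": 0.0, "distance_mm": 500.0},
}

def match_identifier(motion_characteristics):
    # Pick the identifier whose reference characteristics are numerically closest.
    def distance(ref):
        return sum(abs(ref[k] - motion_characteristics[k]) for k in ref)
    return min(IDENTIFIER_LIBRARY, key=lambda name: distance(IDENTIFIER_LIBRARY[name]))

print(match_identifier({"angle_deg": 85.0, "distance_mm": 10.0}))  # → ACT_ROTATE
```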
Preferably, in the visual editing system based on a robot design system, the motion acquisition unit includes:
the motion analysis subunit, used for acquiring the motion characteristics of each execution device main body and determining the motion amplitude range of the corresponding target action based on the motion characteristics;
the action verification subunit, used for comparing the motion amplitude range of each target action with a preset range interval, judging a target action whose motion amplitude range is not within the preset range interval to be an abnormal action, and correcting the motion amplitude range of the abnormal action based on the item requirements of the service item and the preset range interval;
and the action screening subunit, used for determining, based on the correction result, the similarity between target actions from the motion characteristics of each execution device main body, judging that two current target actions overlap when the similarity is larger than a preset similarity threshold, and de-duplicating the overlapping target actions to obtain the final target action set.
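The verification and de-duplication described by these subunits can be sketched as follows. The preset range interval, the similarity measure (ratio of the smaller to the larger amplitude), and the threshold value are illustrative assumptions only:

```python
PRESET_RANGE = (0.0, 180.0)   # assumed allowed motion amplitude interval, degrees
SIMILARITY_THRESHOLD = 0.95   # assumed preset similarity threshold

def verify_and_deduplicate(actions):
    """actions: list of (name, motion_amplitude) tuples."""
    lo, hi = PRESET_RANGE
    # Correct abnormal actions by clamping the amplitude into the preset interval.
    corrected = [(n, min(max(a, lo), hi)) for n, a in actions]
    result = []
    for name, amp in corrected:
        # Two actions overlap when their amplitude ratio exceeds the threshold.
        if any(min(amp, a) / max(amp, a) > SIMILARITY_THRESHOLD
               for _, a in result if max(amp, a) > 0):
            continue  # overlapping target action: de-duplicate
        result.append((name, amp))
    return result

print(verify_and_deduplicate([("turn_a", 200.0), ("turn_b", 178.0), ("lift", 45.0)]))
# → [('turn_a', 180.0), ('lift', 45.0)]
```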
Preferably, in the visual editing system based on a robot design system, the code matching module includes:
the action identifier acquisition unit is used for acquiring the action identifier of each target action in the target action set, generating a resource access request based on the action identifier, analyzing the resource access request, determining a target structure of the resource access request, and determining a segment head and a segment tail of the resource access request based on the target structure;
the identity marking unit is used for determining a target marking position based on the segment head and the segment tail, and adding an identity signature of the access terminal to the resource access request based on the target marking position to obtain a target resource access request;
the access unit is used for transmitting the target resource access request to a preset server, authenticating an identity signature carried in the target resource access request based on the preset server, analyzing the target resource access request after the authentication is passed, and extracting action identifiers of all target actions carried in the target resource access request;
The category determining unit is used for matching the action identification of each target action with the category identification of each action category in the preset server and determining the category to which each target action belongs based on the matching result;
the motion characteristic analysis unit is used for determining device joints contained in the target motion, respectively determining corresponding motion parameters of each device joint in the process of executing the target motion, and obtaining the motion characteristic of each target motion based on the device joint and the corresponding motion parameters;
the low code block determining unit is used for accessing a preset low code library in the category based on the action characteristics, sequentially matching the action characteristics with target functions of preset low code blocks in the preset low code library, and determining initial low code blocks based on a matching result;
the low code block checking unit is used for calling a virtual test case corresponding to the target action based on the visual editing interface and inputting the initial low code block into a test port corresponding to the virtual test case;
and the action visualization unit, used for performing, based on the input result, an independent action demonstration on the visual editing interface through the virtual test case corresponding to the target action; when the action demonstration result is consistent with the action characteristics of the target action, the target low code block is obtained; otherwise, the difference characteristics between the action demonstration result and the action characteristics of the target action are determined, and the initial low code block is updated based on the difference characteristics until the action demonstration result is consistent with the action characteristics of the target action.
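The verify-and-update loop described above can be sketched as follows. This is a minimal simulation under stated assumptions: the low code block and its characteristics are plain dictionaries, and the `demonstrate` callable stands in for the virtual test case on the visual editing interface.

```python
def refine_low_code_block(initial_block, target_characteristics, demonstrate):
    block = dict(initial_block)
    while True:
        demo_result = demonstrate(block)
        # Difference characteristics: target fields the demonstration did not match.
        diff = {k: v for k, v in target_characteristics.items()
                if demo_result.get(k) != v}
        if not diff:
            return block      # demonstration consistent: target low code block found
        block.update(diff)    # update the initial block from the difference characteristics

target = {"angle_deg": 90, "speed": 2}
demonstrate = lambda block: block   # stand-in: the demo simply reproduces the block's parameters
print(refine_low_code_block({"angle_deg": 45, "speed": 2}, target, demonstrate))
# → {'angle_deg': 90, 'speed': 2}
```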
Preferably, in the visual editing system based on a robot design system, the access unit includes:
the identity analysis subunit is used for acquiring an identity signature carried in the target resource access request based on the preset server, and analyzing the identity signature to obtain account information and a communication address of the access terminal;
the identity verification subunit is configured to match account information and a communication address of the access terminal with reference account information and reference communication address corresponding to a preset authorization terminal, and determine that the access terminal is an authorization terminal when the reference account information and the reference communication address are matched with the account information and the communication address of the access terminal, or reject a target resource access request of the access terminal based on the preset server.
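The identity verification subunit's check can be sketched in a few lines. The stored account and communication-address values are hypothetical; the point is that both fields must match a preset authorized terminal or the request is rejected.

```python
# Hypothetical preset authorized terminals: reference account + reference communication address.
AUTHORIZED_TERMINALS = [
    {"account": "editor01", "address": "10.0.0.12"},
]

def authenticate(account, address):
    # Both the account information and the communication address must match.
    return any(t["account"] == account and t["address"] == address
               for t in AUTHORIZED_TERMINALS)

print(authenticate("editor01", "10.0.0.12"))   # authorized terminal → True
print(authenticate("editor01", "10.0.0.99"))   # address mismatch: request rejected → False
```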
Preferably, in the visual editing system based on a robot design system, the visual editing module includes:
the system comprises a project analysis unit, a project analysis unit and a project analysis unit, wherein the project analysis unit is used for acquiring a service project to be executed by the robot, determining an execution standard of the project in the running process based on project configuration of the service project, and determining project requirements of the service project based on the execution standard;
the action sequence analysis unit is used for determining an execution object corresponding to each target action and a target state which the execution object needs to reach under the target action based on the project requirement, determining the state change quantity of the execution object corresponding to each target action based on the target state, and determining the target dependency relationship among the execution objects based on the state change quantity;
the action sequence analysis unit is further used for determining the collaborative logic among the target actions based on the target dependency relationships, acquiring the low code entity corresponding to each target action, and marking the corresponding target low code block based on the correspondence between each low code entity and the target low code block of its target action;
the collaborative process configuration unit is used for determining the target position of the target low code block marked by the entity in the visual editing interface based on collaborative logic, performing front-back association on each target low code block in the visual editing interface based on the target position, and completing collaborative process configuration on the target low code block based on association results;
the action parameter configuration unit, used for determining the structural composition of each target low code block based on the collaborative flow configuration result, determining, based on the structural composition, the adding position of a control parameter for motion control of the corresponding low code entity in the corresponding target low code block, determining the target value of the control parameter based on the adding position, determining the reference motion parameter value corresponding to each target action based on the item requirements of the service item, adjusting the target value of the control parameter based on the reference motion parameter value, and completing the action parameter configuration of the target low code block based on the adjustment result;
And the low code packaging unit is used for packaging the target low code blocks after the collaborative process configuration and the action parameter configuration to obtain a target low code data packet, and transmitting the target low code data packet to the robot terminal based on the transmission interface.
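The packaging step can be sketched as follows. The JSON wire format, the `version` field, and the block schema are purely illustrative assumptions; the source does not specify how the target low code data packet is serialized.

```python
import json

def package_low_code_blocks(blocks):
    # Serialize the configured target low code blocks into one data packet.
    packet = json.dumps({"version": 1, "blocks": blocks}, sort_keys=True)
    return packet.encode("utf-8")  # bytes, ready for the transmission interface

packet = package_low_code_blocks([
    {"action": "grip", "order": 1, "params": {"force_n": 12}},
    {"action": "rotate", "order": 2, "params": {"angle_deg": 90}},
])
print(len(packet) > 0)  # → True
```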
Preferably, in the visual editing system based on a robot design system, the low code packaging unit includes:
the monitoring subunit, used for acquiring the real-time actions of the robot after it receives the target low code data packet, and determining an erroneous action when a deviation exists between a real-time action and the preset reference action;
the action correction subunit, used for correcting the erroneous low code corresponding to the erroneous action based on the visual editing interface until the erroneous action is consistent with the preset reference action, monitoring in real time the robot action update notification sent by the management terminal, and determining the update low code block and the low code update position corresponding to the updated action based on the robot action update notification;
and the updating subunit is used for adding the updating low code block at the low code updating position and logically associating the added updating low code block with the adjacent target low code block.
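The updating subunit's insertion-and-association step can be sketched as follows. Representing each low code block as a dictionary with `prev`/`next` links is an assumption made for illustration:

```python
def insert_update_block(blocks, position, update_block):
    blocks = list(blocks)
    blocks.insert(position, dict(update_block))  # add the update low code block
    # Logically associate every block with its adjacent target low code blocks.
    for i, blk in enumerate(blocks):
        blk["prev"] = blocks[i - 1]["name"] if i > 0 else None
        blk["next"] = blocks[i + 1]["name"] if i < len(blocks) - 1 else None
    return blocks

chain = [{"name": "grip"}, {"name": "move"}]
updated = insert_update_block(chain, 1, {"name": "rotate"})
print([(b["name"], b["prev"], b["next"]) for b in updated])
# → [('grip', None, 'rotate'), ('rotate', 'grip', 'move'), ('move', 'rotate', None)]
```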
Preferably, in the visual editing system based on a robot design system, the visual editing module further includes:
The work load statistics unit is used for respectively acquiring configuration steps contained in the collaborative process configuration and the action parameter configuration, determining the data configuration quantity contained in each configuration step, and calculating the work load for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the configuration steps and the data configuration quantity contained in each configuration step;
the efficiency calculation unit is used for respectively obtaining the configuration speed for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the visual editing interface, and calculating the configuration efficiency for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the workload and the configuration speed;
the reminding unit is used for comparing the configuration efficiency with a preset efficiency threshold, judging that the configuration efficiency of the cooperative flow configuration and the action parameters of the target low code blocks is qualified when the configuration efficiency is larger than or equal to the preset efficiency threshold, otherwise, judging that the configuration efficiency of the cooperative flow configuration and the action parameters of the target low code blocks is unqualified, and improving the configuration speed of the cooperative flow configuration and the action parameter configuration of the target low code blocks until the configuration efficiency is larger than or equal to the preset efficiency threshold.
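The workload and efficiency check above can be given a worked form. The source does not state the exact formula, so this sketch assumes one plausible reading: workload is the total data configuration amount across all configuration steps, and configuration efficiency is the configuration speed divided by that workload, compared against a preset efficiency threshold.

```python
def check_configuration_efficiency(step_config_counts, speed, threshold):
    workload = sum(step_config_counts)   # total data configuration amount over all steps
    efficiency = speed / workload        # assumed definition of configuration efficiency
    qualified = efficiency >= threshold  # compare against the preset efficiency threshold
    return qualified, efficiency

ok, eff = check_configuration_efficiency([4, 6, 10], speed=10.0, threshold=0.4)
print(ok, eff)  # → True 0.5
```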
The invention provides a visual editing method based on a robot design system, which comprises the following steps:
step 1: analyzing a service item to be executed by the robot, determining a target action set of the robot when executing the service item, and extracting action identifiers of all target actions in the target action set;
step 2: determining the category of the target action based on the action identifier, and matching a target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
step 3: determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and performing cooperative flow configuration and action parameter configuration on the target low-code blocks on a visual editing interface based on the cooperative logic.
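The three method steps above can be sketched end to end. Every name, the toy low code library, and the project structure are hypothetical; the sketch only mirrors the flow of step 1 (derive the target action set), step 2 (match low code blocks), and step 3 (collaborative flow and action parameter configuration).

```python
LOW_CODE_LIBRARY = {"grip": "block_grip", "rotate": "block_rotate"}

def visual_edit(business_project):
    # Step 1: analyze the business project into a target action set.
    actions = business_project["actions"]
    # Step 2: match each target action to a low code block from the library.
    blocks = [{"action": a, "block": LOW_CODE_LIBRARY[a]} for a in actions]
    # Step 3: collaborative flow configuration (ordering) and parameter configuration.
    order = business_project["collaborative_logic"]
    blocks.sort(key=lambda b: order.index(b["action"]))
    for b in blocks:
        b["params"] = business_project["parameters"].get(b["action"], {})
    return blocks

project = {
    "actions": ["rotate", "grip"],
    "collaborative_logic": ["grip", "rotate"],
    "parameters": {"grip": {"force_n": 8}},
}
print([b["action"] for b in visual_edit(project)])  # → ['grip', 'rotate']
```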
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of a visual editing system based on a robot design system in an embodiment of the present invention;
FIG. 2 is a block diagram of an action type determining module in a visual editing system based on a robot design system according to an embodiment of the present invention;
fig. 3 is a flowchart of a visual editing method based on a robot design system in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1:
the present embodiment provides a visual editing system based on a robot design system, as shown in fig. 1, including:
the action type determining module is used for analyzing the service items to be executed by the robot, determining a target action set of the robot when executing the service items, and extracting action identifiers of all target actions in the target action set;
The code matching module is used for determining the category of the target action based on the action identifier and matching the target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
the visual editing module is used for determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and carrying out cooperative flow configuration and action parameter configuration on the target low-code blocks on the visual editing interface based on the cooperative logic.
In this embodiment, the service item refers to the specific work task the robot needs to perform, for example article gripping or cargo handling.
In this embodiment, the target action set refers to all the actions the robot performs when executing the business project, including body rotation, the gripping angle of the claw, and the like.
In this embodiment, the target actions refer to respective execution actions included in the target action set.
In this embodiment, the action identifier is a tag label for marking different target actions, and the target actions can be quickly identified and distinguished through the action identifier.
In this embodiment, the category is used to characterize the action type corresponding to the target action, so that the target low code block corresponding to the target action can be conveniently matched from the corresponding preset low code library according to the action type.
In this embodiment, the motion characteristics refer to the manner and strength with which the different target motions are executed, and may be, for example, a rotation angle, a movement distance, and the like.
In this embodiment, the preset low code library is set in advance, and is used to store low code blocks corresponding to different target actions, where the low code blocks stored in the preset low code library are not unique.
In this embodiment, the target low code block is a code form corresponding to the target action, and different target actions of the robot can be controlled through the target low code block.
In this embodiment, the project requirements refer to effects to be achieved by the business project, execution sequences among steps, purposes to be achieved, and the like.
In this embodiment, the collaborative logic is used to characterize the sequence of interaction between the target actions during operation, for example, the gripper of the robot first grabs, then the robot body rotates, and finally the robot body moves and carries.
In this embodiment, the visual editing interface is set in advance and can visually display the target low code blocks corresponding to different target actions, so that collaborative flow configuration (namely, adjustment of the execution order of each target action and its relationship with other actions) can conveniently be performed on the target low code blocks corresponding to different target actions.
In this embodiment, the configuration of the motion parameters refers to adjusting the quantity parameters related to the motion strength or the moving distance in the target low code blocks corresponding to different target motions, so as to ensure the normalization of the target motions.
The beneficial effects of the technical scheme are as follows: the method has the advantages that the service items to be executed by the robot are analyzed, the target action set required by the robot is accurately and effectively confirmed, the target low code blocks corresponding to the corresponding target actions are effectively matched according to the target action set, and finally, collaborative process configuration and action parameter configuration are carried out on the target low code blocks at a visual editing interface, so that the standardization and convenience of the design of each target action of the robot are guaranteed, meanwhile, the accuracy and reliability of the execution action editing of the robot are improved, and the operation and working effect of the robot are guaranteed.
Example 2:
on the basis of embodiment 1, this embodiment provides a visual editing system based on a robot design system, as shown in fig. 2, an action type determining module, including:
the project acquisition unit, used for acquiring the service project to be executed by the robot, extracting the project configuration information of the service project, and determining the service branches contained in the service project based on the project configuration information, wherein the number of service branches is at least two;
the behavior determining unit, used for determining the robot execution steps contained in the corresponding service branch based on the service state that each service branch needs to reach, and determining the action behavior corresponding to each robot execution step based on the service state;
the action summarizing unit is used for determining a first execution sequence of each business branch and a second execution sequence of the robot execution steps contained in each business branch based on the development logic of the business project, summarizing action behaviors in each business branch based on the first execution sequence and the second execution sequence, and obtaining a target action set of the robot when executing the business project.
In this embodiment, the project configuration information is used to characterize the configuration of the service project, specifically, the service branches included in the service project and the like, where different service branches together form the service project to be executed.
In this embodiment, the business branches are part of a business project, and may be, for example, a packaging line or a quality inspection line in a production line, or the like.
In this embodiment, the service state refers to the final effect that each service branch needs to achieve, i.e. the state that needs to be reached during operation.
In this embodiment, the action behavior refers to a change of an action path of the robot when executing the steps, for example, the action may be that the mechanical arm moves or the clamping jaw moves.
In this embodiment, development logic refers to the order in which the various steps or branches of a business project are performed during production or operation.
In this embodiment, the first execution sequence characterizes the order in which the service branches contained in the service item are executed, for example cleaning first, then tray loading, and sealing last.
In this embodiment, the second execution sequence refers to the order of the robot execution steps contained in each service branch; for example, when the service branch is cleaning, the cleaning solution may be prepared first, then the articles are placed into the cleaning solution for cleaning, and finally the cleaned articles are taken out and dried.
The beneficial effects of the technical scheme are as follows: by analyzing the project configuration information of the business project, the overall and effective determination of the target actions involved in the execution of the business project by the robot is realized, so that the low codes corresponding to the target actions of the robot can be accurately and effectively edited through the visual editing interface, and the accuracy and reliability of the execution action editing of the robot are ensured.
Example 3:
on the basis of embodiment 1, this embodiment provides a visual editing system based on a robot design system in which the action type determining module includes:
the motion acquisition unit is used for acquiring the obtained target motion set of the robot, respectively determining an execution device main body corresponding to each target motion in the target motion set, and determining the motion characteristic of each execution device main body based on the project requirement of the business project;
the identification determining unit is used for matching the motion characteristics with the reference motion characteristics corresponding to each motion identification in the preset motion identification library and determining the motion identification of each target motion based on the matching result;
and the association unit is used for carrying out association binding on the action identifier and the target action to complete extraction and determination of the action identifier of the target action.
In this embodiment, the main body of the executing device refers to a device corresponding to different target actions executed by the robot, and may be, for example, a mechanical arm or a clamping jaw.
In this embodiment, the motion characteristics refer to a motion amplitude or a gripping force or the like of each actuator main body when executing the target action.
In this embodiment, the preset action identifier library is set in advance, and is used for storing action identifiers corresponding to different actions.
In this embodiment, the reference motion characteristics are set in advance and have corresponding motion identifications.
The beneficial effects of the technical scheme are as follows: by effectively acquiring the motion characteristics of the different target actions in the target action set, the action identifier of each target action is accurately and effectively determined from the preset action identifier library through the motion characteristics, so that the corresponding low code data block can be conveniently called according to the action identifier, accurate and effective visual editing of the target actions of the robot is realized, and the operation and working effect of the robot are ensured.
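A minimal sketch of the identifier-matching step performed by the identification determining unit — comparing observed motion characteristics against the reference characteristics stored in the preset action identifier library. The feature keys, distance measure, and tolerance are assumptions for illustration only:

```python
def match_action_identifier(motion_feature, identifier_library, tol=0.1):
    """Return the action identifier whose reference motion characteristics
    are closest to the observed feature, or None if nothing is in tolerance."""
    best_id, best_dist = None, float("inf")
    for action_id, reference in identifier_library.items():
        # L1 distance over the shared feature keys
        dist = sum(abs(motion_feature[k] - reference[k]) for k in reference)
        if dist < best_dist:
            best_id, best_dist = action_id, dist
    return best_id if best_dist <= tol else None

# Hypothetical preset action identifier library
library = {
    "ARM_EXTEND": {"amplitude": 0.8, "grip_force": 0.0},
    "JAW_GRIP":   {"amplitude": 0.1, "grip_force": 0.9},
}
observed = {"amplitude": 0.78, "grip_force": 0.02}
```

After matching, the identifier would be association-bound to the target action, as the association unit describes.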
Example 4:
on the basis of embodiment 3, this embodiment provides a visual editing system based on a robot design system, wherein the motion acquisition unit includes:
the motion analysis subunit is used for acquiring the motion characteristics of each execution device main body and determining the motion amplitude range of the corresponding target action based on the motion characteristics;
the action verification subunit is used for comparing the motion amplitude range of each target action with a preset range interval, judging a target action whose motion amplitude range is not within the preset range interval to be an abnormal action, and correcting the motion amplitude range of the abnormal action based on the item requirement of the service item and the preset range interval;
And the action screening subunit is used for determining, based on the correction result, the similarity between target actions from the motion characteristics of each execution device main body, judging that two target actions overlap when their similarity is greater than a preset similarity threshold, and de-duplicating the overlapping target actions to obtain the final target action set.
In this embodiment, the motion amplitude range characterizes the maximum range of each target action of the robot, and may be, for example, the maximum length to which the mechanical arm can extend.
In this embodiment, the preset range interval is known in advance, i.e. the allowable range of motion of different parts of the robot.
In this embodiment, an abnormal action refers to a target action whose motion amplitude range is not within the preset range interval.
In this embodiment, the preset similarity threshold is set in advance, and is the lowest criterion for measuring whether the target actions in the target action set are similar or not, and can be adjusted.
The beneficial effects of the technical scheme are as follows: by analyzing the motion amplitude ranges of the different target actions in the target action set, correcting the motion amplitude ranges of abnormal actions, and then de-duplicating the overlapping actions in the target action set, the accuracy and reliability of the finally obtained target action set are ensured, the target actions of the robot can be conveniently and accurately edited through the visual editing interface, and the standardization and convenience of the design of the target actions of the robot are ensured.
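The range-correction and de-duplication steps of this embodiment can be sketched as below. The amplitude representation, the clamping-based correction, and the similarity measure (1 minus the normalized amplitude difference) are illustrative assumptions:

```python
def clean_action_set(actions, allowed, sim_threshold=0.9):
    """Correct out-of-range amplitudes, then drop near-duplicate actions.

    `actions` maps action name -> motion amplitude; `allowed` is the
    preset range interval (lo, hi)."""
    lo, hi = allowed
    # Abnormal actions: clamp their amplitude back into the preset interval
    corrected = {name: min(max(amp, lo), hi) for name, amp in actions.items()}
    kept = {}
    span = hi - lo
    for name, amp in corrected.items():
        # Similarity = 1 - normalized amplitude difference against kept actions
        duplicate = any(1 - abs(amp - other) / span >= sim_threshold
                        for other in kept.values())
        if not duplicate:
            kept[name] = amp
    return kept

cleaned = clean_action_set({"extend_a": 0.5, "extend_b": 0.52, "grip": 1.4},
                           allowed=(0.0, 1.0))
```

Here "extend_b" is deemed overlapping with "extend_a" and removed, while "grip" is first corrected into the allowed interval and then kept.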
Example 5:
on the basis of embodiment 1, this embodiment provides a visual editing system based on a robot design system, and a code matching module includes:
the action identifier acquisition unit is used for acquiring the action identifier of each target action in the target action set, generating a resource access request based on the action identifier, analyzing the resource access request, determining a target structure of the resource access request, and determining a segment head and a segment tail of the resource access request based on the target structure;
the identity marking unit is used for determining a target marking position based on the segment head and the segment tail, and adding an identity signature of the access terminal to the resource access request based on the target marking position to obtain a target resource access request;
the access unit is used for transmitting the target resource access request to a preset server, authenticating an identity signature carried in the target resource access request based on the preset server, analyzing the target resource access request after the authentication is passed, and extracting action identifiers of all target actions carried in the target resource access request;
the category determining unit is used for matching the action identification of each target action with the category identification of each action category in the preset server and determining the category to which each target action belongs based on the matching result;
The action characteristic analysis unit is used for determining the device joints involved in the target action, respectively determining the motion parameters of each device joint in the process of executing the target action, and obtaining the action characteristics of each target action based on the device joints and the corresponding motion parameters;
the low code block determining unit is used for accessing a preset low code library in the category based on the action characteristics, sequentially matching the action characteristics with target functions of preset low code blocks in the preset low code library, and determining initial low code blocks based on a matching result;
the low code block checking unit is used for calling a virtual test case corresponding to the target action based on the visual editing interface and inputting the initial low code block into a test port corresponding to the virtual test case;
and the action visualization unit is used for performing an independent action demonstration on the visual editing interface through the virtual test case corresponding to each target action based on the input result; when the action demonstration result is consistent with the action characteristics of the target action, the target low code block is obtained; otherwise, the difference characteristics between the action demonstration result and the action characteristics of the target action are determined, and the initial low code block is updated based on the difference characteristics until the action demonstration result is consistent with the action characteristics of the target action.
In this embodiment, the resource access request refers to request data that is generated according to an action identifier of a target action and is capable of accessing a preset server.
In this embodiment, the target structure is used to characterize the composition of the resource access request, thereby facilitating the selection of the appropriate location for the identity marking.
In this embodiment, the segment header and the segment trailer characterize the starting position and the ending position of the resource access request, respectively.
In this embodiment, the target marking position refers to the position at which the resource access request can be marked, and is determined from the segment head and the segment tail.
In this embodiment, the identity signature refers to data capable of representing the identity information of the access terminal, and the identity signature of the resource access request is completed by adding the data of the identity information of the access terminal to the target mark position in the resource access request, so as to facilitate accurate and efficient analysis of the resource access request by the preset server.
In this embodiment, the target resource access request refers to a final resource access request obtained by adding identity information of the access terminal to the generated resource access request.
In this embodiment, the preset server is set in advance, and is used for storing identity information corresponding to different terminals.
In this embodiment, authenticating the identity signature carried in the target resource access request based on the preset server refers to verifying identity information of the access terminal, so as to determine whether the current access terminal has permission to access the preset server.
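As an illustrative sketch of the request framing and identity-signing steps described in this embodiment — the segment-delimiter bytes, the hash-based signature, and all function names are assumptions, not details taken from the patent:

```python
import hashlib

SEG_HEAD, SEG_TAIL = b"\x01", b"\x04"  # hypothetical segment head/tail delimiters

def build_signed_request(action_ids, terminal_account, terminal_address):
    """Frame the action identifiers between a segment head and segment tail,
    adding an identity signature derived from the access terminal's account
    information and communication address at the target marking position
    (here: just before the segment tail)."""
    payload = ",".join(action_ids).encode()
    signature = hashlib.sha256(
        (terminal_account + "|" + terminal_address).encode()).hexdigest().encode()
    return SEG_HEAD + payload + b"|sig=" + signature + SEG_TAIL

def parse_request(request):
    """Recover the action identifiers and the identity signature from a
    framed target resource access request."""
    body = request[len(SEG_HEAD):-len(SEG_TAIL)]
    payload, sig = body.rsplit(b"|sig=", 1)
    return payload.decode().split(","), sig.decode()
```

The preset server side would parse the request the same way before authenticating the signature and extracting the action identifiers.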
In this embodiment, the category identifier is a label for marking the corresponding types of different actions, and the action categories can be accurately and effectively distinguished through the category identifier, where the action categories are used for characterizing the types corresponding to the different actions.
In this embodiment, the device joint is a motion joint used to characterize the robot involved in performing the target motion, and may be, for example, a joint involved in extending the robot arm.
In this embodiment, the motion parameters are values, such as the rotation angle of each device joint during motion, that characterize the joint's movement.
In this embodiment, the action characteristics are features capable of characterizing the type of the target action and its motion amplitude.
In this embodiment, the preset low code library is a database in a different category for storing low code blocks under that category.
In this embodiment, the preset low code block is in a preset low code library, and is used to control the robot to execute the code of the corresponding target action.
In this embodiment, the target function refers to the purpose that different preset low code blocks can achieve.
In this embodiment, the initial low code block refers to a low code block matched from the preset low code library according to the action characteristics; its accuracy cannot yet be guaranteed, that is, the low code block still needs to be checked.
In this embodiment, the virtual test case is a simulation component for verifying whether the matched initial low code block can control the corresponding device to execute the corresponding target action.
In this embodiment, the difference feature is a difference between the motion feature for representing the motion demonstration result and the target motion, and may specifically be a difference between motion types or a difference between motion parameters.
In this embodiment, updating the initial low code block based on the difference characteristics refers to determining the low code segment to be modified according to the difference characteristics, determining the specific value to be adjusted according to the degree of difference between the action demonstration result and the action characteristics of the target action, and finally updating the initial low code block.
The beneficial effects of the technical scheme are as follows: the method comprises the steps of accurately and effectively determining the category of the target action according to the action identifier of the target action, accurately and effectively acquiring the low code blocks of each target action from a preset low code library of the type of the target action according to the action characteristics of the target action, and finally checking the acquired low code blocks, so that the accuracy and reliability of the finally obtained low code blocks of each target action are ensured, the standardization and convenience of the design of each target action of the robot are conveniently ensured, and the operation and the working effect of the robot are ensured.
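The match-then-verify loop of this embodiment — selecting an initial low code block by action characteristics, demonstrating it through a virtual test case, and folding the difference characteristics back into the block until the demonstration matches — can be sketched as follows. The single-amplitude feature model and all names are assumptions for illustration:

```python
def match_low_code_block(action_features, low_code_library):
    """Return a copy of the first library block whose declared target
    function matches the required action type (the initial low code block)."""
    for block in low_code_library:
        if block["function"] == action_features["type"]:
            return dict(block)
    return None

def verify_and_update(block, action_features, simulate, tol=1e-3, max_rounds=10):
    """Demonstrate the block through a virtual test case; while the result
    deviates from the target, fold the difference feature back into the block."""
    for _ in range(max_rounds):
        demo = simulate(block)                      # independent action demonstration
        diff = action_features["amplitude"] - demo["amplitude"]
        if abs(diff) < tol:
            return block                            # verified target low code block
        block["amplitude"] += diff                  # update from the difference feature
    raise RuntimeError("virtual test did not converge")
```

Here `simulate` stands in for the virtual test case's test port; any callable returning the demonstrated motion can be plugged in.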
Example 6:
on the basis of embodiment 5, this embodiment provides a visual editing system based on a robot design system, an access unit, including:
the identity analysis subunit is used for acquiring an identity signature carried in the target resource access request based on the preset server, and analyzing the identity signature to obtain account information and a communication address of the access terminal;
the identity verification subunit is used for matching the account information and the communication address of the access terminal with the reference account information and the reference communication address corresponding to preset authorized terminals, determining that the access terminal is an authorized terminal when the reference account information and the reference communication address match the account information and the communication address of the access terminal, and otherwise rejecting the target resource access request of the access terminal based on the preset server.
In this embodiment, the account information refers to information such as a user name of the access terminal.
In this embodiment, the preset authorized terminals are known in advance, i.e. the terminals that are allowed to access the preset server.
In this embodiment, the reference account information and the reference communication address refer to the account information and communication address corresponding to a preset authorized terminal.
The beneficial effects of the technical scheme are as follows: the identity information of the access terminal is accurately and effectively authenticated and analyzed through the preset server, so that the safety and reliability of the access terminal for low-code access are ensured, and the reliability of visual editing of the target action of the robot is also facilitated.
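A minimal sketch of the verification rule in this embodiment — a request is admitted only when both the account information and the communication address match a preset authorized terminal. The data shapes and names are assumptions:

```python
def authenticate(account, address, authorized_terminals):
    """An access terminal is authorized only when both its account
    information and its communication address match a preset entry."""
    return any(account == ref_account and address == ref_address
               for ref_account, ref_address in authorized_terminals)

def handle_request(request, authorized_terminals):
    """Admit or reject a target resource access request based on the
    identity fields parsed from it (simplified here to a dict)."""
    if authenticate(request["account"], request["address"], authorized_terminals):
        return "accepted"
    return "rejected"
```

Requiring both fields to match means a correct account name from an unknown communication address is still rejected.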
Example 7:
on the basis of embodiment 1, this embodiment provides a visual editing system based on a robot design system, a visual editing module, including:
the system comprises a project analysis unit, a project analysis unit and a project analysis unit, wherein the project analysis unit is used for acquiring a service project to be executed by the robot, determining an execution standard of the project in the running process based on project configuration of the service project, and determining project requirements of the service project based on the execution standard;
the action sequence analysis unit is used for determining an execution object corresponding to each target action and a target state which the execution object needs to reach under the target action based on the project requirement, determining the state change quantity of the execution object corresponding to each target action based on the target state, and determining the target dependency relationship among the execution objects based on the state change quantity;
the collaborative logic analysis unit is used for determining the collaborative logic among the target actions based on the target dependency relationships, acquiring the low code entity corresponding to each target action, and performing entity marking on the corresponding target low code block based on the correspondence between the low code entity and the target low code block corresponding to each target action;
The collaborative process configuration unit is used for determining the target position of the target low code block marked by the entity in the visual editing interface based on collaborative logic, performing front-back association on each target low code block in the visual editing interface based on the target position, and completing collaborative process configuration on the target low code block based on association results;
the action parameter configuration unit is used for determining the structural composition of each target low code block based on the collaborative process configuration result; determining, based on the structural composition, the adding position in the corresponding target low code block of a control parameter for momentum control of the corresponding low code entity; determining the target value of the control parameter based on the adding position; determining the reference motion parameter value corresponding to each target action based on the item requirements of the service item; adjusting the target value of the control parameter based on the reference motion parameter value; and completing the action parameter configuration of the target low code block based on the adjustment result;
and the low code packaging unit is used for packaging the target low code blocks after the collaborative process configuration and the action parameter configuration to obtain a target low code data packet, and transmitting the target low code data packet to the robot terminal based on the transmission interface.
In this embodiment, the project configuration refers to an operational effect that the business project needs to achieve.
In this embodiment, the execution standard refers to a condition that the service item needs to satisfy in the running process, a state that the service item needs to be presented last, and the like.
In this embodiment, the execution object is an execution body corresponding to different target actions, for example, may be an article that needs to be gripped by a clamping jaw.
In this embodiment, the target state refers to an operation condition that the execution object needs to satisfy under the target action, for example, it may be that the object needs to be gripped tightly by the gripping jaw of the robot and successfully transported to the corresponding position.
In this embodiment, the state change amount is a state change condition for indicating that the execution object is changed from the start to the end, and may be, for example, a condition in which the position of the article is shifted or the like.
In this embodiment, the target dependency relationship is used to characterize the dependency relationship between different execution objects, so as to facilitate accurate and effective analysis of the collaborative logic of the target actions corresponding to the execution objects.
In this embodiment, the low code entity refers to a component of the robot that the low code block needs to control.
In this embodiment, the entity marking refers to performing corresponding device marking on the low code block, so as to facilitate collaborative process configuration and action parameter configuration on the low code block through the visual editing interface.
In this embodiment, the target positions refer to positions where different target low-code blocks should be typeset in the visual editing interface, so that coordination and effective control on the robot are facilitated.
In this embodiment, performing front-to-back association on each target low code block in the visual editing interface based on the target position refers to logically associating the target low code blocks so as to obtain a complete low code program, thereby realizing effective control over the robot.
In this embodiment, momentum control refers to controlling the motion variation of the low-code entity during motion, i.e. controlling the extension of the mechanical arm, etc.
In this embodiment, the control parameter refers to a code segment in the target low code block that can control the motion amplitude of the robot device.
In this embodiment, the target value refers to a specific value corresponding to the control parameter.
In this embodiment, the reference motion parameter value refers to a value of a motion amplitude or a rotation angle that each target motion theoretically needs to reach.
In this embodiment, the target low code data packet refers to a complete low code program obtained by encapsulating a target low code block after collaborative flow configuration and action parameter configuration.
The beneficial effects of the technical scheme are as follows: the method has the advantages that the cooperative logic relation among all target actions in the target action set is accurately and effectively determined according to the project requirements of the service project, the cooperative flow configuration and the action parameter configuration are carried out on the target low code blocks corresponding to all target actions through the cooperative logic, finally, the configured target low code data blocks are packaged and issued to the robot terminal, the standardization and convenience of the design of all target actions of the robot are guaranteed, meanwhile, the accuracy and the reliability of the action editing executed by the robot are improved, and the operation and the working effect of the robot are guaranteed.
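The dependency-driven ordering described in this embodiment — deriving collaborative logic from the target dependency relationships between execution objects, then placing the target low code blocks in front-to-back association — can be sketched as a topological sort (Kahn's algorithm). All names are hypothetical:

```python
from collections import deque

def collaborative_order(actions, depends_on):
    """Topologically sort target actions so every action runs after the
    actions whose execution-object state it depends on."""
    indegree = {a: 0 for a in actions}
    for a, deps in depends_on.items():
        for _ in deps:
            indegree[a] += 1
    ready = deque(a for a in actions if indegree[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)  # place this block next in the front-to-back association
        for b in actions:
            if a in depends_on.get(b, ()):
                indegree[b] -= 1
                if indegree[b] == 0:
                    ready.append(b)
    if len(order) != len(actions):
        raise ValueError("cyclic dependency between target actions")
    return order
```

The resulting order gives the target positions at which the entity-marked low code blocks would be laid out in the visual editing interface.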
Example 8:
on the basis of embodiment 7, this embodiment provides a visual editing system based on a robot design system, a low-code packaging unit, including:
the monitoring subunit is used for acquiring the real-time actions of the robot after the robot receives the target low code data packet, and determining erroneous actions when action errors exist between the real-time actions and preset reference actions;
the action correction subunit is used for correcting, based on the visual editing interface, the erroneous low code corresponding to each erroneous action until the erroneous action is consistent with the preset reference action, monitoring in real time a robot action update notification sent by the management terminal, and determining, based on the robot action update notification, the update low code block and the low code update position corresponding to the update action;
And the updating subunit is used for adding the updating low code block at the low code updating position and logically associating the added updating low code block with the adjacent target low code block.
In this embodiment, the preset reference actions are known in advance, i.e. actions that the robot needs to present when executing the business item.
In this embodiment, an erroneous action refers to a real-time action of the robot, after receiving the target low code data packet, that differs from the preset reference action.
In this embodiment, the erroneous low code refers to the low code segment in the target low code block corresponding to the erroneous action.
In this embodiment, correcting the erroneous low code corresponding to the erroneous action based on the visual editing interface until the erroneous action is consistent with the preset reference action may be implemented by locking the position of the erroneous low code in the target low code block through the action characteristics of the erroneous action, and determining the specific value to be corrected according to the degree of difference between the erroneous action and the preset reference action, thereby correcting the erroneous low code.
In this embodiment, the robot action update notification may be to update an existing action of the robot or to add a new target action on the basis of the original action.
In this embodiment, the update action refers to an action requiring replacement or addition.
In this embodiment, updating a low code block refers to a low code fragment that needs to be updated or a low code fragment that needs to be added.
In this embodiment, the low code update location refers to a location where replacement of an original low code is required or a location where addition of an updated low code block is required on an original basis.
The beneficial effects of the technical scheme are as follows: by monitoring the real-time actions of the robot after receiving the target low-code data packet, the accuracy and the reliability of the visual editing of the target low-code block corresponding to the target actions of the robot are conveniently ensured, and the operation and the working effect of the robot are ensured.
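The monitoring and update steps of this embodiment can be sketched as below — comparing real-time actions against the preset reference actions, and inserting an update low code block at the low code update position while keeping the neighbouring blocks associated. The list-based block model and all names are assumptions:

```python
def find_erroneous_actions(realtime, reference, tol=1e-3):
    """Compare the robot's real-time actions with the preset reference
    actions and report the ones that deviate beyond tolerance."""
    return [name for name, value in realtime.items()
            if abs(value - reference.get(name, value)) > tol]

def apply_update(code_blocks, position, update_block):
    """Insert an update low code block at the low code update position,
    keeping the adjacent target low code blocks logically associated
    (modeled here simply as list order)."""
    return code_blocks[:position] + [update_block] + code_blocks[position:]
```

A reported erroneous action would then be corrected through the visual editing interface before the updated packet is re-issued to the robot terminal.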
Example 9:
on the basis of embodiment 1, this embodiment provides a visual editing system based on a robot design system, a visual editing module, including:
the work load statistics unit is used for respectively acquiring configuration steps contained in the collaborative process configuration and the action parameter configuration, determining the data configuration quantity contained in each configuration step, and calculating the work load for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the configuration steps and the data configuration quantity contained in each configuration step;
And calculating the workload of collaborative process configuration and action parameter configuration on the target low-code block according to the following formula:

$$W = (1+\varepsilon)\left(\sum_{i=1}^{n} a_i + \sum_{j=1}^{m} b_j\right)$$

wherein $W$ represents the workload of performing collaborative process configuration and action parameter configuration on the target low code block; $\varepsilon$ represents an error factor, with a value range of $[0.01, 0.015]$; $i$ represents the sequence number of a configuration step contained in the collaborative process configuration of the target low code block, with a value range of $[1, n]$; $n$ represents the total number of configuration steps contained in the collaborative process configuration of the target low code block; $a_i$ represents the data configuration amount contained in the $i$-th collaborative process configuration step; $j$ represents the sequence number of a configuration step contained in the action parameter configuration of the target low code block, with a value range of $[1, m]$; $m$ represents the total number of configuration steps contained in the action parameter configuration of the target low code block; $b_j$ represents the data configuration amount contained in the $j$-th action parameter configuration step;
the efficiency calculation unit is used for respectively obtaining the configuration speed for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the visual editing interface, and calculating the configuration efficiency for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the workload and the configuration speed;
Calculating the configuration efficiency of collaborative process configuration and action parameter configuration on the target low-code blocks according to the following formula:

$$\eta = (1+\delta)\cdot\frac{W}{(v_1+v_2)\,(T-t_0)}$$

wherein $\eta$ represents the configuration efficiency of performing collaborative process configuration and action parameter configuration on the target low code block, with a value range of $(0,1)$; $T$ represents the expected duration for completing the collaborative process configuration and action parameter configuration of the target low code block; $W$ represents the workload of performing collaborative process configuration and action parameter configuration on the target low code block; $v_1$ represents the configuration speed of performing collaborative process configuration on the target low code block; $v_2$ represents the configuration speed of performing action parameter configuration on the target low code block; $t_0$ represents the invalid configuration duration generated in the process of performing collaborative process configuration and action parameter configuration on the target low code block; $\delta$ represents the allowable fluctuation value of the configuration efficiency, with a value range of $(-0.01, 0.01)$.
The reminding unit is used for comparing the configuration efficiency with a preset efficiency threshold; when the configuration efficiency is greater than or equal to the preset efficiency threshold, the configuration efficiency of the collaborative process configuration and action parameter configuration of the target low code block is judged to be qualified; otherwise, it is judged to be unqualified, and the configuration speed of the collaborative process configuration and action parameter configuration of the target low code block is increased until the configuration efficiency is greater than or equal to the preset efficiency threshold.
In this embodiment, the configuration step refers to a configuration working link involved in performing collaborative flow configuration and action parameter configuration on the target low-code block.
In this embodiment, the data configuration amount refers to the amount of data modification or adjustment involved in each configuration step.
In this embodiment, the workload refers to the total amount of work for collaborative process configuration and action parameter configuration on the target low code block, i.e. the total number of parameters to be configured.
In this embodiment, the preset efficiency threshold is set in advance, and is the minimum standard for measuring whether the configuration efficiency of the collaborative process configuration and the action parameter configuration for the target low code block is qualified, and can be adjusted.
The beneficial effects of the technical scheme are as follows: by calculating the configuration efficiency of the target low code blocks, the method is convenient for accurately and effectively grasping the editing efficiency of each target action of the robot through the visual editing interface, is convenient for timely adjusting the configuration speed when the configuration efficiency is too low, and improves the accuracy and reliability of executing action editing on the robot.
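The workload and efficiency calculations of this embodiment can be sketched as below. The original formula images are not reproduced in this text, so the exact functional forms here are assumptions chosen to be consistent with the variable definitions (workload as the error-factor-inflated sum of data configuration amounts; efficiency as workload over effective speed times effective time); all names are hypothetical:

```python
def configuration_workload(collab_amounts, param_amounts, error_factor=0.01):
    """Assumed form W = (1 + eps) * (sum a_i + sum b_j): the total data
    configuration amount across collaborative-process and action-parameter
    steps, inflated by the error factor eps in [0.01, 0.015]."""
    return (1 + error_factor) * (sum(collab_amounts) + sum(param_amounts))

def configuration_efficiency(workload, expected_time, v_collab, v_param,
                             invalid_time=0.0, fluctuation=0.0):
    """Assumed form eta = (1 + delta) * W / ((v1 + v2) * (T - t0)),
    clipped so the result stays within the stated range (0, 1)."""
    eta = (1 + fluctuation) * workload / ((v_collab + v_param) *
                                          (expected_time - invalid_time))
    return min(eta, 1.0)
```

Under these assumed forms, an efficiency below the preset threshold would trigger the reminding unit to raise the configuration speed.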
Example 10:
the embodiment provides a visual editing method based on a robot design system, as shown in fig. 3, including:
Step 1: analyzing a service item to be executed by the robot, determining a target action set of the robot when executing the service item, and extracting action identifiers of all target actions in the target action set;
step 2: determining the category of the target action based on the action identifier, and matching a target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
step 3: determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and performing cooperative flow configuration and action parameter configuration on the target low-code blocks on a visual editing interface based on the cooperative logic.
The beneficial effects of the technical scheme are as follows: the method has the advantages that the service items to be executed by the robot are analyzed, the target action set required by the robot is accurately and effectively confirmed, the target low code blocks corresponding to the corresponding target actions are effectively matched according to the target action set, and finally, collaborative process configuration and action parameter configuration are carried out on the target low code blocks at a visual editing interface, so that the standardization and convenience of the design of each target action of the robot are guaranteed, meanwhile, the accuracy and reliability of the execution action editing of the robot are improved, and the operation and working effect of the robot are guaranteed.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A visual editing system based on a robotic design system, comprising:
the action type determining module is used for analyzing the service items to be executed by the robot, determining a target action set of the robot when executing the service items, and extracting action identifiers of all target actions in the target action set;
the code matching module is used for determining the category of the target action based on the action identifier and matching the target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
the visual editing module is used for determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and carrying out cooperative flow configuration and action parameter configuration on the target low-code blocks on the visual editing interface based on the cooperative logic.
2. The visual editing system based on a robot design system according to claim 1, wherein the action type determining module comprises:
the system comprises a project acquisition unit, a service branch acquisition unit and a service branch acquisition unit, wherein the project acquisition unit is used for acquiring a service project to be executed by a robot, extracting project configuration information of the service project, and determining service branches contained in the service project based on the project configuration information, wherein the number of the service branches is at least two;
the behavior determining unit is used for determining the execution steps of the robots contained in the corresponding service branches based on the service states which are required to be achieved by each service branch, and determining the action behaviors corresponding to each execution step of the robots based on the service states;
the action summarizing unit is used for determining a first execution sequence of each business branch and a second execution sequence of the robot execution steps contained in each business branch based on the development logic of the business project, summarizing action behaviors in each business branch based on the first execution sequence and the second execution sequence, and obtaining a target action set of the robot when executing the business project.
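Claim 2's summarizing step can be sketched as a two-level sort: branches ordered by the first execution sequence, steps within each branch by the second, then concatenated. The `order`/`steps`/`action` field names are illustrative assumptions, not the patent's data model.

```python
# Summarize action behaviours across business branches (claim 2 sketch).
def summarize_actions(branches):
    # First execution sequence: order the business branches themselves.
    ordered = sorted(branches, key=lambda b: b["order"])
    actions = []
    for branch in ordered:
        # Second execution sequence: order the robot's steps inside a branch.
        steps = sorted(branch["steps"], key=lambda s: s["order"])
        actions.extend(s["action"] for s in steps)
    return actions

branches = [
    {"order": 2, "steps": [{"order": 1, "action": "place"}]},
    {"order": 1, "steps": [{"order": 2, "action": "lift"},
                           {"order": 1, "action": "grip"}]},
]
print(summarize_actions(branches))  # ['grip', 'lift', 'place']
```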
3. The visual editing system based on a robot design system according to claim 1, wherein the action type determining module comprises:
the action acquisition unit is used for acquiring the obtained target action set of the robot, respectively determining an execution device main body corresponding to each target action in the target action set, and determining the motion characteristic of each execution device main body based on the project requirement of the business project;
the identifier determining unit is used for matching the motion characteristics with the reference motion characteristics corresponding to each action identifier in a preset action identifier library, and determining the action identifier of each target action based on the matching result;
and the association unit is used for carrying out association binding on the action identifier and the target action to complete extraction and determination of the action identifier of the target action.
4. A visual editing system based on a robot design system according to claim 3, wherein the action acquisition unit comprises:
the motion analysis subunit is used for acquiring the motion characteristics of each execution device main body and determining the motion amplitude range of the corresponding target action based on the motion characteristics;
the action verification subunit is used for comparing the motion amplitude range of each target action with a preset range interval, judging a target action whose motion amplitude range is not within the preset range interval to be an abnormal action, and correcting the motion amplitude range of the abnormal action based on the item requirement of the service item and the preset range interval;
and the action screening subunit is used for determining, based on the correction result, the similarity between target actions from the motion characteristics of each execution device main body, judging that two target actions overlap when their similarity is greater than a preset similarity threshold, and de-duplicating the overlapping target actions to obtain the final target action set.
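Claim 4's verification and screening steps amount to clamping out-of-range amplitudes and removing near-duplicate actions. The sketch below assumes a simple data model (amplitude scalar plus a feature vector) and a cosine-style similarity; the patent does not specify either.

```python
# Claim 4 sketch: amplitude correction, then similarity-based de-duplication.

def clamp_amplitude(actions, lo, hi):
    # An action whose amplitude range falls outside [lo, hi] is treated
    # as abnormal and corrected back into the preset interval.
    for a in actions:
        a["amplitude"] = min(max(a["amplitude"], lo), hi)
    return actions

def similarity(a, b):
    # Cosine similarity of the motion-feature vectors (assumed metric).
    dot = sum(x * y for x, y in zip(a["features"], b["features"]))
    na = sum(x * x for x in a["features"]) ** 0.5
    nb = sum(x * x for x in b["features"]) ** 0.5
    return dot / (na * nb)

def deduplicate(actions, threshold=0.95):
    # Keep an action only if it is not too similar to any action kept so far.
    kept = []
    for a in actions:
        if all(similarity(a, k) <= threshold for k in kept):
            kept.append(a)
    return kept

acts = [
    {"amplitude": 150, "features": [1.0, 0.0]},   # out of range -> clamped
    {"amplitude": 40, "features": [1.0, 0.01]},   # near-duplicate of first
    {"amplitude": 40, "features": [0.0, 1.0]},
]
final = deduplicate(clamp_amplitude(acts, 0, 90))
```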
5. The visual editing system based on a robot design system according to claim 1, wherein the code matching module comprises:
the action identifier acquisition unit is used for acquiring the action identifier of each target action in the target action set, generating a resource access request based on the action identifier, analyzing the resource access request, determining a target structure of the resource access request, and determining a segment head and a segment tail of the resource access request based on the target structure;
the identity marking unit is used for determining a target marking position based on the segment head and the segment tail, and adding an identity signature of the access terminal to the resource access request based on the target marking position to obtain a target resource access request;
the access unit is used for transmitting the target resource access request to a preset server, authenticating an identity signature carried in the target resource access request based on the preset server, analyzing the target resource access request after the authentication is passed, and extracting action identifiers of all target actions carried in the target resource access request;
the category determining unit is used for matching the action identifier of each target action with the category identifier of each action category in the preset server, and determining the category to which each target action belongs based on the matching result;
the motion characteristic analysis unit is used for determining device joints contained in the target motion, respectively determining corresponding motion parameters of each device joint in the process of executing the target motion, and obtaining the motion characteristic of each target motion based on the device joint and the corresponding motion parameters;
the low code block determining unit is used for accessing a preset low code library in the category based on the action characteristics, sequentially matching the action characteristics with target functions of preset low code blocks in the preset low code library, and determining initial low code blocks based on a matching result;
the low code block checking unit is used for calling a virtual test case corresponding to the target action based on the visual editing interface and inputting the initial low code block into a test port corresponding to the virtual test case;
and the action visualization unit is used for carrying out independent action demonstration on the visual editing interface through the virtual test cases corresponding to the target actions based on the input results, obtaining a target low code block when the action demonstration results are consistent with the action characteristics of the target actions, otherwise, determining the difference characteristics of the action demonstration results and the action characteristics of the target actions, and updating the initial code block based on the difference characteristics until the action demonstration results are consistent with the action characteristics of the target actions.
6. The visual editing system based on a robot design system according to claim 5, wherein the access unit comprises:
the identity analysis subunit is used for acquiring an identity signature carried in the target resource access request based on the preset server, and analyzing the identity signature to obtain account information and a communication address of the access terminal;
the identity verification subunit is configured to match account information and a communication address of the access terminal with reference account information and reference communication address corresponding to a preset authorization terminal, and determine that the access terminal is an authorization terminal when the reference account information and the reference communication address are matched with the account information and the communication address of the access terminal, or reject a target resource access request of the access terminal based on the preset server.
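Claim 6's verification reduces to matching both the account information and the communication address against the preset authorized terminal, rejecting on any mismatch. The `"account|address"` signature encoding below is an assumed format for illustration only.

```python
# Claim 6 sketch: authenticate the identity signature of an access terminal.
def authenticate(signature, authorized):
    # Parse the signature into account information and communication
    # address (the "account|address" encoding is an assumption).
    account, address = signature.split("|")
    # Both fields must match the preset authorized terminal; otherwise
    # the target resource access request is rejected.
    return (account == authorized["account"]
            and address == authorized["address"])

authorized = {"account": "robot_editor_01", "address": "10.0.0.2"}
ok = authenticate("robot_editor_01|10.0.0.2", authorized)       # accepted
bad = authenticate("robot_editor_01|10.0.0.99", authorized)     # rejected
```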
7. The visual editing system based on a robot design system according to claim 1, wherein the visual editing module comprises:
the system comprises a project analysis unit, a project analysis unit and a project analysis unit, wherein the project analysis unit is used for acquiring a service project to be executed by the robot, determining an execution standard of the project in the running process based on project configuration of the service project, and determining project requirements of the service project based on the execution standard;
The action sequence analysis unit is used for determining an execution object corresponding to each target action and a target state which the execution object needs to reach under the target action based on the project requirement, determining the state change quantity of the execution object corresponding to each target action based on the target state, and determining the target dependency relationship among the execution objects based on the state change quantity;
the action sequence analysis unit is used for determining cooperative logic among all target actions based on target dependency relationships, acquiring low-code entities corresponding to all target actions, and marking the corresponding target low-code blocks based on the corresponding relationships between the low-code entities and the target low-code blocks corresponding to the target actions;
the collaborative process configuration unit is used for determining the target position of the target low code block marked by the entity in the visual editing interface based on collaborative logic, performing front-back association on each target low code block in the visual editing interface based on the target position, and completing collaborative process configuration on the target low code block based on association results;
the action parameter configuration unit is used for determining the structure composition of each target low code block based on the collaborative process configuration result, determining, based on the structure composition, the adding position in the corresponding target low code block of a control parameter for motion control of the corresponding low code entity, determining the target value of the control parameter based on the adding position, determining the reference motion parameter value corresponding to each target action based on the item requirement of the service item, adjusting the target value of the control parameter based on the reference motion parameter value, and completing the action parameter configuration of the target low code block based on the adjustment result;
and the low code packaging unit is used for packaging the target low code blocks after the collaborative process configuration and the action parameter configuration to obtain a target low code data packet, and transmitting the target low code data packet to the robot terminal based on the transmission interface.
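Claim 7's collaborative process configuration is essentially a dependency ordering of the target low code blocks followed by packaging. The sketch below models the cooperative logic as a dependency graph and orders it with Kahn's algorithm; the graph representation and the packet format are assumptions, not the patent's specification.

```python
# Claim 7 sketch: place blocks by cooperative logic, then package them.

def order_blocks(blocks, deps):
    # deps maps a block to the set of blocks it depends on.
    remaining = {b: set(deps.get(b, ())) for b in blocks}
    ready = [b for b in blocks if not remaining[b]]
    ordered = []
    while ready:
        b = ready.pop()
        ordered.append(b)
        # Release any block whose prerequisites are now all placed.
        for other, pre in remaining.items():
            if b in pre:
                pre.discard(b)
                if not pre and other not in ordered and other not in ready:
                    ready.append(other)
    return ordered

def package(ordered_blocks):
    # Bundle the configured blocks into a single payload for the
    # robot terminal (format is illustrative).
    return {"blocks": ordered_blocks, "version": 1}

flow = order_blocks(["lift", "grip", "place"],
                    {"lift": {"grip"}, "place": {"lift"}})
packet = package(flow)
```

Python's standard-library `graphlib.TopologicalSorter` would do the same ordering; the explicit loop is shown only to make the dependency-release step visible.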
8. The visual editing system based on a robot design system according to claim 7, wherein the low code packaging unit comprises:
the monitoring subunit is used for acquiring the real-time actions of the robot after it receives the target low code data packet, and determining an erroneous action when an action error exists between a real-time action and the preset reference action;
the action correction subunit is used for correcting the erroneous low code corresponding to the erroneous action based on the visual editing interface until the erroneous action is consistent with the preset reference action, monitoring in real time the robot action update notification sent by the management terminal, and determining the updated low code block and the low code update position corresponding to the update action based on the robot action update notification;
and the updating subunit is used for adding the updating low code block at the low code updating position and logically associating the added updating low code block with the adjacent target low code block.
9. The visual editing system based on a robot design system according to claim 1, wherein the visual editing module comprises:
the workload statistics unit is used for respectively acquiring the configuration steps contained in the collaborative process configuration and the action parameter configuration, determining the data configuration quantity contained in each configuration step, and calculating, based on the configuration steps and the data configuration quantity contained in each step, the workload of performing collaborative process configuration and action parameter configuration on the target low code block;
the efficiency calculation unit is used for respectively obtaining the configuration speed for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the visual editing interface, and calculating the configuration efficiency for carrying out collaborative process configuration and action parameter configuration on the target low code block based on the workload and the configuration speed;
the reminding unit is used for comparing the configuration efficiency with a preset efficiency threshold, judging the configuration efficiency of the collaborative process configuration and action parameter configuration of the target low code blocks to be qualified when the configuration efficiency is greater than or equal to the preset efficiency threshold, and otherwise judging it to be unqualified and increasing the configuration speed of the collaborative process configuration and action parameter configuration of the target low code blocks until the configuration efficiency is greater than or equal to the preset efficiency threshold.
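Claim 9 does not give a formula, so the sketch below assumes workload is the total data-configuration count over all steps and that configuration efficiency is speed relative to workload; both modeling choices are illustrative assumptions.

```python
# Claim 9 sketch: workload, efficiency, and the qualification check.

def workload(steps):
    # Total data configuration quantity over all configuration steps.
    return sum(step["data_count"] for step in steps)

def efficiency(speed, steps):
    # Assumed model: configured speed normalized by total workload.
    return speed / workload(steps)

def is_qualified(speed, steps, threshold):
    # Qualified when configuration efficiency meets the preset threshold.
    return efficiency(speed, steps) >= threshold

steps = [{"data_count": 3}, {"data_count": 7}]   # workload = 10
fast = is_qualified(5.0, steps, 0.5)             # 0.5 >= 0.5 -> qualified
slow = is_qualified(4.0, steps, 0.5)             # 0.4 <  0.5 -> not qualified
```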
10. The visual editing method based on the robot design system is characterized by comprising the following steps of:
step 1: analyzing a service item to be executed by the robot, determining a target action set of the robot when executing the service item, and extracting action identifiers of all target actions in the target action set;
step 2: determining the category of the target action based on the action identifier, and matching a target low code block from a preset low code library corresponding to the category based on the action characteristic of the target action;
step 3: determining project requirements of business projects, determining cooperative logic among all target actions based on the project requirements, and performing cooperative flow configuration and action parameter configuration on the target low-code blocks on a visual editing interface based on the cooperative logic.
CN202310812694.8A 2023-07-05 2023-07-05 Visual editing system and method based on robot design system Active CN116560640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310812694.8A CN116560640B (en) 2023-07-05 2023-07-05 Visual editing system and method based on robot design system

Publications (2)

Publication Number Publication Date
CN116560640A true CN116560640A (en) 2023-08-08
CN116560640B CN116560640B (en) 2024-01-02

Family

ID=87491815

Country Status (1)

Country Link
CN (1) CN116560640B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150089477A1 (en) * 2013-09-26 2015-03-26 International Business Machines Corporation Understanding computer code with human language assistance
CN106985150A (en) * 2017-03-21 2017-07-28 深圳泰坦创新科技有限公司 The method and apparatus of control machine human action
CN110413276A (en) * 2019-07-31 2019-11-05 网易(杭州)网络有限公司 Parameter edit methods and device, electronic equipment, storage medium
CN110955421A (en) * 2019-11-22 2020-04-03 上海乐白机器人有限公司 Method, system, electronic device, storage medium for robot programming
CN113021294A (en) * 2021-03-10 2021-06-25 王悦翔 Robot automation system capable of configuring environment/action/flow and construction method and application thereof
CN113626309A (en) * 2021-07-06 2021-11-09 深圳点猫科技有限公司 Method and device for simulating operation of mobile terminal, electronic equipment and storage medium
CN117021073A (en) * 2023-07-17 2023-11-10 网易(杭州)网络有限公司 Robot control method and device, electronic equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant