CN117472423A - Visual workflow layout system, method, equipment and medium for decoupling reference resource and flow design - Google Patents


Info

Publication number
CN117472423A
CN117472423A
Authority
CN
China
Prior art keywords: flow, gateway, resources, api, resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311349803.3A
Other languages
Chinese (zh)
Inventor
杨哲 (Yang Zhe)
潘咪 (Pan Mi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pengxi Semiconductor Co., Ltd.
Original Assignee
Shanghai Pengxi Semiconductor Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pengxi Semiconductor Co., Ltd.
Priority to CN202311349803.3A
Publication of CN117472423A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/70 - Software maintenance or management
    • G06F 8/71 - Version control; Configuration management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/30 - Creation or generation of source code
    • G06F 8/36 - Software reuse
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/66 - Arrangements for connecting between networks having differing types of switching systems, e.g. gateways

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a visual workflow orchestration system, method, device, and medium that decouple referenced resources from flow design, comprising: a gateway call layer, used to realize standardized management of resources through gateway calls; a resource configuration layer, used to configure the resources used by the system and the flow nodes that call the gateway; and a governance and monitoring layer, used to manage and monitor the resources in a unified way. The scheme can at least solve the problem that, in flow template design, the configuration of flow nodes that call external resources is complex and difficult to reuse or extend: a flow node referencing an external resource must configure all required parameters, and when multiple flow nodes call the same resource the configuration must be repeated. It also addresses a second technical problem: in semiconductor business processes, the flow management software interacts with other computer-integrated manufacturing systems, and because interface definitions, databases, message cluster middleware, and the like differ, flow node configurations must be extensively modified when integrating software from different vendors.

Description

Visual workflow orchestration system, method, device and medium for decoupling referenced resources from flow design
Technical Field
The present application relates to the field of system development, and in particular to a visual workflow orchestration system, method, device, and medium that decouple referenced resources from flow design.
Background
Semiconductor wafers are manufactured with high precision in many complex production scenarios. To improve the capacity and chip yield of a semiconductor fab and raise production efficiency, the level of factory automation must be continuously improved. In automated production, business flows are visualized with automated flow software: offline production scenarios are brought online and visualized, and business lines are organized, integrated, and optimized with the flow at the core. Following and monitoring flow execution then facilitates resource allocation in the semiconductor fab and optimization of the business flow.
However, the inventors found that the related art has at least the following technical problems:
Conventional semiconductor flow software has two main problems. First, in flow template design, the configuration of flow nodes that call external resources is complex and hard to reuse or extend, so the overall state of the resources is difficult to grasp and no unified panel can be built for viewing resource types, quantities, and usage. A flow node referencing an external resource must configure all required parameters, and when multiple flow nodes call the same resource, the configuration must be repeated. Second, in semiconductor business processes, the flow management software interacts with other computer-integrated manufacturing systems; because interface definitions, databases, message cluster middleware, and the like differ, flow node configurations must be extensively modified when integrating software from different vendors.
The present method simplifies flow node configuration and manages internal and external resources in a unified way. External system resources such as interfaces and message clusters are configured centrally in resource management; when integrating with a different vendor, only the configuration in resource management needs to be modified, the flow templates remain unchanged, and flow nodes focus on business-related configuration.
Disclosure of Invention
It is an object of the present application to provide a visual workflow orchestration system, method, device, and medium that decouple referenced resources from flow design and that at least address the above-mentioned drawbacks of the related art.
To achieve the above object, some embodiments of the present application provide the following aspects:
In a first aspect, some embodiments of the present application provide a visual workflow orchestration system that decouples referenced resources from flow design, comprising:
a gateway call layer, used to realize standardized management of resources through gateway calls;
a resource configuration layer, used to configure the resources used by the system and the flow nodes that call the gateway;
a governance and monitoring layer, used to manage and monitor the resources in a unified way.
In a preferred embodiment of the present application, the gateway call layer calls the gateway as follows:
101. request the gateway;
102. acquire the gateway configuration information;
103. parse the request parameters;
104. call the service;
105. process the response result;
106. return the result.
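Steps 101-106 can be sketched as a minimal gateway dispatcher. This is an illustration only, not the patented implementation; the registry structure, service names, and parameter names below are assumptions:

```python
# Minimal sketch of gateway call steps 101-106 (names are illustrative).
from typing import Any, Callable, Dict

# 102: gateway configuration registry, keyed by gateway name (assumed structure).
GATEWAY_CONFIG: Dict[str, Dict[str, Any]] = {
    "wafer-api": {"protocol": "http", "service": "wafer_service", "params": ["lot_id"]},
}

# Backing services the gateway can dispatch to (step 104).
SERVICES: Dict[str, Callable[[Dict[str, Any]], Any]] = {
    "wafer_service": lambda p: {"lot_id": p["lot_id"], "status": "RUNNING"},
}

def call_gateway(name: str, raw_params: Dict[str, Any]) -> Dict[str, Any]:
    """101: request the gateway by name."""
    config = GATEWAY_CONFIG[name]                          # 102: acquire configuration
    params = {k: raw_params[k] for k in config["params"]}  # 103: parse request parameters
    result = SERVICES[config["service"]](params)           # 104: call the service
    response = {"code": 0, "data": result}                 # 105: process the response result
    return response                                        # 106: return the result
```

Because flow nodes only pass a gateway name and raw parameters, extra keys in `raw_params` are dropped by the registry-driven parameter filter, which is one way the node stays decoupled from the resource definition.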
In a preferred embodiment of the present application, the gateway call layer also runs flow templates through a flow engine; the flow engine calls resources through flow nodes that call the gateway:
111: execute the flow node that calls the gateway;
112: acquire the configuration information of the gateway;
113: convert flow variables into input parameters;
114: the gateway layer executes its processing flow;
115: convert the output parameters into flow variables.
In a preferred embodiment of the present application, the resource configuration layer configures the resources as follows:
201: analyze the workflow requirements;
202: check whether the required resources have been published; if not, configure and publish the resources before designing the flow; if so, proceed to flow design;
203: design the flow;
204: check the flow;
205: publish the flow;
206: view the flow instance.
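The decision in step 202 and the subsequent steps can be sketched as a small helper. The resource names and action strings are hypothetical, chosen only to make the ordering visible:

```python
# Sketch of resource-configuration steps 202-206 (names and state are illustrative).
published_resources = {"notify-api"}  # resources already published (assumed initial state)

def prepare_workflow(required_resources):
    """Return the ordered actions taken before a flow can run (steps 202-206)."""
    actions = []
    for res in required_resources:
        if res not in published_resources:        # 202: resource not yet published
            actions.append(f"configure+publish:{res}")
            published_resources.add(res)
    # 203-206: design, check, publish, then view the flow instance.
    actions += ["design flow", "check flow", "publish flow", "view flow instance"]
    return actions
```

An already-published resource contributes no configuration action, which reflects the reuse claim: once published, a resource is available to every flow template without reconfiguration.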
In a preferred embodiment of the present application, the resource configuration layer also configures APIs under configured API sites.
In a preferred embodiment of the present application, the API site is configured as follows:
211: add a new API site;
212: perform the basic configuration of the API site;
213: perform the common configuration of the API site.
In a preferred embodiment of the present application, the API is configured as follows:
221: add a new API;
222: perform the basic configuration;
223: define the API input and output parameters;
224: perform the address configuration;
225: perform an input-parameter test on the defined API and judge, against the expected result, whether the API can be called successfully.
In a preferred embodiment of the present application, the flow node that calls the gateway is configured as follows:
231: select a gateway site in the flow node;
232: select a gateway defined under the gateway site;
233: define a multi-instance loop;
234: define the input parameters of the interface call;
235: define the output parameters of the interface.
In a preferred embodiment of the present application, the governance and monitoring process of the governance and monitoring layer is as follows:
301: govern the resources in a unified way;
302: monitor the resources in the flow nodes;
303: select different time dimensions to view reports;
304: identify anomalies and adjust resources.
In a second aspect, some embodiments of the present application further provide a visual workflow orchestration method that decouples referenced resources from flow design, comprising the steps of:
realizing standardized management of resources by calling the gateway;
configuring the resources used by the system and the flow nodes that call the gateway;
managing and monitoring the resources in a unified way.
In a third aspect, some embodiments of the present application further provide a computer device, the device comprising:
one or more processors; and a memory storing computer program instructions that, when executed, cause the processor to perform the method as described above.
In a fourth aspect, some embodiments of the present application further provide a computer readable medium having stored thereon computer program instructions executable by a processor to implement the method as described above.
Compared with the prior art, the scheme provided by the embodiments of the present application shields the protocol differences between synchronous and asynchronous external resources, so that flow template design can focus on the business process without attending to the details of resource calls, and resource types can be conveniently extended without affecting flow node design.
The method simplifies node configuration information and is highly reusable: a defined API can be used in all flow templates, and after a resource is modified centrally the change takes effect in all flow templates that use it, without modifying the templates.
The method also supports monitoring and control: resource usage is monitored, which facilitates investigating abnormal call conditions, locating flow template design defects, and optimizing flow templates.
Drawings
FIG. 1 is a block diagram of a system provided in an embodiment of the present application;
FIG. 2 is a flow chart of a process node of a process engine executing a call API according to an embodiment of the present application;
FIG. 3 is a flowchart of overall process and function assignment provided in an embodiment of the present application;
fig. 4 is a flowchart of an API gateway implementation provided in an embodiment of the present application;
fig. 5 is a schematic diagram of an API site and an API configuration provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an API site configuration provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of an API configuration provided in an embodiment of the present application;
FIG. 8 is a diagram of an interface call flow node configuration procedure provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are obviously some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments, without inventive effort, fall within the scope of the present application.
Example 1
The system comprises the following layers:
gateway call layer 100: used to realize standardized management of resources through gateway calls;
resource configuration layer 200: used to configure the resources used by the system and the flow nodes that call the gateway;
governance and monitoring layer 300: used to manage and monitor the resources in a unified way.
In some embodiments of the present application, the gateway call layer 100 calls the gateway as follows:
101. request the gateway;
102. acquire the gateway configuration information;
103. parse the request parameters;
104. call the service;
105. process the response result;
106. return the result.
In some embodiments of the present application, the gateway call layer 100 also runs flow templates through a flow engine; the flow engine calls resources through flow nodes that call the gateway.
In some embodiments of the present application, the flow engine executes a flow node that calls the gateway as follows:
111: execute the flow node that calls the gateway;
112: acquire the configuration information of the gateway;
113: convert flow variables into input parameters;
114: the gateway layer executes its processing flow;
115: convert the output parameters into flow variables.
In some embodiments of the present application, the resource configuration layer 200 configures the resources as follows:
201: analyze the workflow requirements;
202: check whether the required resources have been published; if not, configure and publish the resources before designing the flow; if so, proceed to flow design;
203: design the flow;
204: check the flow;
205: publish the flow;
206: view the flow instance.
In some embodiments of the present application, the resource configuration layer 200 also configures APIs under configured API sites.
In some embodiments of the present application, the API site is configured as follows:
211: add a new API site;
212: perform the basic configuration of the API site;
213: perform the common configuration of the API site.
In some embodiments of the present application, the API is configured as follows:
221: add a new API;
222: perform the basic configuration;
223: define the API input and output parameters;
224: perform the address configuration;
225: perform an input-parameter test on the defined API and judge, against the expected result, whether the API can be called successfully.
In some embodiments of the present application, the flow node that calls the gateway is configured as follows:
231: select a gateway site in the flow node;
232: select a gateway defined under the gateway site;
233: define a multi-instance loop;
234: define the input parameters of the interface call;
235: define the output parameters of the interface.
In some embodiments of the present application, the governance and monitoring process of the governance and monitoring layer 300 is as follows:
301: govern the resources in a unified way;
302: monitor the resources in the flow nodes;
303: select different time dimensions to view reports;
304: identify anomalies and adjust resources.
Example 2
Because the resource formats provided by different data suppliers vary, business and integration differences would otherwise require the same resource to be configured repeatedly each time it is used. Setting up a gateway layer solves the problem of data exchange among multiple services, manages resources in a unified and standardized way, and allows multiple business systems to be integrated.
When a user needs to call a gateway, a call request can be sent to the gateway, so that the gateway determines the target gateway to be called and the target data provider that provides it.
Referring to fig. 2, when the flow engine runs a flow template, the flow nodes call resources through the gateway layer. Resources can be reused when services are added or adjusted, and workflow construction focuses on the definition of flow nodes and flow variables. The flow engine analyzes the workflow, identifies flow nodes and flow variables, and generates a flow instance.
The flow engine executes the flow node that calls the gateway. The flow engine triggers flow scheduling through a trigger mechanism: scheduling places flow node instances into a queue and advances the flow asynchronously, and in the whole advancement each node task is executed as an independent short transaction. Flow engine task advancement mainly takes two forms: one advances the process by fetching tasks from an in-memory task queue, the other through a timed compensation task mechanism.
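The queue-driven advancement described above can be sketched as follows. The template structure and node names are illustrative assumptions; the real engine also runs a timed compensation mechanism, which is omitted here:

```python
from collections import deque

# Sketch of queue-driven node advancement (illustrative, simplified to a
# linear template; node names are assumptions).
flow_template = {"start": "check", "check": "notify", "notify": None}  # node -> next node

def run_flow(template, first_node):
    queue = deque([first_node])     # the initial node instance enters the queue
    executed = []
    while queue:
        node = queue.popleft()      # fetch a task from the in-memory task queue
        executed.append(node)       # each node task: an independent short transaction
        nxt = template.get(node)
        if nxt is not None:
            queue.append(nxt)       # enqueue the successor to advance the flow
    return executed
```

Decoupling node execution into short transactions pulled from a queue is what lets each gateway call fail or time out without holding a long-lived transaction over the whole flow.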
The flow engine then acquires the configuration information of the gateway.
Convert flow variables into input parameters: after the node task begins executing, the flow variables are converted into the input-parameter information of the gateway call and formatted, so that the input-parameter format matches the parameter format defined by the gateway.
The gateway layer executes its processing flow: following the gateway's execution process, services of different protocols are requested and the results obtained. The gateway layer acquires the other configurations of the gateway according to the called gateway's name and executes them.
Convert the output parameters into flow variables: the output parameters are converted into flow variables, and processing enters the subsequent flow node tasks until the node ends.
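The two conversions described above, flow variables to gateway input parameters and gateway output parameters back to flow variables, can be sketched as a pair of mapping helpers. The mapping dictionaries and variable names are assumptions for illustration:

```python
# Sketch of the variable/parameter conversions (mappings are illustrative).
def to_input_params(flow_vars, in_mapping):
    """Pick and rename flow variables to match the gateway's defined parameter format."""
    return {param: flow_vars[var] for param, var in in_mapping.items()}

def merge_output_params(flow_vars, out_params, out_mapping):
    """Write gateway output parameters back into the flow variables."""
    updated = dict(flow_vars)
    for param, var in out_mapping.items():
        updated[var] = out_params[param]
    return updated
```

Keeping the mapping in configuration rather than in the node itself means a renamed gateway parameter only requires a resource-management change, never a template change.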
Referring to FIG. 3, the overall flow and division of functions between resources and flow templates is as follows: the resource manager configures the resources used by the system in a unified way, and the flow manager only needs to design flow templates and add the resources in flow nodes.
The resource manager performs resource configuration in resource management.
When configuring a flow node, taking an interface call as an example:
drag the flow node onto the flow template design canvas and double-click it to enter the flow node editing page;
select a gateway defined under a gateway site;
define a multi-instance loop;
map the interface parameters to flow variables;
pass the returned values to flow variables.
The governance and monitoring layer 300 manages the resources in a unified way; when a resource is adjusted, resources can be added, removed, and flexibly configured. Unifying and integrating the system resources in resource management reduces the management cost of the resources.
Resources in flow nodes are monitored; reports can be viewed in different time dimensions such as year, month, and day; finally, anomalies are identified and resources adjusted: abnormal resource calls are alarmed based on the statistics, slow interfaces are identified, and response times confirmed. Resource monitoring can provide decision support for solving problems such as unbalanced resource usage, unstable running times, and resource misuse.
Example 3
Taking the running of an API as an example:
the API's resource-calling flow and the flow node loading process are carried out separately, so that they remain mutually independent within the automation platform system.
The resource-calling flow is as follows:
referring to fig. 4, when an API call occurs, an API process flow is entered to acquire an external resource.
Acquiring configuration information of the API; including the site address of the API, the communication protocol, the in-parameters, the out-parameters, etc.
And analyzing the configured API entry, and processing format conversion, request mode and the like.
And encapsulating the call request according to different protocols, and initiating remote service call.
The return parameters are parsed according to different data formats, such as xml, json, etc., and mapped to the output parameters according to the configuration.
And finally, returning the processed parameters to the calling party.
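The format-dependent parsing of returned parameters and the configured output mapping can be sketched as follows. The field names and mapping are illustrative, not taken from the patent:

```python
import json
import xml.etree.ElementTree as ET

# Sketch of the gateway's return-parameter handling: parse by data format
# (JSON or XML), then map fields to the configured output parameters.
def parse_return(raw: str, fmt: str) -> dict:
    if fmt == "json":
        return json.loads(raw)
    if fmt == "xml":
        root = ET.fromstring(raw)
        return {child.tag: child.text for child in root}  # flat element -> dict
    raise ValueError(f"unsupported format: {fmt}")

def map_outputs(parsed: dict, out_mapping: dict) -> dict:
    """Map parsed fields to output parameters according to the configuration."""
    return {out: parsed[field] for out, field in out_mapping.items()}
```

Because the caller only sees the mapped output parameters, switching a vendor's response from XML to JSON changes the gateway configuration, not the flow template.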
When the flow engine runs a flow template, the flow nodes call resources through the gateway layer. Resources can be reused when services are added or adjusted, and workflow construction focuses on the definition of flow nodes and flow variables. The flow engine analyzes the workflow, identifies flow nodes and flow variables, and generates a flow instance.
Take a flow node calling an API as an example.
When the flow engine executes the flow template, the controller is triggered first: it acquires the flow template, generates a flow instance, generates the initial node instance, stores the node instance in the queue, then takes the node out of the queue and enters the next step.
After the node configuration is loaded, the node instance state is changed to executing, an execution timeout is set, and the context is loaded to construct the node task.
Convert flow variables into input parameters: after the node task begins executing, the flow variables are converted into the input-parameter information of the API call and formatted, so that the input-parameter format matches the parameter format defined by the API.
The gateway layer executes its processing flow: following the API gateway's execution process, services of different protocols are requested and the results obtained. The gateway layer acquires the other configurations of the API according to the called API's name and executes them.
Convert the output parameters into flow variables: the output parameters are converted into flow variables, and processing enters the subsequent flow node tasks until the node ends.
The resource manager configures the resources used by the system in a unified way, and the flow manager only needs to design flow templates and add the resources in flow nodes.
Take configuring an API site and an API as an example. Referring to fig. 5, the API site is configured first; then the API is configured.
Referring to fig. 6, configure the API site:
enter the site name and description, then confirm and save;
configure the communication protocol, site address, message encoding, and so on;
then configure the Header and Cookie.
Referring to fig. 7, configure the API:
select the API site to which the API belongs, define the API name, and describe the API.
Configure whether the API supports simulation, and its response timeout.
Define the input parameters: parameter name, parameter type, whether it is an array, whether it is required, and so on; define the output parameters: parameter name, parameter type, whether it is an array, and so on.
Define the request: the Method (such as POST), URL, Header, Response, and so on.
Perform an input-parameter test on the defined API and judge, against the expected result, whether the API can be called successfully.
In the simulation scenario, define the API's simulated response content; the response content can define default return values and can be configured separately for different request conditions.
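An API definition with per-condition simulated responses, as described above, might look like the following sketch. All field names (`site`, `simulate`, `mock`, `conditions`) are assumptions made for illustration, not the patent's schema:

```python
# Sketch of an API definition with simulated (mock) responses.
api_def = {
    "site": "mes-site",
    "name": "query-lot",
    "method": "POST",
    "url": "/lot/query",
    "timeout_ms": 3000,
    "simulate": True,
    "mock": {
        "default": {"status": "OK"},          # default return value
        # response content configured separately per request condition:
        "conditions": [
            {"when": {"lot_id": "BAD"}, "respond": {"status": "NOT_FOUND"}},
        ],
    },
}

def simulated_call(api: dict, params: dict) -> dict:
    """Return the configured mock response matching the request, else the default."""
    for rule in api["mock"]["conditions"]:
        if all(params.get(k) == v for k, v in rule["when"].items()):
            return rule["respond"]
    return api["mock"]["default"]
```

Simulation of this kind lets a flow template be tested end to end before the vendor's real service is reachable.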
Then configure the API gateway flow node.
Referring to fig. 8, select a defined API site in the flow node; then select an API defined under that API site.
Define a multi-instance loop, including whether it runs serially or in parallel. When serial or parallel execution is selected, the loop data and the completion condition must be defined.
Map the interface input parameters to flow variables for the call.
Pass the interface's output parameter values to flow variables.
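The serial/parallel multi-instance loop with a completion condition can be sketched as below. The use of a thread pool and the `ok` result field are assumptions; this is not the patented engine's scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of a multi-instance loop over loop data, serial or parallel,
# with a completion condition (illustrative).
def multi_instance(call, loop_data, parallel=False, complete_when=all):
    """Run `call` once per item of loop data; judge completion with `complete_when`."""
    if parallel:
        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(call, loop_data))   # parallel execution
    else:
        results = [call(item) for item in loop_data]    # serial execution
    return results, complete_when(r["ok"] for r in results)
```

The completion condition (here `all`, but `any` would also fit the description) is what decides when the node finishes and the flow may advance.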
The resource management layer manages the resources in a unified way, including API sites, APIs, message clusters, data sources, SQL, and so on. Unifying and integrating the system resources in resource management reduces the management cost of the resources. Resource management can enable cross-team resource coordination and sharing. In daily use, business personnel grasp the current state of the resources through resource management and add, delete, and revise resources. Once the resource library is established, the resources can be reused.
Then monitor the resources in the flow nodes: reports can be generated per resource category, such as an API call report and a message-sending report. The API call report includes the API call count, response time, exception count, exception rate, and so on; the message-sending report includes the message send count, exception rate, and so on.
Viewing the resource monitoring indicators in different time dimensions, such as year, month, and day, gives a macroscopic view of overall resource usage and allows targeted inspection of the running data in a specific time window. Aggregating the data by time dimension in the system supports effective data analysis and both horizontal and vertical comparison of the data.
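Aggregating call records by time dimension, as described above, can be sketched with a simple bucketed report. The record fields and sample data are invented for illustration:

```python
from collections import defaultdict

# Sketch of aggregating API call records by time dimension (year/month/day).
# Records and field names are illustrative.
calls = [
    {"ts": "2023-10-17", "api": "query-lot", "ms": 120, "error": False},
    {"ts": "2023-10-17", "api": "query-lot", "ms": 900, "error": True},
    {"ts": "2023-11-02", "api": "query-lot", "ms": 150, "error": False},
]

def report(records, dimension="month"):
    """dimension: 'year' -> YYYY, 'month' -> YYYY-MM, 'day' -> YYYY-MM-DD."""
    width = {"year": 4, "month": 7, "day": 10}[dimension]
    buckets = defaultdict(lambda: {"count": 0, "errors": 0, "total_ms": 0})
    for r in records:
        b = buckets[r["ts"][:width]]          # bucket key is a date-string prefix
        b["count"] += 1
        b["errors"] += int(r["error"])
        b["total_ms"] += r["ms"]
    return {k: {**v, "error_rate": v["errors"] / v["count"]} for k, v in buckets.items()}
```

The same records roll up to year, month, or day just by changing the prefix width, which is one simple way to get the horizontal and vertical comparisons the text mentions.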
Abnormal resource calls are alarmed based on the statistics, slow interfaces are identified, and response times confirmed. The alarms cover the following conditions:
(1) Response time too long. If the response time is too long, the design of the flow engine or the gateway layer can be optimized and the code improved to raise the running quality of the system.
(2) Exception alarm. Abnormal points in resource calls are identified in the charts, so problems can be located quickly, causes investigated, and issues resolved, avoiding business flow interruptions caused by exceptions that would affect actual production.
(3) Average response time confirmation. By considering resource response times during use, balanced use of resources can be taken into account in flow template design.
Resource monitoring can provide decision support for solving problems such as unbalanced resource usage, unstable running times, and resource misuse.
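The alarm conditions above can be sketched as threshold checks over the per-API statistics. The thresholds, statistic fields, and alarm labels are illustrative assumptions:

```python
# Sketch of the alarm conditions (thresholds are illustrative assumptions).
def check_alarms(stats, slow_ms=500, max_error_rate=0.1):
    """stats: {api_name: {"avg_ms": ..., "error_rate": ...}} -> list of alarms."""
    alarms = []
    for api, s in stats.items():
        if s["avg_ms"] > slow_ms:             # conditions (1)/(3): slow interface
            alarms.append((api, "slow_response"))
        if s["error_rate"] > max_error_rate:  # condition (2): exception alarm
            alarms.append((api, "exception_rate"))
    return alarms
```

Feeding such alarms back into flow template design, for example by spreading load away from a consistently slow API, is the balanced-use decision support the text describes.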
Example 4
This embodiment applies the scheme to the semiconductor wafer manufacturing process. The gateway call layer 100 simplifies the configuration of flow nodes in the wafer manufacturing flow, and the internal and external resources in the manufacturing flow are managed in a unified way: unified management of internal resources improves the efficiency of the wafer manufacturing flow, while external system resources, such as the differing interface definitions, databases, and message cluster middleware of external suppliers, are managed as external resources, improving the efficiency with which external resources are received and used.
Example 5
In addition, an embodiment of the present application further provides a computer device, the structure of which is shown in fig. 9. The device comprises a memory 1 for storing computer readable instructions and a processor 2 for executing the computer readable instructions, wherein the computer readable instructions, when executed by the processor, trigger the processor to execute the method described above.
The methods and/or embodiments of the present application may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. The above-described functions defined in the method of the present application are performed when the computer program is executed by a processing unit.
It should be noted that, the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart or block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more computer readable instructions executable by a processor to implement the steps of the methods and/or techniques of the various embodiments of the present application described above.
In a typical configuration of the present application, the terminals and the devices of the service network each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
In addition, an embodiment of the present application further provides a computer program stored in a computer device, the computer program causing the computer device to execute the method described above.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In some embodiments, the software programs of the present application may be executed by a processor to implement the above steps or functions. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (10)

1. A visualization workflow orchestration system that decouples reference resources from flow design, comprising:
a gateway call layer (100), used for realizing standardized management of resources by calling a gateway;
a resource configuration layer (200), used for configuring the resources used by the system and the flow nodes of the gateway;
a governance monitoring layer (300), used for uniformly managing and monitoring the resources.
2. The visualization workflow orchestration system of claim 1, wherein the gateway call layer (100) calls a gateway as follows:
101. requesting a gateway;
102. acquiring gateway configuration information;
103. analyzing the request parameters;
104. calling a service;
105. processing a response result;
106. returning the result.
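For illustration only, the gateway call sequence of steps 101-106 can be sketched as the following Python outline; all names here (GATEWAY_CONFIGS, SERVICES, call_gateway) and the in-memory service registry are assumptions for the example, not part of the claimed system:

```python
# Hypothetical gateway configuration store (step 102 reads from here).
GATEWAY_CONFIGS = {
    "user-api": {"base_url": "http://users.internal", "timeout_s": 5},
}

# Stand-in service registry; a real gateway would issue network calls (step 104).
SERVICES = {
    "user-api": lambda params: {"user_id": params["user_id"], "name": "alice"},
}

def call_gateway(gateway_name, raw_params):
    # 101: the request reaches the gateway
    # 102: acquire the gateway configuration information
    config = GATEWAY_CONFIGS.get(gateway_name)
    if config is None:
        return {"ok": False, "error": f"unknown gateway: {gateway_name}"}
    # 103: parse/validate the request parameters
    if "user_id" not in raw_params:
        return {"ok": False, "error": "missing user_id"}
    # 104: call the backing service
    result = SERVICES[gateway_name](raw_params)
    # 105: process the response result (wrap it in a uniform envelope)
    # 106: return the result
    return {"ok": True, "data": result, "timeout_s": config["timeout_s"]}
```

The uniform envelope in step 105 is one plausible design choice; it lets every flow node consume gateway responses in the same shape regardless of the backing service.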
3. The visualization workflow orchestration system that decouples reference resources from flow design according to claim 2, wherein the gateway call layer (100) also runs flow templates through a flow engine; the flow engine invokes resources through flow nodes that call the gateway:
111: executing the flow node that calls the gateway;
112: acquiring the configuration information of the gateway;
113: converting flow variables into input parameters;
114: the gateway layer executing its processing flow;
115: converting the output parameters back into flow variables.
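Steps 111-115 amount to mapping flow variables into gateway input parameters and mapping the gateway output back into flow variables. A minimal sketch, where the node-mapping keys (input_map, output_map) and the stubbed execute_gateway are illustrative assumptions:

```python
def execute_gateway(gateway, params):
    # 114: the gateway layer executes its processing flow (stubbed here)
    return {"total": params["price"] * params["qty"]}

def run_gateway_node(node, flow_vars):
    # 111/112: execute the node and fetch the gateway it is configured with
    gateway = node["gateway"]
    # 113: convert flow variables into input parameters via the node mapping
    params = {p: flow_vars[v] for p, v in node["input_map"].items()}
    out = execute_gateway(gateway, params)
    # 115: convert the output parameters back into flow variables
    for o, v in node["output_map"].items():
        flow_vars[v] = out[o]
    return flow_vars

node = {
    "gateway": "pricing",
    "input_map": {"price": "unit_price", "qty": "quantity"},
    "output_map": {"total": "order_total"},
}
vars_out = run_gateway_node(node, {"unit_price": 3, "quantity": 4})
```

The two mappings are what decouple the flow design from the referenced resource: renaming a gateway parameter only changes the node mapping, not the flow variables.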
4. The visualization workflow orchestration system that decouples reference resources from flow design as recited in claim 3, wherein the resource configuration layer (200) configures resources as follows:
201: analyzing workflow requirements;
202: determining whether the required resources have been released; if not, configuring and releasing the resources and then designing the flow; if so, proceeding directly to flow design;
203: designing a flow;
204: checking the flow;
205: releasing the flow;
206: viewing the flow instance.
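Steps 201-206 can be illustrated with a small Python sketch that releases any unreleased resources before flow design; the resource names and log strings are invented for the example:

```python
def orchestrate(required, released_resources, log):
    # 201: analyze workflow requirements -> `required` resource names
    for res in required:
        # 202: a required resource that is not yet released is configured
        # and released before flow design begins
        if res not in released_resources:
            log.append(f"configure+release {res}")
            released_resources.add(res)
    # 203-205: design the flow, check (validate) it, then release it
    log.append("design flow")
    log.append("check flow")
    log.append("release flow")
    # 206: view the running flow instance
    log.append("view instance")
    return log

# "api:user" is already released, so only "db:orders" needs configuration.
trace = orchestrate({"api:user", "db:orders"}, {"api:user"}, [])
```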
5. The visualization workflow orchestration system that decouples reference resources from flow design of claim 4, wherein the resource configuration layer (200) further configures APIs by configuring API sites;
the API site is configured as follows:
211: adding a new API site;
212: performing basic configuration on the API site;
213: performing public configuration on the API site;
the API is configured as follows:
221: adding a new API;
222: performing basic configuration on the API;
223: defining the API input and output parameters;
224: performing address configuration;
225: performing an input-parameter test on the defined API and judging, according to the expected result, whether the API can be called successfully.
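Steps 211-225 can be sketched as a tiny in-memory registry. Every field name (base_url, in, out, headers) is an illustrative assumption, and the test of step 225 is reduced here to checking a sample input against the declared input parameters:

```python
sites = {}

def add_site(name, base_url, common_headers):
    # 211-213: new API site with basic and public (shared) configuration
    sites[name] = {"base_url": base_url, "headers": common_headers, "apis": {}}

def add_api(site, name, path, in_params, out_params):
    # 221-224: new API, basic config, in/out parameter definitions,
    # and address configuration derived from the site's base URL
    sites[site]["apis"][name] = {
        "url": sites[site]["base_url"] + path,
        "in": in_params,
        "out": out_params,
    }

def smoke_test(site, name, sample):
    # 225: judge whether the sample input matches the declared input parameters
    api = sites[site]["apis"][name]
    return set(sample) == set(api["in"])

add_site("erp", "http://erp.internal", {"X-Tenant": "fab1"})
add_api("erp", "get_lot", "/lots", in_params=["lot_id"], out_params=["status"])
ok = smoke_test("erp", "get_lot", {"lot_id": "L001"})
```

Deriving each API address from the site's base URL is one way the site-level configuration stays "public": moving the site to a new host updates every API under it at once.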
6. The visualization workflow orchestration system of claim 5, wherein the flow node configuration of the gateway is specifically as follows:
231: selecting a gateway site from the flow nodes;
232: selecting a gateway defined under a gateway site;
233: defining a multi-instance loop;
234: defining the input parameters of the interface call;
235: defining the output parameters of the interface.
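The multi-instance loop of step 233 invokes the interface once per element of a collection. A minimal sketch, where the node fields (site, gateway, in_param, out_param) and the stand-in interface call are assumptions:

```python
def run_node(node, collection):
    results = []
    # 233: multi-instance loop over the input collection
    for item in collection:
        # 234: build the interface input parameters for this instance
        params = {node["in_param"]: item}
        # stand-in for the gateway/interface call; a real node would call
        # the gateway selected in steps 231-232
        # 235: collect the declared output parameter of each instance
        results.append({node["out_param"]: params[node["in_param"]] * 2})
    return results

node = {
    "site": "erp",          # 231: gateway site selected in the flow node
    "gateway": "pricing",   # 232: gateway defined under that site
    "in_param": "qty",
    "out_param": "doubled",
}
outs = run_node(node, [1, 2, 3])
```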
7. The visualization workflow orchestration system that decouples reference resources from flow design of claim 1, wherein the governance monitoring process of the governance monitoring layer (300) is specifically as follows:
301: uniformly governing the resources;
302: monitoring the resources in the flow nodes;
303: selecting different time dimensions to view reports;
304: identifying anomalies and carrying out resource adjustment.
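Steps 301-304 can be illustrated by aggregating call metrics per time dimension and flagging anomalies against a threshold; the metrics layout, sample data, and the 500 ms threshold are assumptions for the example:

```python
# Hypothetical per-call metrics collected from flow nodes (step 302).
CALLS = [
    {"resource": "api:get_lot", "day": "2024-01-01", "latency_ms": 40},
    {"resource": "api:get_lot", "day": "2024-01-01", "latency_ms": 55},
    {"resource": "api:get_lot", "day": "2024-01-02", "latency_ms": 900},
]

def report(calls, resource):
    # 303: aggregate one monitored resource by a chosen time dimension (day)
    by_day = {}
    for c in calls:
        if c["resource"] == resource:
            by_day.setdefault(c["day"], []).append(c["latency_ms"])
    return {day: sum(v) / len(v) for day, v in by_day.items()}

def anomalies(daily_report, threshold_ms=500):
    # 304: flag time buckets whose average latency exceeds the threshold,
    # as candidates for resource adjustment
    return [day for day, avg in daily_report.items() if avg > threshold_ms]

rep = report(CALLS, "api:get_lot")
bad_days = anomalies(rep)
```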
8. A visual workflow orchestration method that decouples reference resources from flow design, the method being performed with the system of any one of claims 1-7 and comprising the steps of:
the standardized management of the resources is realized by calling the gateway;
configuring the resources used by the system and the flow nodes of the gateway;
and uniformly managing and monitoring the resources.
9. A computer device, the device comprising:
one or more processors; and a memory storing computer program instructions that, when executed, cause the processor to perform the method of claim 8.
10. A computer readable medium having stored thereon computer program instructions executable by a processor to implement the method of claim 8.
CN202311349803.3A 2023-10-18 2023-10-18 Visual workflow layout system, method, equipment and medium for decoupling reference resource and flow design Pending CN117472423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311349803.3A CN117472423A (en) 2023-10-18 2023-10-18 Visual workflow layout system, method, equipment and medium for decoupling reference resource and flow design


Publications (1)

Publication Number Publication Date
CN117472423A true CN117472423A (en) 2024-01-30

Family

ID=89630372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311349803.3A Pending CN117472423A (en) 2023-10-18 2023-10-18 Visual workflow layout system, method, equipment and medium for decoupling reference resource and flow design

Country Status (1)

Country Link
CN (1) CN117472423A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117667362A (en) * 2024-01-31 2024-03-08 上海朋熙半导体有限公司 Method, system, equipment and readable medium for scheduling process engine
CN117667362B (en) * 2024-01-31 2024-04-30 上海朋熙半导体有限公司 Method, system, equipment and readable medium for scheduling process engine


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination