CN118093317A - Method for generating flow execution information and related equipment - Google Patents
Method for generating flow execution information and related equipment
- Publication number
- CN118093317A (Application CN202410309768.0A)
- Authority
- CN
- China
- Prior art keywords
- node
- execution information
- execution
- flow
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/3089—Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents
- G06F11/3096—Monitoring arrangements wherein the means or processing minimize the use of computing system or of computing system component resources, e.g. non-intrusive monitoring which minimizes the probe effect: sniffing, intercepting, indirectly deriving the monitored data from other directly available data
- G06F11/3055—Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
- G06F11/3065—Monitoring arrangements determined by the means or processing involved in reporting the monitored data
- G06F11/3093—Configuration details thereof, e.g. installation, enabling, spatial arrangement of the probes
- G06F16/2379—Updates performed during online database operations; commit processing
- G06F2201/80—Database-specific techniques (indexing scheme relating to error detection, to error correction, and to monitoring)
Abstract
The disclosure provides a method and related equipment for generating flow execution information. The method comprises the following steps: generating a target request based on node execution information of at least one node in response to a start of execution of a flow including the node, wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node; and sending the target request to a database to store the node execution information, so as to obtain flow execution information of the flow. The flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and related equipment for generating flow execution information.
Background
When a flow is executed, a corresponding execution record is usually generated, which needs to record information of each node in the flow. At present, when the flow starts running at a node, an insert-data request is sent to the database, and when the node finishes running, an update-data request is sent to the database. As a result, each node requires two input/output operations on the database during execution, so the whole flow consumes more time and the efficiency of executing the flow is reduced.
Disclosure of Invention
The disclosure provides a method and related equipment for generating flow execution information, so as to alleviate, to a certain extent, the technical problems of long execution time and low efficiency caused by complex storage operations for flow execution information.
In a first aspect of the present disclosure, a method for generating flow execution information is provided, including:
Generating a target request based on node execution information of at least one node in response to a start of execution of a flow including the node; wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node;
Sending the target request to a database to store the node execution information, so as to obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
In a second aspect of the present disclosure, there is provided a generating apparatus of flow execution information, including:
A request module, configured to generate a target request based on node execution information of at least one node in response to a start of execution of a flow including the node; wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node;
A sending module, configured to send the target request to a database to store the node execution information and obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
In a third aspect of the disclosure, an electronic device is provided that includes one or more processors, a memory; and one or more programs, wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the method of the first aspect.
In a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium containing a computer program which, when executed by one or more processors, causes the processors to perform the method of the first aspect.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when executed on a computer, cause the computer to perform the method of the first aspect.
As can be seen from the above description, the method and related equipment for generating flow execution information provided by the present disclosure associate the input parameters and output parameters in the node execution information of the nodes in a flow, and generate a target request that is sent to a database to store the node execution information, thereby obtaining the flow execution information of the flow. Since no separate requests need to be sent to the database to insert and then update data when a node starts and finishes executing, the time consumed by database operations is reduced and the efficiency of generating flow execution information is improved.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure or related art, the drawings required for the embodiments or related art description will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
Fig. 1 is a schematic diagram of a generating architecture of flow execution information according to an embodiment of the disclosure.
Fig. 2 is a schematic hardware architecture diagram of an exemplary electronic device according to an embodiment of the disclosure.
Fig. 3 is a schematic diagram of a flow implementation.
Fig. 4 is a schematic flowchart of a method of generating flow execution information according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a method for generating flow execution information.
Fig. 6 is a schematic diagram of a flow execution information generating apparatus according to an embodiment of the present disclosure.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
It should be noted that unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like, as used in embodiments of the present disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The word "comprising" or "comprises," and the like, means that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected" or "coupled," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper," "lower," "left," "right," etc. are used merely to indicate relative positional relationships, which may change when the absolute position of the object being described changes.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly inform the user that the operation it requests to perform will require acquiring and using the user's personal information. The user can then autonomously choose, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, an application program, a server, or a storage medium that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, through a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control for the user to choose "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
Fig. 1 shows a schematic diagram of a generation architecture of flow execution information of an embodiment of the present disclosure. Referring to fig. 1, the flow execution information generation architecture 100 may include a server 110, a terminal 120, and a network 130 providing a communication link. The server 110 and the terminal 120 may be connected through a wired or wireless network 130. The server 110 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, security services, CDNs, and the like.
The terminal 120 may be implemented in hardware or software. For example, when the terminal 120 is implemented in hardware, it may be any of a variety of electronic devices having a display screen and supporting page display, including but not limited to smartphones, tablets, e-book readers, laptop computers, desktop computers, and the like. When the terminal 120 is implemented in software, it may be installed in the electronic devices listed above; it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services) or as a single software or software module, which is not limited here.
It should be noted that the method for generating flow execution information provided in the embodiments of the present disclosure may be executed by the terminal 120 or by the server 110. It should be understood that the numbers of terminals, networks, and servers in fig. 1 are illustrative only and are not intended to be limiting. There may be any number of terminals, networks, and servers, as required by the implementation.
Fig. 2 shows a schematic hardware structure of an exemplary electronic device 200 provided by an embodiment of the disclosure. As shown in fig. 2, the electronic device 200 may include: processor 202, memory 204, network module 206, peripheral interface 208, and bus 210. Wherein the processor 202, the memory 204, the network module 206, and the peripheral interface 208 are communicatively coupled to each other within the electronic device 200 via a bus 210.
Processor 202 may be a central processing unit (Central Processing Unit, CPU), an apparatus for generating flow execution information, a neural network processor (NPU), a microcontroller (MCU), a programmable logic device, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or one or more integrated circuits. The processor 202 may be used to perform functions related to the techniques described in this disclosure. In some embodiments, processor 202 may also include multiple processors integrated as a single logic component. For example, as shown in fig. 2, the processor 202 may include a plurality of processors 202a, 202b, and 202c.
The memory 204 may be configured to store data (e.g., instructions, computer code, etc.). As shown in fig. 2, the data stored in the memory 204 may include program instructions (e.g., program instructions for implementing a method of generating flow execution information of an embodiment of the present disclosure) and data to be processed (e.g., the memory may store configuration files of other modules, etc.). The processor 202 may access the program instructions and data stored in the memory 204 and execute the program instructions to operate on the data to be processed. The memory 204 may include volatile storage or nonvolatile storage. In some embodiments, memory 204 may include random access memory (RAM), read-only memory (ROM), optical disks, magnetic disks, hard disks, solid state disks (SSD), flash memory, memory sticks, and the like.
The network module 206 may be configured to provide the electronic device 200 with communication with other external devices via a network. The network may be any wired or wireless network capable of transmitting and receiving data. For example, the network may be a wired network, a local wireless network (e.g., Bluetooth, WiFi, Near Field Communication (NFC), etc.), a cellular network, the Internet, or a combination of the foregoing. It will be appreciated that the type of network is not limited to the specific examples described above. In some embodiments, the network module 206 may include any combination of any number of network interface controllers (NICs), radio frequency modules, receivers, modems, routers, gateways, adapters, cellular network chips, and the like.
Peripheral interface 208 may be configured to connect electronic device 200 with one or more peripheral devices to enable information input and output. For example, the peripheral devices may include input devices such as keyboards, mice, touchpads, touch screens, microphones, various types of sensors, and output devices such as displays, speakers, vibrators, and indicators.
Bus 210 may be configured to transfer information between the various components of electronic device 200 (e.g., processor 202, memory 204, network module 206, and peripheral interface 208), and may be, for example, an internal bus (e.g., a processor-memory bus) or an external bus (e.g., a USB port or PCI-E bus).
It should be noted that, although the architecture of the electronic device 200 described above only shows the processor 202, the memory 204, the network module 206, the peripheral interface 208, and the bus 210, in a specific implementation, the architecture of the electronic device 200 may also include other components necessary to achieve normal execution. Furthermore, those skilled in the art will appreciate that the architecture of the electronic device 200 may also include only the components necessary to implement the embodiments of the present disclosure, and not all of the components shown in the figures.
During execution of a flow, the system typically records flow execution information for the flow. When the flow starts running at a node, the system sends an insert-data request to the database, and when the node finishes running, the system sends another update-data request to the database to update the time and state information of the node. Fig. 3 shows a schematic diagram of such a flow implementation. In fig. 3, the system starts at node A of the flow; at this time, the system sends an insert-data request to the database via the network, requesting to store data dataA, where dataA may include: node ID: 123, start time: timeA, input parameter: user, status: ongoing, and so on. When the flow finishes running at node A, an update-data request is sent to the database via the network, which may include: node ID: 123, end time: timeB, output parameter: token=234, status: run success/failure, and so on. Based on the update-data request, the data dataA in the database may be updated to: node ID: 123, start time: timeA, input parameter: user, end time: timeB, output parameter: token=234, status: run success/failure, and so on. As a result, two database operations (an insertion at the start and an update at the end) are required every time a node executes, and database operations are IO (Input-Output) operations, which are time-consuming and make the whole flow take longer. The load on the database also increases, especially in high-concurrency scenarios. In addition, since the data can only be updated after the node finishes executing, the real-time performance of the data may be poor. Therefore, how to improve the efficiency of generating flow execution information, thereby reducing the time consumed by the flow and improving the execution efficiency of the flow, is a technical problem that needs to be solved.
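As a concrete illustration of the two-round-trip pattern above, the following is a minimal sketch in Python; the `db` object, its `insert`/`update` methods, and the field names are assumptions for illustration and are not an API defined by this disclosure.

```python
import time

def run_node_conventionally(db, node_id, input_params, node_fn):
    # First I/O: insert a record when the node starts running.
    db.insert("node_execution", {
        "node_id": node_id,
        "start_time": time.time(),
        "input": input_params,
        "status": "ongoing",
    })
    try:
        output = node_fn(input_params)
        status = "success"
    except Exception:
        output, status = None, "failed"
    # Second I/O: update the same record when the node finishes running.
    db.update("node_execution", key={"node_id": node_id}, values={
        "end_time": time.time(),
        "output": output,
        "status": status,
    })
    return output
```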
In view of this, the embodiments of the present disclosure provide a method for generating flow execution information and related devices. And the input parameters and the output parameters in the node execution information of the nodes in the flow are correlated, and the target request is generated and sent to the database to store the node execution information, so that the flow execution information of the flow is obtained. And the method does not need to send a request to the database to insert and update data when the node starts to execute and ends to execute, so that the time consumption of the operation of the database is reduced, and the efficiency of generating flow execution information is improved.
Referring to fig. 4, fig. 4 shows a schematic flowchart of a method of generating flow execution information according to an embodiment of the present disclosure. The method for generating the flow execution information according to the embodiment of the present disclosure may be deployed at a server side or a terminal. In fig. 4, the method 400 for generating the flow execution information may include the following steps.
In step S410, in response to the start of execution of a flow including at least one node, generating a target request based on node execution information of the node; wherein the input parameters and the output parameters of the node in the node execution information are associated based on the node identification of the node.
A flow may refer to a collection of steps that complete a task or goal. The flow may include at least one node, and a node may refer to a step of the flow. Node execution information may refer to information related to the execution of a node, such as whether the node is executed, an execution status (e.g., success or fail), a start time, an end time, an execution duration, an execution operation, an input parameter, an output parameter, and a node identification. An input parameter may refer to data or information passed to a node to support execution of the node. An output parameter may refer to data or information generated after the node executes. A node may have a unique node identification, which may be an identifier used to distinguish different nodes; for example, the identifier may be a number, a letter, a combination thereof, and the like. The target request may refer to a request, generated based on the node execution information, for storing the node execution information.
Specifically, the flow may include nodes A, B, C, D, and E, where node A may be creating a document, node B may be a loop operation, node C may be a conditional branch, node D may be adding a record, and node E may be an end node. During execution of the flow, when node A is processed, node execution information data_A of node A may be obtained. The node execution information data_A may include at least one of: whether node A was executed, the input parameter Input_A, the output parameter Output_A, the execution state of node A (e.g., success), the execution start time T_startA, the execution end time T_endA, the execution duration T_A, the node identifier Identifier_A, and the execution operation O_A. The input parameter Input_A and the output parameter Output_A are associated based on the node identifier Identifier_A. The input parameter Input_A and the output parameter Output_A can then be sent to the database for storage in the same request. Compared with the prior art, in which a request must be sent to the database both when execution starts and when execution ends, the method of the embodiments of the present disclosure associates the input parameter and the output parameter of each node by the node identifier, combines them into one piece of node execution information, generates a single request from it, and submits the request to the database, thereby reducing the number of database operations, reducing the time consumed by the flow, and improving the efficiency of flow execution.
Referring to fig. 5, fig. 5 shows a schematic diagram of a method of generating flow execution information according to an embodiment of the present disclosure. In fig. 5, the system runs node A and node B of the flow in sequence. When the system starts running node A, first data dataA_1 is stored in memory, and the first data dataA_1 may include: node ID: 123, start time: timeA, input parameter: user, status: in progress. When the run of node A finishes, second data dataA_2 is stored in memory, and the second data dataA_2 may include: node ID: 123, end time: timeB, output parameter: token=234, status: run success/failure. The first data dataA_1 and the second data dataA_2 may be associated based on the node identification of node A, for example based on node ID: 123. The node execution information data_A of node A can thus be obtained, including: node ID: 123, start time: timeA, input parameter: user, end time: timeB, output parameter: token=234, status: run success/failure. The node execution information data_A of node A can be sent directly to the database for storage. Compared with the approach shown in fig. 3, which sends a request to the database both when node A starts executing and when it finishes, this reduces the number of database operations and improves operation efficiency.
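A minimal sketch, under assumed names, of the in-memory association described for fig. 5: the record written when a node starts and the record written when it finishes are kept in memory, joined on the node identification, and only the merged node execution information is submitted to the database in a single request. The `db.insert` call is a placeholder for a real database driver.

```python
in_memory_buffer = {}  # node ID -> partially filled node execution information

def on_node_start(node_id, input_params, start_time):
    in_memory_buffer[node_id] = {
        "node_id": node_id,
        "start_time": start_time,
        "input": input_params,
        "status": "ongoing",
    }

def on_node_end(db, node_id, output_params, end_time, status):
    record = in_memory_buffer.pop(node_id)   # join on the node identification
    record.update({
        "end_time": end_time,
        "output": output_params,
        "status": status,
        "duration": end_time - record["start_time"],
    })
    db.insert("node_execution", record)      # one database operation per node
```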
In some embodiments, the target request comprises a first target request;
generating a target request based on node execution information of the node, including:
And generating the first target request based on node execution information of a first node in the nodes, wherein the node execution information of the first node has a first batch identifier.
The first node may be one or more nodes and may be part or all of the nodes of the flow. The node execution information of a plurality of first nodes can be combined, as a first batch, into a first target request, and the first target request is sent to the database, which further reduces the number of request operations on the database and the time they consume. During this process, a batch field may be added to the node execution information of the first node, where the batch field is a first batch identifier indicating the requested batch of the first node.
Specifically, and similarly, when node B, node C, and node D are processed, the corresponding node execution information data_B, data_C, and data_D can be obtained, respectively. The node execution information of node A, node B, node C, and node D can be combined into a target request request1, and the node execution information of these four nodes is sent to the database as one batch. Therefore, compared with the conventional approach in which each node must initiate two requests to the database, the method of the embodiments of the present disclosure first merges the input parameter and the output parameter of each node and then merges the different nodes again, so that a plurality of nodes only need a single insertion into the database, which reduces database operations, reduces the time they consume, and speeds up the execution of the flow. As shown in fig. 5, when the system starts running node B, third data dataB_1 is stored in memory, and the third data dataB_1 may include: node ID: 456, start time: timeC, input parameter: type=234, status: in progress. When the run of node B finishes, fourth data dataB_2 is stored in memory, and the fourth data dataB_2 may include: node ID: 456, end time: timeD, output parameter: record=234, status: run success/failure. The third data dataB_1 and the fourth data dataB_2 may be associated based on the node identification of node B, for example based on node ID: 456. The node execution information data_B of node B can thus be obtained, including: node ID: 456, start time: timeC, input parameter: type=234, end time: timeD, output parameter: record=234, status: run success/failure. At this time, the node execution information data_A of node A and the node execution information data_B of node B may be associated based on the same batch number N1, and a single request may be initiated to submit them to the database, so that the data of a plurality of nodes are submitted to the database for storage together, further reducing the number of database operations.
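The batching just described might look like the following sketch; the batch size, the `batch_id` field, and the `db.bulk_insert` call are illustrative assumptions, not an API defined by this disclosure.

```python
BATCH_SIZE = 4          # e.g. nodes A, B, C, D form one batch
pending = []            # merged node execution information awaiting storage

def flush_batch(db, batch_id):
    if not pending:
        return
    for record in pending:
        record["batch_id"] = batch_id              # batch field added to each record
    db.bulk_insert("node_execution", pending)      # one target request for the whole batch
    pending.clear()

def on_node_finished(db, record, batch_id):
    pending.append(record)
    if len(pending) >= BATCH_SIZE:
        flush_batch(db, batch_id)
```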
In some embodiments, the target request further comprises a second target request;
generating a target request based on the node execution information of the node, further comprising:
and generating the second target request based on node execution information of a second node in the nodes, wherein the second node is different from the first node, and the node execution information of the second node has a second batch identifier.
The first node may be a part of the nodes in the flow, in which case at least one second node may be taken as a second batch to generate a second target request, which is sent to the database to request storage of the node execution information of the second node. During this process, a batch field may also be added to the node execution information of the second node, where the batch field is a second batch identifier indicating the requested batch of the second node. For example, the node execution information of node E may be used to generate a target request request2, so that the node execution information of node E is sent to the database as one batch.
In step S420, the target request is sent to a database to store the node execution information, so as to obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
The target request may contain instructions to the database and the node execution information to be stored. After receiving the target request, the database performs a corresponding operation, such as inserting, updating, or deleting records, to store the node execution information. To ensure that the node execution information is stored correctly, it may be verified whether the target request was executed successfully, for example by querying the database for response or status information.
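For example, a brief sketch of checking, via the database response, whether a target request was executed successfully; the shape of the response object is an assumption that depends on the actual database driver.

```python
def store_with_check(db, records):
    response = db.bulk_insert("node_execution", records)
    # `acknowledged` is a placeholder attribute; real drivers expose success differently.
    if not getattr(response, "acknowledged", False):
        raise RuntimeError("node execution information was not stored")
    return response
```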
In some embodiments, the flow execution information further includes the first batch identifier and/or the second batch identifier, and the method further includes at least one of:
In response to successful storage of the node execution information of the first node, displaying the flow execution information and a successful execution identification of the first node;
in response to successful storage of the node execution information of the second node, displaying the flow execution information and a successful execution identification of the second node.
For node execution information that is stored successfully, the corresponding successful execution identification and batch identifier may be displayed. Specifically, when the node execution information of the first node is stored successfully, the flow execution information including the first batch identifier may be displayed; when the node execution information of the second node is stored successfully, the flow execution information including the second batch identifier may be displayed.
In some embodiments, the method 400 further comprises:
In response to a failure to store the node execution information of the first node, re-requesting the database to store the node execution information of the first node;
and requesting the database to store the node execution information of the second node.
Here, because storing the input parameters and output parameters of every node in the database carries a certain risk, a failure of synchronous storage would cause the whole flow to fail. With this in mind, in the method of the embodiments of the present disclosure the storage of flow execution information is decoupled from the running logic of the flow, providing a fault-tolerant design: a batch whose storage fails can be retried asynchronously without affecting the storage of other batches, which ensures the stability of flow execution. For example, when the storage of the node execution information of the first node fails, the database may be re-requested to store the node execution information of the first node; meanwhile, the flow can continue to execute, and the node execution information of the second node in a subsequent batch is stored. In this way, both the efficiency and the reliability of flow execution are taken into account.
In some embodiments, the method 400 further comprises at least one of:
in response to successful storage of the node execution information of the second node, displaying the first batch identifier and refusing to display the second batch identifier;
or, in response to an interruption in the storage of the node execution information of the first node, terminating the display of the flow execution information.
In a conventional approach, if the node execution information of node 1 and node 2 in the current batch is inserted into the database successfully, the insertion of node 3 fails, and the execution information of node 4 in a subsequent batch is inserted successfully, the user would see node 1, node 2, and node 4 displayed as successfully inserted while the information of node 3 is missing. To avoid showing the user such incomplete information, only the batch identifier of the node execution information that has not yet been stored successfully is displayed, and the batch identifiers of node execution information stored successfully in later batches are not displayed; this prevents a lost node from being skipped in the display and ensures the correctness and consistency of the flow execution information from the user's perspective. For example, three nodes are actually stored in the database: node 1, node 2, and node 4, but only node 1 and node 2 may be shown to the user as stored successfully. The batch identifier in a node's execution information indicates which batch the node belongs to. Although node 4 is stored in the database, if node 3 has not been stored successfully, only the first batch identifier, which is contiguous up to node 3, is displayed, and the second batch identifier of node 4 is not displayed, so a lost node never appears to be skipped. If the storage is interrupted, the display of the flow execution information may be terminated.
In some embodiments, the method 400 further comprises:
in response to a record generation request for the flow execution information, acquiring the flow execution information from the database to generate a flow execution record in a preset format; wherein the record generation request includes the preset format.
In some embodiments, the method 400 further comprises:
sending abnormality prompt information in response to the occurrence of an abnormality in the storage of the node execution information.
In particular, time-consuming operations on the records may be carried out in separate asynchronous tasks that are started only after the insertion into the database succeeds, for example generating a flow execution record in a preset format such as an Excel file, or sending an alarm when an abnormality occurs during storage.
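A brief sketch, with assumed names, of deferring this follow-up work to an asynchronous task; `export_to_excel`, `send_alert`, and `db.query` are hypothetical placeholders rather than APIs defined by this disclosure.

```python
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)

def export_to_excel(records):   # placeholder: a real export would write an .xlsx file
    print(f"exporting {len(records)} records")

def send_alert(message):        # placeholder: a real system would notify an operator
    print(message)

def after_storage(db, flow_id, storage_ok):
    if storage_ok:
        # Asynchronously generate the flow execution record in the preset format.
        executor.submit(lambda: export_to_excel(db.query("node_execution", flow_id=flow_id)))
    else:
        # Asynchronously send abnormality prompt information.
        executor.submit(send_alert, f"storage abnormality for flow {flow_id}")
```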
It should be noted that the method of the embodiments of the present disclosure may be performed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present disclosure, the devices interacting with each other to accomplish the methods.
It should be noted that the foregoing describes some embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same technical concept, corresponding to the method of any embodiment, the present disclosure further provides a generating device of flow execution information, referring to fig. 6, where the generating device of flow execution information includes:
A request module, configured to generate a target request based on node execution information of at least one node in response to a start of execution of a flow including the node; wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node;
A sending module, configured to send the target request to a database to store the node execution information and obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
For convenience of description, the above devices are described as being functionally divided into various modules, respectively. Of course, the functions of the various modules may be implemented in the same one or more pieces of software and/or hardware when implementing the present disclosure.
The device of the foregoing embodiment is configured to implement the method for generating the corresponding flow execution information in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same technical concept, corresponding to the method of any embodiment, the disclosure further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to execute the method of generating the flow execution information according to any embodiment.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The computer instructions stored in the storage medium of the foregoing embodiments are used to make the computer execute the method for generating the flow execution information according to any one of the foregoing embodiments, and have the beneficial effects of the corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples; under the idea of the present disclosure, the technical features of the above embodiments or of different embodiments may also be combined, the steps may be implemented in any order, and there exist many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not described in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present disclosure. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present disclosure, and this also accounts for the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present disclosure are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The disclosed embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like, which are within the spirit and principles of the embodiments of the disclosure, are intended to be included within the scope of the disclosure.
Claims (10)
1. A method for generating flow execution information, comprising:
Generating a target request based on node execution information of at least one node in response to a start of execution of a flow including the node; wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node;
Sending the target request to a database to store the node execution information, so as to obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
2. The method of claim 1, wherein the target request comprises a first target request;
generating a target request based on node execution information of the node, including:
And generating the first target request based on node execution information of a first node in the nodes, wherein the node execution information of the first node has a first batch identifier.
3. The method of claim 2, wherein the target request further comprises a second target request;
generating a target request based on the node execution information of the node, further comprising:
and generating the second target request based on node execution information of a second node in the nodes, wherein the second node is different from the first node, and the node execution information of the second node has a second batch identifier.
4. The method of claim 3, wherein the flow execution information further comprises the first batch identifier and/or the second batch identifier, the method further comprising at least one of:
in response to successful storage of the node execution information of the first node, displaying the flow execution information and a successful execution identification of the first node;
and in response to successful storage of the node execution information of the second node, displaying the flow execution information and a successful execution identification of the second node.
5. The method of claim 3, further comprising:
in response to a failure to store the node execution information of the first node, re-requesting the database to store the node execution information of the first node;
and requesting the database to store the node execution information of the second node.
6. The method of claim 5, further comprising at least one of:
in response to successful storage of the node execution information of the second node, displaying the first batch identifier and refusing to display the second batch identifier;
or, in response to an interruption in the storage of the node execution information of the first node, terminating the display of the flow execution information.
7. The method according to any one of claims 1-6, further comprising:
In response to a record generation request for the flow execution information, acquiring the flow execution information from the database to generate a flow execution record in a preset format; wherein the record generation request includes the preset format;
Or in response to the occurrence of an abnormality in the storage of the node execution information, sending abnormality prompt information.
8. A flow execution information generating device, comprising:
A request module, configured to generate a target request based on node execution information of at least one node in response to a start of execution of a flow including the node; wherein the input parameter and the output parameter of the node in the node execution information are associated based on the node identification of the node;
A sending module, configured to send the target request to a database to store the node execution information and obtain flow execution information of the flow; the flow execution information includes at least one of: an input parameter of the node, an output parameter of the node, an execution state, an execution start time, an execution end time, an execution duration, an execution operation, or the node identification.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1 to 7 when the program is executed.
10. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410309768.0A CN118093317A (en) | 2024-03-18 | 2024-03-18 | Method for generating flow execution information and related equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410309768.0A CN118093317A (en) | 2024-03-18 | 2024-03-18 | Method for generating flow execution information and related equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118093317A true CN118093317A (en) | 2024-05-28 |
Family
ID=91143934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410309768.0A Pending CN118093317A (en) | 2024-03-18 | 2024-03-18 | Method for generating flow execution information and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118093317A (en) |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |