CN116661978A - Distributed flow processing method and device and distributed business flow engine - Google Patents

Distributed flow processing method and device and distributed business flow engine

Info

Publication number
CN116661978A
CN116661978A (application CN202310952609.8A)
Authority
CN
China
Prior art keywords
execution
node
flow
service
packet
Prior art date
Legal status
Granted
Application number
CN202310952609.8A
Other languages
Chinese (zh)
Other versions
CN116661978B (en)
Inventor
李�杰
叶吕楸
蔡锦堂
田欢春
Current Assignee
Zhejiang Yunrong Innovation Technology Co ltd
Original Assignee
Zhejiang Yunrong Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Yunrong Innovation Technology Co ltd filed Critical Zhejiang Yunrong Innovation Technology Co ltd
Priority to CN202310952609.8A priority Critical patent/CN116661978B/en
Publication of CN116661978A publication Critical patent/CN116661978A/en
Application granted granted Critical
Publication of CN116661978B publication Critical patent/CN116661978B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/461 Saving or restoring of program or task context
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a distributed flow processing method and device and a distributed business flow engine, and relates to the technical field of business flow engines. The method comprises the following steps: acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting a flow chart; parsing the service request and initializing the flow input parameters; scheduling a service side to process an execution group based on the grouping information in the flow chart and the load state of the service side, obtaining the grouping execution result and grouping context returned by the service side, updating the flow context according to the grouping context, and determining the next service side to be scheduled; and integrating the grouping execution results of all execution groups to obtain a service processing result and returning an interface response to the user. The invention is applicable to complex business scenarios and can provide comprehensive support for complex business systems such as online transactions and distributed microservices.

Description

Distributed flow processing method and device and distributed business flow engine
Technical Field
The invention relates to the technical field of business process engines, in particular to a distributed process processing method and device and a distributed business process engine.
Background
Binding the process and the service together forms a business process engine, which makes it convenient to implement the service requirements of the related fields and easy for users to use.
Traditional business process engines are basically similar in principle and function. When a process is edited, the process components have no explicit input parameters and share a global context, so the process editing procedure is completely separated from the code; this places high demands on users, and both process scheduling and the distributed invocation and execution strategies for components (or tasks) are inflexible.
Disclosure of Invention
In view of this, the embodiment of the invention provides a distributed flow processing method, a distributed flow processing device and a distributed business flow engine.
According to a first aspect, an embodiment of the present invention provides a distributed flow processing method, where the method is applied to a scheduling side, and the method includes:
acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart; the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
Analyzing the service request and initializing the flow entry parameters;
scheduling a service side processing execution group based on packet information and a load state of the service side in the flow chart to obtain a packet execution result and a packet context returned by the service side, updating the flow context according to the packet context, and determining the service side to be scheduled;
and integrating the grouping execution results of each execution group to obtain a service processing result and returning an interface response to the user.
With reference to the first aspect, in a first implementation manner of the first aspect, based on packet information in the flowchart and a load state of a service side, the scheduling the service side processes an execution group to obtain a packet execution result and a packet context returned by the service side, and updates the flow context and determines the service side to be scheduled according to the packet context, which specifically includes:
based on grouping information and load states of the service sides in the flow chart, determining the service side corresponding to the first execution group to be executed, and sending the execution group, the corresponding grouping information and the flow context to the corresponding service side;
receiving a packet execution result and a packet context returned by a service side, updating a flow context, determining a service side corresponding to a next execution group to be executed based on packet information in a flow chart and a load state of the service side, and sending the execution group, the corresponding packet information and the flow context to the corresponding service side;
And determining that a packet execution result and a packet context corresponding to the last execution group are received, updating the flow context and terminating the scheduling.
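The scheduling loop described in this implementation can be sketched as follows. This is a minimal illustration under assumed data shapes — the `serviceName`/`load` fields, the `pick_service_side` helper and the `dispatch` callback are hypothetical stand-ins for the patent's grouping information and the network call to a service side, not the patented implementation:

```python
# Minimal sketch of the scheduling-side loop: for each execution group,
# pick the least-loaded service side that hosts the group's service,
# dispatch the group together with its grouping information and the
# flow context, then fold the returned group context back into the
# flow context. All names here are illustrative assumptions.

def pick_service_side(group, load_states):
    # Candidates are the service sides that host the group's service;
    # among them, choose the one with the lowest reported load.
    candidates = [s for s in load_states if s["service"] == group["serviceName"]]
    return min(candidates, key=lambda s: s["load"])

def schedule(flow_chart, flow_context, load_states, dispatch):
    results = []
    for group in flow_chart["executionGroups"]:   # order comes from grouping info
        side = pick_service_side(group, load_states)
        # dispatch() stands in for the remote call to the chosen executor
        group_result, group_context = dispatch(side, group, flow_context)
        flow_context.update(group_context)        # update the flow context
        results.append(group_result)
    return results, flow_context                  # results are integrated afterwards
```

In this sketch the scheduler keeps only the flow context and the per-side load table; receiving the result of the last execution group ends the loop, matching the termination condition above.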
According to a second aspect, an embodiment of the present invention further provides a distributed flow processing method, where the method is applied to a service side, and the method includes:
receiving a packet execution request sent by a scheduling side; the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context, wherein the grouping information at least comprises the following contents: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
based on the grouping execution request, preparing an entry parameter corresponding to the node, and processing the node to obtain a node processing result and an exit parameter corresponding to the node according to the node type corresponding to the node;
updating the exit parameters corresponding to the nodes to the packet context;
and assembling node processing results corresponding to all nodes in the execution group to obtain a grouping processing result, and returning the grouping execution result and updated grouping context to the scheduling side.
With reference to the second aspect, in a first implementation manner of the second aspect, the preparing, based on the packet execution request, an entry parameter corresponding to a node, and according to a node type corresponding to the node, processing the node to obtain a node processing result and an exit parameter corresponding to the node specifically includes:
based on the grouping execution request, preparing node entry parameters according to a value expression configured by the nodes;
acquiring a node type corresponding to the node;
determining the node type as a functional node, executing corresponding functional node logic, and obtaining a node processing result and an outlet parameter corresponding to the node;
determining the node type as a service node, acquiring a method of a node corresponding component from the cache, calling the service component in a reflection mode, and acquiring a calling result of the method to obtain a node processing result and an outlet parameter corresponding to the node.
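The per-node dispatch above distinguishes functional nodes from service nodes. A rough sketch of that branch, with Python's `getattr()` used as a stand-in for the Java reflection call described in the text (the component cache contents and the `OrderComponent` class are illustrative assumptions):

```python
# Sketch of the executor's per-node dispatch: a "function" node runs
# built-in logic; a "service" node looks up a cached component and
# invokes the configured method dynamically (reflective lookup).
# Component and method names are illustrative assumptions.

class OrderComponent:                      # hypothetical cached service component
    def create(self, params):
        return {"orderId": 1, "ok": True}

COMPONENT_CACHE = {"orderService": OrderComponent()}

def process_node(node, in_params):
    if node["type"] == "function":
        # functional node: execute the corresponding built-in logic
        return node["logic"](in_params)
    elif node["type"] == "service":
        component = COMPONENT_CACHE[node["component"]]
        method = getattr(component, node["method"])   # analog of reflection
        return method(in_params)
    raise ValueError("unknown node type: %s" % node["type"])
```

The return value plays the role of the node processing result; the node's output parameters would then be merged into the group context.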
With reference to the second aspect, in a second implementation manner of the second aspect, the method further comprises the steps of:
upon startup, scanning for the marked components based on the packet execution request, parsing the metadata corresponding to the scanned components, and caching and reporting the metadata; the components required on the service side are marked in advance.
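The marking-and-scanning step can be pictured as follows. Here a decorator plays the role of the mark (in a Java implementation this would typically be an annotation); the registry, decorator name and metadata fields are illustrative assumptions, not the patent's API:

```python
# Sketch of component marking and metadata scanning on the service side:
# marked components are collected in a registry, and at startup the
# registry is walked, metadata is parsed into a cache, and the cache is
# what would be reported upward. All names are illustrative assumptions.

import json

MARKED = []

def flow_component(name, in_params, out_key):
    def wrap(fn):
        MARKED.append({"name": name, "inParams": in_params,
                       "outKey": out_key, "impl": fn})
        return fn
    return wrap

@flow_component("jsonParse", in_params=["raw"], out_key="parsed")
def json_parse(params):
    return json.loads(params["raw"])

def scan_components():
    # Parse and cache the metadata of every marked component.
    return {c["name"]: {k: c[k] for k in ("inParams", "outKey")} for c in MARKED}
```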
According to a third aspect, an embodiment of the present invention further provides a distributed flow processing apparatus, where the apparatus is applied to a scheduling side, and the apparatus includes:
The first receiving module is used for acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart; the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
the analysis module is used for analyzing the service request and initializing the flow entry parameters;
the scheduling module is used for scheduling the service side to process the execution group based on the packet information in the flow chart and the load state of the service side, obtaining a packet execution result returned by the service side and a packet context, updating the flow context according to the packet context and determining the service side to be scheduled;
and the integration module is used for integrating the grouping execution results of each execution group to obtain a service processing result and returning an interface response to the user.
According to a fourth aspect, an embodiment of the present invention further provides a distributed flow processing apparatus, where the apparatus is applied to a service side, and the apparatus includes:
The second receiving module is used for receiving the packet execution request sent by the scheduling side; the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context, wherein the grouping information at least comprises the following contents: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
the preparation module is used for preparing the entry parameters corresponding to the nodes based on the packet execution request, and processing the nodes to obtain node processing results and the exit parameters corresponding to the nodes according to the node types corresponding to the nodes;
the updating module is used for updating the outlet parameters corresponding to the nodes into the packet context;
and the assembly module is used for assembling node processing results corresponding to all the nodes in the execution group to obtain a grouping processing result and returning the grouping execution result and the updated grouping context to the dispatching side.
According to a fifth aspect, an embodiment of the present invention further provides a distributed business process engine, including:
the engine comprises an editor, a scheduler and at least one executor, wherein the scheduler and the executors are connected with the editor, and each executor is connected with the scheduler, so that service logic can be combined according to the component list to form a flow chart conforming to the service;
The editor is used for disassembling the business process into at least one component and acquiring an actuator component list fed back by the actuator;
the scheduler is used for receiving the service request of the user, mapping the service request to a corresponding flow according to a preset mapping rule, extracting to obtain a flow chart, analyzing the service request, initializing flow entry parameters, scheduling a service side to process an execution group based on grouping information in the flow chart and a load state of the service side, obtaining a grouping execution result and a grouping context returned by the service side, updating the flow context according to the grouping context, determining an executor to be scheduled, integrating grouping execution results of each execution group, obtaining a service processing result and feeding back the service processing result to the user;
the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
the executor is used for receiving a packet execution request sent by a scheduling side, preparing an entry parameter corresponding to a node based on the packet execution request, processing the node to obtain a node processing result and an exit parameter corresponding to the node according to the node type corresponding to the node, updating the exit parameter corresponding to the node to a packet context, assembling the node processing result corresponding to each node in the execution group to obtain the packet processing result, and returning the packet execution result and the updated packet context to the scheduler;
The packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context; after receiving the grouping execution request and starting, the executor scans out marked components, analyzes metadata corresponding to the scanned components, caches the metadata, obtains an executor component list and reports the executor component list to the editor.
According to a sixth aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the steps of the distributed flow processing method as described in any one of the above.
According to a seventh aspect, embodiments of the present invention also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a distributed flow processing method as described in any of the above.
By splitting the business flow engine into a scheduling side and a service side, the distributed flow processing method and device and the distributed business flow engine described above achieve more flexible flow scheduling and execution strategies and support advanced features such as load balancing and failover, thereby improving the robustness of distributed services.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and should not be construed as limiting the invention in any way, in which:
FIG. 1 is the first schematic flow chart of the distributed flow processing method provided by the present invention;
FIG. 2 is a schematic diagram of a componentized business process in the distributed flow processing method provided by the present invention;
FIG. 3 is a schematic diagram of componentized business process grouping in the distributed flow processing method provided by the present invention;
FIG. 4 is the second schematic flow chart of the distributed flow processing method provided by the present invention;
FIG. 5 is the third schematic flow chart of the distributed flow processing method provided by the present invention;
FIG. 6 is the fourth schematic flow chart of the distributed flow processing method provided by the present invention;
FIG. 7 is a schematic diagram of the distributed flow processing method provided by the present invention;
FIG. 8 is the first schematic structural diagram of the distributed flow processing apparatus provided by the present invention;
FIG. 9 is the second schematic structural diagram of the distributed flow processing apparatus provided by the present invention;
FIG. 10 is a schematic structural diagram of the distributed business flow engine provided by the present invention;
FIG. 11 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
Binding the process and the service together forms a business process engine, i.e. the process and the service are in a highly coupled relation; this makes it convenient to implement the service requirements of the related field and easy for users to use.
Traditional business process engines are basically similar in principle and function; the currently popular Activiti is taken as an example, and its implementation is as follows:
the process is defined as follows: the flow definition in the activity refers to an XML file of a business flow model and symbol (Business Process Model and Notation, BPMN) 2.0 standard, which describes information of each node, connection, event, etc. of the flow, and in the flow definition, a user can define information of execution sequence, condition, variable, etc. of the flow. BPMN 2.0, an international standard for modeling and representing business processes, provides a standard set of graphical symbologies for business process charts that can be understood and used by business analysts, technology developers, and business personnel.
Flow engine: the process engine of Activiti is a Java-based tool responsible for parsing the flow definition and executing the flow accordingly; in the flow engine, a user can define the execution environment, parameters, listeners and other information of the flow.
Tasks: a task in Activiti refers to a to-do item in the flow, created by a task node in the flow definition. In a task, a user can view the execution state of the task, the task variables, the handler and other information.
Execution process: the flow execution of Activiti is mainly divided into two stages: flow definition parsing and flow instance execution. In the parsing stage, Activiti reads the flow definition file and parses it into an internal model containing the flow definition, task nodes, connections, events and other information; in the execution stage, Activiti creates a flow instance according to the flow definition model and executes each node of the flow step by step in the defined order until the flow ends or an exception occurs.
In general, the flow execution principle of Activiti is to analyze a flow definition, create a flow instance, gradually execute each node in the flow according to the execution sequence in the flow definition, and finally complete the whole execution process of the flow.
However, conventional business process engines such as Activiti use the BPMN standard, which is only suitable for workflow scenarios and cannot serve as a solution for complex business processing. Their business components must implement explicit interfaces agreed with the process engine, are scheduled by the process engine, and are wired in a hard-coded manner, so the coupling between process and business is very high. When a process is edited, the process components have no explicit input parameters and must share a global context, so process editing is completely separated from the code and places high demands on users. In addition, the process scheduling of such engines is inflexible, as are the distributed invocation and execution strategies for components (or tasks).
In business process engines, context generally refers to the environmental data or state information present during the execution of a flow, which may include flow variables, flow instance information, task data, event information, etc.; the context provides the necessary information at each stage of flow execution to help determine the execution path of the flow or to store processing results. It should be noted that the context scheme in a conventional business process engine only provides a basic flow-variable facility, which is read and written in code through an API and cannot be passed in the form of component parameters.
In order to solve the above-mentioned problems, a distributed flow processing method is provided in this embodiment. The distributed flow processing method according to the embodiment of the present invention may be used in an electronic device, including but not limited to a computer, a mobile terminal, etc., and fig. 1 is a flow diagram of the distributed flow processing method according to the embodiment of the present invention, as shown in fig. 1, where the method is applied to a scheduling side, such as a scheduler of a business flow engine, and the method includes the following steps:
a10, acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart. In an embodiment of the present invention, the flowchart includes at least the following: at least one execution group and grouping information, and it should be noted that if all execution groups perform service integration according to information included in the grouping information, a service flow corresponding to the whole service request can be formed, and the grouping information at least includes the following contents: execution order between execution groups, execution importance, execution conditions, packet sequence number, nodes (flowNodes) contained in the current group, packet data of the current group, all entry parameters (inParams) of the current group, and service name (serviceName) of the current group.
Referring to FIG. 2, in this method the service processing function is split into several components, each of which can be regarded as a node and is implemented by writing code or reusing existing component code; a component can be given corresponding input/output parameters (inParams/outpkey), a node type (type), a service name (serviceName), etc. A complete flow runs from the begin component (node) to the end component (node), with any number of components (nodes) in between. If the nodes of the flow execute serially in sequence (as shown in FIG. 2), the flow as a whole can be divided into several execution groups according to node information and business rules; if the flow contains branches that execute in parallel, then in the embodiment of the present invention a branch may be placed in one execution group or split into several execution groups according to the components it contains. Either way, whether the flow runs serially or has parallel branches, each execution group obtains the corresponding execution information, from which the first and last execution groups to be executed can be determined. In the embodiment of the invention, by introducing first and last execution groups and giving their components input/output parameters that serve as the input/output parameters of the flow, the flow's input and output parameters are explicitly defined, improving the readability and usability of the flow.
It should be noted that, in this method, when grouping, each node belongs to one and only one execution group.
There are also corresponding execution orders, execution importance levels and execution conditions between execution groups. In short, each execution group contains at least one component (node); since each component carries information such as node content and node type, the execution group formed from those components likewise has corresponding grouping information, grouping data, etc.
Referring to fig. 3, more specifically, in order to better divide the end-to-end execution groups, a start (begin) component and an end (end) component are introduced in the method, and the whole flow is divided into three groups, wherein after grouping, the first execution group includes a start component, a service a component 1, a service a component 2 and a JSON component, the second execution group includes a service B component 1 and a variable component, and the third execution group includes a service C component 1 and an end component.
Flow grouping is an important function for improving execution efficiency: by dividing several consecutive service nodes belonging to the same microservice, together with the function nodes mixed among them, into one group for overall scheduling, the number of network round trips can be reduced and execution efficiency improved.
In this method, when a flow is grouped, several consecutive service nodes with the same service, together with the adjacent function nodes, are all placed in one group; when the flow chart is executed, each group is issued as a unit to an executor for execution. Taking FIG. 3 as an example, the scheduler dispatches the first execution group to executor A on service side A, the second execution group to executor B on service side B, and the third execution group to executor C on service side C.
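The grouping rule just described — consecutive service nodes of the same service plus the function nodes mixed among them form one group — can be sketched as a single pass over the node list. The node structure (`serviceName` absent on function/begin/end nodes) is an assumed representation:

```python
# Sketch of flow grouping: walk the nodes in order and start a new
# execution group whenever a service node belongs to a different
# service than the current group; function nodes (no serviceName)
# simply stay in the group being built, as described in the text.

def group_nodes(nodes):
    groups, current, current_service = [], [], None
    for node in nodes:
        svc = node.get("serviceName")       # None for function/begin/end nodes
        if svc is not None:
            if current_service is not None and svc != current_service:
                groups.append(current)      # service changed: close the group
                current = []
            current_service = svc
        current.append(node)
    if current:
        groups.append(current)
    return groups
```

On a node sequence shaped like FIG. 3 (begin, two service-A nodes, a JSON node, a service-B node, a variable node, a service-C node, end), this yields the three groups described above.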
By scheduling in groups, allowing multiple tasks to be performed in parallel or serially in the same context, execution efficiency may be improved.
In the embodiment of the invention, the business process engine is componentized. A componentized business process engine can decompose a complex business process into a series of reusable, combinable components that are combined and scheduled according to specific business rules and processes to execute complex business logic. This improves development efficiency, reduces maintenance cost, and increases the scalability and flexibility of the system.
The node information includes node content, execution sequence between nodes, conditions, variables, and the like.
In step A10, the user initiates a service request via TCP/HTTP/MQ middleware, where the message format includes but is not limited to JSON/XML. The scheduling side initializes the user's service request, forms a one-to-one mapping between the received request ID and a configured flow ID according to the preset mapping rules, and then extracts the related flowchart data, thereby mapping the request to the flow.
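A minimal sketch of the request-to-flow mapping in step A10, assuming a simple dictionary of mapping rules and a flow repository (the names, request shape, and flow IDs are all illustrative):

```python
# Hypothetical preset mapping rules (request type -> configured flow ID)
mapping_rules = {"order.create": "flow-1001", "order.cancel": "flow-1002"}

# Hypothetical repository of flowchart data keyed by flow ID
flow_repository = {"flow-1001": {"flowId": "flow-1001", "groups": ["G1", "G2", "G3"]}}

def resolve_flow(request):
    """Map a user request to its flow and extract the flowchart data."""
    flow_id = mapping_rules.get(request["type"])
    if flow_id is None:
        raise KeyError(f"no flow configured for request type {request['type']!r}")
    return flow_repository[flow_id]

chart = resolve_flow({"requestId": "req-42", "type": "order.create"})
print(chart["flowId"])
```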
A20, analyzing the service request, and initializing the flow entry parameters. Specifically, the scheduling side parses and converts the message in the service request, and initializes the flow entry parameter.
A30, based on the packet information in the flow chart and the load state of the service side, the service side is scheduled to process the execution group, a packet execution result and a packet context returned by the service side are obtained, and the flow context is updated and the service side to be scheduled is determined according to the packet context.
In a conventional business process engine, complex code usually has to be written to handle context data, which increases development difficulty and the risk of errors. In the embodiment of the invention, the flowchart specifies the data-flow rules, so that data exchange between the context and the component input/output parameters is realized and the handling of context data is simplified.
As the scheduling side and central hub of the business process engine, the scheduler carries the key task of scheduling and executing each execution group produced by grouping. Based on the grouping information in the flowchart, the scheduler distributes the user's service request to the executors on the corresponding service side, thereby achieving load balancing and efficient request processing.
In the embodiment of the invention, the business process engine is split into a scheduling side and a service side, corresponding to a scheduler and executors respectively. The scheduler is responsible for automatic grouped scheduling of the flow and manages all executors, realizing more flexible and reliable flow scheduling and execution strategies. For example, if one of the executors fails, the scheduler can reassign its tasks to other available executors. In addition, the scheduler supports more complex scheduling strategies, such as load balancing and failover, to adapt to different service scenarios and requirements, thereby further improving the robustness of the distributed service.
By monitoring the executors' load states in real time, the scheduler selects the most suitable executor to process each service request, ensuring the stability and performance of the whole system. The scheduling side serves as the key node of dynamic routing: it receives service requests from users and ensures that they are correctly routed to the corresponding service side.
Because there is a first execution group after grouping, in the method the scheduling side first schedules the executor with the best current load state to process the first execution group. After the executor finishes processing it, the packet execution result and the packet context corresponding to the first execution group are fed back to the scheduling side. The scheduling side updates the flow context according to the packet context (the grouping information may in turn be affected and adjusted when the context is updated), determines the service side, that is, the executor, corresponding to the next execution group according to the grouping information and the load states of the service sides, and processes the remaining execution groups in sequence until the last execution group has been processed.
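The dispatch loop just described can be sketched as follows; the Executor stub, its static load field, and the dictionary-merge of contexts are simplifying assumptions, not the engine's actual implementation:

```python
class Executor:
    """Stub executor: a real one would run every node in the group."""
    def __init__(self, name, load):
        self.name, self.load = name, load

    def execute(self, group, flow_context):
        packet_context = dict(flow_context)        # the group works on its own copy
        packet_context["done:" + group] = self.name
        return group + "-result", packet_context   # packet execution result + context

def run_flow(groups, executors, flow_context):
    """Dispatch each execution group in order to the best-loaded executor."""
    results = []
    for group in groups:
        executor = min(executors, key=lambda e: e.load)  # best current load state
        result, packet_ctx = executor.execute(group, flow_context)
        flow_context.update(packet_ctx)                  # merge packet context back
        results.append(result)
    return results, flow_context

executors = [Executor("A", 0.2), Executor("B", 0.5)]
results, ctx = run_flow(["G1", "G2", "G3"], executors, {})
print(results, ctx)
```

Because the stub loads never change, every group goes to executor A here; a real scheduler would refresh load states between dispatches.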
Preferably, the packet context includes at least the following: packet ID, packet ingress parameters, packet egress parameters, and business process engine variables.
As a preferred implementation of the embodiment of the invention, the load state may include information such as an executor's processing capacity (determined by its physical resources) and load proportion. When the scheduling side selects a suitable executor, it first filters the executors whose load state does not exceed a preset threshold, and may then predict each remaining executor's load state over a future time period and select, from the executors under the threshold, the one whose predicted load state is best.
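A hedged sketch of this selection rule: drop executors above a preset load threshold, then pick the one with the lowest predicted load. The field names and the 0.8 threshold are assumptions for illustration:

```python
def choose_executor(executors, threshold=0.8):
    """Filter by current load threshold, then pick the best predicted load."""
    candidates = [e for e in executors if e["load"] <= threshold]
    if not candidates:
        raise RuntimeError("no executor under the load threshold")
    return min(candidates, key=lambda e: e["predicted_load"])

executors = [
    {"name": "A", "load": 0.9, "predicted_load": 0.3},  # over threshold now: excluded
    {"name": "B", "load": 0.5, "predicted_load": 0.6},
    {"name": "C", "load": 0.6, "predicted_load": 0.4},
]
print(choose_executor(executors)["name"])
```

Note that executor A, despite the best predicted load, is excluded because its current load already exceeds the threshold, so C is chosen.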
That is, the scheduler, as a core function of the system, is responsible for controlling the scheduling of the execution groups. Through load balancing and dynamic request distribution, the scheduler ensures that requests are promptly dispatched to the most suitable executor, maintaining system stability and performance and further improving the system's throughput and response time.
And A40, integrating the grouping execution results of each execution group to obtain a service processing result and returning an interface response to the user.
The packet execution results corresponding to each execution group of the componentized business process engine are integrated to obtain the service processing result required by the service, and the scheduling side then returns the service processing result to the user in the interface response.
In the distributed flow processing method provided by the invention, the business process engine is split into a scheduling side and a service side, which realizes more flexible flow scheduling and execution strategies and supports advanced features such as load balancing and failover, thereby improving the robustness of the distributed service. The flow is split into at least one execution group, and the scheduling side schedules the service side to execute the corresponding execution group; the scheduling side is responsible for automatic grouped scheduling of the flow and manages all service sides, realizing more flexible and reliable flow scheduling and execution strategies. Data-flow rules are defined so that data exchange between the context and the component input/output parameters is realized, which simplifies the handling of context data. The execution groups processed by the service side clearly define the input/output parameters of the flow, which simplifies data management, reduces errors, improves development efficiency, and improves the readability and usability of the flow. The method is applicable to complex business scenarios and can provide comprehensive support for complex business systems such as online transactions and distributed microservices.
The following describes a distributed flow processing method provided in the present invention with reference to fig. 4, and step a30 specifically includes:
A31, determining the service side corresponding to the first execution group to be executed based on the grouping information in the flowchart and the load states of the service sides, and sending the execution group, the corresponding grouping information and the flow context to that service side. At this point the flow context can also be regarded as the packet context of the current packet.
A32, receiving a packet execution result and a packet context returned by the service side, updating the flow context, determining the service side corresponding to the next execution group to be executed based on the packet information in the flow chart and the load state of the service side, and sending the execution group, the corresponding packet information and the flow context to the corresponding service side. Whereby continuous packet processing is performed.
A33, determining that a packet execution result and a packet context corresponding to the last execution group are received, updating the flow context and terminating the scheduling.
In the embodiment of the invention, the packet data and the flow context are packed into a packet execution request and sent to the executor.
Taking fig. 3 as an example, the scheduler starts packet execution according to the deployment definition of the flow. It sends the first execution group, together with the flow context (that is, the packet context), to executor A; executor A processes each component in the packet and feeds back the packet execution result and the packet context. On receiving them, the scheduler updates the flow context and dispatches the next packet: it sends the second execution group and the flow context to executor B, which processes each component in the packet and feeds back its packet execution result and packet context. The scheduler again updates the flow context and sends the third execution group and the flow context to executor C, which processes each component in the packet and feeds back the packet execution result and the packet context, completing the final packet. Upon determining that packet execution is complete (i.e., the end component is reached), the scheduler terminates the flow and executes the exit logic.
In order to solve the above-mentioned problems, a distributed flow processing method is provided in this embodiment. The distributed flow processing method according to the embodiment of the present invention may be used in an electronic device, including but not limited to a computer, a mobile terminal, etc. Fig. 5 is a flow diagram of the distributed flow processing method according to the embodiment of the present invention; as shown in fig. 5, the method is applied to a service side, such as an executor on the service side of a business process engine, and includes the following steps:
B10, receiving a packet execution request sent by the scheduling side. In the embodiment of the present invention, the packet execution request includes at least the following: the execution group, the grouping information corresponding to the execution group, and the flow context. It should be noted that, through the service integration performed according to the grouping information, all execution groups together form the business flow corresponding to the whole service request. The grouping information includes at least the following: the execution order, execution importance and execution conditions between execution groups, the packet sequence number, the nodes (flowNodes) contained in the current group, the packet data of the current group, all entry parameters (inParams) of the current group, and the service name (serviceName) of the current group; together these form the flowchart of the business flow.
B30, based on the grouping execution request, preparing the entry parameters corresponding to the nodes, and processing the nodes to obtain the node processing result and the exit parameters corresponding to the nodes according to the node types corresponding to the nodes.
In the embodiment of the invention, the service side prepares the node entry parameters according to the value expressions configured on the node. For example, if the userName field of the node entry parameter is configured with the value expression $user.name, the $user object is looked up in the context and its name field is assigned to the userName field of the node's entry-parameter object; if the userAge field of the node entry parameter is configured with the value expression 18, the constant 18 is assigned to the userAge field of the node's entry-parameter object.
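The value-expression binding can be sketched like this; the resolver below handles the two cases from the example ($-prefixed context paths and constants) and is an illustration, not the engine's actual implementation:

```python
def resolve_expression(expr, context):
    """'$user.name' walks the context; any non-$ value is treated as a constant."""
    if isinstance(expr, str) and expr.startswith("$"):
        value = context
        for part in expr[1:].split("."):   # '$user.name' -> context['user']['name']
            value = value[part]
        return value
    return expr

context = {"user": {"name": "alice"}}
in_params = {
    "userName": resolve_expression("$user.name", context),  # from the context
    "userAge": resolve_expression(18, context),             # constant assignment
}
print(in_params)
```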
Referring to fig. 2, in this method the service processing function is split into several components, each of which can be regarded as a node and is implemented by writing code or reusing existing component code. A component can be configured with input/output parameters (inParams/outpkey), a node type (type), a service name (serviceName), etc. A complete flow runs from the begin component (node) to the end component (node) and may include several components (nodes) in between. If the nodes of the flow execute serially in sequence (as shown in fig. 2), the flow as a whole can be divided into several groups according to the node information and business rules, yielding several execution groups; if there are branches executed in parallel between nodes, in the embodiment of the present invention a branch may be divided into one execution group, or into several execution groups according to the components it contains. Thus, whether the flow executes serially or contains branch paths executed in parallel, each execution group obtains its corresponding execution information, from which the first and last execution groups to be executed can be determined. In the embodiment of the invention, the first and last execution groups are introduced, and corresponding input/output parameters are set on their components as the input/output parameters of the flow, so that the flow's input/output parameters are clearly defined and the readability and usability of the flow are improved.
It should be noted that, in this method, each node is assigned to exactly one execution group during grouping.
Corresponding execution orders, execution importance, and execution conditions also exist between the execution groups. In short, each execution group includes at least one component (node), and because each component carries information such as its node content and node type, the execution group formed from those components likewise has its own grouping information, grouping data, and so on.
When the service side determines, according to the packet sequence number in the grouping information, that the execution group to be processed is the first execution group, the context-initialization logic places the entry-parameter object into the packet context according to the interface configuration.
According to different node types, the method correspondingly matches the processing modes of the nodes.
B40, updating the exit parameters corresponding to the node into the packet context. In the embodiment of the invention, the service side adds the exit parameters from the node's return value into the context under the exit-parameter names configured on the node; note that if a variable with the same name already exists, it is replaced entirely.
Steps B30 and B40 form a continuous loop: the service side finds the next node according to the successor conditions of the current node. If there is only one successor node, it executes that node directly; if there are several successor nodes, it evaluates the conditional expressions and selects a successor; if the successor node is not in the current group, execution ends; if the successor node is the end node, execution ends.
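The successor-selection rules above can be sketched as follows, with a hypothetical node table in which multi-successor branches carry condition callables evaluated against the packet context:

```python
nodes = {
    "svc1":   {"type": "service",  "next": ["branch"]},
    "branch": {"type": "function", "next": ["svc2", "end"]},
    "svc2":   {"type": "service",  "next": ["end"], "condition": lambda ctx: ctx["ok"]},
    "end":    {"type": "end",      "next": [],      "condition": lambda ctx: not ctx["ok"]},
}

def next_node(current_name, group, context):
    """Return the next node name to execute, or None when this group is done."""
    successors = nodes[current_name]["next"]
    if not successors:
        return None
    if len(successors) == 1:
        name = successors[0]                 # single successor: take it directly
    else:                                    # several: evaluate condition expressions
        name = next(s for s in successors if nodes[s]["condition"](context))
    if name not in group or nodes[name]["type"] == "end":
        return None                          # outside the current group, or end node
    return name

group = {"svc1", "branch", "svc2"}
print(next_node("branch", group, {"ok": True}))
print(next_node("svc2", group, {"ok": True}))
```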
And B50, assembling node processing results corresponding to all the nodes in the execution group to obtain a packet processing result, and returning the packet execution result and the updated packet context to the scheduling side.
It can be seen that, after each executor is normally started, a packet execution request issued by the scheduling side is received, nodes and lines are executed according to the flowchart, and a packet execution result is returned to the scheduling side.
In the distributed flow processing method provided by the invention, the business process engine is split into a scheduling side and a service side, which realizes more flexible flow scheduling and execution strategies and supports advanced features such as load balancing and failover, thereby improving the robustness of the distributed service. The flow is split into at least one execution group, and the scheduling side schedules the service side to execute the corresponding execution group, which defines the data-flow rules, realizes data exchange between the context and the component input/output parameters, and simplifies the handling of context data. The execution groups processed by the service side clearly define the flow's input/output parameters, which simplifies data management, reduces errors, improves development efficiency, and improves the readability and usability of the flow. The method is applicable to complex business scenarios and can provide comprehensive support for complex business systems such as online transactions and distributed microservices.
The following describes a distributed flow processing method provided by the present invention with reference to fig. 6, where the method further includes:
b31, based on the packet execution request, preparing the node entry parameters according to the value expression of the node configuration.
B32, obtaining the node type corresponding to the node.
And B33, determining the node type as a functional node, and executing corresponding functional node logic to obtain a node processing result and an outlet parameter corresponding to the node.
For example, if the node is a JSON node, it returns a JSON object according to the configured JSON keys and value expressions; if the node is a regular-expression node, it executes the regular expression on the configured entry-parameter value expressions and returns the judgment result; if the node is a variable node, it returns the variable extracted according to the configured value expression.
And B34, determining the node type as a service node, acquiring a method of a corresponding component of the node from the cache, calling the service component in a reflection mode, and acquiring a calling result of the method to obtain a node processing result and an outlet parameter corresponding to the node.
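Steps B33/B34 amount to a dispatch on node type. In the sketch below, Python's getattr() stands in for the Java reflective call the text describes, and OrderService, the component cache, and the node shapes are illustrative assumptions:

```python
class OrderService:
    """Stand-in business component; a real one lives on the service side."""
    def create(self, in_params):
        return {"orderId": "ord-1", "user": in_params["userName"]}

component_cache = {"orderService": OrderService()}   # cached component instances

def execute_node(node, in_params, context):
    if node["type"] == "json":                 # functional node: build a JSON object
        return dict(node["template"])
    if node["type"] == "variable":             # functional node: extract a variable
        return context[node["var"]]
    if node["type"] == "service":              # service node: reflective method call
        component = component_cache[node["component"]]
        method = getattr(component, node["method"])   # reflection-style lookup
        return method(in_params)
    raise ValueError(f"unknown node type: {node['type']}")

out = execute_node(
    {"type": "service", "component": "orderService", "method": "create"},
    {"userName": "alice"}, {},
)
print(out)
```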
The following describes a distributed flow processing method provided by the present invention with reference to fig. 7, where the method further includes:
B20, on startup, scanning out the marked components based on the packet execution request, parsing the metadata corresponding to the scanned components, and caching and reporting the metadata. The components required by the executor are marked in advance.
When the executor on the service side receives a packet execution request, it can be started; after startup it performs a scan operation and scans out the marked components. Preferably, after receiving the packet execution request, the executor may verify it and start only after the verification succeeds.
In the embodiment of the invention, a user, for example a developer, marks a method as a flow component by adding a Spring custom annotation to the method. When the service-side executor starts, the marked components are scanned out through the Spring custom-annotation mechanism, and the executor obtains the metadata corresponding to each component via reflection, including the component class name, method name, entry parameters, exit parameters, etc. The component metadata is cached for use when the flowchart actually runs, and the executor reports the collected metadata for use by the user.
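The annotation-driven registration can be illustrated with a Python decorator playing the role of the Spring custom annotation; the registry fields mirror the metadata listed above (class/method name, entry and exit parameters), but all names here are assumptions, not the engine's API:

```python
component_registry = {}   # metadata cache built at startup scan time

def flow_component(name, in_params, out_params):
    """Decorator standing in for the Spring custom annotation on a method."""
    def register(func):
        component_registry[name] = {
            "className": func.__module__,
            "methodName": func.__name__,
            "inParams": in_params,
            "outParams": out_params,
            "callable": func,
        }
        return func
    return register

@flow_component("createOrder", in_params=["userName"], out_params=["orderId"])
def create_order(userName):
    return {"orderId": f"ord-{userName}"}

# The executor would report this registry to the flow editor as the component list.
print(sorted(component_registry))
```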
By introducing Spring custom annotations, components are registered automatically at application startup, decoupling the flow from the business logic and enhancing the system's flexibility and maintainability.
This allows the user to drag registered components into a process, assign the components' input parameters through the agreed context, and write the output parameters back into the context, making the process definition more flexible and modular and able to satisfy various complex business requirements.
Specifically, the service side reports the metadata to the development side to assist in editing the flow. More specifically, after an executor starts, its components are reported to the process editor to form an executor component list. When editing a process, the user can select the required service components from this list, add them to the process by dragging, etc., and edit the selected components' information, including configuring their entry and exit parameters. For variable assignment of entry and exit parameters, the user can use the custom assignment symbol "$" together with a variable name. By specifying the variable names of the entry and exit parameters in the editing interface and marking them with the "$" symbol, the user can dynamically pass variable values to a component while the flow executes; thus, when editing a component's information, the user can explicitly specify that an attribute of the input parameters takes its value from a variable or from an attribute of a variable. Similarly, the user can explicitly define variables for the output parameters and assign the component's output results to the defined variables.
For example, when editing a component's input parameters: a variable $A can be assigned directly to the entry parameter InParam; the value of attribute a of variable $A can be assigned directly to the entry parameter InParam; a variable $A can be assigned to attribute field1 of the entry-parameter object InParam; the value of attribute a of variable $A can be assigned to attribute field1 of the entry-parameter object InParam; and the output result can be assigned to the parameter variable $out.
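These assignment forms can be sketched with a small "$" expression resolver (illustrative only; the context shape and variable names are assumptions):

```python
def resolve(expr, ctx):
    """'$A' -> ctx['A']; '$A.a' -> ctx['A']['a']; other values are constants."""
    if isinstance(expr, str) and expr.startswith("$"):
        value = ctx
        for part in expr[1:].split("."):
            value = value[part]
        return value
    return expr

ctx = {"A": {"a": 7}}
in_param = resolve("$A", ctx)                       # InParam = $A
in_param_field = {"field1": resolve("$A.a", ctx)}   # InParam.field1 = $A.a
ctx["out"] = {"orderId": "ord-1"}                   # output result assigned to $out
print(in_param, in_param_field, ctx["out"])
```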
The following describes a distributed flow processing apparatus provided by an embodiment of the present invention, and the distributed flow processing apparatus described below and the distributed flow processing method described above may be referred to correspondingly.
In order to solve the above-mentioned problems, a distributed flow processing apparatus is provided in this embodiment. The distributed flow processing apparatus according to the embodiment of the present invention may be used in an electronic device, including but not limited to a computer, a mobile terminal, etc. Fig. 8 is a schematic structural diagram of the distributed flow processing apparatus according to the embodiment of the present invention; as shown in fig. 8, the apparatus is applied to a scheduling side, such as a scheduler of a business process engine, and includes:
the first obtaining module 10 is configured to obtain a service request of a user, map the service request to a corresponding flow according to a preset mapping rule, and extract the flow chart. In an embodiment of the present invention, the flowchart includes at least the following: at least one execution group and grouping information, and it should be noted that, if the grouping information is used for service integration, all execution groups can form a service flow corresponding to the whole service request, and the grouping information at least includes the following contents: execution order between execution groups, execution importance, execution conditions, packet sequence number, nodes (flowNodes) contained in the current group, packet data of the current group, all entry parameters (inParams) of the current group, and service name (serviceName) of the current group.
Flow grouping is an important function for improving execution efficiency: by dividing several consecutive service nodes belonging to the same microservice, together with the function nodes mixed among them, into one group for overall scheduling, the number of network round trips can be reduced and execution efficiency improved.
In the device, when a flow is grouped, several consecutive service nodes belonging to the same service, together with the adjacent functional nodes, are all placed in one group, and when the flowchart is executed, each group is issued to an executor for execution as a whole. Taking fig. 3 as an example, the scheduler dispatches the first execution group to executor A on service side A, the second execution group to executor B on service side B, and the third execution group to executor C on service side C.
By scheduling in groups, multiple tasks can be performed in parallel or serially within the same context, which improves execution efficiency.
In the embodiment of the invention, the business process engine is componentized. A componentized business process engine can decompose a complex business process into a series of reusable, combinable components that are combined and scheduled according to specific business rules and processes to execute complex business logic. This improves development efficiency, reduces maintenance cost, and increases the scalability and flexibility of the system.
The node information includes the node content, the execution order between nodes, conditions, variables, and so on.
The parsing module 20 is configured to parse the service request and initialize the flow entry parameter. Specifically, the scheduling side parses and converts the message in the service request, and initializes the flow entry parameter.
The scheduling module 30 is configured to schedule the service side to process the execution group based on the packet information in the flowchart and the load state of the service side, obtain a packet execution result and a packet context returned by the service side, update the flow context according to the packet context, and determine the service side to be scheduled.
In a conventional business process engine, complex code usually has to be written to handle context data, which increases development difficulty and the risk of errors. In the embodiment of the invention, the flowchart specifies the data-flow rules, so that data exchange between the context and the component input/output parameters is realized and the handling of context data is simplified.
As the scheduling side and central hub of the business process engine, the scheduler carries the key task of scheduling and executing each execution group produced by grouping. Based on the grouping information in the flowchart, the scheduler distributes the user's service request to the executors on the corresponding service side, thereby achieving load balancing and efficient request processing.
In the embodiment of the invention, the business process engine is split into a scheduling side and a service side, corresponding to a scheduler and executors respectively. The scheduler is responsible for automatic grouped scheduling of the flow and manages all executors, realizing more flexible and reliable flow scheduling and execution strategies. For example, if one of the executors fails, the scheduler can reassign its tasks to other available executors. In addition, the scheduler supports more complex scheduling strategies, such as load balancing and failover, to adapt to different service scenarios and requirements, thereby further improving the robustness of the distributed service.
By monitoring the executors' load states in real time, the scheduler selects the most suitable executor to process each service request, ensuring the stability and performance of the whole system. The scheduling side serves as the key node of dynamic routing: it receives service requests from users and ensures that they are correctly routed to the corresponding service side.
As a preferred implementation of the embodiment of the invention, the load state may include information such as an executor's processing capacity (determined by its physical resources) and load proportion. When the scheduling side selects a suitable executor, it first filters the executors whose load state does not exceed a preset threshold, and may then predict each remaining executor's load state over a future time period and select, from the executors under the threshold, the one whose predicted load state is best.
That is, the scheduler, as a core function of the system, is responsible for controlling the scheduling of the execution groups. Through load balancing and dynamic request distribution, the scheduler ensures that requests are promptly dispatched to the most suitable executor, maintaining system stability and performance and further improving the system's throughput and response time.
And the integrating module 40 is used for integrating the grouping execution results of each execution group to obtain service processing results and returning interface responses to the user.
The packet execution results corresponding to each execution group of the componentized business process engine are integrated to obtain the service processing result required by the service, and the scheduling side then returns the service processing result to the user in the interface response.
In the distributed flow processing device provided by the invention, the business process engine is split into a scheduling side and a service side, which realizes more flexible flow scheduling and execution strategies and supports advanced features such as load balancing and failover, thereby improving the robustness of the distributed service. The flow is split into at least one execution group, and the scheduling side schedules the service side to execute the corresponding execution group; the scheduling side is responsible for automatic grouped scheduling of the flow and manages all service sides, realizing more flexible and reliable flow scheduling and execution strategies. Data-flow rules are defined so that data exchange between the context and the component input/output parameters is realized, and the execution groups processed by the service side clearly define the flow's input/output parameters, which simplifies data management, reduces errors, improves development efficiency, and improves the readability and usability of the flow. The device is applicable to complex business scenarios and can provide comprehensive support for complex business systems such as online transactions and distributed microservices.
In order to solve the above-mentioned problems, a distributed flow processing apparatus is provided in the present embodiment. The distributed flow processing apparatus according to the embodiment of the present invention may be used in an electronic device, including but not limited to a computer, a mobile terminal, etc. Fig. 9 is a schematic structural diagram of the distributed flow processing apparatus according to the embodiment of the present invention. As shown in fig. 9, the apparatus is applied to a service side, such as an executor on the service side of the business flow engine, and the apparatus includes:
the second receiving module 50 is configured to receive a packet execution request sent by the scheduling side. In the embodiment of the present invention, the packet execution request includes at least the following contents: the execution group, the grouping information corresponding to the execution group, and the flow context. It should be noted that, integrated according to the grouping information, all the execution groups form the service flow corresponding to the whole service request, and the grouping information includes at least the following contents: the execution order among the execution groups, the execution importance, the execution conditions, the packet sequence number, the nodes (flowNodes) contained in the current group, the packet data of the current group, all the entry parameters (inParams) of the current group, and the service name (serviceName) of the current group. Together, these make up the flow chart of the service flow.
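The fields listed above can be pictured as a simple data structure. The field names `flowNodes`, `inParams`, and `serviceName` come from the text; the container names `GroupInfo` and `PacketExecutionRequest` are assumptions for illustration only.

```python
# Illustrative shape of the packet execution request; container names are assumed.
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class GroupInfo:
    execution_order: int           # execution order among the execution groups
    importance: str                # execution importance
    condition: str                 # execution condition expression
    packet_number: int             # packet sequence number
    flow_nodes: List[str]          # nodes (flowNodes) contained in the current group
    packet_data: Dict[str, Any]    # packet data of the current group
    in_params: Dict[str, Any]      # all entry parameters (inParams) of the current group
    service_name: str              # service name (serviceName) of the current group


@dataclass
class PacketExecutionRequest:
    execution_group: str                              # which execution group to run
    group_info: GroupInfo                             # grouping information for that group
    flow_context: Dict[str, Any] = field(default_factory=dict)  # shared flow context
```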
The preparation module 60 is configured to prepare an entry parameter corresponding to a node based on the packet execution request, and process the node according to a node type corresponding to the node to obtain a node processing result and an exit parameter corresponding to the node.
In the embodiment of the invention, the service side prepares the node entry parameters according to the value expression configured for the node. For example, if the userName field of the node entry parameter is configured with the value expression $user.name, the $user object is looked up in the context and its name field is assigned to the userName field of the node entry parameter object; if the userAge field of the node entry parameter is configured with the value expression 18, the constant 18 is assigned to the userAge field of the node entry parameter object.
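A minimal sketch of that value-expression resolution, assuming the convention described above: an expression beginning with "$" is resolved against the flow context with dotted field access, and anything else is treated as a constant. The helper name `resolve_value` is an assumption.

```python
# Hypothetical value-expression resolver; the "$" prefix convention is from the text.
from typing import Any, Dict


def resolve_value(expression: Any, context: Dict[str, Any]) -> Any:
    expr = str(expression)
    if not expr.startswith("$"):
        return expression  # constant expression such as 18
    value: Any = context
    for part in expr[1:].split("."):
        # walk dotted paths like $user.name through dicts or objects
        value = value[part] if isinstance(value, dict) else getattr(value, part)
    return value


context = {"user": {"name": "Alice"}}
print(resolve_value("$user.name", context))  # Alice
print(resolve_value(18, context))            # 18
```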
When the service side determines, from the packet sequence number in the grouping information, that the execution group to be processed is the first execution group, the context initialization logic is executed and the entry parameter object is placed into the packet context according to the interface configuration.
According to the different node types, the apparatus matches the corresponding processing mode for each node.
The updating module 70 is configured to update the exit parameters corresponding to the node into the packet context. In the embodiment of the invention, the service side adds the exit parameters in the node return value into the context according to the exit parameter names configured for the node. It should be noted that if a variable with the same name already exists, it is replaced entirely.
The above is a continuously circulating step: the service side searches for the next node according to the successor-node conditions of the current node. If there is only one successor node, the next node is executed directly; if there are multiple successor nodes, the conditional expressions are evaluated to select among them; if the successor node is not in the current group, execution ends; if the successor node is an end node, execution ends.
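The four successor rules above can be sketched as one selection function. The node and edge shapes here are assumptions: edges map each node id to a list of `(successor_id, condition)` pairs, with conditions modelled as callables on the context and `"end"` standing in for an end node.

```python
# Hypothetical successor-selection step; edge/condition representation is assumed.
from typing import Any, Callable, Dict, List, Optional, Set, Tuple

Edges = Dict[str, List[Tuple[str, Optional[Callable[[Dict[str, Any]], bool]]]]]


def next_node(current: str, edges: Edges, group_nodes: Set[str],
              context: Dict[str, Any]) -> Optional[str]:
    """Return the next node id to execute, or None when execution of the group ends."""
    successors = edges.get(current, [])
    if not successors:
        return None
    if len(successors) == 1:
        chosen = successors[0][0]          # single successor: execute directly
    else:
        chosen = None
        for node_id, cond in successors:   # multiple successors: evaluate conditions
            if cond is None or cond(context):
                chosen = node_id
                break
    # successor outside the current group, or an end node: execution ends
    if chosen is None or chosen not in group_nodes or chosen == "end":
        return None
    return chosen
```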
The assembling module 80 is configured to assemble the node processing results corresponding to the nodes in the execution group to obtain a packet execution result, and to return the packet execution result and the updated packet context to the scheduling side.
It can be seen that, after starting normally, each executor receives the packet execution requests issued by the scheduling side, executes nodes and lines according to the flow chart, and returns the packet execution results to the scheduling side.
By splitting the business flow engine into a scheduling side and a service side, the distributed flow processing apparatus provided by the invention realizes more flexible flow scheduling and execution strategies and supports advanced features such as load balancing and failover, thereby improving the robustness of distributed services.
In order to solve the above-described problems, a distributed business process engine is provided in the present embodiment. The distributed business process engine according to the embodiment of the present invention may be used in an electronic device, including but not limited to a computer, a mobile terminal, etc., and fig. 10 is a schematic structural diagram of the distributed business process engine according to the embodiment of the present invention, as shown in fig. 10, the business process engine includes:
the system comprises an editor, a scheduler and at least one executor, wherein the scheduler and the executor are both connected with the editor, and the executor is connected with the scheduler.
The editor is used for splitting the business process into at least one component and acquiring the executor component list fed back by the executors, so as to combine the business logic according to the component list into a flow chart conforming to the business.
The editor is the development side of flow orchestration. A user can divide a service processing function into a plurality of components and complete the design of the flow in the flow editor by writing code or reusing existing component code. During flow editing, after acquiring the executor component list fed back by the executors, the user can select the corresponding service components by dragging components and the like, and complete the variable assignment of entry and exit parameters by editing the component information. This can be achieved by using the agreed assignment symbol "$" together with the variable name. The editor may communicate the relevant information to the scheduler by way of publishing.
The editor allows the user to drag registered components into the flow, assigns the component entry parameters from the agreed context, and writes the exit parameters back into the context, so that the flow definition is more flexible and modular and can adapt to various complex business requirements.
The scheduler is used for receiving the service request of the user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting the flow chart; analyzing the service request and initializing the flow entry parameters; scheduling the service side to process the execution groups based on the grouping information in the flow chart and the load state of the service side, obtaining the packet execution result and packet context returned by the service side, updating the flow context according to the packet context, and determining the executor to be scheduled; and integrating the packet execution results of each execution group to obtain the service processing result and feeding it back to the user.
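The scheduling loop just described can be pictured roughly as follows: each execution group is dispatched in order to a chosen service side, and the packet context returned by the service side is merged back into the flow context before the next group is dispatched. All names are illustrative, and `dispatch` stands in for the RPC to the service side (which in practice would also apply load balancing and execution conditions).

```python
# Rough sketch of the dispatch-and-merge loop; dispatch() is a stand-in for the
# call to the service side, which returns (packet_result, packet_context).
from typing import Any, Callable, Dict, List, Tuple

Dispatch = Callable[[str, Dict[str, Any]], Tuple[Any, Dict[str, Any]]]


def run_flow(groups: List[str], dispatch: Dispatch,
             flow_context: Dict[str, Any]) -> Tuple[List[Any], Dict[str, Any]]:
    results = []
    for group in groups:                                # execution order from grouping info
        result, packet_context = dispatch(group, dict(flow_context))
        flow_context.update(packet_context)             # same-name variables replaced entirely
        results.append(result)
    return results, flow_context                        # results are then integrated


def fake_dispatch(group, ctx):
    return group + "-ok", {group: True}


print(run_flow(["g1", "g2"], fake_dispatch, {}))
```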
In an embodiment of the present invention, the flow chart includes at least the following: at least one execution group and grouping information. It should be noted that, integrated according to the grouping information, all the execution groups can form the service flow corresponding to the whole service request, and the grouping information includes at least the following contents: the execution order among the execution groups, the execution importance, the execution conditions, the packet sequence number, the nodes (flowNodes) contained in the current group, the packet data of the current group, all the entry parameters (inParams) of the current group, and the service name (serviceName) of the current group.
The executor is used for receiving the packet execution request sent by the scheduling side; preparing the entry parameters corresponding to the nodes based on the packet execution request; processing each node according to its node type to obtain the node processing result and the exit parameters corresponding to the node; updating the exit parameters corresponding to the node into the packet context; assembling the node processing results corresponding to the nodes in the execution group to obtain the packet execution result; and returning the packet execution result and the updated packet context to the scheduler.
In an embodiment of the present invention, the packet execution request includes: the execution group, the grouping information corresponding to the execution group, and the flow context. After starting, the executor scans out the marked components, parses the metadata corresponding to the scanned components, caches the metadata, obtains the executor component list, and reports it to the editor.
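One way to picture the component scanning and metadata caching above: a marker registers each service component and caches its metadata, from which the executor builds the component list it reports to the editor. The decorator name and metadata fields are assumptions; an implementation on the JVM would typically use annotations and reflection instead of a Python decorator.

```python
# Hypothetical component registry; decorator name and metadata fields are assumed.
from typing import Any, Callable, Dict, List

COMPONENT_CACHE: Dict[str, Dict[str, Any]] = {}


def component(service_name: str) -> Callable:
    """Mark a callable as a service component and cache its metadata."""
    def register(fn: Callable) -> Callable:
        COMPONENT_CACHE[service_name] = {
            "callable": fn,
            # entry parameter names extracted from the function signature
            "in_params": list(fn.__code__.co_varnames[: fn.__code__.co_argcount]),
        }
        return fn
    return register


@component("user.query")
def query_user(user_id):
    return {"userId": user_id, "name": "Alice"}


def component_list() -> List[str]:
    """The executor component list reported to the editor."""
    return sorted(COMPONENT_CACHE)


print(component_list())  # ['user.query']
```

When a service node is later executed, the executor can look up the cached callable by service name and invoke it with the prepared entry parameters, which is the counterpart of the reflection-based invocation described for service nodes.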
By splitting the business flow engine into three parts, a scheduler, executors, and an editor, the distributed business flow engine provided by the invention realizes more flexible flow scheduling and execution strategies and supports advanced features such as load balancing and failover, thereby improving the robustness of distributed services. The flow is split into at least one execution group, and the scheduler schedules the executors to execute the corresponding execution groups; the scheduler, which is responsible for the automatic grouped scheduling of the flow, manages all executors, realizing more flexible and reliable flow scheduling and execution strategies. Data circulation rules are specified and data exchange between the context and the component input/output parameters is realized, simplifying the handling of context data, so that the execution groups processed by the executors clearly define the flow input/output parameters. This simplifies the complexity of data management, reduces errors, improves development efficiency, and improves the readability and usability of the flow. The user can drag registered components into the flow and, through the context, assign the entry parameters and write the exit parameters back, so that the flow definition is flexible, modular, and adaptable to various business requirements. The invention is applicable to complex business scenarios and can provide comprehensive support for complex business systems such as online transactions and distributed microservices.
Fig. 11 illustrates a physical structure diagram of an electronic device. As shown in fig. 11, the electronic device may include: processor 510, communication interface (Communications Interface) 520, memory 530, and communication bus 540, wherein processor 510, communication interface 520, and memory 530 communicate with each other through communication bus 540. Processor 510 may invoke logic instructions in memory 530 to perform a distributed flow processing method comprising:
acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart; the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
analyzing the service request and initializing the flow entry parameters;
scheduling the service side to process the execution groups based on the grouping information in the flow chart and the load state of the service side, obtaining the packet execution result and packet context returned by the service side, updating the flow context according to the packet context, and determining the service side to be scheduled;
and integrating the packet execution results of each execution group to obtain a service processing result and returning an interface response to the user;
or receiving a packet execution request sent by a scheduling side; the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context, wherein the grouping information at least comprises the following contents: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
based on the grouping execution request, preparing an entry parameter corresponding to the node, and processing the node to obtain a node processing result and an exit parameter corresponding to the node according to the node type corresponding to the node;
updating the exit parameters corresponding to the nodes to the packet context;
and assembling node processing results corresponding to all nodes in the execution group to obtain a grouping processing result, and returning the grouping execution result and updated grouping context to the scheduling side.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the foregoing technical solution, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments or of some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A distributed flow processing method, wherein the method is applied to a scheduling side, and the method comprises:
acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart; the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
analyzing the service request and initializing the flow entry parameters;
scheduling the service side to process the execution groups based on the grouping information in the flow chart and the load state of the service side, obtaining the packet execution result and packet context returned by the service side, updating the flow context according to the packet context, and determining the service side to be scheduled;
and integrating the grouping execution results of each execution group to obtain a service processing result and returning an interface response to the user.
2. The distributed flow processing method according to claim 1, wherein the scheduling the service side to process the execution groups based on the grouping information in the flow chart and the load state of the service side, obtaining the packet execution result and packet context returned by the service side, updating the flow context according to the packet context, and determining the service side to be scheduled specifically comprises:
based on grouping information and load states of the service sides in the flow chart, determining the service side corresponding to the first execution group to be executed, and sending the execution group, the corresponding grouping information and the flow context to the corresponding service side;
receiving a packet execution result and a packet context returned by a service side, updating a flow context, determining a service side corresponding to a next execution group to be executed based on packet information in a flow chart and a load state of the service side, and sending the execution group, the corresponding packet information and the flow context to the corresponding service side;
And determining that a packet execution result and a packet context corresponding to the last execution group are received, updating the flow context and terminating the scheduling.
3. A distributed flow processing method, wherein the method is applied to a service side, and the method comprises:
receiving a packet execution request sent by a scheduling side; the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context, wherein the grouping information at least comprises the following contents: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
based on the grouping execution request, preparing an entry parameter corresponding to the node, and processing the node to obtain a node processing result and an exit parameter corresponding to the node according to the node type corresponding to the node;
updating the exit parameters corresponding to the nodes to the packet context;
and assembling node processing results corresponding to all nodes in the execution group to obtain a grouping processing result, and returning the grouping execution result and updated grouping context to the scheduling side.
4. The distributed flow processing method according to claim 3, wherein the preparing the entry parameters corresponding to the nodes based on the packet execution request, and processing each node according to its node type to obtain the node processing result and the exit parameters corresponding to the node, specifically comprises:
Based on the grouping execution request, preparing node entry parameters according to a value expression configured by the nodes;
acquiring a node type corresponding to the node;
determining the node type as a functional node, executing corresponding functional node logic, and obtaining a node processing result and an outlet parameter corresponding to the node;
determining the node type as a service node, acquiring a method of a node corresponding component from the cache, calling the service component in a reflection mode, and acquiring a calling result of the method to obtain a node processing result and an outlet parameter corresponding to the node.
5. A distributed flow processing method according to claim 3, further comprising the steps of:
starting and scanning out the marked components based on the packet execution request, parsing the metadata corresponding to the scanned components, and caching and reporting the metadata; wherein the components required on the service side are marked.
6. A distributed flow processing apparatus, wherein the apparatus is applied to a scheduling side, the apparatus comprising:
the first receiving module is used for acquiring a service request of a user, mapping the service request to a corresponding flow according to a preset mapping rule, and extracting to obtain a flow chart; the flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
The analysis module is used for analyzing the service request and initializing the flow entry parameters;
the scheduling module is used for scheduling the service side to process the execution group based on the packet information in the flow chart and the load state of the service side, obtaining a packet execution result returned by the service side and a packet context, updating the flow context according to the packet context and determining the service side to be scheduled;
and the integration module is used for integrating the grouping execution results of each execution group to obtain a service processing result and returning an interface response to the user.
7. A distributed flow processing apparatus, wherein the apparatus is applied to a service side, the apparatus comprising:
the second receiving module is used for receiving the packet execution request sent by the scheduling side; the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context, wherein the grouping information at least comprises the following contents: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
the preparation module is used for preparing the entry parameters corresponding to the nodes based on the packet execution request, and processing the nodes to obtain node processing results and the exit parameters corresponding to the nodes according to the node types corresponding to the nodes;
The updating module is used for updating the outlet parameters corresponding to the nodes into the packet context;
and the assembly module is used for assembling node processing results corresponding to all the nodes in the execution group to obtain a grouping processing result and returning the grouping execution result and the updated grouping context to the dispatching side.
8. A distributed business process engine, comprising:
the system comprises an editor, a scheduler and at least one executor, wherein the scheduler and the executor are both connected with the editor, and the executor is connected with the scheduler;
the editor is used for splitting the business process into at least one component and acquiring the executor component list fed back by the executors, so as to combine the business logic according to the component list into a flow chart conforming to the business;
the scheduler is used for receiving the service request of the user, mapping the service request to a corresponding flow according to a preset mapping rule, extracting to obtain a flow chart, analyzing the service request, initializing flow entry parameters, scheduling a service side to process an execution group based on grouping information in the flow chart and a load state of the service side, obtaining a grouping execution result and a grouping context returned by the service side, updating the flow context according to the grouping context, determining an executor to be scheduled, integrating grouping execution results of each execution group, obtaining a service processing result and feeding back the service processing result to the user;
The flow chart comprises: at least one execution group and grouping information, the grouping information includes: the execution sequence among the execution groups, the execution importance, the execution condition, the grouping sequence number, the nodes contained in the current group, the grouping data of the current group, all the entry parameters of the current group and the service name of the current group;
the executor is used for receiving a packet execution request sent by a scheduling side, preparing an entry parameter corresponding to a node based on the packet execution request, processing the node to obtain a node processing result and an exit parameter corresponding to the node according to the node type corresponding to the node, updating the exit parameter corresponding to the node to a packet context, assembling the node processing result corresponding to each node in the execution group to obtain the packet processing result, and returning the packet execution result and the updated packet context to the scheduler;
the packet execution request includes: the execution group, the grouping information corresponding to the execution group and the flow context; after receiving the grouping execution request and starting, the executor scans out marked components, analyzes metadata corresponding to the scanned components, caches the metadata, obtains an executor component list and reports the executor component list to the editor.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the distributed flow processing method of any of claims 1 to 5 when the program is executed.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the distributed flow processing method according to any of claims 1 to 5.
CN202310952609.8A 2023-08-01 2023-08-01 Distributed flow processing method and device and distributed business flow engine Active CN116661978B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310952609.8A CN116661978B (en) 2023-08-01 2023-08-01 Distributed flow processing method and device and distributed business flow engine

Publications (2)

Publication Number Publication Date
CN116661978A true CN116661978A (en) 2023-08-29
CN116661978B CN116661978B (en) 2023-10-31

Family

ID=87717519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310952609.8A Active CN116661978B (en) 2023-08-01 2023-08-01 Distributed flow processing method and device and distributed business flow engine

Country Status (1)

Country Link
CN (1) CN116661978B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090307298A1 (en) * 2008-06-09 2009-12-10 International Business Machines Corporation Optimizing Service Processing Based on Business Information, Operational Intelligence, and Self-Learning
CN103279840A (en) * 2013-06-08 2013-09-04 北京首钢自动化信息技术有限公司 Workflow engine implement method based on dynamic language and event processing mechanism
CN105162878A (en) * 2015-09-24 2015-12-16 网宿科技股份有限公司 Distributed storage based file distribution system and method
CN109739550A (en) * 2018-12-28 2019-05-10 四川新网银行股份有限公司 A kind of micro services traffic scheduling engine based under Internet advertising distribution
CN109547575A (en) * 2019-01-04 2019-03-29 中国银行股份有限公司 A kind of data dispatching method, device and equipment
CN115344361A (en) * 2021-05-14 2022-11-15 华为技术有限公司 Management method and management system of computing nodes
WO2022237255A1 (en) * 2021-05-14 2022-11-17 华为技术有限公司 Management method and system for computing node
CN113220436A (en) * 2021-05-28 2021-08-06 工银科技有限公司 Universal batch operation execution method and device under distributed environment
CN114328587A (en) * 2021-12-30 2022-04-12 中国民航信息网络股份有限公司 NDC message distributed analysis system architecture integration method and device
CN115760085A (en) * 2022-11-29 2023-03-07 中国银行股份有限公司 Message processing method and device of distributed real-time payment system
CN115829715A (en) * 2022-12-26 2023-03-21 江苏苏宁银行股份有限公司 Banking transaction dispatching center control method and banking transaction dispatching center
CN116382673A (en) * 2023-03-02 2023-07-04 江苏苏云信息科技有限公司 Service grid-based distributed programmable business process engine system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LEE, JAE YEOL et al.: "Process-centric engineering Web services in a distributed and collaborative environment", Computers & Industrial Engineering *
LIN Guodan; HUANG Qinkai; YU Yang; PAN Maolin: "Stateless cloud workflow scheduling algorithm for the Activiti engine", Computer Integrated Manufacturing Systems, no. 06 *
WEI Xing: "Parallel processing and optimization of database queries under a distributed architecture", China Doctoral Dissertations Full-text Database (Information Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611095A (en) * 2023-12-06 2024-02-27 阿帕数字科技有限公司 Design method of multifunctional combination collocation system applied to supply chain
CN117611095B (en) * 2023-12-06 2024-04-26 阿帕数字科技有限公司 Design method of multifunctional combination collocation system applied to supply chain

Also Published As

Publication number Publication date
CN116661978B (en) 2023-10-31

Similar Documents

Publication Publication Date Title
US20130117064A1 (en) Business process analysis combining modeling, simulation and collaboration with web and cloud delivery
CN108280023B (en) Task execution method and device and server
CN110825511A (en) Operation flow scheduling method based on modeling platform model
US8538793B2 (en) System and method for managing real-time batch workflows
CN116661978B (en) Distributed flow processing method and device and distributed business flow engine
EP2031507A1 (en) Systems and/or methods for location transparent routing and execution of processes
CN108243012B (en) Charging application processing system, method and device in OCS (online charging System)
CN111400011B (en) Real-time task scheduling method, system, equipment and readable storage medium
CN114926143B (en) Method and platform for configuring enterprise workflow based on business components and process engine
CN112130812B (en) Analysis model construction method and system based on data stream mixed arrangement
CN111208992A (en) System scheduling workflow generation method and system
CN114461357A (en) Remote sensing satellite raw data real-time processing flow scheduling engine
US11966825B2 (en) System and method for executing an operation container
CN115454452A (en) Application platform loading method suitable for energy industry internet platform
CN113835853A (en) Distributed RPA robot management method, device and storage medium combining AI and RPA
CN113918288A (en) Task processing method, device, server and storage medium
CN112418796A (en) Sub-process node activation method and device, electronic equipment and storage medium
JP2003132039A (en) Scenario dividing system
CN116185242B (en) Service arrangement method and device and electronic equipment
CN117591132B (en) Service release method and release system
CN117608770A (en) Distributed flow processing method and device for simulation task
CN116820538A (en) Unified message management method and message management platform thereof
CN117371773A (en) Business process arranging method, device, electronic equipment and medium
CN114168283A (en) Distributed timed task scheduling method and system
CN115964036A (en) Visual service arrangement system based on micro-service architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant