CN114936074A - Method for realizing dynamic service pipeline based on event driving and Reactor mode

Method for realizing dynamic service pipeline based on event driving and Reactor mode

Info

Publication number
CN114936074A
Authority
CN
China
Prior art keywords
data
task chain
service pipeline
input
event
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210330920.4A
Other languages
Chinese (zh)
Inventor
王海龙
王京晶
郑亚凯
高艳涛
吴泽荃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Positive Network Technology Co ltd
Original Assignee
Positive Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Positive Network Technology Co ltd
Priority to CN202210330920.4A
Publication of CN114936074A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/545Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space

Abstract

The invention discloses a method for realizing a dynamic service pipeline based on the event-driven and Reactor modes, comprising the following steps: a. architecture design, in which the messages on the message middleware are combined and processed by the dispatching center and the corresponding task chain is then executed; b. dynamic configuration of the service pipeline, in which the task chain's input is divided into data groups on Topics so as to form a service pipeline, different division modes forming different service pipelines; c. execution of the task chain, in which the input of the task chain requires matching multiple types of data on multiple Topics on the message middleware, and once the data are combined into a complete input, execution of the task chain is triggered. A task chain is associated with one input, that input is assembled by the dispatching center from data on the message middleware, and different combination or division modes provide a method for dynamically configuring the service pipeline.

Description

Method for realizing dynamic service pipeline based on event driving and Reactor mode
Technical Field
The invention belongs to the technical field of communication, and particularly relates to a method for realizing a dynamic service pipeline based on an event-driven and Reactor mode.
Background
With the rapid development of modern communication technology, computer network technology and enterprise informatization, enterprises use more and more application systems. However, because these application systems operate independently of each other, information cannot be exchanged or shared between them; a large amount of information is therefore locked inside the independent application systems, forming so-called "information islands" within the enterprise. Building a unified star-topology forwarding gateway among multiple application systems can effectively reduce the communication cost of interconnecting the systems directly. On this basis, more flexible service pipeline configuration across multiple systems can be realized, which is essentially a scheduling problem. Mainstream distributed scheduling tools (such as xxl-job and Quartz) focus mainly on scheduling timed tasks; their core idea is to abstract a Scheduler (task scheduler), a Trigger and a Job (task). The Trigger defines the trigger time, i.e. the time rule according to which a task is executed; the Job represents the scheduled task; and the Scheduler is the controller that actually performs the scheduling, polling the Triggers and executing the Job associated with a fired Trigger. How to implement service pipeline configuration between systems is a problem that still needs to be solved.
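For context, the Scheduler/Trigger/Job abstraction described above can be illustrated with a minimal Quartz-style sketch; the job class, group names and the 30-second interval are illustrative only and are not part of the invention:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class PollingScheduleSketch {

    // A Job represents the scheduled task to be executed.
    public static class SyncJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("synchronizing data between systems...");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(SyncJob.class)
                .withIdentity("syncJob", "demo")
                .build();

        // The Trigger defines the time rule; the Scheduler polls triggers
        // and fires the Job associated with a fired Trigger.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("every30s", "demo")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(30)
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}

The invention departs from this polling, time-rule-driven model: instead of a Trigger firing on a schedule, a task chain is triggered when the dispatching center has assembled a complete input from middleware events.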
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a method for realizing a dynamic service pipeline based on an event-driven and Reactor mode.
The invention provides the following technical scheme:
A method for realizing a dynamic service pipeline based on the event-driven and Reactor modes includes: a. architecture design, in which the messages on the message middleware are combined and processed by the dispatching center and the corresponding task chain is then executed;
b. dynamically configuring the service pipeline, in which the task chain's input is divided into data groups on Topics so as to form a service pipeline, different division modes forming different service pipelines;
c. executing the task chain, in which the input of the task chain requires matching multiple types of data on multiple Topics on the message middleware, and once the data are combined into a complete input, execution of the task chain is triggered.
Preferably, the task chain comprises a plurality of unit Handlers that perform the business processing, and a thread-pool layer ensures that a task chain performing IO does not block the execution of other task chains.
Preferably, a closest-matching-point algorithm is adopted when matching the multiple types of data on multiple Topics on the message middleware; it is used to match a complete input and at the same time provides a fault-tolerance mechanism.
Preferably, the data in the message middleware come from a plurality of systems whose data processing speeds differ. The messages in the message middleware guarantee eventual consistency of the data but cannot guarantee real-time consistency. The scheduling center is therefore responsible for fault tolerance when assembling the task chain input, and data that cannot be matched within a period of time are saved for secondary processing.
Preferably, because an Exactly-Once semantic model has a relatively large impact on system complexity and performance, the scheduling center instead needs to guarantee correct processing when repeated or lost messages occur, i.e. it finds the closest matching point and saves unmatched data to facilitate secondary processing.
Preferably, finding the closest matching point is realized by a matching algorithm whose pseudo code is as follows:
a. specify a timeout;
b. driven by middleware events, received data is delivered to the scheduling center, appended to the corresponding queue, and its arrival time is recorded by the scheduling center;
c1. if the queue associated with the task chain input is empty, the task chain is not triggered;
c2. if the queues associated with the task chain input have at least one value, scan the other queues using the newest data as the reference; if a matching point is found, mark the intermediate data as unmatched and record it in the database; if no matching point is found, repeat from step a;
c3. if data stays in a queue longer than the threshold, mark the corresponding data as unmatched, remove it from the queue and record it in the database, then repeat from step a.
Preferably, the unit Handler contains a Feign call.
Compared with the prior art, the invention has the following beneficial effects:
(1) In the disclosed method for realizing a dynamic service pipeline based on the event-driven and Reactor modes, the output of each subsystem can be sent to the middleware through an interface; a plurality of service pipelines can be registered with the dispatching center and consume the same data, which reduces network traffic and generalizes the producer-consumer model. The dispatching center triggers task chains on combined data, which satisfies actual service requirements and provides flexibility in triggering.
(2) In this method, a task chain is associated with one input, that input is assembled by the scheduling center from data on the message middleware, and combining data on different Topics into a complete input provides the capability of dynamically configuring the service pipeline.
(3) In this method, data is transmitted from the message middleware to the scheduling center as streaming data, and a complete input requires finding the closest matching point across several data streams.
(4) In this method, the task chain comprises a plurality of unit Handlers that perform the business processing, a Handler may also contain a Feign call, and a thread-pool layer ensures that a task chain performing IO does not block the execution of other task chains.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is an architectural diagram of the present invention.
FIG. 2 is a diagram of two different divisions of a task chain input according to the present invention.
FIG. 3 is a closed loop diagram of the data flow of the present invention.
FIG. 4 is a closed loop diagram of data flow for different input modes of the present invention.
FIG. 5 is a diagram of exemplary queue data in the dispatch center of the present invention.
FIG. 6 is a flow chart of the present invention algorithm for finding a closest match point.
Fig. 7 is a diagram illustrating a scheduling manner according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is to be understood that the described embodiments are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of selected embodiments of the invention.
Embodiment one:
As shown in FIG. 1, a method for implementing a dynamic service pipeline based on the event-driven and Reactor modes includes the following steps:
a. architecture design: after the messages on the message middleware are combined and processed by the dispatching center, the corresponding task chain is executed;
b. dynamically configuring the service pipeline: the task chain's input is divided into data groups on Topics so as to form a service pipeline, and different division modes form different service pipelines;
c. executing the task chain: the input of the task chain requires matching multiple types of data on multiple Topics on the message middleware, and once the data are combined into a complete input, execution of the task chain is triggered.
The task chain comprises a plurality of unit Handlers that perform the business processing, and a thread-pool layer ensures that a task chain performing IO does not block the execution of other task chains. When matching the multiple types of data on the several Topics of the message middleware, a closest-matching-point algorithm is adopted to match a complete input, and a fault-tolerance mechanism is provided.
The data in the message middleware come from a plurality of systems whose data processing speeds differ; the messages in the middleware guarantee eventual consistency of the data but cannot guarantee real-time consistency. The scheduling center is therefore responsible for fault tolerance when assembling the task chain input and saves data that cannot be matched within a period of time for secondary processing. Because an Exactly-Once semantic model has a relatively large impact on system complexity and performance, the scheduling center instead needs to guarantee correct processing when repeated or lost messages occur, i.e. it finds the closest matching point and saves unmatched data to facilitate secondary processing.
Finding the closest matching point is realized by a matching algorithm whose pseudo code is as follows: a. specify a timeout; b. driven by middleware events, received data is delivered to the scheduling center, appended to the corresponding queue, and its arrival time is recorded by the scheduling center; c1. if the queue associated with the task chain input is empty, the task chain is not triggered; c2. if the queues associated with the task chain input have at least one value, scan the other queues using the newest data as the reference; if a matching point is found, mark the intermediate data as unmatched and record it in the database; if no matching point is found, repeat from step a; c3. if data stays in a queue longer than the threshold, mark the corresponding data as unmatched, remove it from the queue and record it in the database, then repeat from step a.
Embodiment two:
a. Architectural design
As shown in FIG. 1, the message middleware serves as the source of data, and the data in the middleware originates from the individual subsystems. After the messages on the message middleware are combined and processed by the scheduling center, the corresponding task chain is executed. The task chain comprises a plurality of unit Handlers that perform the business processing, and a Handler may also contain a Feign call. The thread-pool layer ensures that a task chain performing IO does not block the execution of other task chains.
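To make this concrete, the following is a minimal sketch (class and method names are illustrative, not taken from the patent) of a task chain of unit Handlers submitted to a dedicated thread pool, so that a chain blocked on IO does not stall other chains:

import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskChainSketch {

    /** A unit Handler: one step of business processing (it may wrap a Feign call). */
    interface Handler {
        Map<String, Object> handle(Map<String, Object> input);
    }

    /** A task chain is an ordered list of Handlers applied to one complete input. */
    static final class TaskChain {
        private final List<Handler> handlers;
        TaskChain(List<Handler> handlers) { this.handlers = handlers; }

        Map<String, Object> run(Map<String, Object> input) {
            Map<String, Object> current = input;
            for (Handler h : handlers) {
                current = h.handle(current);   // each step may perform blocking IO
            }
            return current;
        }
    }

    // The thread-pool layer: chains are submitted here, so a chain blocked on IO
    // only occupies one worker thread and does not stall other chains.
    private static final ExecutorService CHAIN_POOL = Executors.newFixedThreadPool(8);

    static CompletableFuture<Map<String, Object>> trigger(TaskChain chain,
                                                          Map<String, Object> completeInput) {
        return CompletableFuture.supplyAsync(() -> chain.run(completeInput), CHAIN_POOL);
    }
}

In such a design the pool size bounds how many chains can block on IO at the same time; a real implementation would also need error handling and back-pressure.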
b. Scheduling mechanism
The input of a task chain is deterministic, but the ways in which that input can be composed are manifold: a complete input of a task chain can have multiple "divisions". A task chain can be given a division, i.e. a specification of which data on which Topics its input consists of. Since these data originate from other subsystems, a service pipeline is formed. At the same time, setting a different division forms another service pipeline.
Whatever the division, the input of the task chain requires matching multiple types of data on multiple Topics of the message middleware (the data carry several fields used for matching); once the data are combined into a complete input, execution of the task chain is triggered. The invention provides a closest-matching-point algorithm that matches a complete input and at the same time provides a fault-tolerance mechanism.
For example, suppose the input required by a task chain is testimonial information, window information and event information. Configuring different divisions to assemble the required input of the task chain realizes dynamic configuration of the business system. One example division is: subsystem 1 outputs the testimonial information and the window information (on Topic1) and the event information (on Topic2), as shown in FIG. 2.
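As a hypothetical illustration (the field and Topic names below are assumptions, not the patent's concrete configuration format), a division could be expressed as a mapping from each required data type to the Topic it is consumed from; two different mappings yield two different service pipelines:

import java.util.Map;

public class DivisionSketch {

    /** A division: which Topic each required input field is taken from. */
    record Division(String name, Map<String, String> fieldToTopic) { }

    public static void main(String[] args) {
        // Division 1: testimonial and window information come from Topic1, events from Topic2.
        Division d1 = new Division("division-1", Map.of(
                "testimonialInfo", "Topic1",
                "windowInfo",      "Topic1",
                "eventInfo",       "Topic2"));

        // A different division of the same logical input yields a different service pipeline.
        Division d2 = new Division("division-2", Map.of(
                "testimonialInfo", "Topic3",
                "windowInfo",      "Topic3",
                "eventInfo",       "Topic3"));

        System.out.println(d1);
        System.out.println(d2);
    }
}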
c. Fault-tolerance mechanism. The data in the message middleware come from a plurality of systems whose data processing speeds differ, and the messages in the middleware guarantee eventual consistency of the data but not real-time consistency. The dispatching center is therefore responsible for fault tolerance when assembling the task chain input and saves data that cannot be matched within a period of time for secondary processing.
The scheduling method: taking the architecture diagram of FIG. 1 as an example, there are two task chains. Suppose the first chain subscribes to testimonial information and office information, while the second chain subscribes only to office information. Testimonial information and office information together form a complete match, which triggers execution of the first chain. Office information alone is a complete input for the second chain, so its execution can be triggered directly. A typical execution sequence is shown in FIG. 7.
Suppose the office information originates from an office system and the testimonial information originates from an identity verification system; the complete closed loop of the data flow is shown in FIG. 3. If one application subsystem generates both office data and testimonial data, the task chain can switch to a different input "division" and subscribe to the data generated by that application subsystem, as shown in FIG. 4.
Implementing an Exactly-Once semantic model has a relatively large impact on system complexity and performance. The dispatching center therefore needs to guarantee correct processing when repeated messages occur (as under an At-Least-Once semantic model) or messages are lost (as under an At-Most-Once semantic model), i.e. it finds the closest matching point and saves unmatched data to facilitate secondary processing.
Assume that a task chain requires testimonial information and office information as input. The input can be handled in a streaming mode or in a batch mode. Batch mode means that several inputs assembled within a window are combined at one time and passed to the task chain for execution; stream mode means that as soon as the scheduling center assembles a complete input from a group, it immediately passes that input to the task chain for execution. Example data in the dispatch center is shown in FIG. 5.
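The difference between the two modes can be sketched as follows (a simplified, hypothetical illustration in which the batch window is bounded by size only; the names are assumptions):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class DispatchModeSketch {

    /** Stream mode: every matched complete input is handed to the task chain immediately. */
    static void streamDispatch(Map<String, Object> completeInput,
                               Consumer<Map<String, Object>> taskChain) {
        taskChain.accept(completeInput);
    }

    /** Batch mode: matched inputs are collected and flushed once the window is full. */
    static final class BatchDispatcher {
        private final int windowSize;
        private final Consumer<List<Map<String, Object>>> taskChain;
        private final List<Map<String, Object>> window = new ArrayList<>();

        BatchDispatcher(int windowSize, Consumer<List<Map<String, Object>>> taskChain) {
            this.windowSize = windowSize;
            this.taskChain = taskChain;
        }

        void offer(Map<String, Object> completeInput) {
            window.add(completeInput);
            if (window.size() >= windowSize) {
                taskChain.accept(new ArrayList<>(window)); // flush the whole window at once
                window.clear();
            }
        }
    }
}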
The matching algorithm pseudo-code is as follows (see also FIG. 6):
a. specify a timeout;
b. driven by middleware events, received data is delivered to the scheduling center, appended to the corresponding queue, and its arrival time is recorded by the scheduling center;
c1. if the queue associated with the task chain input is empty, the task chain is not triggered;
c2. if the queues associated with the task chain input have at least one value, scan the other queues using the newest data as the reference; if a matching point is found, mark the intermediate data as unmatched and record it in the database; if no matching point is found, repeat from step a;
c3. if data stays in a queue longer than the threshold, mark the corresponding data as unmatched, remove it from the queue and record it in the database, then repeat from step a.
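A simplified, single-threaded sketch of this closest-matching-point algorithm is given below; the match key, the queue-per-Topic layout and the persistence placeholder are assumptions made for illustration:

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class ClosestMatchSketch {

    /** One piece of data received from the middleware, together with its arrival time. */
    record Arrival(String matchKey, Object payload, Instant arrivedAt) { }

    private final Duration timeout;                                           // step a: the timeout
    private final Map<String, Deque<Arrival>> queues = new LinkedHashMap<>(); // one queue per Topic

    ClosestMatchSketch(Duration timeout, String... topics) {
        this.timeout = timeout;
        for (String t : topics) {
            queues.put(t, new ArrayDeque<>());
        }
    }

    /** Step b: event-driven delivery, append data to its Topic queue and record the arrival time. */
    Optional<Map<String, Arrival>> onData(String topic, String matchKey, Object payload) {
        queues.get(topic).addLast(new Arrival(matchKey, payload, Instant.now()));
        evictExpired();              // step c3
        return tryMatch(matchKey);   // steps c1/c2, using the newest data as the reference
    }

    /** Steps c1/c2: trigger only when every queue contains the reference key. */
    private Optional<Map<String, Arrival>> tryMatch(String matchKey) {
        for (Deque<Arrival> q : queues.values()) {
            if (q.stream().noneMatch(a -> a.matchKey().equals(matchKey))) {
                return Optional.empty();    // no matching point yet: do not trigger, keep waiting
            }
        }
        // A matching point exists in every queue: pop up to it; earlier data is marked unmatched.
        Map<String, Arrival> complete = new LinkedHashMap<>();
        for (Map.Entry<String, Deque<Arrival>> q : queues.entrySet()) {
            Arrival head;
            while (!(head = q.getValue().pollFirst()).matchKey().equals(matchKey)) {
                markUnmatched(head);
            }
            complete.put(q.getKey(), head);
        }
        return Optional.of(complete);       // complete input: the task chain can be triggered
    }

    /** Step c3: data older than the timeout is marked unmatched, removed and recorded. */
    private void evictExpired() {
        Instant deadline = Instant.now().minus(timeout);
        for (Deque<Arrival> q : queues.values()) {
            while (!q.isEmpty() && q.peekFirst().arrivedAt().isBefore(deadline)) {
                markUnmatched(q.pollFirst());
            }
        }
    }

    /** Placeholder for recording unmatched data to the database for secondary processing. */
    private void markUnmatched(Arrival a) {
        System.out.println("unmatched, saved for secondary processing: " + a);
    }
}

In this sketch data is discarded only when a matching point is found ahead of it (step c2) or when it exceeds the timeout (step c3); a real implementation would persist the unmatched data to the database rather than print it.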
In summary, the technical scheme provides a method for realizing a dynamic service pipeline based on the event-driven and Reactor modes, in which the output of each subsystem can be sent to the middleware through an interface. Each subsystem's output plays the role of the producer, while the consumer is determined by the dispatch center. Several service pipelines can be registered with the dispatching center and consume the same data, which reduces network traffic and generalizes the producer-consumer model. At the same time, the way the dispatching center combines data before triggering satisfies actual service requirements and provides flexibility in triggering. The service pipeline is configured dynamically: a task chain is associated with one input, that input is assembled by the dispatching center from data on the message middleware, and combining data on different Topics into a complete input provides the capability to dynamically configure the service pipeline.
In the invention, the source of events is the middleware; events and the data they carry are sent to the dispatching center through a uniform interface, and the dispatching center combines the data carried by the events and decides whether to trigger a task chain. The task chain part also extends the Reactor idea from network programming: the single processing unit (Handler) of a single machine is extended into a Handler chain, and the scope of a Handler is broadened. A Handler is not only a local execution unit; it can also wrap one or several microservice calls, or wrap operators such as filters and data-format checks.
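As an illustration of a Handler wrapping a microservice call, the sketch below uses OpenFeign's builder API; the OfficeClient interface, its endpoint URL and the field names are hypothetical:

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.gson.GsonDecoder;

import java.util.Map;

public class FeignHandlerSketch {

    /** Hypothetical remote interface of the office subsystem. */
    interface OfficeClient {
        @RequestLine("GET /office/records/{id}")
        Map<String, Object> getRecord(@Param("id") String id);
    }

    /** A unit Handler that wraps one microservice call. */
    static final class OfficeLookupHandler {
        private final OfficeClient client = Feign.builder()
                .decoder(new GsonDecoder())
                .target(OfficeClient.class, "http://office-service.example"); // assumed service URL

        Map<String, Object> handle(Map<String, Object> input) {
            String id = (String) input.get("officeId");        // assumed field name
            Map<String, Object> record = client.getRecord(id); // the remote call wrapped by this Handler
            input.put("officeRecord", record);
            return input;
        }
    }
}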
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A method for realizing a dynamic service pipeline based on event-driven and Reactor modes is characterized by comprising the following steps:
a. the architecture design, after the message on the message middleware is combined and processed by the dispatching center, the corresponding task chain is executed;
b. dynamically configuring a service pipeline, dividing a task chain according to data groups on the Topic so as to form the service pipeline, wherein different division modes form different service pipelines;
c. and executing the task chain, wherein the input requirement of the task chain is to match multiple types of data on multiple Topics on the message middleware, and after the multiple types of data are combined into a complete input, the execution of the task chain is triggered.
2. The method of claim 1, wherein the task chain comprises a plurality of unit Handlers for performing business processing, and a thread-pool layer ensures that a task chain executing IO does not block the execution of other task chains.
3. The method of claim 1, wherein a closest match point algorithm is used to match complete inputs and provide a fault-tolerant mechanism when matching multiple types of data on multiple topics on message middleware.
4. The method for implementing a dynamic service pipeline based on event-driven and Reactor modes as claimed in claim 3, wherein the data in the message middleware come from multiple systems whose data processing speeds differ, the messages in the message middleware guarantee eventual consistency of the data but cannot guarantee real-time consistency, and the scheduling center is responsible for fault-tolerant processing when assembling the task chain input, saving data that cannot be matched within a period of time for secondary processing.
5. The method for implementing the dynamic service pipeline based on the event-driven and Reactor modes as claimed in claim 4, wherein implementing an Exactly-Once semantic model has a great influence on the complexity and performance of the system, and the dispatch center needs to ensure correct processing when repeated or lost messages occur, i.e., to find the nearest matching point and to store unmatched data for secondary processing.
6. The method for implementing a dynamic service pipeline based on event-driven and Reactor modes as claimed in claim 5, wherein finding the closest matching point is implemented by a matching algorithm, and the pseudo code of the matching algorithm is as follows:
a. specifying a timeout time;
b. the middleware event drive transmits the received data to a scheduling center, the data is added into a corresponding queue, and the scheduling center records the arrival time of the data;
c1, if the queue associated with the task chain input is empty, the task chain does not trigger;
c2, if the queues related to the task chain input have at least one value, scanning other queues by taking the latest data as the reference, if a matching point is found, marking the data in the middle as mismatching and recording the data in the database, and if no matching point is found, repeating the step a;
c3, if the time of the data in the queue exceeds the threshold value, marking the corresponding data as not matched, moving out the queue and recording to the database, and repeating the step a.
7. The method for implementing a dynamic service pipeline based on event-driven and Reactor modes as claimed in claim 2, wherein the unit Handler contains a Feign call.
CN202210330920.4A (priority date 2022-03-30, filing date 2022-03-30): Method for realizing dynamic service pipeline based on event driving and Reactor mode; status Pending; publication CN114936074A (en)

Priority Applications (1)

Application Number: CN202210330920.4A (publication CN114936074A, en)
Priority Date: 2022-03-30
Filing Date: 2022-03-30
Title: Method for realizing dynamic service pipeline based on event driving and Reactor mode

Applications Claiming Priority (1)

Application Number: CN202210330920.4A (publication CN114936074A, en)
Priority Date: 2022-03-30
Filing Date: 2022-03-30
Title: Method for realizing dynamic service pipeline based on event driving and Reactor mode

Publications (1)

Publication Number: CN114936074A
Publication Date: 2022-08-23

Family

ID=82862315

Family Applications (1)

Application Number: CN202210330920.4A (publication CN114936074A, en)
Status: Pending
Title: Method for realizing dynamic service pipeline based on event driving and Reactor mode

Country Status (1)

Country Link
CN (1) CN114936074A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination