CN111338773A - Distributed timed task scheduling method, scheduling system and server cluster - Google Patents


Info

Publication number
CN111338773A
Authority
CN
China
Prior art keywords
task
node
scheduling
service
message queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010107644.6A
Other languages
Chinese (zh)
Other versions
CN111338773B (en)
Inventor
李红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huayun Data Co ltd
Original Assignee
Huayun Data Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huayun Data Co ltd filed Critical Huayun Data Co ltd
Priority to CN202010107644.6A priority Critical patent/CN111338773B/en
Publication of CN111338773A publication Critical patent/CN111338773A/en
Application granted granted Critical
Publication of CN111338773B publication Critical patent/CN111338773B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention provides a distributed timed task scheduling method, a scheduling system and a server cluster. The method comprises: configuring an independent flow scheduling process, a service scheduling process and a task timer in each of at least two nodes; selecting one node as the designated node to respond to the timed task, publishing the timed task to a message queue, and establishing listening between at least one of the remaining nodes and the message queue; the flow scheduling process determines, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a retry mechanism, and the service scheduling process in the node selected by the message queue responds to the timed task. The invention effectively removes the excessive dependence of timed tasks on resources during scheduling, achieves consistency of timed task scheduling, and overcomes the failure to respond effectively to a timed task when a node fails during scheduling.

Description

Distributed timed task scheduling method, scheduling system and server cluster
Technical Field
The invention relates to the technical field of computers, and in particular to a distributed timed task scheduling method, a distributed timed task scheduling system and a server cluster.
Background
In a cloud platform or server cluster environment, one or more projects (Project) run that require configured timed tasks (Task). Because a large-scale cloud platform or server cluster hosts many projects, and a single project contains multiple tasks, tasks must be handled on schedule: for example, timeout-state determination in an order system, timed refreshing of cached data, timed mailing to users, and even periodically computed reports. To guarantee high availability and high fault tolerance, a large-scale cloud platform or server cluster usually adopts a distributed architecture, which in turn gives rise to distributed timed tasks.
At present, the mainstream technical route for distributed timed task scheduling is based on the Quartz framework. To implement distributed task scheduling, Quartz stores the tasks of multiple nodes in a database. When a task is to be executed, Quartz fetches the trigger from the database; if triggers have the same name and execution time, the task is executed from only one node. If that node fails to execute the task, the task is dispatched to another node for execution, thereby achieving distributed task scheduling. Meanwhile, to address the high-availability requirements of cloud platforms and server clusters, the prior art can use Zookeeper to shard data, so that data is not processed repeatedly and processing is accelerated. These methods play a significant role in fields such as the financial industry and mobile payment.
However, the Zookeeper route puts very heavy pressure on the database, which is then at risk of going down; the Quartz route depends strongly on the database, and deploying the database is complex. In particular, scheduling distributed timed tasks with the Quartz framework over-occupies resources and cannot guarantee uniqueness during task scheduling, and the prior-art approach of relying solely on a distributed database imposes computational overhead and pressure on that database.
In view of the above, the distributed timed task scheduling methods and related technical solutions of the prior art need to be improved to solve the above problems.
Disclosure of Invention
The invention aims to disclose a distributed timed task scheduling method, a distributed timed task scheduling system and a server cluster that overcome the defects of the prior art, in particular the heavy dependence on resources during the scheduling of timed tasks, while also improving the fault tolerance of the nodes running timed tasks and reducing the difficulty of scheduling timed tasks, so as to achieve consistency of task scheduling.
To achieve the first object, the present invention provides a distributed timed task scheduling method, including:
configuring an independent flow scheduling process, service scheduling process and task timer in each of at least two nodes;
selecting one node as the designated node to respond to the timed task, publishing the timed task to a message queue, and establishing listening between at least one of the remaining nodes and the message queue;
the flow scheduling process determining, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a retry mechanism, and the service scheduling process in the node selected by the message queue responding to the timed task.
As a further improvement of the invention, the timed task is published to the message queue, listening is established between all nodes and the message queue, the flow scheduling process determines, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a polling mechanism, and the service scheduling process contained in the new designated node responds to the timed task.
As a further improvement of the invention, when the service scheduling process of the designated node cannot respond to the timed task, the flow scheduling process determines, based on the distributed lock corresponding to the timed task among the nodes and according to the time limit set by the task timer, whether the message queue reselects a new designated node from the remaining nodes by the polling mechanism, so that the service scheduling process contained in the new designated node responds to the timed task.
As a further improvement of the invention, before configuring an independent flow scheduling process, service scheduling process and task timer in each of at least two nodes, the method further comprises: detecting the responsiveness of all nodes to the timed task, and storing the detection result in the message queue.
As a further improvement of the invention, the node is configured with one pair of a flow scheduling process and a service scheduling process, and the flow scheduling process and the service scheduling process are decoupled by using the message queue;
the message queue is a RabbitMQ.
As a further improvement of the invention, the node is configured with a plurality of pairs of flow scheduling processes and service scheduling processes, and the flow scheduling processes and the service scheduling processes are decoupled by using a message queue;
the message queue is a RabbitMQ.
As a further improvement of the invention, among the multiple pairs of flow scheduling processes and service scheduling processes configured in the node, the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling process is configured with one or more service units for responding to the timed task, and a mapping relationship exists between service scheduling processes and service units;
the service unit is a container, a virtual machine or a microservice;
the distributed timed task scheduling method is used in multi-timed-task scenarios.
As a further improvement of the invention, the method also comprises the following steps:
after the service scheduling process in the selected node responds to the timed task, storing the result of the responded timed task in the storage device and notifying the designated node via the message queue.
Based on the same inventive concept, the invention also discloses a distributed timed task scheduling system, comprising:
the system comprises a task scheduler, a task timer and service scheduler, a message queue and a timing task issuing component which are deployed in a node;
the timing task issuing component selects one node as a designated node to respond to the timing task, issues the timing task to the message queue, and establishes monitoring between at least one selected node of the rest nodes and the message queue; the task scheduler comprises a flow scheduling process, the flow scheduling process determines whether a new appointed node is reselected by the message queue based on a retry mechanism according to the time limit set by the task timer, and a service scheduler comprising a service scheduling process in the nodes selected by the message queue responds to the timed task.
As a further improvement of the present invention,
the timed task is published to the message queue, listening is established between all nodes and the message queue, the task scheduler determines, according to the time limit set by the task timer, whether the message queue reselects a new designated node based on a polling mechanism, and the service scheduler containing the service scheduling process in the node selected by the message queue responds to the timed task.
As a further improvement of the present invention,
when the service scheduler of the designated node cannot respond to the timed task, the flow scheduling process contained in the task scheduler determines, based on the distributed lock corresponding to the timed task among the nodes and according to the time limit set by the task timer, whether the message queue reselects a new designated node from the remaining nodes by the polling mechanism, and the service scheduler containing the service scheduling process in the node selected by the message queue responds to the timed task.
As a further improvement of the present invention,
before an independent flow scheduling process, service scheduling process and task timer are configured in each of at least two nodes, the system further detects the responsiveness of all nodes to the timed task and stores the detection result in the message queue.
As a further improvement of the present invention, the task scheduler and the service scheduler configured in the node comprise one pair of a flow scheduling process and a service scheduling process, and the flow scheduling process and the service scheduling process are decoupled by using the message queue;
the message queue is a RabbitMQ.
As a further improvement of the present invention, the task scheduler and the service scheduler configured in the node comprise multiple pairs of flow scheduling processes and service scheduling processes, and the multiple pairs of flow scheduling processes and service scheduling processes are decoupled by using the message queue;
the message queue is a RabbitMQ.
As a further improvement of the present invention, the number of flow scheduling processes is greater than the number of service scheduling processes; the service scheduling process is configured with one or more service units for responding to the timed task, and a mapping relationship exists between service scheduling processes and service units;
the service unit is a container, a virtual machine or a microservice;
the distributed timed task scheduling system is used for responding to multi-timed-task scenarios.
As a further improvement of the invention, the system further comprises:
the storage device is used for storing a result corresponding to the response timing task into the storage device after the service scheduling process in the selected node responds to the timing task, and informing the designated node by the message queue; the storage device is selected from a JVM memory, a distributed storage component or a database.
Finally, the invention also discloses a server cluster,
the server cluster is configured with at least two nodes,
the server cluster runs the distributed timing task scheduling method as the first invention creates.
Compared with the prior art, the invention has the beneficial effects that:
the distributed timed task scheduling method, the distributed timed task scheduling system and the server cluster effectively overcome the defect of high dependence on resources of timed tasks in the scheduling process, improve the fault tolerance of nodes for operating the timed tasks, respond to the fault tolerance of a plurality of timed tasks by the server cluster, and reduce the difficulty of scheduling the timed tasks, so as to realize the consistency of the scheduled tasks, particularly the defect that the timed tasks cannot be effectively responded due to the node faults in the scheduling process of the timed tasks, and realize the horizontal flexibility of scheduling the timed tasks. Meanwhile, the computing device or the cloud computing platform which is configured with the distributed timed task scheduling system can reasonably use the physical resources and/or the virtual resources in the process of responding to the timed tasks by the resources, and waste or idling of the physical resources and/or the virtual resources is effectively prevented.
Drawings
FIG. 1 is an overall flowchart of a distributed timed task scheduling method according to the present invention;
fig. 2 is a topological diagram of a distributed timed task scheduling system according to the present invention, and illustrates a service architecture of the distributed timed task scheduling system for operating the distributed timed task scheduling method illustrated in fig. 1;
FIG. 3 is a service architecture of a first variant of the distributed timed task scheduling system shown in FIG. 2;
FIG. 4 is a service architecture of a second variant of the distributed timed task scheduling system shown in FIG. 2;
FIG. 5 is a service architecture of a third variant of the distributed timed task scheduling system shown in FIG. 2;
fig. 6 is a schematic diagram of a server cluster that selects a designated node based on an external execution subject, and issues a timing task to the designated node, where the external execution subject is a user or an administrator;
FIG. 7 is a detailed flowchart of a distributed timed task scheduling method according to the present invention;
fig. 8 is a service architecture diagram in which a message queue responds to a plurality of timing tasks issued by a designated node;
FIG. 9 is a diagram of a service architecture of a timed task issuing module;
FIG. 10 is a topology diagram of a computer readable medium.
Detailed Description
The present invention is described in detail below with reference to the embodiments shown in the drawings. These embodiments do not limit the invention; functional, methodological, or structural equivalents and substitutions made by those skilled in the art based on these embodiments all fall within the scope of the invention.
Before describing in detail various embodiments of the present invention, technical terms and meanings referred to in the specification are summarized as follows.
The term "logic" includes any physical and tangible functionality for performing a task. For example, each operation illustrated in the flowcharts corresponds to a logic component for performing that operation. An operation can be performed using, for example, software running on a computing device, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof. However implemented, the logic components represent electrical components that are physical parts of the computer system.
The terms "task" and "timed task" (Task) have equivalent meanings in this application and can be replaced with "Job" in actual code. In real scenarios, the scheduling of a timed task can take the form of a payment system running a batch at 1:30 every day to perform the day's clearing and running on the 1st of each month to perform the previous month's clearing, or business scenarios such as reminding a customer of delivery and logistics information by SMS or e-mail after goods have shipped successfully, or forcibly reclaiming a cloud host allocated to a user when its lease expires.
The terms "designated node" and "new designated node" denote the node or computing node responding to the timed task at different points during the scheduling of the timed task. As shown in fig. 2, if Node-1 is selected as the responding node when the timed task is first issued, Node-1 is the designated node; when Node-1 cannot carry out the scheduling of the timed task for special reasons such as downtime or response timeout, the scheduling of the timed task is transferred through the message queue 100 to Node-2 and/or Node-3 for processing, and Node-2 and/or Node-3 can then be understood as the "new designated node" referred to in this application.
The terms "node" and "computing node" have equivalent technical meanings.
The phrases "is configured as" and "is configured to" include any way in which any kind of physical and tangible functionality can be constructed to perform the identified operation. The functionality can be configured to perform the operation using, for example, software running on a computing device, hardware (e.g., chip-implemented logic functionality), etc., and/or any combination thereof.
The first embodiment is as follows:
referring to fig. 1, fig. 2, and fig. 6 to fig. 8, an embodiment of a distributed timed task scheduling method (hereinafter referred to as "scheduling method") according to the present invention is disclosed. The scheduling method and the distributed timed task scheduling system (hereinafter referred to as "scheduling system") disclosed in the second embodiment may be operated in a computing device/system such as the server cluster 200 that can respond to the timed task, and the computing device/system may also be configured in a data center, a cloud platform, and other scenarios, and the description of the present application is focused on the server cluster 200 as a typical example, and specifically refers to the first embodiment and the second embodiment.
Referring to fig. 1, the distributed timed task scheduling method includes the following steps S1 to S3. The distributed timing task scheduling method is used for a multi-timing task scene.
First, step S1 is executed to configure an independent flow scheduling process, a service scheduling process, and a task timer in each of at least two nodes.
Referring to fig. 2 and fig. 7, nodes Node-1 to Node-3 are configured in the server cluster 200. Node-1 is configured with an independent flow scheduling process 821, a service scheduling process 921 and a task timer 811 (Timer); Node-2 with an independent flow scheduling process 822, a service scheduling process 922 and a task timer 812 (Timer); Node-3 with an independent flow scheduling process 823, a service scheduling process 923 and a task timer 813 (Timer). The flow scheduling process 821 logically runs in the task scheduler 801 (Task-scheduler), and the service scheduling process 921 logically runs in the service scheduler 901 (Service-scheduler); the task scheduler 801 in Node-1 can run one or more flow scheduling processes 821, the service scheduler 901 in Node-1 can run one or more service scheduling processes 921, and Node-2 and Node-3 are configured in the same way as Node-1. In particular, after any one of the three computing nodes Node-1 to Node-3 receives an issued timed task, it publishes the timed task to the message queue 100 (RabbitMQ cluster), and the other computing nodes can also receive the task message held in the message queue 100 synchronously. Meanwhile, the period set by the task timer in each computing node (for example, when a given timed task starts to execute and by when it must finish) also corresponds to the timed task: if a timed task is not executed within the period set by the task timer, the computing node configured with that service scheduler (Service-scheduler) is judged unable to respond to the timed task, and the timed task is then transferred through the message queue 100 to a new designated node such as Node-2 and/or Node-3 to complete the response. In this embodiment, "responding to the timed task" can take the form of specific operations such as viewing a task, deleting a task, viewing a task's historical execution records, executing a task, and modifying task content.
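The per-node configuration of step S1 can be pictured as plain data structures. The sketch below is purely illustrative: the names Node, TaskTimer and the scheduler lists are assumptions of this sketch, not identifiers from the patent, and strings stand in for the actual flow scheduling and service scheduling processes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskTimer:
    # time limit within which a timed task must be answered
    deadline_seconds: float = 30.0

@dataclass
class Node:
    name: str
    flow_schedulers: List[str] = field(default_factory=list)     # flow scheduling processes
    service_schedulers: List[str] = field(default_factory=list)  # service scheduling processes
    timer: TaskTimer = field(default_factory=TaskTimer)

# Step S1: each of at least two nodes gets its own independent pair plus a timer.
cluster = [Node(f"Node-{i}", [f"flow-82{i}"], [f"svc-92{i}"]) for i in (1, 2, 3)]
```

Because each node carries its own scheduler pair and timer, no single node is a scheduling bottleneck, which is what allows the failover described below.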
The task timer counts down the set period, creates the timed task when the timing period ends, and publishes the timed task to the message queue 100, as indicated by arrows task1 to task3 in fig. 2.
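The timer behaviour just described (count a period, create the task when it ends, publish it to the queue) can be mimicked with a one-shot timer. In this sketch a plain Python list stands in for the message queue 100, and the function and task names are illustrative assumptions, not the patent's implementation.

```python
import threading

def start_task_timer(period_seconds, task_name, queue):
    """When the timing period ends, create the timed task and 'publish' it to the queue."""
    timer = threading.Timer(period_seconds, lambda: queue.append({"task": task_name}))
    timer.start()
    return timer

queue = []
t = start_task_timer(0.05, "task1", queue)
t.join()  # threading.Timer is a Thread subclass, so we can wait for it to fire
```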
Configuring an independent flow scheduling process, service scheduling process and task timer in each of at least two nodes ensures that the timed task can be executed by the other computing nodes when Node-1 goes down, guaranteeing high reliability and fault tolerance during the scheduling of timed tasks.
Then, step S2 is executed: one node is selected as the designated node to respond to the timed task, the timed task is published to the message queue, and listening is established between at least one of the remaining nodes and the message queue. For example, if Node-1 is the designated node, Node-2 and Node-3 are backup nodes for when Node-1 cannot respond to the timed task, so that one or more of these computing nodes can serve as the new designated node.
When a timed task is published to the message queue 100, listening is established between all nodes and the message queue 100, and the flow scheduling processes 821 to 823 determine, according to the time limits set by the task timers 811 to 813, whether the message queue 100 reselects a new designated node based on a polling mechanism, so that the service scheduling processes 921 to 923 contained in the new designated node respond to the timed task. Note that, in this embodiment, the task scheduler 801 and the service scheduler 901 (Service-scheduler) configured in the same node (for example Node-1) do not form a one-to-one logical relationship; a given timed task may instead be answered by the service scheduler 902 in Node-2 or the service scheduler 903 in Node-3. When Node-1 goes down, the timed task issued by the task scheduler 801 to the message queue 100 can be answered by the service scheduler 902 in Node-2, which contains one or more service scheduling processes 922 (or, of course, by the service scheduler 903 configured in Node-3). The term "remaining nodes" refers to the other nodes configured in the server cluster 200 (e.g., Node-2 and Node-3) when Node-1 is the designated node, and the new designated node is one or more nodes selected from those remaining nodes.
Referring to fig. 2, when the service scheduling process 921 of the designated node cannot respond to the timed task, the flow scheduling process 821 determines, based on the distributed lock corresponding to the timed task and according to the time limit set by the task timer 811, whether the message queue 100 reselects a new designated node from the remaining nodes (i.e., Node-2 and Node-3) by the polling mechanism, so that the service scheduling process contained in the new designated node (i.e., service scheduling process 922 and/or service scheduling process 923) responds to the timed task.
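The distributed lock mentioned above can be pictured as a shared lock table in which at most one node holds the lock for a given timed task. The in-process sketch below uses a dict guarded by a threading.Lock purely for illustration; a real cluster would back this with an external lock service, and all names here are assumptions of the sketch.

```python
import threading

class TaskLocks:
    """Toy per-task lock table: the first node to acquire a task's lock owns it."""
    def __init__(self):
        self._locks = {}                 # task_id -> owning node
        self._guard = threading.Lock()   # protects the table itself

    def try_acquire(self, task_id, node):
        with self._guard:
            if task_id in self._locks:
                return False             # another node already holds this task's lock
            self._locks[task_id] = node
            return True

locks = TaskLocks()
first = locks.try_acquire("task1", "Node-2")   # Node-2 wins the lock for task1
second = locks.try_acquire("task1", "Node-3")  # Node-3 is refused
other = locks.try_acquire("task2", "Node-3")   # a different task is still free
```

Per-task locking is what makes the reselection safe: even if several remaining nodes are polled, only one of them ends up responding to the task.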
Node-1 publishes the timed task to the message queue 100 as shown by arrow task1, Node-2 as shown by arrow task2, and Node-3 as shown by arrow task3. At the same time, listening is established between all nodes and the message queue 100. When it is determined that the timed task will be executed by the service scheduling process 921 contained in the service scheduler 901 in Node-1, the message queue 100 delivers the timed task to the service scheduler 901; after the timed task has executed, the result is returned to the message queue 100 and finally notified through the message queue 100 to the task scheduler 801, so that the task scheduler 801 stores the execution result produced by the service scheduling processes 921 to 923 in the storage device 201 for the user or administrator to call or access. At this point a complete timed task scheduling operation is finished.
Specifically, in this embodiment, before the independent flow scheduling processes 821 to 823, service scheduling processes 921 to 923 and task timers 811 to 813 are configured in at least two nodes, the method further comprises: detecting the responsiveness of all nodes to the timed task, and storing the detection result in the message queue 100. The scheduling method further comprises: after the service scheduling process in the selected node responds to the timed task, storing the result of the responded timed task in the storage device 201 and notifying the designated node via the message queue. Preferably, the storage device 201 is selected from JVM memory, a distributed storage component or a database, most preferably a distributed database, to improve the efficiency of CRUD operations. The distributed storage component can also be block storage or file storage: when the timed task involves retrieving, inserting, modifying or deleting a given object, file storage is preferred; when the timed task involves accessing and downloading streaming media files such as video, block storage is preferred.
In this embodiment, Node-1 is taken as the designated node for responding to the timed task. Node-1 is configured with a pair consisting of flow scheduling process 821 and service scheduling process 921, and the two are decoupled by using the message queue 100. The message queue is RabbitMQ. Node-2 and Node-3 are configured in the same way as Node-1.
Specifically, as described in fig. 8, RabbitMQ is built on Erlang's distribution mechanism (the RabbitMQ core is implemented on the Erlang runtime, so rabbitmqctl starts an Erlang node and uses the Erlang distribution system to connect to the RabbitMQ node; the correct Erlang cookie and node name are required during connection, and Erlang nodes authenticate each other by exchanging the Erlang cookie). Therefore, when deploying a RabbitMQ distributed cluster, Erlang is installed first and the cookie of one server is copied to the other nodes.
In the RabbitMQ cluster, every RabbitMQ broker is a peer node, i.e., each node accepts client connections to receive and send messages. Nodes are divided into memory nodes and disk nodes; in general, all of them are created as disk nodes to prevent messages from disappearing after a machine restart. The exchange 601 is the key component that accepts producer messages and routes them into the message queue 100; the exchange type and the bindings determine the routing rules for a message. So before a producer can send a message, it must first declare an exchange 601 and the binding 602 corresponding to that exchange, which is done through Exchange.Declare and Binding.Declare. In RabbitMQ, declaring an exchange 601 requires three parameters: the exchange name, the exchange type, and durability. The exchange name must be specified when creating a binding and when the producer publishes a message to the exchange. The exchange type determines routing behaviour: RabbitMQ provides four exchange types (Direct, Fanout, Topic and Headers), and different types exhibit different routing behaviour. Durable is the persistence attribute of the exchange 601. Declaring a binding requires a queue name, an exchange name and a binding key. The routing rules exhibited by the different exchange types are set forth below.
When a producer sends a message, it must designate a RoutingKey and an Exchange; after receiving the message, the Exchange routes it according to the Exchange type.
a) If the type is Direct, the RoutingKey in the message is compared with the BindingKey of every Binding associated with the Exchange, and if they are equal, the message is sent to the Queue corresponding to that Binding.
b) If the type is Fanout, the message is sent to every Queue bound to the Exchange, which is in effect a broadcast.
c) If the type is Topic, the RoutingKey is matched against the BindingKey as a pattern, and if the match succeeds, the message is sent to the corresponding Queue.
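The three routing behaviors above can be illustrated with a small Python model. This is a hedged sketch, not RabbitMQ's actual implementation: the `Exchange` class, its `publish` method, and the queues-as-lists representation are all assumptions for illustration; the Topic wildcard semantics assumed here (`*` matches exactly one word, `#` matches zero or more words) follow common RabbitMQ convention.

```python
import re

class Exchange:
    """Minimal model of the three routing rules: Direct, Fanout and Topic."""
    def __init__(self, ex_type):
        self.ex_type = ex_type          # "direct", "fanout" or "topic"
        self.bindings = []              # list of (binding_key, queue)

    def bind(self, binding_key, queue):
        self.bindings.append((binding_key, queue))

    @staticmethod
    def _topic_match(binding_key, routing_key):
        # Assumed convention: '*' matches one dot-separated word, '#' any words.
        pattern = re.escape(binding_key).replace(r'\#', '.*').replace(r'\*', r'[^.]+')
        return re.fullmatch(pattern, routing_key) is not None

    def publish(self, routing_key, message):
        for binding_key, queue in self.bindings:
            if self.ex_type == 'fanout':        # b) broadcast to every bound queue
                queue.append(message)
            elif self.ex_type == 'direct':      # a) exact key comparison
                if binding_key == routing_key:
                    queue.append(message)
            elif self.ex_type == 'topic':       # c) pattern match on the key
                if self._topic_match(binding_key, routing_key):
                    queue.append(message)
```

For example, a Direct exchange bound with key `a` delivers a message published with RoutingKey `a` only to that queue, while a Fanout exchange delivers it to every bound queue regardless of key.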
The RabbitMQ cluster sends messages to each consumer in turn, so on average every consumer receives an equal number of messages; this manner of sending messages is called round-robin. Referring to fig. 8, after the timed task is issued to Exchange 601, a plurality of Queues Q1-Qn are formed through the binding process; Q1-Qn together constitute Queues 603 and are issued one by one to the service scheduling processes for execution.
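The round-robin dispatch described above can be sketched as follows. The function name and the list-based queue model are illustrative assumptions; the point is only that handing messages out in turn gives each consumer an equal share on average.

```python
from itertools import cycle

def round_robin_dispatch(messages, consumers):
    """Hand each message to the next consumer in turn (round-robin)."""
    received = {c: [] for c in consumers}
    turn = cycle(consumers)             # endlessly cycles c1, c2, c3, c1, ...
    for msg in messages:
        received[next(turn)].append(msg)
    return received

# Six task messages over three consumers: each consumer receives exactly two.
out = round_robin_dispatch([f"Q{i}" for i in range(1, 7)], ["c1", "c2", "c3"])
```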
In step S3, the flow scheduling process determines, according to the time limit set by the task timer, whether to reselect a new designated node from the message queue based on the retry mechanism, and the service scheduling process in the node selected by the message queue 100 responds to the timed task.
In this embodiment, the polling distribution mechanism of the message queue 100 provides load balancing for the task messages corresponding to the timed task. As shown in fig. 7, solid arrow 5 and dashed arrows 5 represent the service scheduling processes in the three nodes listening for timed task messages in the message queue 100. Since the service scheduling processes 921, 922, 923 configured on the three nodes all monitor the same timed task message of the exchange type, the message queue 100 distributes the timed task, through its load balancing policy, to one of them, e.g. service scheduling process 921. After service scheduling process 921 finishes executing the timed task, it sends an acknowledgement to the message queue 100 along the direction of arrow 6 in fig. 7; once the message queue 100 receives the acknowledgement, the timed task is considered completely executed. If service scheduling process 921 in Node-1 does not send an acknowledgement, the message queue 100 sends the timed task, after a set period of time (e.g., 0.5 seconds), to service scheduling process 922 in Node-2, and so on until the message queue 100 receives an acknowledgement. Because the service scheduling processes 921-923 and the flow scheduling processes 821-823 in the nodes are decoupled through the message queue 100, the uniqueness of the timed task scheduling process is achieved through the message queue 100, solving the high availability (HA) and load balancing problems in distributed timed task scheduling.
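The acknowledge-or-redeliver behavior just described can be sketched in a few lines. The node names and the idea of a 0.5-second redelivery window come from the text; the `alive` map and the synchronous loop are simplifying assumptions standing in for timeout-driven redelivery by the message queue.

```python
def deliver_with_retry(task, nodes, alive):
    """Offer the task to each node's service scheduling process in turn.
    A down node never sends an acknowledgement, so after the set period
    (e.g. 0.5 s) the queue redelivers the task to the next node."""
    for node in nodes:
        if alive.get(node):     # node executes the task and acknowledges
            return node         # queue receives the ack: task is done
        # no ack within the set period: fall through and redeliver
    return None                 # no live service scheduling process remains

# Node-1 is down, so the task is redelivered and consumed by Node-2.
winner = deliver_with_retry(
    "timed-task",
    ["Node-1", "Node-2", "Node-3"],
    {"Node-1": False, "Node-2": True, "Node-3": True},
)
```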
In particular, in a cloud platform example with a plurality of computing nodes, computing nodes of different scales can be accessed simultaneously through the message queue 100. This improves the logical stability and reliability of the computing nodes during capacity expansion or reduction, matches each timed task with a service scheduling process in the most suitable node, and simplifies capacity expansion of computing nodes for a cloud computing platform applying the distributed timed task scheduling method.
In theory, the scheduling method disclosed in the basic embodiment can respond to timed task requirements for cloud platforms with anywhere from two to an arbitrary number of computing nodes. Meanwhile, the scheduling method disclosed in this embodiment does not need to rely on the traditional Quartz + Zookeeper framework, which solves the technical problem of excessive resource occupation during distributed timed task scheduling, reduces the computational overhead and pressure on the database (a subordinate concept of the storage device 201), and reduces the deployment difficulty of the database.
With reference to FIG. 7, the applicant shows in more detail the complete flow of the scheduling method.
The scheduling method can be further subdivided into the execution steps shown by arrows 1 to 7 (i.e., step 1 to step 7), where solid arrows represent the actual execution flow and dashed arrows represent optional execution flows. FIG. 7 represents an example in which the scheduling of a certain timed task depends on the flow scheduling process 823 in Node-3; when Node-3 goes down, the timed task is transferred through the message queue 100 to the service scheduling process 921 in Node-1 for execution.
Arrow 1 (step 1): the user designs a timed task flow chart through a page designer and submits it to a program implementing the scheduling method of this embodiment. The program parses the timed task steps and sends them to a flow scheduling node (for example, Node-3) to create the timed task. After the timed task is successfully created, the record is written back to the storage device 201, and at the same time a task timer is started in Node-3 where the flow scheduling process resides. The task timers of the individual timed tasks are independent and do not influence each other; at this point, only Node-3 has a record of the timed task.
Arrow 2 (step 2): at this point, if Node-3 goes down, the timed task just created on Node-3 would be lost. To avoid such a single point of failure, Node-3 issues the task just created to the message queue 100, which the other flow scheduling nodes (i.e., Node-1 and Node-2) are monitoring.
Arrow 3 (step 3): when the new task is issued to the message queue 100, Node-1 and Node-2 immediately detect and acquire it, and each creates a timer on its own node as the task timer corresponding to the timed task. Thus, as long as at least one node of the server cluster 200 remains up, the timed task cannot be lost.
Arrow 4 (step 4): when the timer fires, every flow scheduling process triggers creation of the timed task. To ensure that the same timed task is not executed repeatedly, each node attempts to create a distributed lock for that timed task before issuing it; only one node succeeds in creating the lock, so only that node's timed task is successfully issued to the message queue 100. The nodes that fail to create the distributed lock cancel issuing for this round. When the timer fires again in the next round, task generation is triggered again and the nodes again compete preemptively to create the distributed lock: the node that succeeds issues the timed task, and the others cancel.
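Step 4's lock competition amounts to an atomic test-and-set keyed by task id. The sketch below models it with an in-memory table guarded by a mutex; this is purely illustrative, since the patent does not name a concrete distributed lock service, and a real deployment would use one shared across nodes.

```python
import threading

class LockTable:
    """Illustrative distributed-lock table: the first node to try_acquire
    a given task id wins; every other node fails and cancels issuing."""
    def __init__(self):
        self._held = set()
        self._guard = threading.Lock()

    def try_acquire(self, task_id):
        with self._guard:               # atomic test-and-set
            if task_id in self._held:
                return False            # lock already taken: cancel this round
            self._held.add(task_id)
            return True                 # winner issues the task to the queue

    def release(self, task_id):
        with self._guard:
            self._held.discard(task_id)  # next round, nodes compete again

locks = LockTable()
# Three nodes race for the same timed task; exactly one wins and issues it.
issuers = [n for n in ("Node-1", "Node-2", "Node-3")
           if locks.try_acquire("task-42")]
```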
Arrow 5 (step 5): the service scheduling processes monitor the timed tasks in the message queue 100; after a timed task is issued to the message queue 100 through step 4, a service scheduling process receives and executes it. Because multiple service scheduling processes monitor the same Topic, the message queue 100 adopts a load balancing strategy, such as round-robin scheduling, and distributes the timed tasks to the service scheduling processes in turn. After a service scheduling process finishes execution, it distributes the result to the message queue 100 as a message consumption acknowledgement receipt; once the acknowledgement receipt is received, the message consumption is complete. If a node goes down while executing the timed task, the message queue 100, not having received the consumption acknowledgement after a set time, issues the task to another service scheduling process. Therefore, as long as one service scheduling process from Node-1 to Node-3 is alive, the timed task is guaranteed to be consumed.
Arrow 6 (step 6): the result of the execution of step 5 (i.e., the acknowledgement receipt described above) is distributed to the message queue 100.
Arrow 7 (step 7): the flow scheduling processes monitor the service execution results in the message queue 100. When an execution result is distributed to the message queue 100, one flow scheduling process receives it and writes the result back to the storage device 201 for storage. If the timed task has a subsequent associated operation branch, one branch is selected according to the result and steps 1 to 7 are executed again; the current timed task ends when the execution result received in step 7 has no subsequent associated timed task.
Example two:
This embodiment discloses a specific implementation of the distributed timed task scheduling method. Compared with the scheduling method disclosed in the first embodiment, the main difference is that in the present embodiment a node configures multiple pairs of flow scheduling processes and service scheduling processes, with the flow scheduling processes and the service scheduling processes decoupled by the message queue; the message queue 100 is a RabbitMQ.
Referring to FIG. 4, Node-2 is configured with a service scheduler 902 and a service scheduler 912, each configured with one or more service scheduling processes. Referring to FIG. 5, Node-2 is configured with a task scheduler 802 and a task scheduler 812, each configured with one or more flow scheduling processes. As shown in fig. 4 and fig. 5, the multiple service schedulers and multiple task schedulers configured in the same node are all decoupled through the message queue 100. When a service scheduling process configured in one service scheduler of a node cannot send an acknowledgement receipt to the message queue 100, a matching service scheduling process can be selected directly within the same node by polling, which reduces to some extent the computational overhead of the message queue 100 and the scheduling pressure on the timed tasks.
Of course, in this embodiment, not all nodes need to configure a task scheduler, nor do all nodes need to configure a service scheduler. As shown in FIG. 3, in one example, Node-1 to Node-3 each configure independent task schedulers 801-803, while only Node-1 configures the service scheduler 901. As shown in FIG. 4, in another example, Node-1 to Node-3 each configure independent task schedulers 801-803, Node-1 configures a service scheduler 901, Node-2 configures a service scheduler 902 and a service scheduler 912, and Node-3 configures no service scheduler. As shown in FIG. 5, in another example, Node-1 to Node-3 each configure independent service schedulers 901-903, Node-1 configures task scheduler 801, Node-2 configures task scheduler 802 and task scheduler 812, and Node-3 configures no task scheduler. The task schedulers and service schedulers in each of the foregoing examples are decoupled through the message queue 100, so that timed task scheduling can be adaptively adjusted to better meet actual service requirements, providing a peak-clipping and valley-filling function.
Preferably, in this embodiment, the number of flow scheduling processes configured in the node or computing node is greater than the number of service scheduling processes. A service scheduling process is configured with one or more service units for responding to the timed task, and the service scheduling process and its service units have a mapping relationship, where a service unit is a container, a virtual machine or a microservice. Since a service unit may temporarily occupy resources/services in the node or computing node, the resources/services required when responding to a timed task differ between service scheduling processes. Through this setting, the service scheduling process that actually responds to the timed task calls resources more precisely, reducing the waste of computing, storage and network resources in the server cluster 200 or the cloud platform.
Example three:
referring to fig. 6 and fig. 9, this embodiment discloses a specific implementation of a distributed timed task scheduling system (hereinafter referred to as "scheduling system"). The scheduling system applies the inventive idea of the scheduling method as disclosed in the above first and/or second embodiments. In the present embodiment, the scheduling system is deployed in an example of the server cluster 200.
The distributed timing task scheduling system comprises:
task schedulers 801-803, task timers 811-813 and service schedulers 901-903 deployed in nodes, a message queue 100, and a timed task issuing component 500.
The timed task issuing component 500 selects a node as the designated node to respond to the timed task, issues the timed task to the message queue 100, and establishes monitoring between at least one selected node among the remaining nodes and the message queue. The task scheduler comprises a flow scheduling process, which determines, according to the time limit set by the task timer, whether to reselect a new designated node from the message queue based on the retry mechanism; a service scheduler comprising a service scheduling process in the node selected by the message queue 100 responds to the timed task.
Of course, the premise for "establishing monitoring between at least one selected node among the remaining nodes and the message queue" is that the designated node initially selected by the timed task issuing component 500 is healthy and its configured service scheduler 901, containing the service scheduling process 921, can respond to the timed task issued through the message queue 100. The timed task issuing component 500 includes a User Interface (UI) 501 and a Load Balancer (LB) 502. The user interface 501 provides a visual instruction input interface, so that a user or administrator can input an instruction for creating a timed task to the scheduling system through the user interface 501 and visually view the execution result of the timed task through the user interface 501.
As shown in fig. 2 and fig. 7, Node-1 to Node-3 are configured in the server cluster 200. Node-1 configures an independent flow scheduling process 821, service scheduling process 921 and task Timer 811 (Timer); Node-2 configures an independent flow scheduling process 822, service scheduling process 922 and task Timer 812 (Timer); Node-3 configures an independent flow scheduling process 823, service scheduling process 923 and task Timer 813 (Timer). Flow scheduling process 821 runs logically in the Task scheduler 801 (Task-scheduler) and service scheduling process 921 runs logically in the service scheduler 901; the Task scheduler 801 in Node-1 can run one or more flow scheduling processes 821, the service scheduler 901 in Node-1 can run one or more service scheduling processes 921, and Node-2 and Node-3 are configured as described for Node-1. In particular, after receiving an issued timed task, any one of the three computing nodes Node-1 to Node-3 issues the timed task to the message queue 100 (RabbitMQ cluster), and the other computing nodes can also synchronously receive the task message held in the message queue 100. Meanwhile, the period set by the task timer in each computing node (for example, when a certain timed task starts to be executed and when it needs to finish) also corresponds to the timed task. If a certain timed task is not executed within the period set by the task timer, the computing node configuring the service scheduler is determined to be unable to respond to the timed task, and the timed task is then transferred through the message queue 100 to a new designated node such as Node-2 and/or Node-3 to complete the response. In this embodiment, "responding to the timed task" may be embodied as specific operations such as task viewing, task deletion, viewing of historical task execution records, task execution, and modification of task content.
The task timer counts a set period, creates a timing task when the timing period ends, and issues the timing task to the message queue 100 as indicated by arrows task1 to task3 in fig. 2.
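The task timer's behavior — count down a set period, create the timed task when the period ends, and issue it to the message queue — can be sketched with `threading.Timer`. The `queue.Queue` here is an assumed stand-in for the RabbitMQ cluster (message queue 100), and the function name is illustrative.

```python
import queue
import threading

def start_task_timer(period_s, task, mq):
    """When the set period elapses, create the timed task and issue it
    to the message queue (here a queue.Queue stand-in for RabbitMQ)."""
    timer = threading.Timer(period_s, lambda: mq.put(task))
    timer.start()
    return timer

mq = queue.Queue()
t = start_task_timer(0.01, {"name": "task1"}, mq)
t.join()                    # wait for the period to elapse
issued = mq.get_nowait()    # the timed task has reached the queue
```

Each timed task would get its own independent timer, matching the text's statement that the task timers do not influence each other.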
The scheduling system disclosed in this embodiment shares the same inventive concept as the scheduling method disclosed in the first or second embodiment, differing only in seeking protection from the perspective of a device rather than a method. For the technical solution shared with the first and/or second embodiment, please refer to their description; it is not repeated here.
Example four:
This embodiment discloses a server cluster 200, which is composed of one or more computers with independent computing, storage and communication functions. The server cluster 200 is configured with at least two nodes. In this embodiment, the nodes are computing nodes (Compute nodes), i.e., Node-1 to Node-3, and are not limited in number to these three computing nodes. The server cluster 200 executes a distributed timed task scheduling method as disclosed in the first embodiment and/or the second embodiment.
A node or computing node is a functional logical partition in a cloud platform, and can further be understood as a computer, or a computer containing a computer-readable medium 700 as disclosed in fig. 10, which may be a physical or a virtual computer. The computer comprises at least one processor 702 and at least one memory device coupled to the processor 702, the memory device storing computer program instructions 701 which, when executed, implement a distributed timed task scheduling method as disclosed in the first embodiment and/or the second embodiment. For the specific technical scheme of the distributed timed task scheduling method, please refer to the description of the first embodiment and/or the second embodiment; it is not repeated here. Meanwhile, the storage device of this embodiment is not the same concept as the storage device 201 in the first embodiment, and generally includes the storage device 201; the storage device of the present embodiment is generally configured as a mass storage medium, such as a mechanical hard disk or a disk array, for storing data.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-listed detailed description is only a specific description of a possible embodiment of the present invention, and they are not intended to limit the scope of the present invention, and equivalent embodiments or modifications made without departing from the technical spirit of the present invention should be included in the scope of the present invention.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (17)

1. A distributed timing task scheduling method is characterized by comprising the following steps:
configuring independent flow scheduling process, service scheduling process and task timer in at least two nodes respectively;
selecting one node as a designated node to respond to the timing task, sending the timing task to a message queue, and establishing monitoring between at least one node selected by the rest nodes and the message queue;
the flow scheduling process determines whether to reselect a new designated node from the message queue based on a retry mechanism according to a time limit set by the task timer, and the service scheduling process in the selected node of the message queue responds to the timed task.
2. The distributed timed task scheduling method according to claim 1, wherein the timed task is issued to the message queue, monitoring is established between all nodes and the message queue, and the process scheduling process determines whether to reselect a new designated node from the message queue based on a polling mechanism according to a time limit set by the task timer, so that the service scheduling process included in the new designated node responds to the timed task.
3. The distributed timed task scheduling method according to claim 2, wherein when the service scheduling process of the designated node fails to respond to the timed task, the flow scheduling process determines, based on a distributed lock corresponding to the timed task among the plurality of nodes and according to the time limit set by the task timer, whether to reselect a new designated node from the remaining nodes through the polling mechanism in the message queue, so that the service scheduling process included in the new designated node responds to the timed task.
4. The distributed timed task scheduling method according to any one of claims 1 to 3, wherein before configuring the independent flow scheduling process, service scheduling process and task timer in at least two nodes, respectively, further comprising: and detecting the response capability of all the nodes to the timing task, and storing the detection result into a message queue.
5. The distributed timed task scheduling method according to claim 4, wherein the node configures a pair of process scheduling process and service scheduling process, and decouples the process scheduling process and the service scheduling process using the message queue;
the message queue is a RabbitMQ.
6. The distributed timed task scheduling method according to claim 4, wherein the node configures multiple pairs of flow scheduling processes and service scheduling processes, and decouples the flow scheduling processes and the service scheduling processes by using a message queue;
the message queue is a RabbitMQ.
7. The distributed timed task scheduling method according to claim 6, wherein the number of the multi-pair flow scheduling processes configured in the node and the number of the flow scheduling processes included in the service scheduling process are greater than the number of the service scheduling processes, the service scheduling process is configured as one or more service units for responding to the timed task, and the service scheduling process and the service units have a mapping relationship;
the service unit is a container, a virtual machine or a micro-service;
the distributed timing task scheduling method is used for a multi-timing task scene.
8. The distributed timed task scheduling method according to claim 1, 2, 3, 5, 6 or 7, characterized by further comprising:
after the service scheduling process in the selected node responds to the timing task, the result corresponding to the responding timing task is stored in the storage device, and the specified node is notified by the message queue.
9. A distributed timed task scheduling system, comprising:
the system comprises a task scheduler, a task timer and service scheduler, a message queue and a timing task issuing component which are deployed in a node;
the timing task issuing component selects one node as a designated node to respond to the timing task, issues the timing task to the message queue, and establishes monitoring between at least one selected node of the rest nodes and the message queue; the task scheduler comprises a flow scheduling process, the flow scheduling process determines whether a new appointed node is reselected by the message queue based on a retry mechanism according to the time limit set by the task timer, and a service scheduler comprising a service scheduling process in the nodes selected by the message queue responds to the timed task.
10. The distributed timed task scheduling system according to claim 9,
and sending the timed task to a message queue, establishing monitoring between all nodes and the message queue, determining whether a new appointed node is reselected by the message queue based on a polling mechanism according to a time limit set by a task timer by a task scheduler, and responding to the timed task by a service scheduler containing a service scheduling process in the nodes selected by the message queue.
11. The distributed timed task scheduling system according to claim 9,
when the timed task cannot be responded to by the service scheduler of the designated node, a flow scheduling process contained in the task scheduler determines, based on a distributed lock corresponding to the timed task among the nodes, whether to reselect a new designated node from the remaining nodes according to the polling mechanism in the message queue, and the service scheduler containing a service scheduling process in the node selected by the message queue responds to the timed task.
12. The distributed timed task scheduling system according to any of claims 9 to 11,
before configuring independent process scheduling process, service scheduling process and task timer in at least two nodes, respectively, the method further comprises: and detecting the response capability of all the nodes to the timing task, and storing the detection result into a message queue.
13. The distributed timed task scheduling system according to claim 12, wherein the task scheduler and the service scheduler configured by the node include a pair of a process scheduling process and a service scheduling process, and the process scheduling process and the service scheduling process are decoupled by using a message queue;
the message queue is a RabbitMQ.
14. The distributed timed task scheduling system according to claim 12, wherein the task scheduler and the service scheduler configured by the node include multiple pairs of flow scheduling processes and service scheduling processes, and the flow scheduling processes and the service scheduling processes are decoupled by using a message queue;
the message queue is a RabbitMQ.
15. The distributed timed task scheduling system according to claim 14, wherein the number of said flow scheduling processes is greater than the number of service scheduling processes, said service scheduling processes are configured as one or more service units for responding to timed tasks, and said service scheduling processes and service units have a mapping relationship;
the service unit is a container, a virtual machine or a micro-service;
the distributed timed task scheduling system is used for responding to a multi-timed task scene.
16. The distributed timed task scheduling system according to claims 9, 10, 11, 13, 14 or 15, characterized in that it also comprises:
the storage device is used for storing a result corresponding to the response timing task into the storage device after the service scheduling process in the selected node responds to the timing task, and informing the designated node by the message queue; the storage device is selected from a JVM memory, a distributed storage component or a database.
17. A cluster of servers, characterized in that,
the server cluster is configured with at least two nodes,
the server cluster runs the distributed timed task scheduling method according to any one of claims 1 to 8.
CN202010107644.6A 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster Active CN111338773B (en)

Publications (2)

Publication Number Publication Date
CN111338773A true CN111338773A (en) 2020-06-26
CN111338773B CN111338773B (en) 2023-06-20

Family

ID=71184151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010107644.6A Active CN111338773B (en) 2020-02-21 2020-02-21 Distributed timing task scheduling method, scheduling system and server cluster

Country Status (1)

Country Link
CN (1) CN111338773B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150066571A1 (en) * 2013-08-30 2015-03-05 Soeren Balko High-load business process scalability
CN106484530A (en) * 2016-09-05 2017-03-08 努比亚技术有限公司 A kind of distributed task dispatching O&M monitoring system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QU ZHIJIAN; WANG QUNFENG; WANG HANLIN: "CQB parallel balancing control method for the queuing model of stream computing clusters in a dispatching center" *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784185A (en) * 2020-07-14 2020-10-16 广东电网有限责任公司电力调度控制中心 Distributed power distribution communication network timed task scheduling system
CN111858007A (en) * 2020-07-29 2020-10-30 广州海鹚网络科技有限公司 Task scheduling method and device based on message middleware
CN111913793A (en) * 2020-07-31 2020-11-10 同盾控股有限公司 Distributed task scheduling method, device, node equipment and system
CN111970148A (en) * 2020-08-14 2020-11-20 北京金山云网络技术有限公司 Distributed task scheduling method and system
WO2021189857A1 (en) * 2020-09-23 2021-09-30 平安科技(深圳)有限公司 Task state detection method and apparatus, and computer device and storage medium
CN111930492A (en) * 2020-10-12 2020-11-13 南京赛宁信息技术有限公司 Task flow scheduling method and system based on decoupling task data model
CN111930492B (en) * 2020-10-12 2021-01-19 南京赛宁信息技术有限公司 Task flow scheduling method and system based on decoupling task data model
CN112445595B (en) * 2020-11-26 2022-10-25 深圳晶泰科技有限公司 Multitask submission system based on slurm computing platform
CN112445595A (en) * 2020-11-26 2021-03-05 深圳晶泰科技有限公司 Multitask submission system based on slurm computing platform
CN112910952A (en) * 2021-01-13 2021-06-04 叮当快药科技集团有限公司 Distributed task scheduling method and device, storage medium and electronic device
CN113342499A (en) * 2021-06-29 2021-09-03 中国农业银行股份有限公司 Distributed task calling method, device, equipment, storage medium and program product
CN113342499B (en) * 2021-06-29 2024-04-30 中国农业银行股份有限公司 Distributed task calling method, device, equipment, storage medium and program product
CN114063936A (en) * 2022-01-18 2022-02-18 苏州浪潮智能科技有限公司 Method, system, equipment and storage medium for optimizing timing task
CN114063936B (en) * 2022-01-18 2022-03-22 苏州浪潮智能科技有限公司 Method, system, equipment and storage medium for optimizing timing task
CN117370457A (en) * 2023-09-26 2024-01-09 浪潮智慧科技有限公司 Multithreading data real-time synchronization method, equipment and medium
CN117370457B (en) * 2023-09-26 2024-07-09 浪潮智慧科技有限公司 Multithreading data real-time synchronization method, equipment and medium

Also Published As

Publication number Publication date
CN111338773B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
CN111338773B (en) Distributed timing task scheduling method, scheduling system and server cluster
CN111338774B (en) Distributed timing task scheduling system and computing device
US10846140B2 (en) Off-site backup of workloads for multi-tenant cloud computing system
US10873623B2 (en) Dynamically modifying a cluster of computing nodes used for distributed execution of a program
CN105357296B (en) Elastic caching system under a kind of Docker cloud platforms
US8260840B1 (en) Dynamic scaling of a cluster of computing nodes used for distributed execution of a program
JP6669682B2 (en) Cloud server scheduling method and apparatus
Zhou et al. On cloud service reliability enhancement with optimal resource usage
US9280390B2 (en) Dynamic scaling of a cluster of computing nodes
JP5843823B2 (en) Saving program execution status
US8321558B1 (en) Dynamically monitoring and modifying distributed execution of programs
CN113169952A (en) Container cloud management system based on block chain technology
US9104488B2 (en) Support server for redirecting task results to a wake-up server
CN105681426B (en) Heterogeneous system
CN109582459A (en) The method and device that the trustship process of application is migrated
WO2023071576A1 (en) Container cluster construction method and system
CN113206877A (en) Session keeping method and device
CN105373563B (en) Database switching method and device
US11522966B2 (en) Methods, devices and systems for non-disruptive upgrades to a replicated state machine in a distributed computing environment
Hu et al. Transactional mobility in distributed content-based publish/subscribe systems
CN111835809B (en) Work order message distribution method, work order message distribution device, server and storage medium
CN115640100A (en) Virtual machine information synchronization method and computer readable medium
Sun et al. Adaptive trade‐off between consistency and performance in data replication
Edmondson et al. QoS-enabled distributed mutual exclusion in public clouds
Duela et al. Ensuring Data Security in Cloud Environment Through Availability

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant