CN111597033A - Task scheduling method and device - Google Patents

Task scheduling method and device

Info

Publication number
CN111597033A
CN111597033A (application CN201910125842.2A)
Authority
CN
China
Prior art keywords
client
task
server
servers
distributed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910125842.2A
Other languages
Chinese (zh)
Inventor
梅志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910125842.2A
Publication of CN111597033A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005: Allocation of resources to service a request
    • G06F9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/505: Allocation of resources to service a request, the resource being a machine, considering the load
    • G06F9/54: Interprogram communication
    • G06F9/542: Event management; Broadcasting; Multicasting; Notifications
    • G06F2209/00: Indexing scheme relating to G06F9/00
    • G06F2209/50: Indexing scheme relating to G06F9/50
    • G06F2209/5022: Workload threshold
    • G06F2209/508: Monitor

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a task scheduling method and device in the field of computer technology. One embodiment of the method comprises: receiving a client-information change event pushed by a distributed coordination center, wherein the client-information change event includes identifiers of a plurality of client servers; sharding the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers; and sending the sharding result, which includes the task items allocated to each client server, to the distributed coordination center. This embodiment can solve the technical problem that server resources cannot be fully utilized.

Description

Task scheduling method and device
Technical Field
The invention relates to the technical field of computers, in particular to a task scheduling method and a task scheduling device.
Background
In large enterprise and internet applications, most systems rely more or less on task scheduling to complete the business process after a user submits an order and pays. Take a train-ticket system as an example: once the user has queried the remaining tickets for a train, submitted an order, and completed payment, the user-facing part of the operation is finished; a timed booking task in the back-end system then scans at intervals (for example, every 10 seconds) for orders in the "paid, awaiting booking" state and requests the supplier to book the train tickets.
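As a hedged illustration of such a timed booking task (the function names and the order-status value here are assumptions for illustration, not taken from the patent), one scan tick might look like:

```python
BOOKING_INTERVAL_S = 10  # scan interval from the example above

def pending_bookings(orders):
    """Select orders that are paid and still waiting to be booked."""
    return [o for o in orders if o["status"] == "PAID_AWAITING_BOOKING"]

def booking_tick(fetch_orders, book):
    """One tick of the timed booking task: scan and book each pending order."""
    for order in pending_bookings(fetch_orders()):
        book(order)  # request the supplier to issue the ticket
```

In production this tick would be fired every BOOKING_INTERVAL_S seconds by a scheduler such as quartz; the scheduling loop itself is omitted here.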
Most systems deploy a developed timed task to a single server and keep another server as a standby machine. When the active server goes down, the standby server is started to continue running the timed task and keep processing orders.
Currently, a timed task is typically developed with quartz (an open-source project) and deployed to three to five servers, after which leader election is performed through zookeeper (a distributed, open-source coordination service for distributed applications). When the timed task fires, each server checks whether it is the current leader and executes the task only if it is. When the leader goes down, zookeeper elects a new leader, which continues executing the timed service.
In the process of implementing the invention, the inventors found at least the following problems in the prior art:
1) server resources cannot be fully utilized for task processing: during timed task scheduling, only one server is ever in the active (executing) state, while all the other servers remain in the standby (waiting) state;
2) once the data volume reaches a certain magnitude, a single server can no longer process it; the processing capacity of a single server is always limited, and the resulting performance and throughput bottleneck cannot be solved by horizontal scaling;
3) when zookeeper goes down, the timed task cannot be executed.
Disclosure of Invention
In view of this, embodiments of the present invention provide a task scheduling method and apparatus to solve the technical problem that server resources cannot be fully utilized.
To achieve the above object, according to one aspect of the embodiments of the present invention, there is provided a task scheduling method, including:
receiving a client-information change event pushed by a distributed coordination center, wherein the client-information change event includes identifiers of a plurality of client servers;
sharding the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers;
and sending the sharding result to the distributed coordination center, wherein the sharding result includes the task items allocated to each client server.
Optionally, sharding the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers includes:
traversing the identifiers of the plurality of client servers and determining the number of client servers;
determining the number of task items allocated to each client server through a remainder (modulo) function, according to the total number of task items and the number of client servers;
and determining the task items allocated to each client server according to the number of task items allocated to each client server and a preset allocation rule.
Optionally, determining the number of task items allocated to each client server through the remainder function according to the total number of task items and the number of client servers, and determining the task items allocated to each client server according to that number and a preset allocation rule, includes:
dividing the total number of task items by the number of client servers to obtain an integer quotient and a remainder;
and allocating an integer quotient of task items to each client server in turn, then allocating the remaining task items, one each, to the leading client servers.
Optionally, the method further comprises:
receiving load data reported by the client and storing it in a database;
and reading the load data from the database at intervals, judging from it whether the client's load exceeds a preset load threshold, and if so, sending a capacity-expansion instruction to the client.
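A minimal sketch of this load check, assuming a simple in-memory view of the reported load data (the threshold value and data shape are illustrative, not specified by the patent):

```python
LOAD_THRESHOLD = 0.8  # assumed value; the patent leaves the threshold configurable

def check_client_loads(load_by_client, send_expand):
    """Periodic check: read each client's reported load and send a
    capacity-expansion instruction to any client over the threshold."""
    expanded = []
    for client, load in load_by_client.items():
        if load > LOAD_THRESHOLD:
            send_expand(client)
            expanded.append(client)
    return expanded
```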
Optionally, the method further comprises:
and if no event pushed by the distributed coordination center is received within a preset time threshold, sending a degradation instruction to the client.
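The timeout check behind this degradation decision can be sketched as follows (the 30-second value is an assumption for illustration):

```python
EVENT_TIMEOUT_S = 30  # assumed preset time threshold

def coordination_center_down(last_event_ts, now):
    """True when no event has been pushed within the time threshold,
    i.e. the distributed coordination center is presumed down and a
    degradation instruction should be sent to the client."""
    return now - last_event_ts > EVENT_TIMEOUT_S
```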
In addition, according to still another aspect of an embodiment of the present invention, there is provided a task scheduling method including:
receiving a sharding completion event pushed by a distributed scheduling center, wherein the sharding completion event includes the task items allocated to each client server;
and matching the corresponding order data in a file system according to the task items, processing the order data, and sending the processing result to the file system.
Optionally, the method further comprises:
receiving a degradation instruction sent by the server, whereupon each client server stops executing its current task;
and a preset client server reads the task information stored on the other client servers, and creates and executes the timed tasks according to that task information.
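A toy sketch of this degradation path, using plain dicts to stand in for client servers (the data shape and the choice of the first client as the preset one are assumptions for illustration):

```python
def degrade(client_servers):
    """All client servers stop their current tasks; the preset client
    (here simply the first) collects the task information stored on every
    client and recreates the timed tasks locally."""
    for c in client_servers:
        c["running"] = False                       # stop the current task
    preset = client_servers[0]                     # the preset client server
    preset["local_tasks"] = [t for c in client_servers for t in c["tasks"]]
    return preset
```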
In addition, according to another aspect of the embodiments of the present invention, there is provided a task scheduling apparatus including:
a first receiving module, configured to receive a client-information change event pushed by a distributed coordination center, wherein the client-information change event includes identifiers of a plurality of client servers;
a sharding module, configured to shard the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers;
and a sending module, configured to send the sharding result to the distributed coordination center, wherein the sharding result includes the task items allocated to each client server.
Optionally, the sharding module is configured to:
traverse the identifiers of the plurality of client servers and determine the number of client servers;
determine the number of task items allocated to each client server through a remainder function, according to the total number of task items and the number of client servers;
and determine the task items allocated to each client server according to the number of task items allocated to each client server and a preset allocation rule.
Optionally, determining the task items allocated to each client server through the remainder function according to the preset total number of task items and the number of client servers includes:
dividing the total number of task items by the number of client servers to obtain an integer quotient and a remainder;
and allocating an integer quotient of task items to each client server in turn, then allocating the remaining task items, one each, to the leading client servers.
Optionally, the apparatus further comprises a load module, configured to:
receive load data reported by the client and store it in a database;
and read the load data from the database at intervals, judge from it whether the client's load exceeds a preset load threshold, and if so, send a capacity-expansion instruction to the client.
Optionally, the apparatus further comprises a degradation module, configured to:
send a degradation instruction to the client if no event pushed by the distributed coordination center is received within a preset time threshold.
In addition, according to still another aspect of an embodiment of the present invention, there is provided a task scheduling apparatus including:
a second receiving module, configured to receive a sharding completion event pushed by the distributed scheduling center, wherein the sharding completion event includes the task items allocated to each client server;
and an execution module, configured to match the corresponding order data in a file system according to the task items, process the order data, and send the processing result to the file system.
Optionally, the execution module is further configured to:
receive a degradation instruction sent by the server, whereupon each client server stops executing its current task;
and have a preset client server read the task information stored on the other client servers, then create and execute the timed tasks according to that task information.
In addition, according to another aspect of the embodiments of the present invention, there is provided a task scheduling method, including:
a server receives a client-information change event pushed by a distributed coordination center, wherein the client-information change event includes identifiers of a plurality of client servers; shards the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers; and sends the sharding result, which includes the task items allocated to each client server, to the distributed coordination center;
a client receives a sharding completion event pushed by the distributed scheduling center, wherein the sharding completion event includes the task items allocated to each client server; matches the corresponding order data in a file system according to the task items, processes the order data, and sends the processing result to the file system.
In addition, according to still another aspect of an embodiment of the present invention, there is provided a task scheduling system including: the system comprises a server and a client, wherein the server comprises the task scheduling device in any embodiment, and the client comprises the task scheduling device in any embodiment.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments described above.
According to another aspect of the embodiments of the present invention, there is also provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above embodiments.
One embodiment of the above invention has the following advantage or beneficial effect: by receiving the client-information change event pushed by the distributed coordination center and sharding the plurality of client servers according to the total number of task items and that event, the technical problem that server resources cannot be fully utilized is solved. Based on the event-notification mechanism of the distributed coordination center and the sharding result for each client server, the server side dynamically coordinates the client servers to process tasks in parallel, so that server resources are fully utilized and the situation where only one server processes tasks while the others sit idle is avoided. Embodiments of the invention also support virtually unlimited horizontal scale-out and scale-in, and degrade timed tasks after the distributed coordination center goes down, thereby ensuring high availability of the system.
Further effects of the above optional implementations are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a main flow of a task scheduling method according to an embodiment of the present invention;
FIG. 2 is a framework diagram of a task scheduling system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a directory structure for a zookeeper according to an embodiment of the invention;
FIG. 4 is a schematic diagram of a main flow of a task scheduling method according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of the main flow of a task scheduling method according to a reference embodiment of the present invention;
FIG. 6 is a schematic diagram of the main flow of a task scheduling method according to another reference embodiment of the present invention;
FIG. 7 is a schematic diagram of the main modules of a task scheduling apparatus according to an embodiment of the invention;
FIG. 8 is a schematic diagram of the main modules of a task scheduling apparatus according to another embodiment of the invention;
FIG. 9 is a schematic diagram of the main modules of a task scheduling system according to an embodiment of the present invention;
FIG. 10 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 11 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of the main flow of a task scheduling method according to an embodiment of the present invention. As shown in Fig. 1, the task scheduling method is applied to a server side and may include:
step 101, receiving a client information change event pushed by a distributed coordination center, where the client information change event includes identifiers of multiple client servers.
First, the server monitors the directory in which the distributed coordination center stores client information; when the coordination center's data changes, it pushes a client-information change event to the server. The client-information change event includes the identifiers of a plurality of client servers, for example the IP of each client server.
Fig. 2 is a framework diagram of a task scheduling system that implements the task scheduling method of the embodiment of the present invention. As shown in Fig. 2, the task scheduling system may include a server side, at least one client, and a distributed coordination center, and may further include a database. The server side may be a server cluster comprising a plurality of server machines, one of which acts as the leader while the others act as followers. Each client may be a client cluster comprising a plurality of client servers (i.e., execution machines); for example, one client cluster processes train-ticket jobs while another processes air-ticket jobs. The distributed coordination center may be a zookeeper cluster; the server cluster and the client clusters monitor directory changes in the zookeeper cluster to obtain the events it pushes.
Fig. 3 is a schematic diagram of the zookeeper directory structure according to an embodiment of the present invention. The directory structure stored in zookeeper while the task scheduling method runs is shown in Fig. 3, where the key directory nodes have the following meanings:
① Directory node ${appName}: the name of each client cluster. Each client cluster's appName is different; a job name (appName) is assigned to each client cluster by service personnel, and can be named after the jobs the cluster processes, such as plane, train, etc.
② Directory node ${job1}: when a task (job) is added manually on the server, a ${job1} directory structure is created, where ${job1} is the name of the added task; for example, job1 may be ticket issuing, job2 ticket booking, job3 ticket refunding, and so on, representing three different types of tasks for a given job. At the same time, three directory nodes are created under the ${job1} directory: sharding, execute, and instances. The sharding node stores the sharding state, such as "sharded" or "to be sharded"; the execute node stores manually clicked execution commands.
③ Directory node ${ip1}: the IP of each client server registered on zookeeper, such as ip1, ip2, etc.
④ Directory node ${instances}: after task sharding is completed, directory nodes ${ip1}, ${ip2}, etc. are created under the instances directory to represent the IP of each client server, and a sharding directory is created under each of them, storing the task-item identifiers allocated to that client server.
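Since Fig. 3 itself is not reproduced here, the directory layout described above can be summarized as the following tree, reconstructed from the text (the exact nesting of the registered-IP nodes is an inference from the description):

```
/worker
├── server                    # one node per registered machine
└── job
    └── ${appName}            # ① one node per client cluster (e.g. "train", "plane")
        └── ${job1}           # ② one node per task (e.g. issuing, booking, refunding)
            ├── sharding      # sharding state: "sharded" / "to be sharded"
            ├── execute       # manually clicked execution command
            └── instances
                └── ${ip1}    # ③④ one node per client-server IP
                    └── sharding   # task-item IDs allocated to that client server
```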
Specifically, the data storage structure of zookeeper is shown in the following table:
(Table omitted: the zookeeper data storage structure is reproduced only as an image, Figure BDA0001973547540000091, in the original document.)
When a client applies for access to the task scheduling system, the application can be reviewed manually. After the review passes, the server assigns a job name to the client, creates the corresponding directory node ① (${appName}) in Fig. 3, and then sends the job name and the address of the distributed coordination center to each client server of that client. When a client server starts, it establishes a connection and communicates with the distributed coordination center at that address, at which point the coordination center creates the corresponding directory node (/worker/server) in Fig. 3.
For example, when a client cluster applies for access to the task scheduling system, it opens the server system, clicks the access-application button, and fills in the client-cluster name, contact name, and telephone number. A reviewer on the server side then performs a manual review; once approval is clicked, the server provides the client with an appName and the address of the zookeeper registration center (through which the client establishes a communication connection with zookeeper), and at the same time creates the directory node ① of Fig. 3 for that appName on zookeeper.
The server monitors data changes in the client-information directory and in the sharding-state directory of the distributed coordination center, i.e., data changes of the directory node /worker/server and of the directory node /worker/job/${appName}/${job1}/sharding on zookeeper. When the data of these directory nodes changes, zookeeper pushes the corresponding changed data to the server.
When a client server registers with the distributed coordination center, a data change of the directory node /worker/server is triggered, and the server receives the client-information change event pushed by the coordination center, thereby determining the client servers to which task items need to be allocated.
Therefore, the following three conditions mainly trigger changes to the directory storing client information in the distributed coordination center: 1) when a client accesses the task scheduling system, the server assigns it a job name and creates the corresponding directory node in Fig. 3; the client server then connects and communicates with the distributed coordination center, which creates the corresponding directory node in Fig. 3; 2) when a client server registers with the distributed coordination center; 3) when a client server disconnects from the distributed coordination center, in which case its directory node is deleted automatically. Each of these conditions changes the client information stored in the coordination center, so the coordination center pushes a client-information change event to the server, allowing the server to shard or re-shard.
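To make the notification flow concrete, here is a toy in-memory stand-in for the /worker/server directory and its watchers; this illustrates only the mechanism and is not the real zookeeper API:

```python
class CoordinationCenterSim:
    """Toy in-memory stand-in for the coordination center's /worker/server
    directory: registrations and disconnections notify every watcher."""
    def __init__(self):
        self.clients = []      # registered client-server IPs
        self.watchers = []     # server-side callbacks

    def watch(self, callback):
        self.watchers.append(callback)

    def _notify(self):
        event = {"type": "client_info_change", "clients": list(self.clients)}
        for cb in self.watchers:
            cb(event)

    def register(self, ip):    # a client server comes online
        self.clients.append(ip)
        self._notify()

    def disconnect(self, ip):  # session loss deletes the node automatically
        self.clients.remove(ip)
        self._notify()
```

Each pushed event carries the current list of client-server identifiers, which is exactly what the server side needs in order to re-shard.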
Step 102: shard the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers.
Since the set of client servers registered with the distributed coordination center has changed, and in step 101 the coordination center pushed a client-information change event to the server, the server is triggered to shard the client servers. The embodiment of the invention can therefore dynamically coordinate the client servers to process tasks in parallel, based on the coordination center's event-notification mechanism and each client server's sharding result, making full use of server resources. Specifically, each client server is sharded based on the total number of task items entered by the user and the client-server identifiers obtained in step 101.
Optionally, before step 102, the method further includes: modifying the content stored in the coordination center's sharding-state directory to "to be sharded". As noted in step 101, after a task is successfully added on the server, the server creates the corresponding directory node ${job1} of Fig. 3 on zookeeper, along with three directory nodes under the ${job1} directory: sharding, execute, and instances. The sharding state "to be sharded" is stored under the directory node /worker/job/${appName}/${job1}/sharding.
In addition, the server monitors the coordination center's sharding-state directory; when its data changes, the coordination center pushes a sharding-state change event to the server, triggering the sharding action. Note that when a client server registers with the coordination center, a data change of the directory node /worker/server is triggered, and at the same time the server changes the sharding state stored under /worker/job/${appName}/${job1}/sharding from "sharded" to "to be sharded", thereby triggering the server to re-shard.
Optionally, sharding the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers includes: traversing the identifiers of the plurality of client servers and determining the number of client servers; determining the number of task items allocated to each client server through a remainder function, according to the preset total number of task items and the number of client servers; and determining the task items allocated to each client server according to that number and a preset allocation rule. Traversing the identifiers determines the number of client servers and their information identifiers; division yields the number of task items allocated to each client server; and the preset allocation rule then determines which shard items each client server receives after sharding is executed. The allocation rule may be preset by the user, for example by client-server priority, client-server identifier, or client-server number.
Specifically, determining the number of task items per client server through the remainder function and then determining the task items allocated to each client server according to the preset allocation rule includes: dividing the preset total number of task items by the number of client servers to obtain an integer quotient and a remainder; allocating an integer quotient of task items to each client server in turn; and then allocating the remaining task items, one each, to the leading client servers. For example, task items may be allocated in order of client-server priority, identifier, or number.
Generally, the total number of task items can be preset as needed, so the shard items each client server must execute can be determined from the total number of task items and the number of client servers using a remainder function. For example, if the total number of task items is x (shardingNum) and the number of client servers is y (clientNum), then:
the number of task items allocated to each client server is shardingNumPerClient = x ÷ y (integer quotient);
taking the remainder of the two numbers gives the number of leftover shard items, modShardingNum = x mod y;
item numbers from 0 to x − modShardingNum − 1 are assigned to the client servers in turn until each has shardingNumPerClient numbers, and then the remaining numbers are assigned, one per client server, until they are used up.
For example, if the total number of task items of a certain task is 15 and the number of client servers is 5, the number of task items allocated to each server is 3: the first server is allocated task items 0, 1 and 2, the second server is allocated task items 3, 4 and 5, and so on. If the number of client servers is 4, the first server is allocated task items 0, 1, 2 and 12, the second server is allocated task items 3, 4, 5 and 13, the third server is allocated task items 6, 7, 8 and 14, and the fourth server is allocated task items 9, 10 and 11.
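The allocation described above can be sketched as follows. This is a minimal illustration of the integer-quotient-plus-remainder scheme; the function name and signature are assumptions for the sketch, not part of the patent:

```python
def allocate_sharding_items(total_items, client_servers):
    """Assign task item numbers 0..total_items-1 to client servers.

    Each server first receives shardingNumPerClient (the integer
    quotient) consecutive numbers; the remaining modShardingNum
    numbers are then handed out one per server, in order.
    """
    num_clients = len(client_servers)
    per_client = total_items // num_clients   # shardingNumPerClient
    remainder = total_items % num_clients     # modShardingNum

    allocation = {server: [] for server in client_servers}
    item = 0
    # Distribute per_client consecutive numbers to each server.
    for server in client_servers:
        for _ in range(per_client):
            allocation[server].append(item)
            item += 1
    # Distribute the leftover numbers, one each, to the first servers.
    for server in client_servers[:remainder]:
        allocation[server].append(item)
        item += 1
    return allocation
```

With 15 task items and 4 servers this reproduces the example above: the first server receives 0, 1, 2 and 12, and the fourth receives 9, 10 and 11.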
It should be noted that the execution policy and the total number of task items of a task may be modified at any time to dynamically adjust the throughput and performance of task execution. When some client servers have no task items allocated to them, the total number of task items may be increased so that those client servers are also allocated task items for data processing.
Step 103, sending a fragmentation result to the distributed coordination center, where the fragmentation result includes the task items allocated to each client server.
After determining the task items allocated to each client server, the server side sends the fragmentation result to the distributed coordination center, and the distributed coordination center updates its directory according to the fragmentation result. Specifically, the distributed coordination center updates the fragmentation result to the corresponding directory node ${instances} in fig. 3, and the sharding directory stores the task item identifiers allocated to each client server. For example, directory node ${ip1} stores 0, 1 and 2, directory node ${ip2} stores 3, 4 and 5, and so on.
According to the various embodiments described above, it can be seen that the technical means of the present invention, namely receiving the client information change event pushed by the distributed coordination center and performing fragmentation on a plurality of client servers according to the total number of task items and that event, solves the problem that server resources cannot be fully utilized. The embodiments of the invention rely on the event notification mechanism of the distributed coordination center and the fragmentation result of each client server: the server side dynamically coordinates the client servers to process tasks in parallel, so that server resources are fully utilized, avoiding the situation in which only one server processes tasks while the others sit idle. The embodiments of the invention can also achieve unlimited horizontal capacity expansion and reduction, and degrade the timed tasks after the distributed coordination center goes down, thereby ensuring high availability of the system.
Fig. 4 is a schematic diagram of the main flow of a task scheduling method according to another embodiment of the present invention. As shown in fig. 4, the task scheduling method applied to the client may include:
step 401, receiving a fragmentation completion event pushed by a distributed scheduling center, where the fragmentation completion event includes task items allocated to each client server.
The client monitors the fragmentation information directory of the distributed scheduling center in real time, for example the corresponding directory node ${instances} in fig. 3, so it can learn whether the data under the target node has changed. A change indicates that the server has executed fragmentation, whereupon the distributed scheduling center pushes a fragmentation completion event to the client.
For example, a client server ip1 listens in real time to its corresponding fragmentation directory node; when the directory data changes, zookeeper pushes the corresponding task item identifiers to that client server.
Step 402, matching corresponding order data in a file system according to the task item, processing the order data, and sending a processing result to the file system.
After receiving the fragmentation completion event pushed by the distributed scheduling center, each client server creates and starts a timed task (quartz). Each time the specific business logic is scheduled and executed, the client server first obtains the task items allocated to it, then queries the specific order data in the file system according to those task items, and performs the business logic processing.
When the client server starts, the task information (such as the task name, job name, total number of task items and task execution expression) is asynchronously written into a local file through the log4j framework, for example into /export/Logs/jd. When the client server restarts, the created job information is likewise written into /export/Logs/jd. Thus, whenever a client server needs to create a new timed task, the task information is asynchronously written into the local file using log4j, and the new timed task is then created and executed.
For example, if the total number of task items of a certain task is 15 and the number of client servers is 5, each server is allocated 3 task items, so the first server is allocated task items 0, 1 and 2. Assuming 100 orders are to be processed, each order number n is mapped through mod(n, 15); the orders whose result is 0, 1 or 2 are the orders to be processed by the first server, such as the 15th, 16th, 17th, 30th, 31st and 32nd orders.
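The matching step above can be sketched as follows. This is an illustrative sketch only; the function name is an assumption, and the order "data" is reduced to bare order numbers:

```python
def match_orders(order_numbers, assigned_items, total_items):
    """Select the orders a client server should process: those whose
    order number modulo the total number of task items falls within
    the server's assigned task items."""
    assigned = set(assigned_items)
    return [n for n in order_numbers if n % total_items in assigned]
```

For 100 orders, 15 total task items and assigned items {0, 1, 2}, this selects orders 0, 1, 2, 15, 16, 17, 30, 31, 32 and so on, matching the example above.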
Fig. 5 is a schematic diagram of a main flow of a task scheduling method according to a referential embodiment of the present invention, where the task scheduling method may specifically include:
step 501, a server receives a client information change event pushed by a distributed coordination center, where the client information change event includes identifiers of multiple client servers.
The server monitors data changes of the /worker/server directory node; when the data of the directory node changes, zookeeper pushes the corresponding changed data to the server, namely the ip of each client server currently stored under /worker/server.
Step 502, the server side modifies the fragmentation state stored in the corresponding directory of the distributed coordination center to to-be-fragmented.
After step 501, the server modifies the allocation state stored under the directory node /worker/job/${appName}/${job1}/sharding from fragmented to to-be-fragmented.
Step 503, the server performs fragmentation on the plurality of client servers according to the total task item number and the client information change event.
The server monitors data changes of the directory node /worker/job/${appName}/${job1}/sharding, so after step 502 the server is triggered to execute fragmentation.
Step 504, the server sends the fragmentation result to the distributed coordination center, and the fragmentation result includes the task items allocated to each client server.
After fragmentation is executed, the server side sends the fragmentation result to the distributed coordination center, and the distributed coordination center updates the fragmentation result to the directory node /worker/job/${appName}/${job1}/sharding, where each ip correspondingly stores its allocated task item identifiers under that directory.
It should be noted that leader election can be performed based on the zookeeper's election mechanism, so that the elected leader performs fragmentation.
In step 505, the client receives a fragmentation completion event pushed by the distributed scheduling center, where the fragmentation completion event includes task items assigned to each client server.
Because the client monitors the fragment information directory of the distributed scheduling center in real time, when the directory data changes, the zookeeper pushes the corresponding changed data to the client.
For example, client server ip1 listens in real time to the data of the directory node /worker/job/${appName}/${job1}/instances/${ip1}/sharding, and zookeeper pushes the data corresponding to /${ip1}/sharding to client server ip1.
Step 506, the client matches corresponding order data in the file system according to the task item, processes the order data, and sends the processing result to the file system.
Each client server receives the fragmentation completion event pushed by the distributed scheduling center, judges whether its timed task has been created, executes the timed task if so, and creates and starts it if not.

Each time the specific business logic is scheduled and executed, the client server first obtains the task items allocated to it, then queries the specific order data in the file system according to those task items, performs the business logic processing, and finally sends the processing result to the file system.
Step 507, the client reports load data and execution track data to the server.
The client calls the server interface at intervals (for example, 10 seconds) to report load data (for example, CPU utilization rate and memory utilization rate) of the client. Meanwhile, the client also reports the execution track data of the task at intervals.
Step 508, the server receives the load data and execution track data reported by the client and stores them in a database.

The server receives the load data and execution track data reported by the client in real time and writes them into the database, in order to later judge whether to trigger capacity expansion or an alarm.
Therefore, by dynamically allocating task items to all client servers, the embodiments of the invention enable a plurality of client servers to process the related task data in parallel, breaking through the limits and bottleneck of single-machine processing of task data. Moreover, the execution policy and total number of task items of a task can be modified at any time to dynamically adjust task execution throughput and performance, and when some client servers have no tasks allocated, the total number of task items can be increased so that they are also allocated task items for data processing.
In addition, in one embodiment of the present invention, the detailed implementation of the task scheduling method is described in detail in the above task scheduling method, and therefore, the repeated content is not described again.
Fig. 6 is a schematic diagram of a main flow of a task scheduling method according to another referential embodiment of the present invention, where the task scheduling method may specifically include:
step 601, the server reads the execution track data in the database at intervals, and judges whether to trigger an alarm.
The server reads the execution track data in the database at intervals and judges whether the execution completion data reported by each client server has been received within a preset time threshold. If not, that client server is abnormal: a short message reminder is sent to the developers of the server side and the client side, and an alarm is triggered.
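The timeout check above can be sketched as follows. This is a minimal illustration under assumed data shapes (a mapping from server to the timestamp of its last execution-completion report); the names are not from the patent:

```python
def servers_to_alarm(last_completion, now_ts, timeout_seconds):
    """Return the client servers whose latest execution-completion
    report is older than the preset time threshold, i.e. the servers
    for which an alarm should be triggered."""
    return [server for server, ts in last_completion.items()
            if now_ts - ts > timeout_seconds]
```

A server that reported recently is skipped; one whose report is stale is flagged for the short-message reminder.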
Step 602, the server reads load data in the database at intervals, and determines whether the load of the client exceeds a preset load threshold according to the load data, and if so, sends a capacity expansion instruction to the client.
The server side may, for example, query every 10 seconds the load data (such as CPU utilization and memory utilization) of each client server over the last minute; when the CPU utilization or the memory utilization of a certain server exceeds the threshold three times in a row, a capacity expansion instruction is sent to the client to trigger client capacity expansion. Specifically, the capacity expansion step includes: receiving the capacity expansion event notification and calling the private cloud interface to create a container according to the client type, data center, deployment environment, container specification and container environment image; building the client application package on the newly created container; and then starting the client program.
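The consecutive-threshold decision above can be sketched as follows. This is an assumed illustration of the "exceeds the threshold three times in a row" rule; the function name and defaults are not from the patent:

```python
def needs_scale_out(samples, threshold, consecutive=3):
    """True if the utilization samples (oldest first) exceeded the
    threshold `consecutive` times in a row anywhere in the window."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False
```

A brief spike interrupted by a normal reading does not trigger expansion; only a sustained run over the threshold does.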
After the client program starts, the application type (i.e. appName) and the ip address of the client are written into the zookeeper /worker/server/ directory, completing registration. This then triggers the automatic re-fragmentation operation of the server side, which re-allocates task items to each client server of the business line system.
Therefore, the embodiment of the invention ensures that the client can automatically expand capacity in an infinite transverse direction according to the system load, improves the throughput of the system, ensures high availability of the system and reduces the maintenance cost of personnel of an access party.
Step 603, if the event pushed by the distributed coordination center is not received within a preset time threshold, the server sends a degradation instruction to the client.
After the client servers are started, the server side establishes a tcp long connection with each client server through a remote procedure call (RPC) service. When zookeeper goes down, an operator manually turns on the degradation switch on the server side, and the server side sends a degradation instruction over the long connection to each client server.
Step 604, each client server receives the degradation instruction sent by the server and stops executing its current tasks.
When each client server receives the degradation request, all the currently started timing tasks of the client server are stopped, and all task information is deleted from the registry of the client server.
Step 605, the preset client server reads the task information stored in each other client server, and creates and executes the timing task according to the task information.
A certain client server may be designated in advance as the degradation server; when the degradation instruction is received, only that client server executes the timed tasks, and the other servers stop executing theirs. Specifically, the designated client server reads all the task information from its local file, creates and starts the degraded timed tasks, and a single machine processes all timed tasks.
After zookeeper recovers, the operator manually closes the degradation switch. Upon receiving the degradation-switch-off command sent by the server side, all client servers stop all currently started degraded tasks, delete the task information from the task registry, read all task information from the preset local file of the client server, create and start the normal timed tasks, and resume the normal distributed task scheduling process.
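The degrade/recover behavior above can be sketched as follows. This is a toy state model under assumed names (the class, method names and list-based "registry" are illustrative, not from the patent):

```python
class ClientServer:
    """A toy client server holding a registry of running timed tasks
    and a local file of all known task definitions."""
    def __init__(self, name, local_tasks, is_degrade_server=False):
        self.name = name
        self.local_tasks = list(local_tasks)  # tasks in the local file
        self.is_degrade_server = is_degrade_server
        self.running = []                     # currently started tasks

    def on_degrade(self):
        # Every server stops its timed tasks and clears its registry;
        # only the designated degrade server restarts everything locally.
        self.running = []
        if self.is_degrade_server:
            self.running = list(self.local_tasks)

    def on_recover(self, assigned_tasks):
        # Stop the degraded tasks and resume the tasks assigned by the
        # normal distributed scheduling process.
        self.running = list(assigned_tasks)
```

During degradation the designated single machine runs all tasks; on recovery each server runs only its own assignment again.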
In addition, in one embodiment of the present invention, the detailed implementation of the task scheduling method is described in detail in the above task scheduling method, and therefore, the repeated content is not described again.
Fig. 7 is a schematic diagram of the main modules of a task scheduling apparatus according to an embodiment of the present invention. As shown in fig. 7, the task scheduling apparatus 700 includes a first receiving module 701, a fragmentation module 702 and a sending module 703. The first receiving module 701 is configured to receive a client information change event pushed by a distributed coordination center, where the client information change event includes the identifiers of a plurality of client servers; the fragmentation module 702 is configured to perform fragmentation on the plurality of client servers according to the total number of task items and the identifiers of the plurality of client servers; and the sending module 703 is configured to send a fragmentation result to the distributed coordination center, where the fragmentation result includes the task items allocated to each client server.
Optionally, the fragmentation module 702 is configured to:
traversing the identifiers of the plurality of client servers and determining the number of the client servers;
determining the number of task items allocated to each client server according to the preset total number of task items and the number of the client servers and a complementation function;
and determining the task items allocated to the client servers according to the number of the task items allocated to the client servers and a preset allocation rule.
Optionally, determining the number of task items allocated to each client server through a remainder function according to the preset total number of task items and the number of client servers, and determining the task items allocated to each client server according to that number and a preset allocation rule, includes:
dividing the number of preset total task items by the number of the client servers to obtain an integer quotient and a remainder;
sequentially allocating the integer quotient of task items to each client server, and then sequentially allocating the remaining task items, one each, to at least one client server.
Optionally, the system further comprises a load module, configured to:
receiving load data reported by the client and storing the load data in a database;
and reading load data in the database at intervals, judging whether the load of the client exceeds a preset load threshold value according to the load data, and if so, sending a capacity expansion instruction to the client.
Optionally, the system further comprises a downgrading module, configured to:
and if the event pushed by the distributed coordination center is not received within a preset time threshold, sending a degradation instruction to the client.
Fig. 8 is a schematic diagram of main blocks of a task scheduling apparatus according to another embodiment of the present invention, and as shown in fig. 8, the task scheduling apparatus 800 includes a second receiving module 801 and an executing module 802. The second receiving module 801 is configured to receive a fragmentation completion event pushed by a distributed scheduling center, where the fragmentation completion event includes task items allocated to each client server; the execution module 802 is configured to match corresponding order data in a file system according to the task item, process the order data, and send a processing result to the file system.
Optionally, the executing module 802 is further configured to:
receiving a degradation instruction sent by a server, and stopping each client server from executing a current task;
and the preset client server reads the task information stored in other client servers, and creates and executes the timing task according to the task information.
It should be noted that, in the implementation of the task scheduling device of the present invention, the details of the task scheduling method are already described in detail, and therefore, the repeated details are not described herein.
FIG. 9 is a schematic diagram of the main modules of a task scheduling system according to an embodiment of the present invention. As shown in fig. 9, the task scheduling system includes a server, a client, a zookeeper, a database, and a file system. The server side comprises an access application module, a task management module, a fragmentation module, a remote calling module and a monitoring module.
The access application module: when a business line system wants to access the task scheduling system, it logs into the server side, clicks the "access application" button, and fills in the client cluster name, contact name and telephone. A reviewer on the server side then performs a manual review; after approval, the server side provides the client with an appName and the address of the zookeeper registration center (through which the client establishes a communication connection with zk), and at the same time creates on zookeeper the directory nodes for that appName corresponding to part (i) in fig. 3.
The task management module: performs leader election, so that the elected leader executes the fragmentation action. It monitors node changes of the zookeeper /worker/server/ directory; the child nodes created under that directory are temporary nodes storing the ip registered by each client, so when a client goes down its directory node is automatically deleted, an event notification is triggered to the server, and the server subsequently performs the re-fragmentation action. It reads all task information under the /worker/job/ directory. It monitors changes of the /worker/job/${appName}/${job1}/sharding node, obtains the fragmentation state, and if the state is to-be-fragmented, calls the fragmentation module to allocate task items to all client servers. It also provides functions for adding and modifying tasks: the add-task function writes the basic task information into the /worker/job/${appName}/${job1} directory of zookeeper and directly calls the fragmentation module to allocate task items to all clients.
The remote calling module: stores the load data and execution track data reported by the client and received through its interface into the database, and sends degradation instructions to the client.

The monitoring module: writes the data reported by the client into the database, queries the execution track data stored in the database, monitors client servers whose execution has timed out, and raises alarms.
Each client server comprises a registry module, a timing task creating module, a timing task starting module, a service logic execution module, a load monitoring module and a degradation module.
The registry module is used for writing task information into the registry, the timing task creating module is used for creating a timed task, the timing task starting module is used for starting the timed task, and the business logic execution module is used for executing the business logic and reporting execution track data. The load monitoring module is used for reporting load data.
Fig. 10 shows an exemplary system architecture 1000 to which a task scheduling method or a task scheduling apparatus of an embodiment of the present invention may be applied.
As shown in fig. 10, the system architecture 1000 may include terminal devices 1001, 1002, 1003, a network 1004, and a server 1005. The network 1004 is used to provide a medium for communication links between the terminal devices 1001, 1002, 1003 and the server 1005. Network 1004 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 1001, 1002, 1003 to interact with the server 1005 via the network 1004 to receive or send messages and the like. Various client applications may be installed on the terminal devices 1001, 1002, 1003, such as shopping applications, web browser applications, search applications, instant messengers, mailbox clients and social platform software (by way of example only).
The terminal devices 1001, 1002, 1003 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 1005 may be a server that provides various services, such as a backend management server (for example only) that supports shopping websites browsed by users using the terminal devices 1001, 1002, 1003. The background management server may analyze and process the received data such as the product information query request, and feed back a processing result (for example, target push information and product information — only an example) to the terminal device.
It should be noted that the task scheduling method provided in the embodiments of the present invention is generally executed by the terminal devices 1001, 1002 and 1003, and may also be executed by the server 1005; accordingly, the task scheduling apparatus is generally provided in the terminal devices 1001, 1002 and 1003, and may also be provided in the server 1005.
It should be understood that the number of terminal devices, networks, and servers in fig. 10 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 11, shown is a block diagram of a computer system 1100 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 11, the computer system 1100 includes a Central Processing Unit (CPU)1101, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. In the RAM1103, various programs and data necessary for the operation of the system 1100 are also stored. The CPU 1101, ROM 1102, and RAM1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, mouse, and the like; an output portion 1107 including a signal output unit such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the internet. A driver 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is mounted into the storage section 1108 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1109 and/or installed from the removable medium 1111. The above-described functions defined in the system of the present invention are executed when the computer program is executed by a Central Processing Unit (CPU) 1101.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor comprising a first receiving module, a sharding module, and a sending module, wherein the names of these modules do not, in some cases, constitute a limitation on the modules themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: receive a client information change event pushed by a distributed coordination center, wherein the client information change event comprises identifications of a plurality of client servers; perform sharding on the plurality of client servers according to the total number of task items and the client information change event; and send a sharding result to the distributed coordination center, wherein the sharding result comprises the task items allocated to each client server.
According to the technical solution of the embodiments of the present invention, the server receives a client information change event pushed by a distributed coordination center and performs sharding on the plurality of client servers according to the total number of task items and that event, thereby solving the problem that server resources cannot be fully utilized. Based on the event notification mechanism of the distributed coordination center and the sharding result for each client server, the server dynamically coordinates the client servers to process tasks in parallel, so that server resources are fully utilized and the situation in which only one server processes tasks while the other servers sit idle is avoided. The embodiments of the present invention also support unlimited horizontal scaling out and scaling in, and degrade timed tasks after the distributed coordination center goes down, thereby ensuring high availability of the system.
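The event-driven flow summarized above can be sketched as follows. This is a minimal, hypothetical illustration: `InMemoryCenter` is a toy in-process stand-in for a real coordination service such as ZooKeeper, and all class, topic, and method names are illustrative rather than taken from the patent.

```python
class InMemoryCenter:
    """Toy pub/sub coordination center standing in for a real service."""
    def __init__(self):
        self.handlers = {}
        self.published = {}

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def push(self, topic, payload):
        # Simulate the center pushing an event to its subscribers.
        for handler in self.handlers.get(topic, []):
            handler(payload)

    def publish(self, topic, payload):
        self.published[topic] = payload


class ShardingServer:
    def __init__(self, total_items, center):
        self.total_items = total_items
        self.center = center
        # Reshard whenever the set of live client servers changes.
        center.subscribe("client_change", self.on_client_change)

    def on_client_change(self, client_ids):
        quotient, remainder = divmod(self.total_items, len(client_ids))
        shards, start = {}, 0
        for i, cid in enumerate(client_ids):
            # The first `remainder` clients each receive one extra item.
            count = quotient + (1 if i < remainder else 0)
            shards[cid] = list(range(start, start + count))
            start += count
        # Publishing the result is what lets each client learn its items.
        self.center.publish("sharding_result", shards)
```

A usage example: with 7 task items and three live clients, the first client receives three items and the other two receive two each.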
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A task scheduling method, applied to a server, the method comprising:
receiving a client information change event pushed by a distributed coordination center, wherein the client information change event comprises identifications of a plurality of client servers;
performing sharding on the plurality of client servers according to the total number of task items and the identifications of the plurality of client servers;
and sending a sharding result to the distributed coordination center, wherein the sharding result comprises the task items allocated to each client server.
2. The method of claim 1, wherein performing sharding on the plurality of client servers according to the total number of task items and the identifications of the plurality of client servers comprises:
traversing the identifications of the plurality of client servers to determine the number of client servers;
determining, by a remainder (modulo) function, the number of task items allocated to each client server according to the total number of task items and the number of client servers;
and determining the task items allocated to each client server according to the number of task items allocated to it and a preset allocation rule.
3. The method according to claim 2, wherein determining the number of task items allocated to each client server by the remainder function according to the total number of task items and the number of client servers, and determining the task items allocated to each client server according to that number and a preset allocation rule, comprises:
dividing the total number of task items by the number of client servers to obtain an integer quotient and a remainder;
and allocating the integer-quotient number of task items to each client server in turn, and then allocating the remaining task items, one each, to at least one of the client servers in turn.
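The quotient-and-remainder allocation described in claims 2-3 can be sketched as a short function; `allocate_shards` and its parameter names are illustrative, not from the patent.

```python
def allocate_shards(total_items: int, server_ids: list) -> dict:
    """Split task item indices 0..total_items-1 across the client servers.

    Every server receives the integer quotient of items; the first
    `remainder` servers each receive one extra item, so counts differ
    by at most one.
    """
    quotient, remainder = divmod(total_items, len(server_ids))
    result = {}
    next_item = 0
    for index, server in enumerate(server_ids):
        count = quotient + (1 if index < remainder else 0)
        result[server] = list(range(next_item, next_item + count))
        next_item += count
    return result
```

For example, 10 task items over three servers yields allocations of 4, 3, and 3 items; when the total divides evenly, every server gets exactly the quotient.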
4. The method of claim 1, further comprising:
receiving load data reported by the client and storing the load data in a database;
and reading the load data from the database at intervals, determining from the load data whether the load of the client exceeds a preset load threshold, and if so, sending a capacity-expansion instruction to the client.
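The threshold check at the heart of claim 4 can be sketched as below. This is a hypothetical fragment: the periodic database read and the instruction channel are out of scope, so `check_loads` simply takes the already-read load records and returns the clients that need a capacity-expansion instruction. All names are illustrative.

```python
def check_loads(load_records: dict, threshold: float) -> list:
    """Return the ids of clients whose reported load exceeds the threshold.

    `load_records` maps a client id to its most recently reported load
    figure (e.g. CPU utilization in [0, 1]) as stored in the database.
    """
    return [cid for cid, load in load_records.items() if load > threshold]
```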
5. The method of claim 1, further comprising:
and if no event pushed by the distributed coordination center is received within a preset time threshold, sending a degradation instruction to the client.
6. A task scheduling method, applied to a client, the method comprising:
receiving a sharding completion event pushed by a distributed scheduling center, wherein the sharding completion event comprises the task items allocated to each client server;
and matching corresponding order data in a file system according to the task items, processing the order data, and sending the processing result to the file system.
7. The method of claim 6, further comprising:
receiving a degradation instruction sent by the server, whereupon each client server stops executing its current task;
and a preset client server reading the task information stored on the other client servers, and creating and executing timed tasks according to that task information.
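The degradation path of claim 7 can be sketched as follows. This is a hypothetical illustration: the in-memory `task_store` dict stands in for whatever store each client keeps its task definitions in, and `degrade` is an illustrative name, not from the patent.

```python
def degrade(preset_client: str, task_store: dict) -> list:
    """Consolidate every client's task definitions onto the preset client.

    Each client's entry is emptied (modeling "stop executing the current
    task"), and the preset client ends up holding all task definitions so
    it can create and run them locally as timed tasks.
    """
    merged = []
    for client, tasks in task_store.items():
        merged.extend(tasks)     # the preset client reads the other
        task_store[client] = []  # clients' tasks; everyone else stops
    task_store[preset_client] = merged
    return merged
```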
8. A task scheduling device, provided at a server, comprising:
a first receiving module, configured to receive a client information change event pushed by a distributed coordination center, wherein the client information change event comprises identifications of a plurality of client servers;
a sharding module, configured to perform sharding on the plurality of client servers according to the total number of task items and the identifications of the plurality of client servers;
and a sending module, configured to send a sharding result to the distributed coordination center, wherein the sharding result comprises the task items allocated to each client server.
9. A task scheduling device, provided at a client, comprising:
a second receiving module, configured to receive a sharding completion event pushed by a distributed scheduling center, wherein the sharding completion event comprises the task items allocated to each client server;
and an execution module, configured to match corresponding order data in a file system according to the task items, process the order data, and send the processing result to the file system.
10. A task scheduling method, comprising:
a server receiving a client information change event pushed by a distributed coordination center, wherein the client information change event comprises identifications of a plurality of client servers; performing sharding on the plurality of client servers according to the total number of task items and the identifications of the plurality of client servers; and sending a sharding result to the distributed coordination center, wherein the sharding result comprises the task items allocated to each client server;
and a client receiving a sharding completion event pushed by the distributed scheduling center, wherein the sharding completion event comprises the task items allocated to each client server; and matching corresponding order data in a file system according to the task items, processing the order data, and sending the processing result to the file system.
11. A task scheduling system, comprising: a server and a client, the server comprising the task scheduling device of claim 8, the client comprising the task scheduling device of claim 9.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
13. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN201910125842.2A 2019-02-20 2019-02-20 Task scheduling method and device Pending CN111597033A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910125842.2A CN111597033A (en) 2019-02-20 2019-02-20 Task scheduling method and device


Publications (1)

Publication Number Publication Date
CN111597033A true CN111597033A (en) 2020-08-28

Family

ID=72186893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910125842.2A Pending CN111597033A (en) 2019-02-20 2019-02-20 Task scheduling method and device

Country Status (1)

Country Link
CN (1) CN111597033A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948078A (en) * 2021-02-10 2021-06-11 中国工商银行股份有限公司 Revenue allocation task processing method and device based on service call
CN113177824A (en) * 2021-05-06 2021-07-27 北京沃东天骏信息技术有限公司 Replenishment task processing method, replenishment task processing device, computer system and readable storage medium
CN113760968A (en) * 2020-09-24 2021-12-07 北京沃东天骏信息技术有限公司 Data query method, device, system, electronic equipment and storage medium
WO2022078347A1 (en) * 2020-10-13 2022-04-21 深圳壹账通智能科技有限公司 Task scheduling method and apparatus, electronic device, and storage medium
CN115086292A (en) * 2022-06-15 2022-09-20 浙江省标准化研究院(金砖国家标准化(浙江)研究中心、浙江省物品编码中心) Distributed instant server push scheme architecture design method, device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination