CN112965796B - Task scheduling system, method and device - Google Patents

Task scheduling system, method and device

Info

Publication number
CN112965796B
Authority
CN
China
Prior art keywords
node
task
executed
execution
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110224021.1A
Other languages
Chinese (zh)
Other versions
CN112965796A (en)
Inventor
苏辉
杨康
叶靖祺
霍瑞强
张岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
17win Network Technology Co ltd
Original Assignee
17win Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 17win Network Technology Co ltd
Priority to CN202110224021.1A
Publication of CN112965796A
Application granted
Publication of CN112965796B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention disclose a task scheduling system, a task scheduling method and a task scheduling device. The submitting node sends the acquired tasks to the distribution node. The distribution node stores the received tasks in a database; when a target task to be executed currently exists in the database, it caches the target task in a preset memory area. The weight reflects the processing performance of an execution node, and the distribution node binds each target task to its corresponding execution node according to the weight corresponding to each execution node. The executing node judges whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it. The distribution node can thus monitor the performance of all executing nodes and allocate tasks, binding tasks to each executing node according to its weight, which improves the resource utilization of each executing node and achieves the scheduling of distributed cluster tasks at low resource consumption.

Description

Task scheduling system, method and device
Technical Field
The present invention relates to the field of distributed systems, and in particular, to a task scheduling system, method and apparatus.
Background
With the rapid development of the internet, business functions have become complex and systems ever larger. A business can therefore split a huge system into a number of subsystem modules, each of which deploys its own set of services, forming a large distributed cluster.
Previously, task scheduling ran inside a single service; after the service system is split, each subsystem module has its own task scheduling. Each module needs to implement its own task scheduling, which consumes a large amount of system resources. Moreover, traditional task scheduling is single-machine scheduling and cannot satisfy the existing distributed cluster mode.
It can be seen that how to schedule distributed cluster tasks with low resource consumption is a problem that those skilled in the art need to solve.
Disclosure of Invention
The embodiment of the invention aims to provide a task scheduling system, a task scheduling method and a task scheduling device, which can realize the scheduling of distributed cluster tasks under lower resource consumption.
In order to solve the technical problems, an embodiment of the present invention provides a task scheduling system, including a submitting node, an allocating node and an executing node; the distribution node is respectively in communication connection with the submitting node and the executing node;
the submitting node is used for sending the acquired task to the distributing node;
the distribution node is used for storing the received tasks to a database; when a target task to be executed currently exists in the database, caching the target task into a preset memory area; binding the corresponding execution nodes of the target tasks according to the weight corresponding to the execution nodes;
the executing node is used for judging whether a task to be executed matching the executing node exists in the memory area; and if such a task to be executed exists, reading the task to be executed from the memory area and executing it.
Optionally, the allocation node is configured to adjust a weight of each execution node according to a performance index of each execution node.
Optionally, the allocation node is configured to determine, according to the total number of the target tasks and the weight corresponding to each execution node, the number of target tasks corresponding to each execution node; setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
Optionally, the submitting node, the distributing node and the executing node are all configured to record node information to redis when the node is started, so that when the running state of the target node changes, a notification message is sent to a node subscribing to the state information of the target node.
Optionally, the allocation node is configured to intercept the task sent by the submitting node according to the set current limiting requirement, and store the task meeting the current limiting requirement to the database.
Optionally, the allocation node establishes communication connection with the submitting node and the executing node respectively based on RPC protocol.
The embodiment of the invention also provides a task scheduling method, which comprises the following steps:
storing the received task to a database;
when a target task to be executed currently exists in the database, caching the target task into a preset memory area;
binding each target task to its corresponding execution node according to the weight corresponding to each execution node, so that when an execution node detects that a task to be executed matching it exists in the memory area, the execution node reads the task to be executed from the memory area and executes it.
Optionally, the method further comprises:
and adjusting the weight of each execution node according to the performance index of each execution node.
Optionally, binding the corresponding execution node for each target task according to the weight corresponding to each execution node includes:
determining the number of target tasks corresponding to each execution node according to the total number of the target tasks and the weight corresponding to each execution node;
setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
Optionally, storing the received task in the database includes:
according to the set current limiting requirement, intercepting the task sent by the submitting node, and storing the task meeting the current limiting requirement into a database.
The embodiment of the invention also provides a task scheduling device which comprises a storage unit, a cache unit and a binding unit;
the storage unit is used for storing the received tasks into a database;
the caching unit is used for caching the target task to a preset memory area when the target task to be executed currently exists in the database;
the binding unit is configured to bind each target task to its corresponding execution node according to the weight corresponding to each execution node, so that when an execution node detects that a task to be executed matching it exists in the memory area, the execution node reads the task to be executed from the memory area and executes it.
Optionally, the device further comprises an adjusting unit;
the adjusting unit is used for adjusting the weight of each executing node according to the performance index of each executing node.
Optionally, the binding unit includes a determining subunit and a recording subunit;
the determining subunit is configured to determine, according to the total number of the target tasks and the weight corresponding to each execution node, the number of target tasks corresponding to each execution node;
the recording subunit is configured to set a corresponding relationship between each execution node and a target task according to the number of target tasks corresponding to each execution node, and record the corresponding relationship to the memory area.
Optionally, the storage unit is configured to intercept the task sent by the submitting node according to the set current limiting requirement, and store the task meeting the current limiting requirement to the database.
According to the above technical scheme, the allocation node is communicatively connected with the submitting node and the executing node respectively. The submitting node sends the acquired tasks to the allocation node. Considering that the submitting node may send a large number of tasks to the allocation node in a short time, the allocation node may first store the received tasks in a database to ensure orderly processing; when a target task to be executed currently exists in the database, the target task is cached in a preset memory area. The weight reflects the processing performance of an execution node, and the allocation node can bind each target task to its corresponding execution node according to the weight of each execution node. The executing node judges whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it. In this technical scheme, the allocation node can monitor the performance of all executing nodes and allocate tasks, thereby scheduling the tasks of the distributed cluster. Because the allocation node binds tasks to each executing node according to its weight, the resource utilization of each executing node is improved, and the scheduling of distributed cluster tasks is achieved at low resource consumption. In addition, the allocation node only needs to cache the tasks to be executed in the preset memory area, and each executing node automatically reads the tasks that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
Drawings
To describe the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present invention, and those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a task scheduling system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of task processing between nodes according to an embodiment of the present invention;
FIG. 3 is a flowchart of a task scheduling method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a task scheduling device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
Next, a task scheduling system provided by the embodiment of the invention is described in detail. Fig. 1 is a schematic structural diagram of a task scheduling system according to an embodiment of the present invention; the system includes a submitting node 11, an allocation node 12 and an executing node 13, and the allocation node 12 is communicatively connected with the submitting node 11 and the executing node 13 respectively.
In the embodiment of the present invention, in order to implement the scheduling of distributed cluster tasks, the nodes in the distributed cluster may be divided into the submitting node 11, the allocation node 12 and the executing node 13 according to the different functions the nodes need to implement. The number of executing nodes 13 is greater than the number of submitting nodes 11 and allocation nodes 12, and one allocation node 12 may manage a plurality of executing nodes 13.
For convenience of description, fig. 1 takes one submitting node 11, one allocation node 12 and three executing nodes 13 as an example; the embodiment of the present invention does not limit the number of submitting nodes 11, allocation nodes 12 and executing nodes 13.
The submitting node 11 is used for sending the acquired tasks to the allocation node 12.
In order to improve the efficiency of information transfer between nodes, in practical applications the information transfer between nodes may be implemented using the Remote Procedure Call Protocol (RPC); that is, the allocation node 12 establishes communication connections with the submitting node 11 and the executing node 13 respectively based on the RPC protocol.
The RPC protocol uses Netty as the underlying communication layer. The long-lived connections established by Netty allow task information to be sent and acknowledged between nodes, ensuring the reliability of the information. Netty's transfer speed also relies on an NIO feature: zero copy. Java memory includes heap memory, stack memory and the string constant pool, of which the heap occupies the largest space and is where Java objects are stored. Normally, if data needs to be read from IO into heap memory, it has to pass through a socket buffer in between; that is, the data is copied twice before reaching its destination, and when the data volume is large this wastes resources unnecessarily. To address this, Netty uses zero copy, another major NIO feature: when data needs to be received, a block of memory is opened outside the heap, the data is read from IO directly into that memory, and the data can be operated on directly through Netty's ByteBuf, which increases the transmission speed.
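By way of a non-limiting illustration of the zero-copy idea described above, the following Java sketch allocates a direct (off-heap) ByteBuf with Netty and reads data from a file channel straight into it; the file name and buffer size are assumptions made for the example and are not taken from the patent.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.util.CharsetUtil;

import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBufferReadExample {
    public static void main(String[] args) throws Exception {
        // Allocate an off-heap (direct) buffer: data read from IO lands here
        // without an extra copy through a heap-side byte[] and socket buffer.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(4096);
        try (FileChannel channel = FileChannel.open(Path.of("task.json"), StandardOpenOption.READ)) {
            // writeBytes(channel, length) reads straight into the direct buffer.
            int read = buf.writeBytes(channel, 4096);
            System.out.println("bytes read into direct buffer: " + read);
            // The data can then be operated on directly through the ByteBuf API.
            System.out.println(buf.toString(CharsetUtil.UTF_8));
        } finally {
            buf.release(); // direct buffers must be released explicitly
        }
    }
}
```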
The allocation node 12 is used for storing the received tasks in a database; when a target task to be executed currently exists in the database, caching the target task in a preset memory area; and binding each target task to its corresponding execution node 13 according to the weight corresponding to each execution node 13.
Considering that the submitting node 11 may send a large number of tasks to the distributing node 12 in a short time, in order to ensure orderly processing of the tasks, the distributing node 12 may store the received tasks to a database first; when the target task to be executed currently exists in the database, the target task is cached to a preset memory area.
The weight reflects the processing performance of the execution node 13, and the allocation node 12 may bind the corresponding execution node 13 to each target task according to the weight corresponding to each execution node 13.
In practical applications, considering that the processing performance of each execution node changes dynamically, the allocation node 12 may adjust the weight of each execution node 13 according to the performance index of each execution node 13.
The performance indexes may include CPU resource occupancy, memory resource occupancy, the amount of tasks currently required to be executed, and the like. Taking CPU resource occupancy as an example, the lower the CPU resource occupancy, the better the processing performance of the node; a higher weight can then be set for the node so that more tasks are allocated to it.
There are various ways to adjust the weight according to a node's performance indexes. One way is to apply a unified quantization to the parameters in the performance indexes and use the sum of the quantized values as the weight. Another way is to set a different weighting coefficient for each parameter and obtain the weight as the weighted sum of the parameters.
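The following Java sketch illustrates the weighted-summation variant; the metric names, the 0-to-1 normalisation and the coefficient values are assumptions chosen for illustration rather than values specified in the patent.

```java
public class NodeWeightCalculator {

    // Higher weight should mean more spare capacity, so occupancy metrics are inverted.
    public static double weight(double cpuOccupancy,     // 0.0 .. 1.0
                                double memoryOccupancy,  // 0.0 .. 1.0
                                int pendingTasks,
                                int maxPendingTasks) {
        double cpuScore  = 1.0 - cpuOccupancy;
        double memScore  = 1.0 - memoryOccupancy;
        double taskScore = 1.0 - Math.min(1.0, (double) pendingTasks / maxPendingTasks);

        // Weighted summation with per-metric coefficients (assumed values).
        double cpuCoef = 0.5, memCoef = 0.3, taskCoef = 0.2;
        return cpuCoef * cpuScore + memCoef * memScore + taskCoef * taskScore;
    }

    public static void main(String[] args) {
        // A lightly loaded node gets a higher weight than a busy one.
        System.out.println(weight(0.20, 0.30, 5, 100));  // ~0.80
        System.out.println(weight(0.85, 0.70, 80, 100)); // ~0.21
    }
}
```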
In practical applications there are various task types, including real-time tasks, timed tasks, loop tasks, and the like. A real-time task is executed immediately after it is submitted. A timed task is executed at a specified point in time, for example at 3 o'clock today (a single run). A loop task carries a CronExpression and is implemented in a manner similar to quartz.
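For a loop task, the firing times can be derived from a cron expression, for example with the quartz CronExpression class as sketched below; the expression shown (run at minute 0 of every hour) is only an example.

```java
import org.quartz.CronExpression;
import java.util.Date;

public class LoopTaskTriggerExample {
    public static void main(String[] args) throws Exception {
        // Quartz cron format: second minute hour day-of-month month day-of-week.
        // "0 0 * * * ?" means: at second 0 of minute 0 of every hour.
        CronExpression cron = new CronExpression("0 0 * * * ?");
        Date next = cron.getNextValidTimeAfter(new Date());
        System.out.println("next firing time of the loop task: " + next);
    }
}
```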
The target task refers to a task that is currently required to be executed. In the embodiment of the present invention, when the allocation node 12 allocates tasks to the execution nodes 13 according to their weights, the number of target tasks corresponding to each execution node 13 may be determined from the total number of target tasks and the weight corresponding to each execution node 13; the correspondence between each executing node 13 and its target tasks is then set according to that number and recorded in the memory area.
For example, assume that there are five executing nodes 13, nodes 1 through 5, with corresponding weights of 6, 2, 4, 2 and 1 (a total weight of 15). If the number of tasks currently required to be executed is 30, the numbers of target tasks allocated to nodes 1 to 5 are 12, 4, 8, 4 and 2 respectively.
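A minimal Java sketch of this proportional binding step is given below; the floor-then-distribute-remainder rounding and the use of an in-process map to stand in for the preset memory area are assumptions made for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WeightedTaskBinder {

    /** Returns how many of {@code totalTasks} each executing node should receive. */
    public static Map<String, Integer> bind(Map<String, Integer> nodeWeights, int totalTasks) {
        int weightSum = nodeWeights.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Integer> counts = new LinkedHashMap<>();
        int assigned = 0;
        for (Map.Entry<String, Integer> e : nodeWeights.entrySet()) {
            int share = totalTasks * e.getValue() / weightSum; // floor of the proportional share
            counts.put(e.getKey(), share);
            assigned += share;
        }
        // Hand any remainder left by the rounding to the nodes in declaration order.
        for (String node : counts.keySet()) {
            if (assigned >= totalTasks) break;
            counts.merge(node, 1, Integer::sum);
            assigned++;
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("node1", 6);
        weights.put("node2", 2);
        weights.put("node3", 4);
        weights.put("node4", 2);
        weights.put("node5", 1);
        // Reproduces the example above: {node1=12, node2=4, node3=8, node4=4, node5=2}
        System.out.println(bind(weights, 30));
    }
}
```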
The executing node 13 is used for judging whether a task to be executed matching itself exists in the memory area; if such a task exists, the executing node reads the task to be executed from the memory area and executes it.
In the embodiment of the invention, the allocation node 12 only needs to cache the tasks to be executed in the preset memory area, and the execution node 13 automatically reads the tasks to be executed that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
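A minimal sketch of this executor-side pull is shown below, with the memory area modelled as an in-process map from task id to the bound executor id; in the real system the bindings live in the allocation node's memory area and would be fetched over RPC, so the data structure and method names here are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ExecutorPullLoop {
    // taskId -> executorId bindings recorded by the allocation node (stand-in for the memory area).
    static final Map<String, String> MEMORY_AREA = new ConcurrentHashMap<>();

    /** An executing node scans for tasks bound to it, claims them and runs them. */
    static void pullAndRun(String myNodeId) {
        for (Map.Entry<String, String> e : MEMORY_AREA.entrySet()) {
            if (myNodeId.equals(e.getValue()) && MEMORY_AREA.remove(e.getKey(), e.getValue())) {
                System.out.println(myNodeId + " executing task " + e.getKey());
                // ... run the task, then report completion back to the allocation node
            }
        }
    }

    public static void main(String[] args) {
        MEMORY_AREA.put("task-1", "executor-1");
        MEMORY_AREA.put("task-2", "executor-2");
        pullAndRun("executor-1"); // only task-1 is picked up by executor-1
    }
}
```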
In the embodiment of the present invention, in order to let each node know the running state of the other nodes, the submitting node 11, the allocation node 12 and the executing node 13 may each record their node information in redis when started, so that when the running state of a target node changes, a notification message is sent to the nodes subscribing to that target node's state information.
Each node registers its information in redis when it starts, so that other nodes subscribed to it receive a notification message when it comes online. In addition, redis is fast and supports data persistence: data in memory can be saved to disk and loaded again for use after a restart.
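A minimal sketch of this registration-and-subscription pattern using the Jedis client is shown below; the key layout, channel names and connection parameters are assumptions made for the example rather than details given in the patent.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class NodeRegistry {

    /** Called by a node (submitting / allocation / executing) when it starts. */
    public static void register(String nodeId, String role, String address) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.hset("nodes:" + nodeId, "role", role);
            jedis.hset("nodes:" + nodeId, "address", address);
            // Notify every node that subscribed to this node's state changes.
            jedis.publish("node-status:" + nodeId, "ONLINE");
        }
    }

    /** Called by a node that wants to watch another node's running state. */
    public static void watch(String targetNodeId) {
        new Thread(() -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + " -> " + message);
                    }
                }, "node-status:" + targetNodeId); // subscribe blocks, hence the dedicated thread
            }
        }).start();
    }

    public static void main(String[] args) {
        watch("executor-1");
        register("executor-1", "execute", "10.0.0.5:9000");
    }
}
```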
Fig. 2 is a schematic diagram of task processing between nodes according to an embodiment of the present invention. In fig. 2, the left node is an executing node, the middle node is an allocation node and the right node is a submitting node, and each node may register messages with redis and subscribe to the messages of other nodes. In the embodiment of the invention, the redis that manages node messages in the distributed cluster may be called a redis cluster. In practical application, the submitting node submits a task to the allocation node, and the allocation node issues the task to the executing node; here the task is not transmitted to the executing node directly, but the binding relationship between the task and the executing node is recorded in the memory area so that the executing node pulls the task from the memory area. After the executing node pulls the task, the allocation node may feed back a message to the submitting node that the task is complete. Since there are often multiple nodes of each type, in the embodiment of the present invention all submitting nodes may be regarded as a task submitting cluster, all allocation nodes as a task allocation cluster, and all executing nodes as a task executing cluster. The allocation node can call the task creation list to store a task in the database, call the task waiting queue to detect the execution of timed tasks, and call the task execution queue to cache the target tasks to be executed in the memory area and record the executing node corresponding to each task.
To avoid task congestion, the allocation node 12 may set a current limiting (throttling) requirement. The allocation node 12 intercepts the tasks sent by the submitting node 11 according to the set current limiting requirement, and stores the tasks meeting the requirement in the database.
In a specific implementation, the allocation node 12 may distribute tokens to the tasks sent by the submitting node 11, and every task needs to obtain an available token before it is processed. Tokens are added to the bucket at a certain rate determined by the size of the current limit. The bucket has a maximum token limit; when the bucket is full, newly added tokens are discarded or rejected. When a task arrives, it first acquires a token from the token bucket, the remaining service logic is carried out while holding the token, and the token is deleted after the service logic has been processed. The token bucket also has a minimum limit: when the tokens in the bucket fall to the minimum limit, tokens are no longer deleted after tasks are processed, so that a sufficient current limit is always guaranteed.
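The following Java sketch captures this token-bucket behaviour, including the maximum capacity and the minimum reserve; folding acquire-and-delete into a single step and driving the refill by an explicit method call instead of a background timer are simplifications assumed for the example.

```java
public class TokenBucket {
    private final int capacity;   // bucket holds at most this many tokens
    private final int minReserve; // at or below this level, processed tasks no longer consume tokens
    private int tokens;

    public TokenBucket(int capacity, int minReserve) {
        this.capacity = capacity;
        this.minReserve = minReserve;
        this.tokens = capacity;
    }

    /** Refill step, e.g. invoked at a fixed rate matching the size of the current limit. */
    public synchronized void refill(int n) {
        tokens = Math.min(capacity, tokens + n); // tokens beyond the capacity are discarded
    }

    /** A task must obtain a token before it is stored and processed. */
    public synchronized boolean tryAcquire() {
        if (tokens <= 0) {
            return false;           // task is intercepted: no token available
        }
        if (tokens > minReserve) {
            tokens--;               // normal case: the token is deleted after use
        }
        // at or below the minimum reserve the token is kept, so throughput never drops to zero
        return true;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(10, 2);
        int accepted = 0;
        for (int i = 0; i < 15; i++) {
            if (bucket.tryAcquire()) accepted++;
        }
        System.out.println("accepted tasks: " + accepted); // all 15, since the reserve keeps 2 tokens
    }
}
```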
According to the above technical scheme, the allocation node is communicatively connected with the submitting node and the executing node respectively. The submitting node sends the acquired tasks to the allocation node. Considering that the submitting node may send a large number of tasks to the allocation node in a short time, the allocation node may first store the received tasks in a database to ensure orderly processing; when a target task to be executed currently exists in the database, the target task is cached in a preset memory area. The weight reflects the processing performance of an execution node, and the allocation node can bind each target task to its corresponding execution node according to the weight of each execution node. The executing node judges whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it. In this technical scheme, the allocation node can monitor the performance of all executing nodes and allocate tasks, thereby scheduling the tasks of the distributed cluster. Because the allocation node binds tasks to each executing node according to its weight, the resource utilization of each executing node is improved, and the scheduling of distributed cluster tasks is achieved at low resource consumption. In addition, the allocation node only needs to cache the tasks to be executed in the preset memory area, and each executing node automatically reads the tasks that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
Fig. 3 is a flowchart of a task scheduling method according to an embodiment of the present invention, including:
s301: and storing the received task in a database.
In the embodiment of the invention, in order to realize the scheduling of distributed cluster tasks, the nodes in the distributed cluster can be divided into a submitting node, an allocation node and an executing node according to the different functions the nodes need to implement. The submitting node is used for sending the acquired tasks to the allocation node.
Considering that a submitting node may send a large number of tasks to a distributing node in a short time, the distributing node may store the received tasks to a database first in order to ensure an orderly processing of the tasks.
S302: when the target task to be executed currently exists in the database, the target task is cached to a preset memory area.
S303: binding corresponding execution nodes of each target task according to the weight corresponding to each execution node so that the execution nodes can read the tasks to be executed from the memory area and execute the tasks to be executed when detecting that the tasks to be executed matched with the execution nodes exist in the memory area.
The weight reflects the processing performance of the execution node, and the allocation node can bind the corresponding execution node for each target task according to the weight corresponding to each execution node.
The executing node can judge whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it.
In the embodiment of the invention, the allocation node only needs to cache the tasks to be executed in the preset memory area, and each execution node automatically reads the tasks to be executed that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
Optionally, the method further comprises:
and adjusting the weight of each execution node according to the performance index of each execution node.
Optionally, binding the corresponding execution node for each target task according to the weight corresponding to each execution node includes:
determining the number of target tasks corresponding to each execution node according to the total number of the target tasks and the weight corresponding to each execution node;
setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
Optionally, storing the received task in the database includes:
according to the set current limiting requirement, intercepting the task sent by the submitting node, and storing the task meeting the current limiting requirement into a database.
For a description of the features in the embodiment corresponding to fig. 3, reference may be made to the related description of the embodiment corresponding to fig. 1, which is not repeated here.
According to the above technical scheme, the allocation node is communicatively connected with the submitting node and the executing node respectively. The submitting node sends the acquired tasks to the allocation node. Considering that the submitting node may send a large number of tasks to the allocation node in a short time, the allocation node may first store the received tasks in a database to ensure orderly processing; when a target task to be executed currently exists in the database, the target task is cached in a preset memory area. The weight reflects the processing performance of an execution node, and the allocation node can bind each target task to its corresponding execution node according to the weight of each execution node. The executing node judges whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it. In this technical scheme, the allocation node can monitor the performance of all executing nodes and allocate tasks, thereby scheduling the tasks of the distributed cluster. Because the allocation node binds tasks to each executing node according to its weight, the resource utilization of each executing node is improved, and the scheduling of distributed cluster tasks is achieved at low resource consumption. In addition, the allocation node only needs to cache the tasks to be executed in the preset memory area, and each executing node automatically reads the tasks that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
Fig. 4 is a schematic structural diagram of a task scheduling device according to an embodiment of the present invention, which includes a storage unit 41, a cache unit 42, and a binding unit 43;
a storage unit 41 for storing the received task in a database;
the caching unit 42 is configured to cache the target task to a preset memory area when the target task to be executed currently exists in the database;
and the binding unit 43 is configured to bind each target task to its corresponding execution node according to the weight corresponding to each execution node, so that when an execution node detects that a task to be executed matching it exists in the memory area, the execution node reads the task to be executed from the memory area and executes it.
Optionally, the device further comprises an adjusting unit;
and the adjusting unit is used for adjusting the weight of each executing node according to the performance index of each executing node.
Optionally, the binding unit includes a determining subunit and a recording subunit;
the determining subunit is used for determining the number of the target tasks corresponding to each executing node according to the total number of the target tasks and the weight corresponding to each executing node;
the recording subunit is used for setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
Optionally, the storage unit is configured to intercept a task sent by the submitting node according to a set current limiting requirement, and store the task meeting the current limiting requirement to the database.
For a description of the features in the embodiment corresponding to fig. 4, reference may be made to the related description of the embodiment corresponding to fig. 1, which is not repeated here.
According to the above technical scheme, the allocation node is communicatively connected with the submitting node and the executing node respectively. The submitting node sends the acquired tasks to the allocation node. Considering that the submitting node may send a large number of tasks to the allocation node in a short time, the allocation node may first store the received tasks in a database to ensure orderly processing; when a target task to be executed currently exists in the database, the target task is cached in a preset memory area. The weight reflects the processing performance of an execution node, and the allocation node can bind each target task to its corresponding execution node according to the weight of each execution node. The executing node judges whether a task to be executed matching itself exists in the memory area; if such a task exists, it reads the task to be executed from the memory area and executes it. In this technical scheme, the allocation node can monitor the performance of all executing nodes and allocate tasks, thereby scheduling the tasks of the distributed cluster. Because the allocation node binds tasks to each executing node according to its weight, the resource utilization of each executing node is improved, and the scheduling of distributed cluster tasks is achieved at low resource consumption. In addition, the allocation node only needs to cache the tasks to be executed in the preset memory area, and each executing node automatically reads the tasks that match it, which effectively reduces the workload of the allocation node and improves task execution efficiency.
The task scheduling system, method and device provided by the embodiments of the invention have been described in detail above. The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and for the relevant points reference may be made to the description of the method. It should be noted that those skilled in the art can make various improvements and modifications to the invention without departing from its principles, and such improvements and modifications also fall within the protection scope of the claims of the invention.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in Random Access Memory (RAM), memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (9)

1. The task scheduling system is characterized by comprising a submitting node, an allocating node and an executing node; the distribution node is respectively in communication connection with the submitting node and the executing node;
the submitting node is used for sending the acquired task to the distributing node;
the distribution node is used for storing the received tasks to a database; when a target task to be executed currently exists in the database, caching the target task into a preset memory area; adjusting the weight of each execution node according to the performance index of each execution node, and binding the corresponding execution node of each target task according to the weight corresponding to each execution node;
the executing node is used for judging whether a task to be executed matching the executing node exists in the memory area; and if such a task to be executed exists, reading the task to be executed from the memory area and executing it.
2. The task scheduling system according to claim 1, wherein the allocation node is configured to determine, according to the total number of the target tasks and the weight corresponding to each execution node, the number of target tasks corresponding to each execution node; setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
3. The task scheduling system of claim 1, wherein the submitting node, the distributing node, and the executing node are each configured to record node information to redis when a node is started, so that when an operation state of a target node changes, a notification message is sent to a node subscribing to the state information of the target node.
4. The task scheduling system according to claim 1, wherein the allocation node is configured to intercept the task sent by the submitting node according to a set current limiting requirement, and store the task meeting the current limiting requirement in the database.
5. The task scheduling system according to claim 1, wherein the distribution node establishes communication connection with the submitting node and the executing node, respectively, based on RPC protocol.
6. A method for task scheduling, comprising:
storing the received task to a database;
when a target task to be executed currently exists in the database, caching the target task into a preset memory area;
and adjusting the weight of each execution node according to the performance index of each execution node, and binding each target task to its corresponding execution node according to the weight corresponding to each execution node, so that when an execution node detects that a task to be executed matching it exists in the memory area, the execution node reads the task to be executed from the memory area and executes it.
7. The task scheduling method according to claim 6, wherein binding each of the target tasks to its corresponding execution node according to the weight corresponding to each of the execution nodes includes:
determining the number of target tasks corresponding to each execution node according to the total number of the target tasks and the weight corresponding to each execution node;
setting the corresponding relation between each executing node and the target task according to the number of the target tasks corresponding to each executing node, and recording the corresponding relation to the memory area.
8. The task scheduling method of claim 6, wherein storing the received task in a database comprises:
according to the set current limiting requirement, intercepting the task sent by the submitting node, and storing the task meeting the current limiting requirement into a database.
9. The task scheduling device is characterized by comprising a storage unit, a cache unit and a binding unit;
the storage unit is used for storing the received tasks into a database;
the caching unit is used for caching the target task to a preset memory area when the target task to be executed currently exists in the database;
the binding unit is configured to adjust a weight of each execution node according to a performance index of each execution node, and to bind each target task to its corresponding execution node according to the weight corresponding to each execution node, so that when an execution node detects that a task to be executed matching it exists in the memory area, the execution node reads the task to be executed from the memory area and executes it.
CN202110224021.1A 2021-03-01 2021-03-01 Task scheduling system, method and device Active CN112965796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110224021.1A CN112965796B (en) 2021-03-01 2021-03-01 Task scheduling system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110224021.1A CN112965796B (en) 2021-03-01 2021-03-01 Task scheduling system, method and device

Publications (2)

Publication Number Publication Date
CN112965796A CN112965796A (en) 2021-06-15
CN112965796B true CN112965796B (en) 2024-04-09

Family

ID=76276048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224021.1A Active CN112965796B (en) 2021-03-01 2021-03-01 Task scheduling system, method and device

Country Status (1)

Country Link
CN (1) CN112965796B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113626188B (en) * 2021-08-02 2024-09-20 北京金山云网络技术有限公司 Task pushing method, device, computer equipment and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468450A (en) * 2015-12-29 2016-04-06 华为技术有限公司 Task scheduling method and system
CN106909451A (en) * 2017-02-28 2017-06-30 郑州云海信息技术有限公司 A kind of distributed task dispatching system and method
CN107515784A (en) * 2016-06-16 2017-12-26 阿里巴巴集团控股有限公司 A kind of method and apparatus of computing resource in a distributed system
CN108829504A (en) * 2018-06-28 2018-11-16 泰康保险集团股份有限公司 A kind of method for scheduling task, device, medium and electronic equipment
CN110633135A (en) * 2019-08-15 2019-12-31 中国平安财产保险股份有限公司 Asynchronous task allocation method and device, computer equipment and storage medium
CN110750341A (en) * 2018-07-24 2020-02-04 深圳市优必选科技有限公司 Task scheduling method, device, system, terminal equipment and storage medium
CN110908806A (en) * 2019-12-02 2020-03-24 北京蜜莱坞网络科技有限公司 Mixed flow task management method, device, equipment and storage medium
CN111090519A (en) * 2019-12-05 2020-05-01 东软集团股份有限公司 Task execution method and device, storage medium and electronic equipment
CN111221632A (en) * 2019-10-15 2020-06-02 中国平安财产保险股份有限公司 Distributed parallel task scheduling method and device, computer equipment and storage medium
CN111813513A (en) * 2020-06-24 2020-10-23 中国平安人寿保险股份有限公司 Real-time task scheduling method, device, equipment and medium based on distribution
CN111913793A (en) * 2020-07-31 2020-11-10 同盾控股有限公司 Distributed task scheduling method, device, node equipment and system
CN112000445A (en) * 2020-07-08 2020-11-27 苏宁云计算有限公司 Distributed task scheduling method and system
CN112035235A (en) * 2020-09-02 2020-12-04 中国平安人寿保险股份有限公司 Task scheduling method, system, device and storage medium
CN112068959A (en) * 2020-09-04 2020-12-11 北京明略昭辉科技有限公司 Self-adaptive task scheduling method and system and retrieval method comprising method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106445676B (en) * 2015-08-05 2019-10-22 杭州海康威视系统技术有限公司 A kind of method for allocating tasks and task allocation apparatus that distributed data calculates
US10877801B2 (en) * 2018-09-28 2020-12-29 Atlassian Pty Ltd. Systems and methods for scheduling tasks

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468450A (en) * 2015-12-29 2016-04-06 华为技术有限公司 Task scheduling method and system
CN107515784A (en) * 2016-06-16 2017-12-26 阿里巴巴集团控股有限公司 A kind of method and apparatus of computing resource in a distributed system
CN106909451A (en) * 2017-02-28 2017-06-30 郑州云海信息技术有限公司 A kind of distributed task dispatching system and method
CN108829504A (en) * 2018-06-28 2018-11-16 泰康保险集团股份有限公司 A kind of method for scheduling task, device, medium and electronic equipment
CN110750341A (en) * 2018-07-24 2020-02-04 深圳市优必选科技有限公司 Task scheduling method, device, system, terminal equipment and storage medium
CN110633135A (en) * 2019-08-15 2019-12-31 中国平安财产保险股份有限公司 Asynchronous task allocation method and device, computer equipment and storage medium
CN111221632A (en) * 2019-10-15 2020-06-02 中国平安财产保险股份有限公司 Distributed parallel task scheduling method and device, computer equipment and storage medium
CN110908806A (en) * 2019-12-02 2020-03-24 北京蜜莱坞网络科技有限公司 Mixed flow task management method, device, equipment and storage medium
CN111090519A (en) * 2019-12-05 2020-05-01 东软集团股份有限公司 Task execution method and device, storage medium and electronic equipment
CN111813513A (en) * 2020-06-24 2020-10-23 中国平安人寿保险股份有限公司 Real-time task scheduling method, device, equipment and medium based on distribution
CN112000445A (en) * 2020-07-08 2020-11-27 苏宁云计算有限公司 Distributed task scheduling method and system
CN111913793A (en) * 2020-07-31 2020-11-10 同盾控股有限公司 Distributed task scheduling method, device, node equipment and system
CN112035235A (en) * 2020-09-02 2020-12-04 中国平安人寿保险股份有限公司 Task scheduling method, system, device and storage medium
CN112068959A (en) * 2020-09-04 2020-12-11 北京明略昭辉科技有限公司 Self-adaptive task scheduling method and system and retrieval method comprising method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Distributed scheduling and data sharing in late-binding overlays";Antonio Delgado Peris等;《2014 International Conference on High Performance Computing & Simulation (HPCS)》;20141231;全文 *
内存计算框架局部数据优先拉取策略;卞琛;于炯;修位蓉;钱育蓉;英昌甜;廖彬;;计算机研究与发展(第04期);全文 *
异构Hadoop集群下的负载自适应反馈调度策略;潘佳艺;王芳;杨静怡;谭支鹏;;计算机工程与科学(第03期);全文 *

Also Published As

Publication number Publication date
CN112965796A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
CN107241281B (en) Data processing method and device
CN109218355A (en) Load equalizing engine, client, distributed computing system and load-balancing method
US20180349178A1 (en) A method and system for scalable job processing
JPH10500509A (en) Event distribution apparatus and method in operating system
AU2017331066A1 (en) Systems and methods for providing messages to multiple subscribers
CN108900626B (en) Data storage method, device and system in cloud environment
CN105007337A (en) Cluster system load balancing method and system thereof
WO2024016596A1 (en) Container cluster scheduling method and apparatus, device, and storage medium
CN112565774A (en) Video transcoding resource scheduling method and device
US20150195229A1 (en) Listening for externally initiated requests
CN109117279B (en) Electronic device, method for limiting inter-process communication thereof and storage medium
CN111586140A (en) Data interaction method and server
CN114928579A (en) Data processing method and device, computer equipment and storage medium
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
CN112965796B (en) Task scheduling system, method and device
US10348814B1 (en) Efficient storage reclamation for system components managing storage
US8180823B2 (en) Method of routing messages to multiple consumers
CN115412500B (en) Asynchronous communication method, system, medium and equipment supporting load balancing strategy
US10630602B1 (en) Resource allocation using restore credits
CN110955461A (en) Processing method, device and system of computing task, server and storage medium
CN114327862B (en) Memory allocation method and device, electronic equipment and storage medium
CN111724262B (en) Subsequent package query system of application server and working method thereof
JP2019526860A (en) Scalable real-time messaging system
CN114489978A (en) Resource scheduling method, device, equipment and storage medium
CN114553959A (en) Situation awareness-based cloud native service grid configuration on-demand issuing method and application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant