CN111240822B - Task scheduling method, device, system and storage medium - Google Patents
- Publication number
- CN111240822B, CN202010042580.6A
- Authority
- CN
- China
- Prior art keywords
- scheduling
- task
- target
- range
- scheduling end
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3017—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is implementing multitasking
Abstract
The application discloses a task scheduling method, device, system and storage medium, belonging to the technical field of computers. In the application, the state monitoring device monitors the state of each scheduling end in the scheduling end cluster; when a scheduling end's task processing capability changes because its state has changed, each scheduling end in a normal working state re-determines its task scheduling range according to the state change message sent by the state monitoring device, so as to schedule the tasks to be executed by the task scheduling system. With the method provided by the application, if a scheduling end fails or goes offline, the tasks it was responsible for scheduling can continue to be scheduled by the other scheduling ends, so reliability is higher. In addition, the scheduling end cluster can be dynamically expanded or contracted, that is, scalability is high; each scheduling end does not need a very high configuration, which reduces resource waste.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task scheduling method, device, system, and storage medium.
Background
Currently, users can package business logic into tasks, and send relevant data of the tasks to a task scheduling system through a client, wherein the task scheduling system is responsible for configuration, control and management of the tasks, monitoring of task execution states and the like. Generally, the task scheduling system may include a data server, a scheduling end and an executing end, where relevant data of each task uploaded by the client is stored in the data server, the scheduling end may acquire relevant data of the task from the data server, send the acquired relevant data of the task to the executing end, and the executing end executes the corresponding task according to the relevant data of the task.
In the related art, the scheduling end in a task scheduling system is deployed in a master/standby scheme, that is, a master scheduling end and a standby scheduling end coexist. Under normal conditions, the master scheduling end is in a working state and the standby scheduling end is in a dormant state; when the master scheduling end becomes abnormal, the standby scheduling end is switched to the working state. That is, under normal conditions, the master scheduling end polls the data server at regular intervals, and when it detects that a task meets its execution condition, it acquires the relevant data of the task and sends it to the execution end to execute the task. When the master scheduling end is abnormal, the standby scheduling end takes over its work.
However, when both the master scheduling end and the standby scheduling end are abnormal, the whole task scheduling system breaks down, that is, reliability is low. Moreover, when the task scale is large, the processing capability of the master scheduling end is limited and cannot satisfy large-scale task scheduling, that is, scalability is poor. In addition, to maximize the processing capability of the scheduling end, a high-performance machine with a very high configuration is generally used; when the task scale is small, the advantage of the high configuration cannot be exploited, which wastes resources, that is, resource utilization is low.
Disclosure of Invention
The application provides a task scheduling method, device, system and storage medium, which can improve the reliability and scalability of a task scheduling system as well as its resource utilization. The technical scheme is as follows:
in a first aspect, a task scheduling method is provided, the method including:
the target scheduling end receives a state change message sent by the state monitoring device, wherein the state change message is used to indicate information about a scheduling end whose task processing capability has changed because its state has changed, and the target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster; and the target scheduling end re-determines its task scheduling range according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster, so as to schedule tasks to be executed by the task scheduling system.
In the application, the task scheduling system comprises a scheduling end cluster and a state monitoring device, where the scheduling end cluster comprises a plurality of scheduling ends and the state monitoring device is used to monitor the working state of each scheduling end in the scheduling end cluster. When a scheduling end whose task processing capability has changed because its state has changed is detected in the scheduling end cluster, a state change message can be sent to the target scheduling end to instruct it to re-determine its task scheduling range, so that the scheduling ends in a normal working state in the scheduling end cluster can still schedule the tasks to be executed by the task scheduling system, thereby ensuring the reliability of the task scheduling system.
For any scheduling end in the scheduling end cluster, the scheduling end can register with the state monitoring device after power-on, that is, send an online request to the state monitoring device. Thus, the state monitoring device can determine that a newly online scheduling end is currently monitored.
The state monitoring device may then establish a communication connection with the scheduling end and exchange heartbeat messages with it over the communication connection. If the state monitoring device and the scheduling end can exchange heartbeat messages normally, the state monitoring device determines that the scheduling end is in a normal working state; if they cannot, the state monitoring device determines that the scheduling end has failed.
In addition, for any scheduling end in the scheduling end cluster, if for some reason it needs to go offline, the scheduling end may send an offline request to the state monitoring device. Thus, the state monitoring device can determine that a newly offline scheduling end is currently monitored.
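For illustration only, the following Python sketch shows the kind of bookkeeping the state monitoring device may perform for online requests, offline requests and heartbeat timeouts; the class name, the message format of the returned dictionaries and the timeout value are assumptions of this sketch, not details specified by the application.

```python
import time

class StateMonitor:
    """Minimal sketch of the state monitoring device's bookkeeping."""

    HEARTBEAT_TIMEOUT = 10.0  # assumed: seconds without a heartbeat before a scheduling end counts as failed

    def __init__(self):
        self.last_heartbeat = {}  # scheduling end identifier -> time of last heartbeat

    def on_online_request(self, scheduler_id):
        # A newly powered-on scheduling end registers itself (goes online).
        self.last_heartbeat[scheduler_id] = time.time()
        return {"event": "online", "scheduler": scheduler_id}

    def on_offline_request(self, scheduler_id):
        # A scheduling end that needs to go offline deregisters itself.
        self.last_heartbeat.pop(scheduler_id, None)
        return {"event": "offline", "scheduler": scheduler_id}

    def on_heartbeat(self, scheduler_id):
        # Normal heartbeat exchange keeps the scheduling end marked as working.
        self.last_heartbeat[scheduler_id] = time.time()

    def detect_failures(self):
        # Scheduling ends whose heartbeats have stopped are treated as failed.
        now = time.time()
        failed = [sid for sid, ts in self.last_heartbeat.items()
                  if now - ts > self.HEARTBEAT_TIMEOUT]
        for sid in failed:
            del self.last_heartbeat[sid]
        return [{"event": "failed", "scheduler": sid} for sid in failed]
```

The dictionaries returned here stand in for the state change messages that the state monitoring device would broadcast, as described next.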
In the application, if the state monitoring equipment monitors that the scheduling end with changed state exists in the scheduling end cluster, a state change message can be generated, and the generated state change message is sent to the scheduling end in a normal working state.
Therefore, in the application, whenever a scheduling end in the scheduling end cluster fails, goes offline or comes online, the state monitoring device can send a state change message in real time to the scheduling ends in a normal working state, promptly informing them that the current scheduling end cluster contains a scheduling end whose task processing capability has changed. The scheduling end whose task processing capability has changed because its state has changed includes a currently failed, currently offline or currently online scheduling end.
When a failed scheduling end exists in the scheduling end cluster, "currently" refers to the moment when that scheduling end changes from a normal working state to a failed state; when an offline scheduling end exists in the scheduling end cluster, "currently" refers to the moment when that scheduling end changes from a normal working state to offline; when an online scheduling end exists in the scheduling end cluster, "currently" refers to the moment when that scheduling end changes from a dormant (standby), failed or powered-off state to online. Alternatively, because the communication delay between the state monitoring device and the scheduling end cluster is very low and can be ignored, "currently" may also refer to the moment when the state monitoring device detects the scheduling end whose state has changed, the moment when the state change message is generated, or the moment when the state change message is sent.
From the foregoing, the scheduling end whose task processing capability has changed because its state has changed includes a currently failed, currently offline or currently online scheduling end; that is, the state change message may be sent when the state monitoring device detects that a failed, offline or online scheduling end exists in the scheduling end cluster. When a failed or offline scheduling end exists in the scheduling end cluster, the number of scheduling ends currently in a normal working state decreases; when a newly online scheduling end exists, that number increases. The following describes how, in these two cases, the target scheduling end re-determines its task scheduling range according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster.
Optionally, the scheduling end whose task processing capability has changed because its state has changed includes a currently failed or currently offline scheduling end, and the state change message carries the identifier of the currently failed or currently offline scheduling end. In this case, the target scheduling end re-determining its task scheduling range according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster includes: the target scheduling end determines, among the stored scheduling end identifiers, the identifiers other than the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster; and the target scheduling end re-determines its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends.
In the application, whenever the state monitoring device detects a scheduling end whose state has changed, it sends the identifier of that scheduling end to the scheduling ends in a normal working state in the scheduling end cluster, and for a newly online scheduling end it synchronizes the identifiers of the other scheduling ends in a normal working state to it. Therefore, each scheduling end in the scheduling end cluster stores the identifiers of the scheduling ends in a normal working state. Thus, when the target scheduling end receives a state change message carrying the identifier of a currently failed or currently offline scheduling end, it can determine the remaining stored identifiers, excluding that of the currently failed or currently offline scheduling end, as the identifiers of the one or more scheduling ends still in a normal working state in the scheduling end cluster. Then, the target scheduling end can re-determine its task scheduling range according to the identifiers of the one or more scheduling ends and their total number.
Optionally, the scheduling end that changes the task processing capability due to the change of the state includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end. In this case, the target scheduling end redetermines its task scheduling range according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster, including: the target dispatching end determines the identification of the dispatching end which is on line at present and the stored identification of the dispatching end as the identification of one or more dispatching ends in a normal working state in the dispatching end cluster; and the target scheduling end redetermines the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
Based on the above description, each scheduling end in the scheduling end cluster stores the identifier of the scheduling end in the normal working state, if the scheduling end in the online state exists currently, the target scheduling end can determine the identifier of the scheduling end carried by the state change message and the identifier of the scheduling end stored currently as the identifier of one or more scheduling ends in the normal working state in the scheduling end cluster, that is, in this case, the one or more scheduling ends include the scheduling end in the online state currently. Then, the target scheduling end can redetermine the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
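For illustration only, a minimal Python sketch of how a target scheduling end might update its stored identifier set from a state change message in both cases; the message fields follow the assumptions of the earlier sketch.

```python
def update_live_schedulers(stored_ids, message):
    """Recompute the identifiers of scheduling ends still in a normal working state."""
    live = set(stored_ids)
    if message["event"] in ("failed", "offline"):
        # The failed or offline scheduling end is removed from the working set.
        live.discard(message["scheduler"])
    elif message["event"] == "online":
        # A newly online scheduling end joins the working set.
        live.add(message["scheduler"])
    return sorted(live)

# Example: scheduling end "s2" fails, leaving "s1" and "s3" to re-divide the task range.
print(update_live_schedulers(["s1", "s2", "s3"], {"event": "failed", "scheduler": "s2"}))
```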
It should be noted that the task scheduling ranges re-determined by the scheduling ends in a normal working state in the scheduling end cluster do not overlap and together make up the target task range, which is a preset range representing the maximum range of tasks that the task scheduling system can schedule and execute.
In the application, after determining the identifiers of one or more scheduling ends in a normal working state, the target scheduling end can redetermine the task scheduling range of the target scheduling end according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends.
In a first implementation manner, the target scheduling end re-determining its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends includes: the target scheduling end determines its ordering position among the one or more scheduling ends according to the identifiers of the one or more scheduling ends; the target scheduling end determines a number range from the hash ring according to its ordering position among the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by the task scheduling system; and the target scheduling end determines the task range corresponding to the number range as its task scheduling range.
Optionally, the target scheduling end determines, according to the identifiers of the one or more scheduling ends, a ranking position of the target scheduling end in the one or more scheduling ends, including: the target scheduling end can determine the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends, and then the target scheduling end can determine the ordering position of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
In the application, each scheduling end can store the corresponding relation between the identifiers of the scheduling ends in the normal working state and the online time, the target scheduling end can determine the online time of one or more scheduling ends according to the identifiers of the one or more scheduling ends and the corresponding relation, determine the online sequence of the one or more scheduling ends according to the online time of the one or more scheduling ends, and determine the sorting position of the target scheduling end in the one or more scheduling ends, wherein the sorting position can be consistent with or opposite to the online sequence.
Optionally, since the identifier of a scheduling end may be a string of English letters, digits or Chinese characters, or a string combining several types of characters, the target scheduling end determining its ordering position among the one or more scheduling ends according to the identifiers of the one or more scheduling ends includes: the target scheduling end sorts the identifiers of the one or more scheduling ends according to a string-ordering rule to obtain an identifier ordering of the one or more scheduling ends; and the target scheduling end determines its ordering position among the one or more scheduling ends according to the identifier ordering of the one or more scheduling ends. The ordering position may be consistent with or opposite to the identifier ordering, which is not limited in the present application.
After determining its ordering position among the one or more scheduling ends, the target scheduling end can divide the maximum value of the target task range by the total number of the one or more scheduling ends and round the result to obtain an integer. The target scheduling end can then divide the target task range into a number of ranges equal to the total number of the one or more scheduling ends according to the obtained integer and the initial position on the hash ring, determine from the divided ranges the range corresponding to its own ordering position among the one or more scheduling ends, that is, determine the corresponding number range on the hash ring, and determine the task range corresponding to that number range as its own task scheduling range.
In this implementation manner, the target task range can be divided evenly among the one or more scheduling ends; the operation is simple and the distribution is efficient. Moreover, when the configurations of the scheduling ends are comparable, the load is balanced among the scheduling ends in a normal working state.
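For illustration only, a Python sketch of this even division, assuming the identifiers sort as plain strings and the hash ring's number space is the half-open interval [0, max_number); the application rounds the quotient, while the sketch uses integer division and lets the last scheduling end absorb the remainder.

```python
def even_number_range(live_ids, my_id, max_number):
    """Evenly divide the hash ring's number space among the live scheduling ends."""
    ordered = sorted(live_ids)        # string-ordering rule applied to the identifiers
    position = ordered.index(my_id)   # this scheduling end's ordering position
    total = len(ordered)
    step = max_number // total        # each scheduling end's share of the target task range
    start = position * step
    # The last scheduling end absorbs the remainder so the ranges cover the whole ring.
    end = max_number if position == total - 1 else start + step
    return start, end

# Example: three live scheduling ends share a ring of 3000 numbers.
print(even_number_range(["s1", "s2", "s3"], "s2", 3000))  # (1000, 2000)
```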
In a second implementation manner, the target scheduling end re-determines the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends, including: the target scheduling end determines configuration information of the one or more scheduling ends and ordering positions of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends; the target scheduling end determines a number range from the hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions of the target scheduling end on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system; the target scheduling end determines the task range corresponding to the number range as the task scheduling range of the target scheduling end.
It should be noted that, in this implementation manner, the implementation manner of determining, by the target scheduling end, the ordering positions of the target scheduling end in the one or more scheduling ends according to the identifiers of the one or more scheduling ends may refer to the foregoing related description, which is not repeated herein.
In the application, each scheduling end in the scheduling end cluster can store configuration information of each scheduling end, and the configuration information of the scheduling end can be used for representing scheduling capability, namely load capability, of the corresponding scheduling end. After determining the identification of the one or more scheduling ends, the target scheduling end may determine the configuration information of the one or more scheduling ends from the stored configuration information of each scheduling end.
And then, the target dispatching end can determine the capacity ratio of each of the one or more dispatching ends according to the configuration information of the one or more dispatching ends, wherein the sum of the capacity ratios is equal to 1. Then, the target scheduling end may divide the target task range into a plurality of corresponding ranges according to the capability ratio of each scheduling end, the maximum value of the target task range and the initial position on the hash ring, and the target scheduling end may determine its own corresponding range from the divided plurality of ranges according to the sorting positions of itself in the one or more scheduling ends, that is, determine the corresponding number range from the hash ring, and determine the task range corresponding to the number range as its own task scheduling range.
In the implementation manner, the number range determined by the target scheduling end is consistent with the scheduling capability represented by the configuration information of the target scheduling end, so that the scheduling ends in the normal working state in the scheduling end cluster are ensured to achieve load balancing as much as possible, namely, the scheduling ends in the normal working state can divide the target task range according to the configuration information, so that the load balancing is ensured.
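For illustration only, a Python sketch of the capability-weighted division, assuming each scheduling end's capability ratio is derived from a numeric capacity value taken from its configuration information; the capacity values, the dictionary representation and the rounding scheme are assumptions of the sketch.

```python
def weighted_number_range(live_ids, my_id, capacities, max_number):
    """Divide the hash ring's number space in proportion to each scheduling end's capability ratio."""
    ordered = sorted(live_ids)
    total_capacity = sum(capacities[sid] for sid in ordered)
    start = 0
    for sid in ordered:
        share = round(max_number * capacities[sid] / total_capacity)
        # The last scheduling end absorbs any rounding drift so the ranges cover the whole ring.
        end = max_number if sid == ordered[-1] else min(start + share, max_number)
        if sid == my_id:
            return start, end
        start = end
    raise ValueError("my_id is not among the live scheduling ends")

# Example: s2 has twice the capability of s1 and s3, so it is assigned half of the ring.
caps = {"s1": 1, "s2": 2, "s3": 1}
print(weighted_number_range(["s1", "s2", "s3"], "s2", caps, 4000))  # (1000, 3000)
```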
In the application, the task scheduling system can also comprise an execution end cluster and a data server, wherein the execution end cluster comprises a plurality of execution ends. After the target scheduling end re-determines the task scheduling range of the target scheduling end, the target scheduling end can schedule tasks which the task scheduling system needs to execute. That is, the target scheduling end can poll the corresponding task from the data server according to the task scheduling range of the target scheduling end, and when one or more tasks are polled to meet the execution condition, the relevant data of the one or more tasks are sent to the execution end in the execution end cluster.
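For illustration only, a Python sketch of one polling round, with plain dictionaries standing in for the task records held by the data server; the field names, the trigger-time execution condition and the random choice of execution end are assumptions of the sketch, not interfaces defined by the application.

```python
import random
import time

def poll_and_dispatch(tasks, executors, my_range):
    """One polling round of the target scheduling end over its own task scheduling range."""
    start, end = my_range
    due = [t for t in tasks
           if start <= t["number"] < end              # the task's number falls in this scheduling end's range
           and t["next_run"] <= time.time()]          # execution condition: its trigger time has arrived
    for task in due:
        executor = random.choice(executors)           # a real scheduler would also weigh executor load and heartbeats
        print(f"dispatching task {task['number']} to {executor}")
    return due

tasks = [{"number": 1200, "next_run": 0}, {"number": 2500, "next_run": 0}]
poll_and_dispatch(tasks, executors=["executor-1", "executor-2"], my_range=(1000, 2000))
```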
Optionally, the number corresponding to the target task added on the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing strategy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in a scheduling end cluster, and the target task is any task required to be executed by a task scheduling system; the load balancing strategy comprises the following steps: carrying out hash operation on an initial number according to the total number of scheduling ends in a normal working state in a scheduling end cluster to obtain a hash value, determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for a target task according to a task uploading sequence; alternatively, the load balancing policy includes: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
In the present application, the hash ring may be stored in a data server, and the data server may add a number corresponding to the task to the hash ring, and the related implementation manner may refer to the method provided in the second aspect described below.
In a second aspect, a task scheduling method is provided, and the method includes:
the data server receives a task submission request sent by a client, wherein the task submission request carries relevant data of a target task; the data server can store the related data of the target task and add the number corresponding to the target task on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to the tasks in the data server one by one, the maximum range of the numbers on the hash ring is consistent with the target task range, and the target task range refers to the maximum range of the tasks which can be scheduled and executed by the task scheduling system.
In the application, the hash ring can be stored in the data server, the task in the data server is the task which needs to be executed by the task scheduling system, and the maximum range of the number on the hash ring is consistent with the range of the target task which can be scheduled and executed by the task scheduling system, so that the scheduling end in the normal working state in the scheduling end cluster can be ensured to schedule each task in the data server.
In the application, the data server can number the task uploaded by the user through the client, and the task number is added on the hash ring. It should be noted that, there may be multiple implementations of determining the number corresponding to the target task by the data server and adding the number to the hash ring, and three implementations will be described below.
In a first implementation manner, the adding, by the data server, a number corresponding to the target task on the hash ring includes: the data server generates a random number, wherein the random number is in the range of the target task; the data server determines the position corresponding to the random number from the hash ring; the data server may add the random number as a number at a corresponding location on the hash ring.
That is, in this implementation, the data server may add the number corresponding to the task to the hash ring in a manner that randomly generates the number.
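For illustration only, a Python sketch of this random numbering, with a dictionary standing in for the hash ring; the collision check is an assumption of the sketch.

```python
import random

def add_task_random(hash_ring, task_id, max_number):
    """Place a task at a uniformly random number within the target task range."""
    number = random.randrange(max_number)   # random number within [0, max_number)
    while number in hash_ring:              # avoid reusing a number already on the ring
        number = random.randrange(max_number)
    hash_ring[number] = task_id             # the number on the ring now corresponds to this task
    return number

ring = {}
print(add_task_random(ring, "task-A", 3000))
```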
In a second implementation manner, the adding, by the data server, a number corresponding to the target task on the hash ring includes: the data server generates an initial number, wherein the initial number refers to the number generated by the data server for the target task according to the uploading sequence of each task; the data server carries out hash operation on the initial number according to the total number of the dispatching terminals in the normal working state in the dispatching terminal cluster to obtain a hash value; the data server determines a task scheduling range from one or more task scheduling ranges according to the hash value, wherein the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in a scheduling end cluster and serve as target task scheduling ranges; the data server generates a random number according to the target task scheduling range, wherein the random number is positioned in the target task scheduling range; the data server determines the location on the hash ring corresponding to the random number, and may add the random number as a number at the corresponding location on the hash ring.
That is, in this implementation manner, according to the first load balancing policy provided in the first aspect, the data server may add the number corresponding to the target task to the hash ring, that is, the data server may perform a hash algorithm, and add the number corresponding to the target task to the hash ring.
It should be noted that, as the number of tasks uploaded to the data server increases, the initial number may exceed the target task range. The hash operation may be that the initial number of the target task is divided by the total number of the scheduling ends in the normal working state, and the obtained remainder is the hash value.
Since the hash value obtained by performing the hash operation on the initial number is related to the total number of scheduling ends in a normal working state in the scheduling end cluster, and may also be 0, in the embodiment of the present application the data server may add 1 to the hash value to obtain a value, and select, from the one or more task scheduling ranges, the task scheduling range whose ordinal position equals that value as the target task scheduling range.
After determining the target task scheduling scope, the data server may generate a random number within the target task scheduling scope, that is, the generated random number is located within the target task scheduling scope. The data server may then determine the location corresponding to the random number from the stored hash ring and add the random number as a number at the location of the hash ring. That is, the data server may use the random number as the number corresponding to the target task.
In the second implementation manner, the data server may determine a target task scheduling range according to the hash operation, and then generate a random number in the target task scheduling range as the number corresponding to the target task, so that load balancing between scheduling ends in a normal working state can be achieved as much as possible.
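For illustration only, a Python sketch of this numbering scheme, assuming the one or more task scheduling ranges are known to the data server as a list of (start, end) pairs ordered by scheduling end; selecting ranges[hash_value] with 0-based indexing is the same as selecting the (hash value + 1)-th range.

```python
import random

def add_task_by_hash(hash_ring, initial_number, ranges):
    """Hash the initial (upload-order) number to pick a target task scheduling range."""
    total = len(ranges)                            # one range per scheduling end in a normal working state
    hash_value = initial_number % total            # remainder of the division is the hash value
    target_start, target_end = ranges[hash_value]  # the (hash value + 1)-th range, 0-based index
    number = random.randrange(target_start, target_end)  # random number inside the target range
    hash_ring[number] = initial_number
    return number

# Example: three live scheduling ends whose ranges cover a ring of 3000 numbers.
ring = {}
print(add_task_by_hash(ring, initial_number=7, ranges=[(0, 1000), (1000, 2000), (2000, 3000)]))
```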
In a third implementation manner, the adding, by the data server, a number corresponding to the target task on the hash ring includes: the data server determines a task scheduling range from one or more task scheduling ranges as a target task scheduling range according to load conditions and/or configuration information of a scheduling end in a normal working state in a plurality of scheduling ends, wherein the one or more task scheduling ranges are task scheduling ranges of the scheduling end in the normal working state in a scheduling end cluster; the data server generates a random number according to the target task scheduling range, wherein the random number is positioned in the target task scheduling range; the data server determines the position corresponding to the random number from the hash ring, and adds the random number as a number at the position of the hash ring.
That is, in this implementation manner, the data server may add the number corresponding to the target task to the hash ring according to the second load balancing policy provided in the first aspect.
Because both the load condition and the configuration information of a scheduling end affect how tasks are scheduled to it, in some embodiments the data server may determine, from the scheduling ends in a normal working state and according to their load conditions and/or configuration information, a scheduling end with a smaller current load and/or a higher configuration, and use the task scheduling range of that scheduling end as the target task scheduling range.
The load condition of the scheduling end may be represented by the number of the numbers of the tasks added in the task scheduling range of the scheduling end on the hash ring, or may be represented by the ratio of the number of the tasks currently added to the number of the tasks that can be added in the corresponding task scheduling range. The configuration information of the scheduling end may represent the load capacity of the scheduling end.
In the application, the data server can determine the target task scheduling range according to the load condition of the scheduling end, namely, the data server can take the task scheduling range of the scheduling end with the least load in the load condition as the target task scheduling range. Or, the data server may determine the target task scheduling range according to the configuration information of the scheduling end, that is, the data server may use the task scheduling range of the scheduling end configured highest in the configuration information as the target task scheduling range.
Or, the data server may determine the target task scheduling range according to the load condition and the configuration information of the scheduling end. In this case, the data server may set a capability value for each scheduling end according to the configuration information of the scheduling ends, where the higher the configuration is, the larger the capability value is set, and the lower the configuration is, the smaller the capability value is set. In this way, the data server can perform weighting operation on the load condition of the scheduling end and the capacity value corresponding to the configuration information to obtain a corresponding weighting value, and the task scheduling range of the scheduling end with the largest weighting value is taken as the target task scheduling range.
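For illustration only, a Python sketch of picking the target task scheduling range from load and capability information; the scoring formula below is an illustrative stand-in for the weighting operation described above, not a formula given by the application, and the dictionary representations are assumptions of the sketch.

```python
import random

def add_task_by_load(hash_ring, task_id, scheduler_ranges, loads, capabilities):
    """Place a task in the range of the scheduling end with the best load/capability score."""
    def score(sid):
        # Assumed weighting: higher capability and lower current load give a larger score.
        return capabilities[sid] / (1 + loads[sid])

    best = max(scheduler_ranges, key=score)   # scheduling end whose range becomes the target range
    start, end = scheduler_ranges[best]
    number = random.randrange(start, end)
    hash_ring[number] = task_id
    loads[best] += 1                          # the new task now counts toward that scheduling end's load
    return best, number

ring = {}
ranges = {"s1": (0, 1000), "s2": (1000, 2000), "s3": (2000, 3000)}
print(add_task_by_load(ring, "task-B", ranges,
                       loads={"s1": 5, "s2": 1, "s3": 3},
                       capabilities={"s1": 1, "s2": 1, "s3": 2}))
```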
In a third aspect, a task scheduling device is provided, where the task scheduling device has a function of implementing the task scheduling method behavior in the first aspect. The task scheduling device comprises at least one module, and the at least one module is used for realizing the task scheduling method provided by the first aspect.
That is, the present application provides a task scheduling device, which is applied to a target scheduling end, where the target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster, and the device includes:
The receiving module is used for receiving a state change message sent by the state monitoring equipment, wherein the state change message can be used for indicating information of a scheduling end with changed task processing capacity caused by state change;
and the determining module is used for redetermining the task scheduling range of the target scheduling end according to the state change message and the number of the scheduling ends in the normal working state in the scheduling end cluster so as to schedule the tasks to be executed by the task scheduling system.
Optionally, the scheduling end with the task processing capability changed due to the status change includes a currently failed or currently offline scheduling end, where the status change message carries an identifier of the currently failed or currently offline scheduling end;
the determining module comprises:
- the first determining submodule is used for determining, among the stored scheduling end identifiers, the identifiers other than the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the second determining submodule is used for determining the task scheduling range of the target scheduling end again according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
Optionally, the scheduling end with the changed task processing capability caused by the changed state includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end;
the first determining submodule is used for determining the identification of the scheduling end which is on line currently and the stored identification of the scheduling end as the identification of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the second determining submodule is used for determining the task scheduling range of the target scheduling end again according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
Optionally, the second determining submodule is specifically configured to:
determining the ordering position of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
determining a number range from the hash ring according to the ordering position of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system;
And determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
Optionally, the second determining submodule is specifically configured to:
determining configuration information of the one or more scheduling terminals and ordering positions of the target scheduling terminal in the one or more scheduling terminals according to the identification of the one or more scheduling terminals;
determining a number range from the hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions on the hash ring;
and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
Optionally, the second determining submodule is further specifically configured to:
determining the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends;
and determining the ordering position of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
Optionally, the number corresponding to the target task added on the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing strategy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in a scheduling end cluster, and the target task is any task required to be executed by a task scheduling system;
The load balancing strategy comprises the following steps: carrying out hash operation on an initial number according to the total number of scheduling ends in a normal working state in a scheduling end cluster to obtain a hash value, determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for a target task according to a task uploading sequence; or,
the load balancing strategy comprises the following steps: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
In a fourth aspect, a task scheduling device is provided, which has a function of implementing the task scheduling method behavior in the second aspect. The task scheduling device comprises at least one module, and the at least one module is used for realizing the task scheduling method provided by the second aspect.
That is, the present application provides a task scheduling device applied to a data server, the device comprising:
the receiving module is used for receiving a task submitting request sent by the client, wherein the task submitting request carries relevant data of a target task;
The storage module is used for storing related data of a target task and adding a number corresponding to the target task to the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to the tasks in the data server one by one, the maximum range of the numbers on the hash ring is consistent with the target task range, and the target task range refers to the maximum range of the tasks which can be scheduled and executed by the task scheduling system.
Optionally, the storage module includes:
a first generation sub-module for generating a random number, the random number being within a target task range;
the first determining submodule is used for determining the position corresponding to the random number from the hash ring;
and the first adding submodule is used for adding the random number as a number at the corresponding position of the hash ring.
Optionally, the storage module includes:
the second generation sub-module is used for generating an initial number, wherein the initial number is a number generated for a target task according to the uploading sequence of each task by the data server;
the hash operation module is used for carrying out hash operation on the initial number according to the total number of the dispatching terminals in the normal working state in the dispatching terminal cluster to obtain a hash value;
The second determining submodule is used for determining a task scheduling range from one or more task scheduling ranges according to the hash value, wherein the task scheduling range is used as a target task scheduling range, and the one or more task scheduling ranges refer to the task scheduling range of a scheduling end in a normal working state in a scheduling end cluster;
a third generation sub-module, configured to generate a random number according to the target task scheduling range, where the random number is located in the target task scheduling range;
and the second adding submodule is used for determining the position corresponding to the random number from the hash ring and adding the random number as a number at the corresponding position of the hash ring.
Optionally, the storage module includes:
a third determining submodule, configured to determine a task scheduling range from one or more task scheduling ranges according to load conditions and/or configuration information of a scheduling end in a normal working state in the plurality of scheduling ends, where the one or more task scheduling ranges are task scheduling ranges of the scheduling end in the normal working state in the scheduling end cluster;
a fourth generation sub-module, configured to generate a random number according to a target task scheduling range, where the random number is located in the target task scheduling range;
And the third adding sub-module is used for determining the position corresponding to the random number from the hash ring and adding the random number as a number at the corresponding position of the hash ring.
In a fifth aspect, a task scheduling system is provided, where the task scheduling system includes a scheduling end cluster, an execution end cluster, a state monitoring device and a data server, where the scheduling end cluster includes a plurality of scheduling ends, the execution end cluster includes a plurality of execution ends, and the state monitoring device is used to monitor a working state of each scheduling end in the scheduling end cluster;
the target scheduling end is used for executing the task scheduling method provided in the first aspect, wherein the target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster;
the data server is configured to perform the task scheduling method provided in the second aspect.
In a sixth aspect, there is provided a task scheduling device including a receiver, a processor and a memory, where the memory is used to store a program for executing the task scheduling method provided in the first or second aspect described above, and to store data involved in implementing that method. The processor is configured to execute the program stored in the memory. The task scheduling device may further comprise a communication bus for establishing a connection between the processor and the memory.
That is, the receiver is configured to receive a status change message sent by the status monitoring device, where the status change message is used to indicate information of a scheduling end that changes a task processing capability due to a status change;
the processor is configured to redetermine a task scheduling range of a target scheduling end according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster, so as to schedule a task to be executed by the task scheduling system, where the target scheduling end refers to any scheduling end in the normal working state in the scheduling end cluster.
Optionally, the scheduling end with the task processing capability changed due to the status change includes a currently failed or currently offline scheduling end, and the status change message carries an identifier of the currently failed or currently offline scheduling end;
the processor is specifically configured to: determine, among the scheduling end identifiers stored in the memory, the identifiers other than the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster; and re-determine the task scheduling range of the target scheduling end according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends.
Optionally, the scheduling end with the changed task processing capability caused by the changed state includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end;
the processor is specifically configured to: determining the identification of the scheduling end which is currently on line and the identification of the scheduling end stored in the memory as the identification of one or more scheduling ends in a normal working state in the scheduling end cluster; and re-determining the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
Optionally, the processor is specifically configured to: determining the ordering position of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends; determining a number range from the hash ring according to the ordering position of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system; and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
Optionally, the processor is specifically configured to: determining configuration information of the one or more scheduling terminals and ordering positions of the target scheduling terminal in the one or more scheduling terminals according to the identification of the one or more scheduling terminals; determining a number range from the hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system; and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
Optionally, the processor is specifically configured to: determining the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends; and determining the ordering position of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
Optionally, the number corresponding to the target task added on the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing strategy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in a scheduling end cluster, and the target task is any task required to be executed by a task scheduling system;
The load balancing strategy comprises the following steps: carrying out hash operation on an initial number according to the total number of scheduling ends in a normal working state in a scheduling end cluster to obtain a hash value, and determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for a target task according to a task uploading sequence; or,
the load balancing strategy comprises the following steps: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
In a seventh aspect, there is provided a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the task scheduling method of the first or second aspect described above.
In an eighth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the task scheduling method of the first or second aspect described above.
The technical effects obtained by the third, fourth, fifth, sixth, seventh and eighth aspects are similar to the technical effects obtained by the corresponding technical means in the first and second aspects, and are not described in detail herein.
The technical scheme provided by the application has at least the following beneficial effects:
in the application, the state monitoring device can monitor the working state of each scheduling end in the scheduling end cluster, and when a scheduling end whose task processing capability has changed because its state has changed exists in the scheduling end cluster, each scheduling end in a normal working state can re-determine its task scheduling range according to the state change message sent by the state monitoring device, so as to schedule the tasks to be executed by the task scheduling system. According to the method provided by the application, if a scheduling end fails or goes offline, the tasks it was responsible for scheduling can continue to be scheduled by the other scheduling ends, that is, the reliability of the scheme is higher. In addition, according to the method provided by the application, the scheduling end cluster can be dynamically expanded or contracted, that is, the scalability of the scheme is higher. Because the scheduling end cluster can be dynamically expanded, each scheduling end does not need a very high configuration, which reduces resource waste, that is, resource utilization is higher.
Drawings
Fig. 1 is a system architecture diagram related to a task scheduling method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a computer device according to an embodiment of the present application;
FIG. 3 is a flowchart of a task scheduling method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a scheduling end node created by a state monitoring device according to an embodiment of the present application;
FIG. 5 is a schematic diagram showing task scheduling ranges of each scheduling end on a hash ring according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a target scheduling end redetermining its task scheduling range according to an embodiment of the present application;
FIG. 7 is another schematic diagram of a target scheduling end redetermining its task scheduling range according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for adding a number corresponding to a target task to a hash ring by a data server according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a task scheduling device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of another task scheduling device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Fig. 1 is a system architecture diagram related to a task scheduling method according to an embodiment of the present application. Referring to fig. 1, the system architecture includes a task scheduling system 100 and a client 200. Task scheduling system 100 may include a scheduling end cluster 101, an execution end cluster 102, a status monitoring device 103, and a data server 104, where scheduling end cluster 101 may include a plurality of scheduling ends, and execution end cluster 102 may include a plurality of execution ends. Each scheduling end may communicate with each execution end, the status monitoring device 103 may communicate with each scheduling end, the data server 104 may communicate with each scheduling end, and the data server 104 may also communicate with the client 200.
In an embodiment of the present application, a user may package business logic into tasks and send relevant data of the tasks to the data server 104 through the client 200. The data server 104 stores the related data of the task uploaded by the client, and the data server 104 may also number the task and add the number of the task to the hash ring. The data server 104 may also store task execution states, task execution results, and the like of the respective tasks. The user may also query the data server 104 for task execution status, task execution results, etc. through the client 200.
It should be noted that the tasks uploaded by the client 200 may be, for example, detecting the heartbeat of a host once every minute, collecting the access logs of a server once every hour, synchronizing and computing data from one database to another every morning, or counting the page views and user visits of a website for the past week, once every week.
Each scheduling end in the scheduling end cluster 101 can poll the corresponding task from the data server 104 according to the task scheduling range in charge of the scheduling end on the hash ring, and send the related data of the polled task meeting the execution condition to the execution end in the execution end cluster 102. Each scheduling end may also send the task execution result fed back by the execution end to the data server 104.
Each executing end in the executing end cluster 102 can send a heartbeat message and its own load condition to each scheduling end in the scheduling end cluster 101, so that the corresponding scheduling end issues tasks according to the heartbeat message and the load condition of the executing end. After receiving the related data of a task sent by a scheduling end, the executing end executes the corresponding task and can then feed the task execution result back to a scheduling end in the scheduling end cluster.
The state monitoring device 103 may be configured to monitor an operation state of each scheduling end in the scheduling end cluster 101, and when a scheduling end with an operation state changed exists in the scheduling end cluster 101, send a state change message to each scheduling end currently in a normal operation state, where the state change message is used to instruct each scheduling end in the scheduling end cluster 101 to redetermine a task scheduling range.
It should be noted that, in order to improve the reliability of the task scheduling system, as shown in fig. 1, the task scheduling system may include one or more state monitoring devices 103, where one state monitoring device 103 may be a primary state monitoring device and the remaining state monitoring devices 103 may be standby state monitoring devices. Under normal conditions, the primary state monitoring device is in a working state and the standby state monitoring devices are in a dormant state; when the primary state monitoring device fails, one standby state monitoring device may be selected to take over its work. In addition, the system architecture may include one client 200 or a plurality of clients 200 (not shown), and each client may upload tasks, query task execution states, query task execution results, and the like.
In some embodiments, the task scheduling system 100 may include only the scheduling end cluster 101 and the status monitoring device 103, in which case the task scheduling system 100 may be used to schedule tasks.
In other embodiments, the task scheduling system 100 may include a scheduling end cluster 101, a status monitoring device 103, and a data server 104, in which case the task scheduling system 100 may be configured to receive tasks and schedule tasks. Alternatively, the task scheduling system 100 may comprise a scheduling end cluster 101, a status monitoring device 103 and an execution end cluster 102, in which case the task scheduling system 100 may be used to schedule tasks as well as execute tasks.
The scheduling end, the executing end and the state monitoring device may each be a physical machine or a virtual machine running on a physical machine. The data server may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center. The client may be any electronic product that can interact with a user through one or more of a keyboard, a touch pad, a touch screen, a remote control, voice interaction or a handwriting device, such as a PC (personal computer), a mobile phone, a smart phone, a PDA (personal digital assistant), a wearable device, a tablet computer, a smart in-vehicle device, a smart television, or a smart speaker.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a computer device according to an embodiment of the present application, which may be the scheduling end shown in fig. 1, and may also be referred to as a task scheduling device. The computer device includes one or more processors 201, a communication bus 202, memory 203, and one or more communication interfaces 204.
The processor 201 may be a general purpose central processing unit (central processing unit, CPU), network Processor (NP), microprocessor, or may be one or more integrated circuits for implementing aspects of the application, such as application-specific integrated circuits (ASIC), programmable logic devices (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), general-purpose array logic (generic array logic, GAL), or any combination thereof.
Communication bus 202 is used to transfer information between the above-described components. Communication bus 202 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The memory 203 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 203 may be stand-alone and coupled to the processor 201 via the communication bus 202. Memory 203 may also be integrated with processor 201.
The communication interface 204 uses any transceiver-like means for communicating with other devices or communication networks, for example, the communication interface 204 may use a receiver for receiving data transmitted by other devices. Communication interface 204 includes a wired communication interface and may also include a wireless communication interface. The wired communication interface may be, for example, an ethernet interface. The ethernet interface may be an optical interface, an electrical interface, or a combination thereof. The wireless communication interface may be a wireless local area network (wireless local area networks, WLAN) interface, a cellular network communication interface, a combination thereof, or the like.
In a particular implementation, as one embodiment, processor 201 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 2.
In a particular implementation, as one embodiment, a computer device may include multiple processors, such as processor 201 and processor 205 shown in FIG. 2. Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, the computer device may also include an output device 206 and an input device 207, as one embodiment. The output device 206 communicates with the processor 201 and may display information in a variety of ways. For example, the output device 206 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a Cathode Ray Tube (CRT) display device, or a projector (projector), or the like. The input device 207 is in communication with the processor 201 and may receive user input in a variety of ways. For example, the input device 207 may be a mouse, a keyboard, a touch screen device, a sensing device, or the like.
In some embodiments, the memory 203 is configured to store program code 210 for performing aspects of the present application, the processor 201 may execute the program code 210 stored in the memory 203, and the receiver is configured to receive messages sent by the state monitoring device. That is, the computer device may implement the task scheduling method provided in the embodiment of fig. 3 below through the receiver, the processor 201, and the program code 210 in the memory 203.
Fig. 3 is a flowchart of a task scheduling method provided by an embodiment of the present application, where the task scheduling system includes a scheduling end cluster and a state monitoring device, and the scheduling end cluster includes a plurality of scheduling ends. The embodiment of the application is described by taking as an example a task scheduling system that includes one state monitoring device. Referring to fig. 3, the method includes the following steps.
Step 301: the state monitoring device monitors the working state of each scheduling end in the scheduling end cluster.
For any scheduling end in the scheduling end cluster, the scheduling end can register with the state monitoring device after it is powered on, that is, send an online request to the state monitoring device. In this way, the state monitoring device can determine that a newly online scheduling end is currently monitored.
The state monitoring device may then establish a communication connection with the scheduling end and exchange heartbeat messages with the scheduling end over the communication connection. If the state monitoring device and the scheduling end can exchange heartbeat messages normally, the state monitoring device can determine that the scheduling end is in a normal working state; if they cannot, the state monitoring device can determine that the scheduling end has failed.
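As an illustration only, the following minimal Python sketch shows one way heartbeat-based state detection could be kept on the state monitoring device; the timeout value, the function names and the dictionary layout are assumptions of this sketch, not part of the application.

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds without a heartbeat before a scheduling end is considered failed

last_heartbeat = {}  # scheduling end identifier -> timestamp of the last heartbeat received

def on_heartbeat(scheduler_id: str) -> None:
    """Record the time at which a heartbeat arrived from a scheduling end."""
    last_heartbeat[scheduler_id] = time.monotonic()

def failed_schedulers() -> list[str]:
    """Return the scheduling ends whose heartbeats have stopped for longer than the timeout."""
    now = time.monotonic()
    return [sid for sid, ts in last_heartbeat.items() if now - ts > HEARTBEAT_TIMEOUT]
```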
In addition, for any scheduling end in the scheduling end cluster, if for some reason the scheduling end needs to go offline, the scheduling end may send an offline request to the state monitoring device. In this way, the state monitoring device can determine that a newly offline scheduling end is currently monitored.
It should be noted that, for a newly online scheduling end, the state monitoring device may acquire and store the information of the scheduling end, and the state monitoring device may further synchronize the information of each scheduling end in a normal working state in the scheduling end cluster to the currently online scheduling end. For a failed or offline scheduling end, the state monitoring device may delete the information of the scheduling end. The information of a scheduling end may include the identifier of the scheduling end and its online time.
In some embodiments, the state monitoring device may create a scheduling end node when a scheduling end comes online and store the identifier of the corresponding scheduling end in that node. The state monitoring device can also acquire the identifiers stored in the other scheduling end nodes and send them to the scheduling end that has just come online. When a scheduling end fails or goes offline, the state monitoring device may delete the corresponding scheduling end node. Each scheduling end node may be an ephemeral sequential node arranged according to the online order, that is, the order of the scheduling end nodes can indicate the online order of the scheduling ends. In this case, the scheduling end node may store only the identifier of the scheduling end, without the online time.
For example, scheduling end A, scheduling end B and scheduling end C come online in sequence, and the scheduling end nodes created by the state monitoring device are node 1, node 2 and node 3 in sequence; it can be determined from the order of the scheduling end nodes that scheduling end A came online first, then scheduling end B, and then scheduling end C. When scheduling end A goes offline, the state monitoring device may delete node 1.
It should be noted that the plurality of scheduling ends in the scheduling end cluster may be implemented by physical machines or virtual machines. In addition, the scheduling end cluster can dynamically determine the number of online scheduling ends according to the current task volume: if the current task volume is large, more scheduling ends can be brought online, and if the current task volume is small, some scheduling ends can be taken offline.
In the embodiment of the application, the monitoring function of the state monitoring device can be realized by a zookeeper component, and the zookeeper component is a high-performance and high-availability distributed coordination service, and is called zookeeper for short. In addition, the state monitoring device may be implemented by other possible devices with a state monitoring function, which is not limited in the embodiment of the present application.
Illustratively, each scheduling end in the scheduling end cluster may register on the zookeeper to come online, and each online scheduling end may communicate with the zookeeper, for example, through its own application programming interface (Application Programming Interface, API), and may periodically send heartbeat messages through the API to maintain a communication connection with the zookeeper. When a scheduling end comes online, it can send a heartbeat message through the API to inform the zookeeper that it is online, and the zookeeper can create a scheduling end node to store the information of the scheduling end, where the stored information includes the identifier of the corresponding scheduling end and its online time.
It should be noted that the node structure in the zookeeper may be a directory-like hierarchy. Therefore, when the information of the scheduling ends is stored by means of scheduling end nodes, the zookeeper may create the scheduling end nodes under a service node of the root directory. The scheduling end nodes under the service node are ephemeral sequential nodes arranged according to the online order of the scheduling ends; the service node may be understood as the parent node of each scheduling end node, and each scheduling end node is a child node of the service node.
As shown in fig. 4, assuming that scheduling ends A, B and C come online in sequence, three corresponding scheduling end nodes, namely node 1, node 2 and node 3, are created in sequence under the service node of the zookeeper. When the zookeeper detects that one of scheduling ends A, B and C has gone offline, it can delete the corresponding scheduling end node; when a scheduling end comes online, it can create a new scheduling end node. For example, when scheduling end B fails, the zookeeper may delete node 2, and when scheduling end B comes back online, the zookeeper may create a new node 4.
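The following is a minimal Python sketch of such a registration, assuming the kazoo ZooKeeper client is used; the connection address, the /service path and the identifier value are illustrative choices of this sketch rather than requirements of the application.

```python
from kazoo.client import KazooClient

# Minimal registration sketch; kazoo, the address, the /service path and the value are assumptions.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

zk.ensure_path("/service")  # the parent (service) node under which scheduling end nodes live

# An ephemeral sequential child node: it disappears when the session (heartbeat) is lost,
# and its sequence suffix records the online order, like node 1, node 2 and node 3 in fig. 4.
created_path = zk.create("/service/scheduler-", b"A", ephemeral=True, sequence=True)
print(created_path)  # e.g. /service/scheduler-0000000001
```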
Step 302: if the state monitoring device monitors that a scheduling end whose state has changed exists in the scheduling end cluster, the state monitoring device sends a state change message to the scheduling ends in a normal working state.
In the embodiment of the application, if the state monitoring device monitors that the scheduling end with the changed working state exists in the scheduling end cluster, a state change message can be generated and sent to the scheduling end in the normal working state, and the state change message can be used for indicating the information of the scheduling end with the changed task processing capacity caused by the state change.
In some embodiments, the state monitoring device may carry the identifier of the scheduling end with the changed state in the state change message, and send the identifier to the scheduling end in the normal working state in the scheduling end cluster. That is, when the state monitoring device monitors that a faulty scheduling end exists in the scheduling end cluster, the state change message generated by the state monitoring device may carry the identifier of the faulty scheduling end currently. When the state monitoring device monitors that the offline dispatching end exists in the dispatching end cluster, the state change message generated by the state monitoring device can carry the identification of the current offline dispatching end. When the state monitoring device monitors that the online dispatching end exists in the dispatching end cluster, the state change message generated by the state monitoring device can carry the identification of the current online dispatching end.
Therefore, in the embodiment of the application, whenever a scheduling end in the scheduling end cluster fails, goes offline or comes online, the state monitoring device can send a state change message to the scheduling ends in a normal working state in real time, promptly informing them that a scheduling end whose task processing capacity has changed exists in the current scheduling end cluster. The scheduling end whose task processing capacity has changed because of a state change is the currently failed, currently offline or currently online scheduling end.
When a failed scheduling end exists in the scheduling end cluster, the current moment refers to the moment when that scheduling end changes from a normal working state to a failed state; when an offline scheduling end exists in the scheduling end cluster, the current moment refers to the moment when that scheduling end changes from a normal working state to offline; when an online scheduling end exists in the scheduling end cluster, the current moment refers to the moment when that scheduling end changes from a dormant (standby), failed or powered-off state to online. Alternatively, because the communication delay between the state monitoring device and the scheduling end cluster is very low and can be ignored, the current moment may also refer to the moment when the state monitoring device detects the scheduling end whose state has changed, the moment when the state change message is generated, or the moment when the state change message is sent.
For example, each scheduling end in the scheduling end cluster may have a monitoring node for communicating with the zookeeper; the monitoring node can be used to obtain information about the service node in the zookeeper, and a scheduling end in a normal working state can monitor the service node of the zookeeper through its own monitoring node. That is, the zookeeper can send state change messages about the scheduling end nodes under the service node to the monitoring nodes of the scheduling ends in a normal working state. For example, the scheduling end node created by the zookeeper for scheduling end B is node2 and the identifier of scheduling end B is B; if scheduling end B fails, the zookeeper may send a state change message carrying the identifier B to the monitoring nodes of the scheduling ends still in a normal working state. For another example, assuming that the scheduling end node created by the zookeeper for scheduling end D is node4 and the identifier of scheduling end D is D, the zookeeper may carry D in the state change message sent to the monitoring nodes of the scheduling ends in a normal working state.
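As a rough illustration, the monitoring-node behaviour could be realized with a child watch on the service node, as in the Python sketch below; kazoo is again an assumption of this sketch, and here the child node names stand in for the scheduling end identifiers.

```python
from kazoo.client import KazooClient

# Sketch of a monitoring node: watch the children of the service node and diff them against the
# locally stored identifiers to detect failed/offline scheduling ends or newly online ones.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

known = set()  # scheduling ends currently believed to be in a normal working state

def on_children_change(children):
    """Called by kazoo whenever a scheduling end node is created or deleted."""
    global known
    current = set(children)
    for gone in known - current:
        print("state change: scheduling end failed or offline:", gone)
    for new in current - known:
        print("state change: scheduling end online:", new)
    known = current

zk.ChildrenWatch("/service", on_children_change)  # the watch re-registers itself after each event
```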
In the embodiment of the present application, after the state monitoring device sends the state change message to the scheduling ends in a normal working state in the scheduling end cluster through steps 301 to 302 above, each scheduling end in a normal working state can respond to the state change message and perform task scheduling according to steps 303 to 305 below. Every scheduling end in a normal working state responds to the state change message and performs task scheduling in the same way, so the following description takes a target scheduling end as an example. The target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster.
Step 303: the target scheduling end receives the state change message sent by the state monitoring device, and redetermines its task scheduling range according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster.
In the embodiment of the application, each scheduling end in a normal working state in the scheduling end cluster corresponds to a task scheduling range, the task scheduling ranges are not overlapped, the union of the task scheduling ranges is a target task range, and the target task range can be a preset range and is used for representing the maximum range of tasks which can be scheduled and executed by the task scheduling system. For example, the target task range may be 0-100000000.
In the embodiment of the application, after the target scheduling end receives the state change message, the task scheduling range of the target scheduling end can be redetermined according to the state change message and the number of the scheduling ends in the normal working state in the scheduling end cluster, so that each task with the number within the target task range can be scheduled by the scheduling end in the normal working state in the scheduling end cluster, namely, the reliability of the task scheduling system is ensured.
From the foregoing, the scheduling end whose task processing capacity has changed because of a state change is the currently failed, currently offline or currently online scheduling end; that is, the state change message may be sent when the state monitoring device detects that a scheduling end in the cluster has failed, gone offline or come online. When a failed or offline scheduling end exists in the scheduling end cluster, the number of scheduling ends in a normal working state decreases; when an online scheduling end exists, that number increases. The way in which the target scheduling end redetermines its task scheduling range according to the state change message and the number of scheduling ends in a normal working state differs between the case where the number of scheduling ends decreases and the case where it increases, so the two cases are described separately below.
First case: when the state change message carries the identifier of the currently failed or currently offline scheduling end, the target scheduling end can determine the identifiers remaining in the stored identifiers, other than the identifier of the currently failed or currently offline scheduling end, as the identifiers of the one or more scheduling ends in a normal working state in the scheduling end cluster. Then, the target scheduling end can redetermine its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends. The task scheduling ranges redetermined by the one or more scheduling ends do not overlap, and the union of these task scheduling ranges is the target task range.
In the embodiment of the application, when the state monitoring device detects a scheduling end whose state has changed, it sends the identifier of that scheduling end to the scheduling ends in a normal working state in the scheduling end cluster, and for a newly online scheduling end, the state monitoring device synchronizes the identifiers of the other scheduling ends in a normal working state to it. Therefore, each scheduling end in the scheduling end cluster stores the identifiers of the scheduling ends in a normal working state. Thus, when the target scheduling end receives a state change message carrying the identifier of the currently failed or currently offline scheduling end, it can determine the remaining stored identifiers, other than the identifier of the currently failed or currently offline scheduling end, as the identifiers of the one or more scheduling ends still in a normal working state in the scheduling end cluster.
For example, assume that the identifiers of the scheduling ends in a normal working state stored on scheduling end A are A, B and C, where A is the identifier of scheduling end A, B is the identifier of scheduling end B, and C is the identifier of scheduling end C. When scheduling end A receives a state change message carrying the identifier of the currently offline scheduling end B, scheduling end A can determine that the scheduling ends still in a normal working state in the scheduling end cluster are scheduling end A and scheduling end C.
In the embodiment of the present application, after determining the identifiers of one or more scheduling ends in a normal working state, the target scheduling end may redetermine the task scheduling scope of itself according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends, and two implementation manners will be described below.
In a first implementation manner, the target scheduling end may determine, according to the identifiers of the one or more scheduling ends, a ranking position where the target scheduling end itself is located in the one or more scheduling ends. Then, the target scheduling end may determine a number range from the hash ring according to the ordering positions of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends, and the initial position on the hash ring, and then the target scheduling end may determine a task range corresponding to the number range as the task scheduling range of the target scheduling end. The hash ring is distributed with a plurality of numbers, and the numbers correspond to tasks to be executed by the task scheduling system.
In the embodiment of the application, each scheduling end can store the identifiers of the scheduling ends in a normal working state, and there are several ways in which the target scheduling end can determine its ordering position among the one or more scheduling ends according to their identifiers.
In one possible implementation manner, the target scheduling end may determine an online sequence of the one or more scheduling ends according to the identifiers of the one or more scheduling ends, and then, the target scheduling end may determine an ordering position of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
In the embodiment of the present application, each scheduling end may store a correspondence between the identifier of each scheduling end in a normal working state and its online time. The target scheduling end may determine the online times of the one or more scheduling ends according to their identifiers and this correspondence, determine the online order of the one or more scheduling ends according to the online times, and then determine its own ordering position among the one or more scheduling ends, where the ordering may be consistent with or opposite to the online order. For example, if the online order of scheduling ends A and B is scheduling end A first and scheduling end B second, the ordering position may be that scheduling end A is arranged in the first position and scheduling end B in the second position, or that scheduling end A is arranged in the second position and scheduling end B in the first position.
In other possible implementations, the identifier of a scheduling end may be a string formed of English letters, numerals, Chinese characters or the like, or a string formed of a combination of several types of characters. The target scheduling end may therefore sort the identifiers of the one or more scheduling ends according to a string sorting rule to obtain an identifier ordering of the one or more scheduling ends. Then, the target scheduling end can determine its own ordering position among the one or more scheduling ends according to this identifier ordering, where the ordering position may be consistent with or opposite to the identifier ordering; this is not limited in the embodiment of the application.
The string sorting rule may arrange the strings in ascending or descending order according to the ASCII code values of characters such as English letters, the magnitude of numerals, the pinyin order of Chinese characters, the stroke count of Chinese characters, or any other rule that yields a unique ordering. That is, in the embodiment of the present application, the target scheduling end may sort the identifiers of the one or more scheduling ends according to any possible string sorting rule to obtain a unique identifier ordering.
For example, assume that the one or more scheduling ends are scheduling end A and scheduling end B, the identifier of scheduling end A is d_a, and the identifier of scheduling end B is d_b. The first two characters of the two identifiers are the same, so the third characters are compared; since the ASCII code value of the character 'a' is smaller than that of the character 'b', the identifier of scheduling end A is arranged before the identifier of scheduling end B under the ascending-order rule. The ordering position may therefore be that scheduling end A is arranged in the first position and scheduling end B in the second position, or that scheduling end A is arranged in the second position and scheduling end B in the first position.
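A small Python sketch of this identifier sort, using the d_a / d_b example above; the function name and the 1-based position convention are choices of this sketch.

```python
def ordering_position(own_id: str, all_ids: list[str]) -> int:
    """Return the 1-based position of own_id when all identifiers are sorted in ascending order."""
    return sorted(all_ids).index(own_id) + 1

print(ordering_position("d_a", ["d_b", "d_a"]))  # 1, because 'a' sorts before 'b'
print(ordering_position("d_b", ["d_b", "d_a"]))  # 2
```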
In the embodiment of the present application, after determining its ordering position among the one or more scheduling ends, the target scheduling end may divide the maximum value of the target task range by the total number of the one or more scheduling ends and round the result to obtain an integer. The target scheduling end can then divide the target task range into as many ranges as there are scheduling ends according to the obtained integer and the initial position on the hash ring, determine its own range from the divided ranges according to its ordering position among the one or more scheduling ends, that is, determine the corresponding number range from the hash ring, and determine the task range corresponding to that number range as its own task scheduling range.
In this implementation, the target task range can be divided evenly among the one or more scheduling ends; the computation is simple and the allocation is efficient. Moreover, when the configurations of the scheduling ends are comparable, the load of each scheduling end in a normal working state can be balanced.
For example, assume that the target scheduling end determines its own ordering position according to the online order. As shown in fig. 5, scheduling ends A, B and C are in a normal working state, and their task scheduling ranges are 0-33333333, 33333334-66666666 and 66666667-100000000, respectively. On the basis of fig. 5, as shown in fig. 6, assume that scheduling end B now goes offline. Scheduling end A may determine, according to the foregoing method, that the scheduling ends still in a normal working state are scheduling end A and scheduling end C; assume their online order is scheduling end A and then scheduling end C, that is, scheduling end A is arranged in the first position and scheduling end C in the second position, and there are 2 scheduling ends in total. Scheduling end A may divide 100000000 by 2 to obtain 50000000, and assume the initial position on the hash ring is the position numbered 0. Since scheduling end A is arranged in the first position, it may determine the task range corresponding to the number range 0-50000000 on the hash ring as its own task scheduling range, that is, the task scheduling range redetermined by scheduling end A is 0-50000000. For scheduling end C, since it is arranged in the second position, it may determine the task range corresponding to the numbers 50000001-100000000 on the hash ring as its own task scheduling range, that is, the task scheduling range redetermined by scheduling end C is 50000001-100000000.
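The following Python sketch reproduces this even division of the target task range 0-100000000; the function name and the (start, end) tuple form are illustrative.

```python
MAX_NUMBER = 100_000_000  # maximum value of the target task range

def even_task_range(position: int, total: int) -> tuple[int, int]:
    """Number range of the scheduling end at 1-based `position` among `total` scheduling ends."""
    step = MAX_NUMBER // total  # e.g. 100000000 // 3 == 33333333, 100000000 // 2 == 50000000
    start = 0 if position == 1 else (position - 1) * step + 1
    end = MAX_NUMBER if position == total else position * step
    return start, end

print(even_task_range(1, 3))  # (0, 33333333)         scheduling end A in fig. 5
print(even_task_range(2, 3))  # (33333334, 66666666)  scheduling end B in fig. 5
print(even_task_range(3, 3))  # (66666667, 100000000) scheduling end C in fig. 5
print(even_task_range(1, 2))  # (0, 50000000)         scheduling end A after B goes offline (fig. 6)
```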
In a second implementation manner, the target scheduling end may determine configuration information of the one or more scheduling ends and a ranking position of the target scheduling end in the one or more scheduling ends according to the identifiers of the one or more scheduling ends. Then, the target scheduling end can determine a number range from the hash ring according to the configuration information of the one or more scheduling ends, the ordering positions of the target scheduling end in the one or more scheduling ends, and the initial position on the hash ring. Then, the target scheduling end can determine the task range corresponding to the number range as the task scheduling range of the target scheduling end.
It should be noted that, in this implementation manner, the implementation manner of determining, by the target scheduling end, the ordering positions of the target scheduling end in the one or more scheduling ends according to the identifiers of the one or more scheduling ends may refer to the foregoing related description, which is not repeated herein.
In the embodiment of the application, the configuration information of each scheduling end can be stored in each scheduling end in the scheduling end cluster, and the configuration information of the scheduling end can be used for representing the scheduling capability, namely the load capability, of the corresponding scheduling end. After determining the identification of the one or more scheduling ends, the target scheduling end may determine the configuration information of the one or more scheduling ends from the stored configuration information of each scheduling end.
And then, the target dispatching end can determine the capacity ratio of each of the one or more dispatching ends according to the configuration information of the one or more dispatching ends, wherein the sum of the capacity ratios is equal to 1. Then, the target scheduling end may divide the target task range into a plurality of corresponding ranges according to the capability ratio of each scheduling end, the maximum value of the target task range and the initial position on the hash ring, and the target scheduling end may determine its own corresponding range from the divided plurality of ranges according to the sorting positions of itself in the one or more scheduling ends, that is, determine the corresponding number range from the hash ring, and determine the task range corresponding to the number range as its own task scheduling range.
In the calculation process, the numerical values corresponding to the endpoints of each range are integers.
In the implementation manner, the number range determined by the target scheduling end is consistent with the scheduling capability represented by the configuration information of the target scheduling end, so that the scheduling ends in the normal working state in the scheduling end cluster are ensured to achieve load balancing as much as possible, namely, the scheduling ends in the normal working state can divide the target task range according to the configuration information, so that the load balancing is ensured.
For example, assume that the scheduling capability of scheduling end A is higher, that is, its load capability is higher, and the scheduling capability of scheduling end B is lower, that is, its load capability is lower, so that the capability ratio of scheduling end A to scheduling end B determined from their configuration information is 4:3. Assuming that the target task range is 0-100000000, scheduling end A is arranged in the first position, scheduling end B in the second position, and the initial position of the hash ring is the position numbered 0, the task scheduling ranges determined by scheduling end A and scheduling end B according to the configuration information may be 0-57000000 and 57000001-100000000, respectively, so as to ensure load balancing.
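A Python sketch of the capability-weighted division is given below; it uses exact proportional integer division, so for the 4:3 example it yields 0-57142857 and 57142858-100000000 rather than the rounded 0-57000000 used in the example above, and the rounding rule is therefore an assumption of this sketch.

```python
MAX_NUMBER = 100_000_000  # maximum value of the target task range

def weighted_task_ranges(weights: list[int]) -> list[tuple[int, int]]:
    """One (start, end) range per scheduling end, ordered by ranking position, split by weight."""
    total_weight = sum(weights)
    ranges, start, acc = [], 0, 0
    for i, w in enumerate(weights):
        acc += w
        end = MAX_NUMBER if i == len(weights) - 1 else MAX_NUMBER * acc // total_weight
        ranges.append((start, end))
        start = end + 1
    return ranges

# Capability ratio 4:3 between scheduling end A and scheduling end B.
print(weighted_task_ranges([4, 3]))  # [(0, 57142857), (57142858, 100000000)]
```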
Second case: when the state change message carries the identifier of the currently online scheduling end, the target scheduling end can determine the identifier of the currently online scheduling end together with the stored identifiers as the identifiers of the one or more scheduling ends in a normal working state in the scheduling end cluster. Then, the target scheduling end can redetermine its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends.
Based on the above description, each scheduling end in the scheduling end cluster stores the identifier of the scheduling end in the normal working state, if the scheduling end in the online state exists currently, the target scheduling end can determine the identifier of the scheduling end carried by the state change message and the identifier of the scheduling end stored currently as the identifier of one or more scheduling ends in the normal working state in the scheduling end cluster, that is, in this case, the one or more scheduling ends include the scheduling end in the online state currently.
The target scheduling end can then redetermine its task scheduling range with reference to the two implementations described above, which are not repeated here.
For example, taking determination of the ordering position by online order and even division of the target task range as an example, on the basis of fig. 6 and referring to fig. 7, assume that scheduling end D comes online. Scheduling end A may determine, according to the foregoing method, that the scheduling ends in a normal working state are scheduling end A, scheduling end C and scheduling end D. Assume the online order of the three scheduling ends is scheduling ends A, C and D in turn and the ordering is consistent with the online order, that is, scheduling end A is arranged in the first position, scheduling end C in the second position and scheduling end D in the third position, there are 3 scheduling ends in total, and the initial position on the hash ring is the position numbered 0. Scheduling end A may divide 100000000 by 3 and round the result to obtain 33333333. Since scheduling end A is arranged in the first position, it can determine the task range corresponding to the number range 0-33333333 on the hash ring as its own task scheduling range, that is, the task scheduling range redetermined by scheduling end A is 0-33333333. For scheduling end C, since it is arranged in the second position, it can determine the task range corresponding to the numbers 33333334-66666666 on the hash ring as its own task scheduling range. For scheduling end D, since it is arranged in the third position, it can use 66666667-100000000 as its own task scheduling range.
As can be seen from the foregoing description, when the state change message carries the currently online scheduling end, the currently online scheduling end may be used as the target scheduling end to redetermine its task scheduling range according to the foregoing related description.
In other embodiments, the way in which the currently online scheduling end redetermines its task scheduling range may differ from the implementations of the target scheduling end described above, that is, the target scheduling end may not include the currently online scheduling end. As described above, for a newly online scheduling end, the state monitoring device synchronizes to it the identifiers of the other scheduling ends in a normal working state, so the state monitoring device does not need to send a state change message to the currently online scheduling end. The currently online scheduling end can determine the stored identifiers as the identifiers of the one or more scheduling ends in a normal working state in the scheduling end cluster, determine, according to the identifiers of the one or more scheduling ends and its own identifier, the total number of scheduling ends in the scheduling end cluster that can currently schedule tasks, and redetermine its own task scheduling range according to that total number.
Step 304: and the target scheduling end schedules tasks to be executed by the task scheduling system according to the redetermined task scheduling range.
In the embodiment of the application, the task scheduling system can further comprise an execution end cluster and a data server, wherein the execution end cluster comprises a plurality of execution ends. After the target scheduling end redetermines the task scheduling range, the target scheduling end can poll the corresponding task from the data server according to the redetermined task scheduling range, and when one or more tasks are polled to meet the execution condition, the related data of the one or more tasks are sent to the execution end in the execution end cluster.
It should be noted that, in the embodiment of the present application, the hash ring may be stored in the data server, where the task in the data server is the task that needs to be executed by the task scheduling system, and the maximum range of numbers on the hash ring is consistent with the range of target tasks that can be scheduled and executed by the task scheduling system, so that it can be ensured that the scheduling end in the normal working state in the scheduling end cluster may schedule each task in the data server.
In the embodiment of the application, the target scheduling end can poll the corresponding task from the data server according to the redetermined task scheduling range and can determine the currently online execution end according to the heartbeat message sent by the execution end in the execution end cluster. And then, when one or more tasks are polled to meet the execution conditions, the related data of the one or more tasks can be sent to online execution ends in the execution end cluster according to the load conditions sent by the execution ends.
That is, each executing end in the executing end cluster may periodically send heartbeat messages and its load condition to each scheduling end in the scheduling end cluster, so that each scheduling end can determine which executing ends are currently online. In addition, the target scheduling end can periodically poll the data server according to its own task scheduling range; when it polls one or more tasks that satisfy the execution conditions, it can obtain the related data of the one or more tasks from the data server, select a corresponding executing end for the one or more tasks from the executing end cluster according to the load condition of each executing end, and issue the tasks, thereby ensuring that the load of each executing end in the executing end cluster is balanced.
In some embodiments, the target scheduling end may build an executable queue after polling one or more tasks that satisfy the execution conditions, the executable queue including the one or more tasks. Then, the target scheduling end can select a corresponding executing end from the executing end cluster for each task in the executable queue according to the load condition of each executing end, instantiate the task according to its related data, and send it to the corresponding executing end, as sketched below.
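A rough Python sketch of this dispatch step follows; the load metric, the task dictionaries and the bookkeeping increment are assumptions used only for illustration.

```python
def pick_executor(executor_loads: dict[str, float]) -> str:
    """Pick the currently online executing end with the lightest reported load."""
    return min(executor_loads, key=executor_loads.get)

def dispatch(executable_queue: list[dict], executor_loads: dict[str, float]) -> None:
    """Send each task in the executable queue to the least loaded executing end."""
    for task in executable_queue:
        target = pick_executor(executor_loads)
        print(f"sending task {task['id']} to executing end {target}")
        executor_loads[target] += 1.0  # crude local bookkeeping so consecutive tasks spread out

dispatch([{"id": 17}, {"id": 42}], {"exec-1": 0.2, "exec-2": 0.7})
```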
In the embodiment of the application, after the execution end completes execution of a task, the execution end can send the task execution result to any scheduling end in the scheduling end cluster, the scheduling end can send the task execution result to the data server, and the data server can store the task execution result.
In addition, the data server may also store task execution states. That is, before a task is polled, its task execution state may be not executed. When the task satisfies the execution conditions and is polled by the target scheduling end, that is, when the target scheduling end obtains the related data of the task, the task execution state can be updated to executing. When the data server receives the task execution result of the task, the task execution state can be updated to execution completed or execution failed. Thus, when a user needs to query the task execution state or task execution result of a task, the client can send the query conditions to the data server so as to obtain the task execution state or task execution result from the data server and display it on the client.
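The task execution states described above can be summarized as a small state set, sketched below in Python; the enum names are illustrative.

```python
from enum import Enum

class TaskState(Enum):
    NOT_EXECUTED = "not executed"      # before the task is polled by a scheduling end
    EXECUTING = "executing"            # after a scheduling end obtains the task's related data
    COMPLETED = "execution completed"  # the data server received a successful execution result
    FAILED = "execution failed"        # the data server received a failed execution result

state = TaskState.NOT_EXECUTED
state = TaskState.EXECUTING   # the target scheduling end polls the task
state = TaskState.COMPLETED   # the data server receives the task execution result
```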
As can be seen from the foregoing, the data server may store a hash ring on which a plurality of numbers are distributed, and the tasks in the data server are the tasks that need to be executed by the task scheduling system, where the tasks correspond one-to-one with the numbers on the hash ring. That is, the data server may number each task uploaded by a user through the client and add the task's number to the hash ring.
In some embodiments, the number corresponding to the target task added on the hash ring is located in a target task scheduling scope, where the target task scheduling scope may be a task scheduling scope selected from one or more task scheduling scopes according to a load balancing policy. The target task is any task that needs to be executed by the task scheduling system, that is, any task in the data server, and the one or more task scheduling ranges refer to a task scheduling range of a scheduling end in a normal working state in the scheduling end cluster.
The load balancing strategy may include: and carrying out hash operation on the initial number according to the total number of the dispatching terminals in the normal working state in the dispatching terminal cluster to obtain a hash value, and determining a task dispatching range from the one or more task dispatching ranges according to the hash value. The initial number refers to a number generated for a target task according to a task uploading sequence.
Alternatively, the load balancing policy may include: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
That is, the data server may number the task uploaded according to the load balancing policy, and add the number corresponding to the task to the hash ring.
In other embodiments, the number corresponding to the target task added to the hash ring may be a randomly generated number within the target task range. That is, the data server may simply number the uploaded task by generating a random number, without using a load balancing policy, and add the number corresponding to the task to the hash ring. This reduces the workload of the data server, and when the scheduling ends in the scheduling end cluster determine their task scheduling ranges according to configuration information, load balancing of the scheduling end cluster can still be ensured.
As can be seen from the foregoing, in the embodiment of the present application, the data server may add the number corresponding to the task to the hash ring according to the load balancing policy or by randomly generating the number. Next, a method for adding the number corresponding to the task to the hash ring by the data server will be described in detail. Referring to fig. 8, the method may include steps 801 and 802.
Step 801: and the data server receives a task submission request sent by the client, wherein the task submission request carries the related data of the target task.
Step 802: the data server stores the related data of the target task, and adds the number corresponding to the target task on the hash ring.
In the embodiment of the application, after receiving the task submission request carrying the related data of the target task, the data server can store the related data of the target task, determine the number corresponding to the target task, and add the number corresponding to the target task to the hash ring.
It should be noted that, there may be multiple implementations of determining the number corresponding to the target task and adding the number to the hash ring by the data server, and three implementations will be described below.
In a first implementation, the data server may generate a random number that is within the target task range. Then, the data server may determine a location corresponding to the random number from the hash ring, and add the random number as a number at the location corresponding to the hash ring. That is, in the embodiment of the present application, the data server may generate a random number within the scope of the target task, and add the random number as the number corresponding to the target task to the hash ring.
Illustratively, assuming a target task range of 0-100000000, the data server can generate a random number within 0-100000000, e.g., 1000, then the data server can add 1000 as a number on the hash ring. I.e. the number corresponding to the target task is 1000.
That is, in this implementation, the data server may add the number corresponding to the task to the hash ring in a manner that randomly generates the number.
In a second implementation manner, the data server may add the number corresponding to the target task to the hash ring according to the first load balancing policy, that is, the data server may add the number corresponding to the target task to the hash ring according to the following steps (1) to (5) according to the hash algorithm.
(1) The data server generates an initial number, which is a number generated by the data server for the target task according to the uploading sequence of each task.
In the embodiment of the application, the data server can generate an initial number for the target task according to the uploading sequence of each task. It should be noted that, as the number of tasks to be uploaded increases, the initial number may exceed the target task range.
For example, assuming that the target task is the 1000 th uploaded task, the data server may take 1000 as the initial number corresponding to the target task.
(2) The data server performs a hash operation on the initial number according to the total number of scheduling ends in the normal working state in the scheduling end cluster to obtain a hash value.
In the embodiment of the application, the data server may perform the hash operation on the initial number according to the total number of scheduling ends in the normal working state in the scheduling end cluster, that is, the initial number is divided by the total number, and the remainder obtained is the hash value.
For example, assuming that there are 4 scheduling ends in the normal working state in the scheduling end cluster and the initial number corresponding to the target task is 1000, the data server may divide 1000 by 4 to obtain a remainder of 0, that is, the hash value is 0.
(3) The data server determines a task scheduling scope from one or more task scheduling scopes as a target task scheduling scope according to the hash value.
Since the hash value obtained by performing the hash operation on the initial number is related to the total number of scheduling ends in the normal working state in the scheduling end cluster, and that hash value may be 0, in the embodiment of the present application the data server may add 1 to the hash value to obtain a value, and select, from the one or more task scheduling ranges, the task scheduling range whose position in the ordering equals that value as the target task scheduling range. The one or more task scheduling ranges refer to the task scheduling ranges of the scheduling ends in the normal working state in the scheduling end cluster.
For example, assuming that there are 3 scheduling ends in a normal working state, the task scheduling ranges of the 3 scheduling ends are 0-33333333, 33333334-66666666 and 66666667-100000000, the initial number of the target task is 1000, and the remainder obtained by dividing 1000 by 3 is 1, that is, the hash value obtained by the data server is 1. Then, the data server may add 1 to the hash value to obtain a value of 2, and then use the task scheduling range with the order of 2 in the task scheduling ranges of the 3 scheduling ends as the target task scheduling range, that is, may use 33333334-66666666 as the target task scheduling range.
(4) The data server generates a random number according to the target task scheduling range, wherein the random number is positioned in the target task scheduling range.
In the embodiment of the present application, after determining the target task scheduling range, the data server may generate a random number within the target task scheduling range, that is, the generated random number is located within the target task scheduling range.
Illustratively, if the target task scheduling range determined by the data server is 33333334-66666666, a random number within the target task scheduling range may be generated, e.g., the generated random number may be 33333357.
(5) The data server determines the position corresponding to the random number from the hash ring, and adds the random number as a number at the position of the hash ring.
In the embodiment of the application, after the data server generates the random number, the position corresponding to the random number can be determined from the stored hash ring, and the random number is added at the position of the hash ring as a number. That is, the data server may use the random number as the number corresponding to the target task.
For example, referring to table 1, in table 1, the first column is the initial number generated by the data server for each target task according to the task uploading sequence, the second column is the task identifier of the target task, and the third column is the number corresponding to each target task finally determined by the data server according to the hash operation.
TABLE 1
Initial numbering | Task identification | Numbering on hash ring |
---|---|---|
1 | Task 1 | 100 |
2 | Task 2 | 8000 |
3 | Task 3 | 6231 |
4 | Task 4 | 2100 |
In the second implementation manner, the data server may determine a target task scheduling range according to the hash operation, and then generate a random number in the target task scheduling range as the number corresponding to the target task, so that load balancing between scheduling ends in a normal working state can be achieved as much as possible.
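The following sketch puts steps (1) to (5) together, assuming the ranges from the example above; the function and parameter names are hypothetical and the patent does not prescribe this interface.

```python
import random

def pick_number_hashed(initial_number, task_scheduling_ranges):
    """Steps (1)-(5): map the upload-order number onto one of the task scheduling
    ranges of the scheduling ends in a normal working state, then draw a random
    number inside that range.

    task_scheduling_ranges: list of (low, high) pairs, one per scheduling end in
    a normal working state, in their ordering.
    """
    total = len(task_scheduling_ranges)             # step (2): total scheduling ends
    hash_value = initial_number % total             # the remainder is the hash value
    low, high = task_scheduling_ranges[hash_value]  # step (3): the (hash value + 1)-th range
    return random.randint(low, high)                # steps (4)/(5): random number in the range


# Example from the description: 3 scheduling ends, initial number 1000.
ranges = [(0, 33_333_333), (33_333_334, 66_666_666), (66_666_667, 100_000_000)]
number = pick_number_hashed(1000, ranges)  # 1000 % 3 == 1, so the number falls in 33333334-66666666
```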
In a third implementation manner, the data server may add a number corresponding to the target task to the hash ring according to the second load balancing policy. That is, the data server may determine a task scheduling scope from one or more task scheduling scopes according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster, as the target task scheduling scope. Then, the data server may generate a random number according to the target task scheduling range, where the random number is located in the target task scheduling range, and then the data server may determine a location corresponding to the random number from the hash ring, and add the random number as a number at the location of the hash ring. The one or more task scheduling ranges refer to the task scheduling range of the scheduling end in the normal working state in the scheduling end cluster.
Because both the load condition and the configuration information of a scheduling end affect how tasks are scheduled on that scheduling end, in some embodiments the data server may determine, from the scheduling ends in the normal working state and according to their load conditions and/or configuration information, a scheduling end with a lower current load and/or a higher configuration, and use the task scheduling range of that scheduling end as the target task scheduling range.
The load condition of the scheduling end may be represented by the number of the numbers of the tasks added in the task scheduling range of the scheduling end on the hash ring, or may be represented by the ratio of the number of the tasks currently added to the number of the tasks that can be added in the corresponding task scheduling range. The configuration information of the scheduling end may represent the load capacity of the scheduling end.
In the embodiment of the application, the data server may determine the target task scheduling range according to the load conditions of the scheduling ends, that is, the data server may take the task scheduling range of the scheduling end with the lightest load as the target task scheduling range. Alternatively, the data server may determine the target task scheduling range according to the configuration information of the scheduling ends, that is, the data server may take the task scheduling range of the scheduling end with the highest configuration as the target task scheduling range.
For example, assume that the scheduling ends in the normal working state are scheduling end A and scheduling end B, and that the task scheduling ranges of scheduling end A and scheduling end B are 0-50000000 and 50000001-100000000 respectively. The current load ratio of scheduling end A is 50%, and the current load ratio of scheduling end B is 30%. Since the load of scheduling end B is lower than that of scheduling end A, the data server may determine the task scheduling range of scheduling end B as the target task scheduling range, that is, the target task scheduling range is 50000001-100000000.
Or, assuming that the scheduling end in the normal working state includes a scheduling end a and a scheduling end B, the configuration information of the scheduling ends a and B indicates that the configuration of the scheduling end a is higher and the configuration of the scheduling end B is lower, the data server may determine the task scheduling range of the scheduling end a as the target task scheduling range.
Or, the data server may determine the target task scheduling range according to the load condition and the configuration information of the scheduling end. In this case, the data server may set a capability value for each scheduling end according to the configuration information of the scheduling ends, where the higher the configuration is, the larger the capability value is set, and the lower the configuration is, the smaller the capability value is set. In this way, the data server can perform weighting operation on the load condition of the scheduling end and the capacity value corresponding to the configuration information to obtain a corresponding weighting value, and the task scheduling range of the scheduling end with the largest weighting value is taken as the target task scheduling range.
For example, assume that the scheduling ends in the normal working state include scheduling end A and scheduling end B, the current load ratio of scheduling end A is 50%, and the current load ratio of scheduling end B is 30%. The configuration information indicates that scheduling end A has the higher configuration and scheduling end B the lower one, so the data server sets capability values of 0.8 and 0.7 for scheduling ends A and B respectively. With a preset negative weight of -0.5 for the load and a positive weight of 0.5 for the capability value, the weighted value of scheduling end A is -0.5 × 50% + 0.5 × 0.8 = 0.15, and the weighted value of scheduling end B is -0.5 × 30% + 0.5 × 0.7 = 0.2. Since 0.2 is greater than 0.15, the data server may take the task scheduling range of scheduling end B as the target task scheduling range.
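A sketch of this third implementation with the weights and capability values from the example above; the dictionary layout and the fixed weights are assumptions for illustration only.

```python
def pick_target_range_weighted(schedulers, load_weight=-0.5, capability_weight=0.5):
    """Choose the target task scheduling range by weighting each scheduling end's
    load ratio against a capability value derived from its configuration.

    schedulers: list of dicts with 'load_ratio' (0..1), 'capability' and
    'range' (low, high). The weights default to the ones in the example above.
    """
    def score(s):
        return load_weight * s["load_ratio"] + capability_weight * s["capability"]

    return max(schedulers, key=score)["range"]


# Worked example from the description: A scores 0.15, B scores 0.2, so B's range is chosen.
schedulers = [
    {"load_ratio": 0.5, "capability": 0.8, "range": (0, 50_000_000)},            # scheduling end A
    {"load_ratio": 0.3, "capability": 0.7, "range": (50_000_001, 100_000_000)},  # scheduling end B
]
target_range = pick_target_range_weighted(schedulers)  # (50_000_001, 100_000_000)
```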
In summary, in the embodiment of the present application, the state monitoring device may monitor the working state of each scheduling end in the scheduling end cluster, and when there is a scheduling end in the scheduling end cluster whose task processing capability is changed due to a state change, each scheduling end in a normal working state may re-determine its task scheduling range according to the state change message sent by the state monitoring device, so as to schedule the tasks that need to be executed by the task scheduling system. According to the method provided by the application, when a certain scheduling end fails or goes offline, the tasks it was responsible for scheduling can continue to be scheduled by the other scheduling ends, that is, the reliability of the scheme is higher. In addition, according to the method provided by the application, the scheduling end cluster can be dynamically expanded or contracted, that is, the scalability of the scheme is higher. Because the scheduling end cluster can be dynamically expanded, each scheduling end does not need to have a very high configuration, so resource waste can be reduced, that is, the resource utilization rate is higher.
Fig. 9 is a schematic structural diagram of a task scheduling device 900 according to an embodiment of the present application, where the task scheduling device 900 may be implemented by software, hardware, or a combination of both as part or all of a computer device, and the computer device may be the scheduling end shown in fig. 1. In the embodiment of the application, the task scheduling system comprises a scheduling end cluster and a state monitoring device, wherein the scheduling end cluster comprises a plurality of scheduling ends, and the state monitoring device is used for monitoring the working state of each scheduling end in the scheduling end cluster. The apparatus 900 is applied to a target scheduling end, where the target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster. Referring to fig. 9, the apparatus 900 includes: a receiving module 901 and a determining module 902.
A receiving module 901, configured to receive a status change message sent by a status monitoring device, where the status change message may be used to indicate information of a scheduling end that changes a task processing capability due to a status change;
and the determining module 902 is configured to redetermine a task scheduling range of the target scheduling end according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster, so as to schedule tasks to be executed by the task scheduling system.
Optionally, the scheduling end with the task processing capability changed due to the status change includes a currently failed or currently offline scheduling end, and the status change message carries an identifier of the currently failed or currently offline scheduling end;
the determining module 902 includes:
the first determining submodule is used for determining, among the stored identifiers of the scheduling ends, the identifiers that are not the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the second determining submodule is used for determining the task scheduling range of the target scheduling end again according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
Optionally, the scheduling end with the changed task processing capability caused by the changed state includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end;
the first determining submodule is used for determining the identifier of the currently online scheduling end and the stored identifiers of the scheduling ends as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the second determining submodule is used for determining the task scheduling range of the target scheduling end again according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
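As a rough illustration of what the determining module does with a state change message in both of the cases above, the sketch below updates the set of identifiers of scheduling ends in a normal working state; the message fields and the function signature are assumptions, not prescribed by the patent.

```python
def normal_state_identifiers(stored_ids, change_type, changed_id):
    """Sketch of the determining logic for a state change message (hypothetical
    interface). change_type is "offline" or "online"; changed_id is the
    identifier carried in the state change message.
    """
    if change_type == "offline":
        # Currently failed or currently offline scheduling end: keep every
        # stored identifier except the one carried in the message.
        normal_ids = [i for i in stored_ids if i != changed_id]
    else:
        # Currently online scheduling end: add its identifier to the stored ones.
        normal_ids = list(stored_ids) + [changed_id]
    # The identifiers and their total number are then used to re-determine
    # the task scheduling range of the target scheduling end.
    return normal_ids
```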
Optionally, the second determining submodule is specifically configured to:
determining the ordering position of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
determining a number range from the hash ring according to the ordering position of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system;
And determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
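One plausible reading of this ordering-position variant is an equal split of the hash ring among the scheduling ends in a normal working state, as sketched below; the exact boundary arithmetic is an assumption and may differ slightly from the example ranges given earlier in the description.

```python
def number_range_equal_split(position, total, ring_start=0, ring_max=100_000_000):
    """Split the hash ring evenly among the scheduling ends in a normal working
    state and return the number range for the scheduling end at the given
    1-based ordering position.
    """
    span = (ring_max - ring_start + 1) // total
    low = ring_start + (position - 1) * span
    high = ring_max if position == total else low + span - 1
    return low, high


# With 3 scheduling ends in a normal working state, positions 1..3 give
# roughly (0, 33333332), (33333333, 66666665) and (66666666, 100000000).
```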
Optionally, the second determining submodule is specifically configured to:
determining configuration information of the one or more scheduling terminals according to the identification of the one or more scheduling terminals;
determining a number range from the hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions of the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by a task scheduling system;
and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
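A sketch of the configuration-aware variant, assuming each scheduling end's configuration information is summarized as a capability value and the ring is split in proportion to those values; both the capability values and the proportional rule are assumptions for illustration.

```python
def number_range_by_configuration(position, capabilities, ring_start=0, ring_max=100_000_000):
    """Split the hash ring in proportion to per-scheduling-end capability values
    (derived from the configuration information) and return the number range
    for the scheduling end at the given 1-based ordering position.
    """
    total_capability = sum(capabilities)
    ring_size = ring_max - ring_start + 1
    low = ring_start
    for i, cap in enumerate(capabilities, start=1):
        span = int(ring_size * cap / total_capability)
        high = ring_max if i == len(capabilities) else low + span - 1
        if i == position:
            return low, high
        low = high + 1
    raise ValueError("position is out of range")


# Example: capabilities [0.8, 0.7] give the first scheduling end a larger
# number range, roughly (0, 53333332) versus (53333333, 100000000).
```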
Optionally, the second determining submodule is further specifically configured to:
determining the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends;
and determining the ordering position of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
Optionally, the number corresponding to the target task added on the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing strategy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in a scheduling end cluster, and the target task is any task required to be executed by a task scheduling system;
The load balancing strategy comprises the following steps: carrying out hash operation on an initial number according to the total number of scheduling ends in a normal working state in a scheduling end cluster to obtain a hash value, determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for a target task according to a task uploading sequence; or,
the load balancing strategy comprises the following steps: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
In the embodiment of the application, the state monitoring device can monitor the working state of each scheduling end in the scheduling end cluster, and when there is a scheduling end in the scheduling end cluster whose task processing capability is changed due to a state change, each scheduling end in the normal working state can re-determine its task scheduling range according to the state change message sent by the state monitoring device, so as to schedule the tasks to be executed by the task scheduling system. According to the method provided by the application, when a certain scheduling end fails or goes offline, the tasks it was responsible for scheduling can continue to be scheduled by the other scheduling ends, that is, the reliability of the scheme is higher. In addition, according to the method provided by the application, the scheduling end cluster can be dynamically expanded or contracted, that is, the scalability of the scheme is higher. Because the scheduling end cluster can be dynamically expanded, each scheduling end does not need to have a very high configuration, so resource waste can be reduced, that is, the resource utilization rate is higher.
It should be noted that: in the task scheduling device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the task scheduling device and the task scheduling method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the task scheduling device and the task scheduling method are shown in the method embodiments, which are not repeated herein.
Fig. 10 is a schematic structural diagram of another task scheduling device 1000 according to an embodiment of the present application, where the task scheduling device 1000 may be implemented as part or all of a computer device, which may be the data server shown in fig. 1, by software, hardware, or a combination of both. In an embodiment of the application, the task scheduling system comprises a data server. The apparatus 1000 is applied to a data server, see fig. 10, the apparatus 1000 comprising:
a receiving module 1001, configured to receive a task submission request sent by a client, where the task submission request carries relevant data of a target task;
The storage module 1002 is configured to store relevant data of a target task, and add a number corresponding to the target task to a hash ring, where the hash ring is distributed with a plurality of numbers, the plurality of numbers correspond to tasks in the data server one by one, and a maximum range of numbers on the hash ring is consistent with a target task range, where the target task range refers to a maximum range of tasks that can be scheduled and executed by a task scheduling system.
Optionally, the storage module 1002 includes:
a first generation sub-module for generating a random number, the random number being within a target task range;
the first determining submodule is used for determining the position corresponding to the random number from the hash ring;
and the first adding submodule is used for adding the random number as a number at the corresponding position of the hash ring.
Optionally, the storage module 1002 includes:
the second generation sub-module is used for generating an initial number, wherein the initial number is a number generated for a target task according to the uploading sequence of each task by the data server;
the hash operation module is used for performing a hash operation on the initial number according to the total number of scheduling ends in the normal working state in the scheduling end cluster to obtain a hash value;
The second determining submodule is used for determining a task scheduling range from one or more task scheduling ranges according to the hash value, wherein the task scheduling range is used as a target task scheduling range, and the one or more task scheduling ranges refer to the task scheduling range of a scheduling end in a normal working state in a scheduling end cluster;
a third generation sub-module, configured to generate a random number according to the target task scheduling range, where the random number is located in the target task scheduling range;
and the second adding submodule is used for determining the position corresponding to the random number from the hash ring and adding the random number as a number at the corresponding position of the hash ring.
Optionally, the storage module 1002 includes:
a third determining submodule, configured to determine a task scheduling range from one or more task scheduling ranges according to load conditions and/or configuration information of a scheduling end in a normal working state in the plurality of scheduling ends, where the one or more task scheduling ranges are task scheduling ranges of the scheduling end in the normal working state in the scheduling end cluster;
a fourth generation sub-module, configured to generate a random number according to a target task scheduling range, where the random number is located in the target task scheduling range;
And the third adding sub-module is used for determining the position corresponding to the random number from the hash ring and adding the random number as a number at the corresponding position of the hash ring.
In the embodiment of the application, the state monitoring device can monitor the working state of each scheduling end in the scheduling end cluster, and when there is a scheduling end in the scheduling end cluster whose task processing capability is changed due to a state change, each scheduling end in the normal working state can re-determine its task scheduling range according to the state change message sent by the state monitoring device, so as to schedule the tasks to be executed by the task scheduling system. According to the method provided by the application, when a certain scheduling end fails or goes offline, the tasks it was responsible for scheduling can continue to be scheduled by the other scheduling ends, that is, the reliability of the scheme is higher. In addition, according to the method provided by the application, the scheduling end cluster can be dynamically expanded or contracted, that is, the scalability of the scheme is higher. Because the scheduling end cluster can be dynamically expanded, each scheduling end does not need to have a very high configuration, so resource waste can be reduced, that is, the resource utilization rate is higher.
It should be noted that: in the task scheduling device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the task scheduling device and the task scheduling method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the task scheduling device and the task scheduling method are shown in the method embodiments, which are not repeated herein.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, data subscriber line (digital subscriber line, DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) means. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital versatile disk (digital versatile disc, DVD)), or a semiconductor medium (e.g., solid State Disk (SSD)), etc. It is noted that the computer readable storage medium mentioned in the present application may be a non-volatile storage medium, in other words, a non-transitory storage medium.
The above embodiments are not intended to limit the present application, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present application should be included in the scope of the present application.
Claims (16)
1. A method of task scheduling, the method comprising:
the target scheduling end receives a state change message sent by the state monitoring equipment, wherein the state change message is used for indicating information of a scheduling end with changed task processing capacity caused by state change, and the target scheduling end refers to any scheduling end in a normal working state in a scheduling end cluster;
the target scheduling end re-determines the task scheduling range of the target scheduling end according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster so as to schedule tasks to be executed by a task scheduling system, the task scheduling ranges re-determined by the scheduling ends in the normal working state in the scheduling end cluster do not overlap and are combined to form a target task range, the target task range is a preset range, and the target task range is used for representing the maximum range of the tasks which can be scheduled and executed by the task scheduling system.
2. The method of claim 1, wherein the scheduling end whose task processing capability is changed due to a state change includes a currently failed or currently offline scheduling end, and the state change message carries an identifier of the currently failed or currently offline scheduling end;
the target scheduling end re-determines the task scheduling range of the target scheduling end according to the state change message and the number of the scheduling ends in a normal working state in the scheduling end cluster, and the target scheduling end comprises the following steps:
the target scheduling end determines, among the stored identifiers of the scheduling ends, the identifiers that are not the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the target scheduling end redetermines the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
3. The method of claim 1, wherein the scheduling end that changes task processing capacity due to a state change includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end;
The target scheduling end re-determines the task scheduling range of the target scheduling end according to the state change message and the number of the scheduling ends in a normal working state in the scheduling end cluster, and the target scheduling end comprises the following steps:
the target scheduling end determines the identifier of the currently online scheduling end and the stored identifiers of the scheduling ends as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and the target scheduling end redetermines the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
4. A method according to claim 2 or 3, wherein the target scheduling end re-determines its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends, comprising:
the target scheduling end determines the ordering positions of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
the target scheduling end determines a number range from the hash ring according to the ordering positions of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by the task scheduling system;
And the target scheduling end determines the task range corresponding to the number range as the task scheduling range of the target scheduling end.
5. A method according to claim 2 or 3, wherein the target scheduling end re-determines its task scheduling range according to the identifiers of the one or more scheduling ends and the total number of the one or more scheduling ends, comprising:
the target scheduling end determines configuration information of the one or more scheduling ends and ordering positions of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
the target scheduling end determines a number range from the hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions of the target scheduling end on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by the task scheduling system;
and the target scheduling end determines the task range corresponding to the number range as the task scheduling range of the target scheduling end.
6. The method of claim 4 or 5, wherein the determining, by the target scheduling end, a ranking position of the target scheduling end in the one or more scheduling ends according to the identifiers of the one or more scheduling ends includes:
The target scheduling end determines the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends;
and the target scheduling end determines the ordering positions of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
7. The method of claim 4 or 5, wherein a number corresponding to a target task added to the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing policy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in the scheduling end cluster, and the target task is any task to be executed by the task scheduling system;
the load balancing strategy comprises the following steps: performing hash operation on an initial number according to the total number of scheduling ends in a normal working state in the scheduling end cluster to obtain a hash value, and determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for the target task according to a task uploading sequence; or,
The load balancing strategy comprises the following steps: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
8. A task scheduling device, characterized in that the device comprises a receiver, a processor and a memory;
the memory is used for storing programs required to be executed by the processor and storing data related in the process of executing the programs;
the receiver is used for receiving a state change message sent by the state monitoring equipment, wherein the state change message is used for indicating information of a scheduling end with changed task processing capacity caused by state change;
the processor is configured to redetermine a task scheduling range of a target scheduling end according to the state change message and the number of scheduling ends in a normal working state in the scheduling end cluster, so as to schedule a task to be executed by the task scheduling system, where the target scheduling end refers to any scheduling end in the normal working state in the scheduling end cluster, and the task scheduling ranges redetermined by each scheduling end in the normal working state in the scheduling end cluster are not overlapped and are combined to form a target task range, where the target task range is a preset range, and the target task range is used to represent a maximum range of tasks that can be scheduled and executed by the task scheduling system.
9. The apparatus of claim 8, wherein the scheduling end whose task processing capability is changed due to a state change includes a currently failed or currently offline scheduling end, and the state change message carries an identifier of the currently failed or currently offline scheduling end;
the processor is specifically configured to:
determining, among the identifiers of the scheduling ends stored in the memory, the identifiers that are not the identifier of the currently failed or currently offline scheduling end as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
and re-determining the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
10. The apparatus of claim 8, wherein the scheduling end that changes task processing capacity due to a state change includes a current online scheduling end, and the state change message carries an identifier of the current online scheduling end;
the processor is specifically configured to:
determining the identifier of the currently online scheduling end and the identifiers of the scheduling ends stored in the memory as the identifiers of one or more scheduling ends in a normal working state in the scheduling end cluster;
And re-determining the task scheduling range of the target scheduling end according to the identification of the one or more scheduling ends and the total number of the one or more scheduling ends.
11. The apparatus of claim 9 or 10, wherein the processor is specifically configured to:
determining the ordering position of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
determining a number range from a hash ring according to the ordering positions of the target scheduling end in the one or more scheduling ends, the total number of the one or more scheduling ends and the initial position on the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by the task scheduling system;
and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
12. The apparatus of claim 9 or 10, wherein the processor is specifically configured to:
determining configuration information of the one or more scheduling ends and ordering positions of the target scheduling end in the one or more scheduling ends according to the identification of the one or more scheduling ends;
Determining a number range from a hash ring according to configuration information of the one or more scheduling ends, ordering positions of the target scheduling end in the one or more scheduling ends and initial positions of the hash ring, wherein a plurality of numbers are distributed on the hash ring and correspond to tasks to be executed by the task scheduling system;
and determining the task range corresponding to the number range as the task scheduling range of the target scheduling end.
13. The apparatus of claim 11 or 12, wherein the processor is specifically configured to:
determining the online sequence of the one or more scheduling ends according to the identification of the one or more scheduling ends;
and determining the ordering positions of the target scheduling end in the one or more scheduling ends according to the online sequence of the one or more scheduling ends.
14. The apparatus of claim 11 or 12, wherein a number corresponding to a target task added to the hash ring is located in a target task scheduling range, the target task scheduling range is a task scheduling range selected from one or more task scheduling ranges according to a load balancing policy, the one or more task scheduling ranges are task scheduling ranges of a scheduling end in a normal working state in the scheduling end cluster, and the target task is any task that needs to be executed by the task scheduling system;
The load balancing strategy comprises the following steps: performing hash operation on an initial number according to the total number of scheduling ends in a normal working state in the scheduling end cluster to obtain a hash value, and determining a task scheduling range from the one or more task scheduling ranges according to the hash value, wherein the initial number is a number generated for the target task according to a task uploading sequence; or,
the load balancing strategy comprises the following steps: and determining a task scheduling range from the one or more task scheduling ranges according to the load condition and/or configuration information of the scheduling end in the normal working state in the scheduling end cluster.
15. The task scheduling system is characterized by comprising a scheduling end cluster, an execution end cluster, a state monitoring device and a data server, wherein the scheduling end cluster comprises a plurality of scheduling ends, the execution end cluster comprises a plurality of execution ends, and the state monitoring device is used for monitoring the working state of each scheduling end in the scheduling end cluster;
the target scheduling end is used for executing the method of any one of claims 1-7, and the target scheduling end refers to any scheduling end in a normal working state in the scheduling end cluster.
16. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the method of any of claims 1-7.