CN112035236A - Task scheduling method, device and storage medium based on multi-factor cooperation - Google Patents

Task scheduling method, device and storage medium based on multi-factor cooperation

Info

Publication number
CN112035236A
Authority
CN
China
Prior art keywords
scheduling
factor
task
scheduled
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010933923.8A
Other languages
Chinese (zh)
Other versions
CN112035236B (en)
Inventor
陈国礼
彭传强
杨静
何国庆
罗赞
陈友
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tydic Information Technology Co ltd
Original Assignee
Shenzhen Tydic Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tydic Information Technology Co ltd filed Critical Shenzhen Tydic Information Technology Co ltd
Priority to CN202010933923.8A priority Critical patent/CN112035236B/en
Publication of CN112035236A publication Critical patent/CN112035236A/en
Application granted granted Critical
Publication of CN112035236B publication Critical patent/CN112035236B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The invention provides a task scheduling method, device, and storage medium based on multi-factor cooperation. The method comprises the following steps: invoking tasks to be scheduled and adding them to a waiting queue; defining a priority for each task to be scheduled according to its importance; calculating a first scheduling factor and a second scheduling factor of each scheduling sub-server and a third scheduling factor of a related platform, wherein the first scheduling factor is related to the resource condition of the scheduling sub-server, the second scheduling factor is related to the load condition of the scheduling sub-server, and the third scheduling factor is related to the resource condition of the related platform; and scheduling each task to be scheduled to a scheduling sub-server according to the priority of each task to be scheduled and the first, second, and third scheduling factors. The invention achieves timely scheduling and avoids the backlog of highly concurrent tasks and the memory crashes caused by high load on the scheduling sub-servers.

Description

Task scheduling method, device and storage medium based on multi-factor cooperation
Technical Field
The invention relates to the technical field of computer applications, and in particular to a task scheduling method, device, and storage medium based on multi-factor cooperation.
Background
Task scheduling is an important component of an operating system; for a real-time operating system, task scheduling directly affects its real-time performance. At present, task scheduling algorithms can be divided into two types. One type is time-driven task scheduling, for which common tools include Quartz, the Linux crontab, the Java Timer, and the like; the other type is event-driven task scheduling. A time-driven task scheduling algorithm mainly performs specific operations at specific times according to a preconfigured schedule. An event-driven task scheduling algorithm mainly arranges the execution order of tasks according to the priority and sequence of events.
For enterprise data processing, a background system runs thousands of tasks every day, such as data acquisition, extraction, processing, and analysis. These tasks must be executed in a timely and stable manner. Existing open-source scheduling products can only schedule tasks on time and cannot take other factors, such as resources and permissions, into account. Meanwhile, due to factors such as machine performance, high concurrency, and large data volumes, phenomena such as untimely task scheduling, backlog of highly concurrent tasks, and memory crashes caused by high load on the scheduling execution server often occur.
Therefore, the task scheduling methods described above need to be improved.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: a task scheduling method, device, equipment, and storage medium based on multi-factor cooperation are provided to solve the problems of untimely task scheduling, backlog of highly concurrent tasks, and memory crashes caused by high load on the scheduling execution server that arise in existing task scheduling methods.
In order to solve the technical problems, the invention adopts the technical scheme that:
a first aspect of an embodiment of the present invention provides a task scheduling method based on multi-factor coordination, where the method is used to schedule each task to be scheduled to each scheduling sub-server, so as to run the task to be scheduled, and the method includes:
the task to be scheduled is called up, and the task to be scheduled is added into a waiting queue;
defining a priority for each task to be scheduled according to the importance degree of each task to be scheduled;
calculating a first scheduling factor and a second scheduling factor of each scheduling sub-server and a third scheduling factor of a related platform, wherein the first scheduling factor is related to the resource condition of the scheduling sub-server, the second scheduling factor is related to the load condition of the scheduling sub-server, and the third scheduling factor is related to the resource condition of the related platform;
and scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
In some embodiments, the calculating the first scheduling factor of each scheduling sub-server specifically includes:
acquiring the residual CPU or residual memory of each scheduling sub-server;
respectively judging whether the residual CPU or the residual memory of each scheduling sub-server is lower than a preset threshold value;
defining the scheduling sub-server with the residual CPU or residual memory lower than a preset threshold value as a saturated scheduling sub-server, wherein the saturated scheduling sub-server does not receive the task to be scheduled;
and multiplying the residual CPU or the residual memory of the scheduling sub-server of which the residual CPU or the residual memory is greater than or equal to a preset threshold value by a first weight coefficient to obtain a first scheduling factor of each scheduling sub-server.
In some embodiments, the calculating the second scheduling factor of each scheduling sub-server specifically includes:
acquiring the number of the running tasks of each scheduling sub-server;
calculating the total predicted completion time of the tasks being run by each scheduling sub-server based on the average time each scheduling sub-server takes to complete one task;
generating a load factor of each scheduling sub-server according to the number of tasks currently run by the scheduling sub-server and the total predicted completion time of those tasks, wherein, when the total predicted completion time of the running tasks is the same, the more tasks the scheduling sub-server is running, the smaller the load factor; and when the number of running tasks is the same, the larger the total predicted completion time of the running tasks, the smaller the load factor;
and multiplying the load factor of each scheduling sub-server by a second weight coefficient to obtain a second scheduling factor of each scheduling sub-server.
In some embodiments, the calculating the third scheduling factor of the relevant platform specifically includes:
acquiring a resource upper limit value and a current resource value of the related platform;
generating a resource factor according to the resource upper limit value and the current resource value of the relevant platform, wherein the closer the current resource value of the relevant platform is to the resource upper limit value, the smaller the resource factor is;
and multiplying the resource factor by a third weight coefficient to obtain a third scheduling factor of the relevant platform.
In some embodiments, the scheduling, according to the priority of each task to be scheduled, and the first scheduling factor, the second scheduling factor, and the third scheduling factor, each task to be scheduled to each scheduling sub-server specifically includes:
according to the priority of each task to be scheduled, obtaining the scheduling sequence of each task to be scheduled;
multiplying the first scheduling factor by a fourth weight coefficient, the second scheduling factor by a fifth weight coefficient, and the third scheduling factor by a sixth weight coefficient, and summing the three products to obtain the scheduling score of each scheduling sub-server;
according to the scheduling score of each scheduling sub-server, obtaining the sequence of the scheduling sub-servers for receiving the tasks to be scheduled;
and scheduling each task to be scheduled to each scheduling sub-server according to the scheduling sequence of each task to be scheduled and the sequence of each scheduling sub-server receiving the task to be scheduled.
In some embodiments, the sum of the fourth, fifth, and sixth weight coefficients is 1.
In some embodiments, after defining a priority for each of the tasks to be scheduled according to the importance degree of each of the tasks to be scheduled, the method further includes:
and carrying out high-frequency heartbeat detection on each scheduling sub-server to obtain a dead scheduling sub-server, wherein the dead scheduling sub-server does not receive the task to be scheduled.
In some embodiments, the invoking of the task to be scheduled specifically includes:
calling the task to be scheduled according to preset time; or,
and calling the task to be scheduled according to a preset event.
A second aspect of the embodiments of the present invention provides a task scheduling device based on multi-factor coordination, including:
the call-up module is used for calling up the task to be scheduled and adding the task to be scheduled into a waiting queue;
the priority definition module is used for defining the priority for each task to be scheduled according to the importance degree of each task to be scheduled;
a calculating module, configured to calculate a first scheduling factor and a second scheduling factor of each scheduling sub-server, and a third scheduling factor of a related platform, where the first scheduling factor is related to a resource condition of the scheduling sub-server, the second scheduling factor is related to a load condition of the scheduling sub-server, and the third scheduling factor is related to a resource condition of the related platform;
and the scheduling module is used for scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
A third aspect of embodiments of the present invention provides a computer-readable storage medium having stored thereon executable instructions that, when executed, perform the method according to the first aspect of embodiments of the present invention.
From the above description, compared with the prior art, the invention has the following beneficial effects:
After a task to be scheduled is invoked, it is not immediately scheduled to a scheduling sub-server for execution; instead, the invoked task is added to a waiting queue, and a priority is defined for each task to be scheduled in the waiting queue. The tasks to be scheduled in the waiting queue are then scheduled to the scheduling sub-servers in turn based on a first scheduling factor related to the resource condition of each scheduling sub-server, a second scheduling factor related to the load condition of each scheduling sub-server, a third scheduling factor related to the resource condition of the related platform, and the priority of each task to be scheduled. Because the importance of each task to be scheduled, the resource and load conditions of each scheduling sub-server, and the resource condition of the related platform are all considered after the tasks are invoked, the whole task scheduling process remains timely, and the backlog of highly concurrent tasks and the memory crashes caused by high load on the scheduling sub-servers are avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are of some, but not all, embodiments of the invention. For a person skilled in the art, other figures can also be obtained from the provided figures without inventive effort.
Fig. 1 is a schematic flowchart of a task scheduling method based on multi-factor coordination according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of step S13 in fig. 1 according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of step S14 in fig. 1 according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of step S15 in fig. 1 according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating a specific process of step S16 in fig. 1 according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of another task scheduling method based on multi-factor coordination according to an embodiment of the present invention;
FIG. 7 is a block diagram of a task scheduling apparatus based on multi-factor coordination according to an embodiment of the present invention;
fig. 8 is a block diagram of a task scheduling apparatus according to an embodiment of the present invention;
fig. 9 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
For purposes of promoting a clear understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements throughout. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
Referring to fig. 1, fig. 1 is a flowchart illustrating a task scheduling method based on multi-factor coordination according to an embodiment of the present invention.
As shown in fig. 1, a task scheduling method based on multi-factor coordination according to a first embodiment of the present invention is used to schedule tasks to be scheduled to scheduling sub-servers respectively, so as to run the tasks to be scheduled, and the method includes:
s11, calling the task to be scheduled, and adding the task to be scheduled into a waiting queue;
s12, defining priority for each task to be scheduled according to the importance degree of each task to be scheduled;
s13, calculating a first scheduling factor of each scheduling sub-server, wherein the first scheduling factor is related to the resource condition of the scheduling sub-server;
s14, calculating a second scheduling factor of each scheduling sub-server, wherein the second scheduling factor is related to the load condition of the scheduling sub-server;
s15, calculating a third scheduling factor of the relevant platform, wherein the third scheduling factor is relevant to the resource condition of the relevant platform;
and S16, scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
It should be understood that, when the task to be scheduled is invoked, the task to be scheduled may be invoked according to a preset time, or may be invoked according to a preset event.
In the task scheduling method based on multi-factor cooperation according to the first embodiment of the present invention, after a task to be scheduled is invoked, it is not immediately scheduled to a scheduling sub-server for execution; instead, the invoked task is added to a waiting queue, and a priority is defined for each task to be scheduled in the waiting queue. The tasks to be scheduled in the waiting queue are then scheduled to the scheduling sub-servers in turn based on a first scheduling factor related to the resource condition of each scheduling sub-server, a second scheduling factor related to the load condition of each scheduling sub-server, a third scheduling factor related to the resource condition of the related platform, and the priority of each task to be scheduled. Because the importance of each task to be scheduled, the resource and load conditions of each scheduling sub-server, and the resource condition of the related platform are all considered after the tasks are invoked, the whole task scheduling process remains timely, and the backlog of highly concurrent tasks and the memory crashes caused by high load on the scheduling sub-servers are avoided.
Example 2
Referring to fig. 2 to 5, fig. 2 is a schematic flowchart of step S13 in fig. 1 according to an embodiment of the present invention, fig. 3 is a schematic flowchart of step S14 in fig. 1 according to an embodiment of the present invention, fig. 4 is a schematic flowchart of step S15 in fig. 1 according to an embodiment of the present invention, and fig. 5 is a schematic flowchart of step S16 in fig. 1 according to an embodiment of the present invention.
Compared with the multi-factor cooperation-based task scheduling method provided by the first embodiment of the present invention, the second embodiment of the present invention describes steps S13-S16 in detail.
As shown in fig. 2, step S13 specifically includes:
s131, acquiring the residual CPU or residual memory of each scheduling sub-server;
s132, respectively judging whether the residual CPU or the residual memory of each scheduling sub-server is lower than a preset threshold value;
s133, defining the scheduling sub-server with the residual CPU or residual memory lower than a preset threshold value as a saturated scheduling sub-server, wherein the saturated scheduling sub-server does not receive the task to be scheduled;
and S134, multiplying the residual CPUs or the residual memories of the scheduling sub-servers with the residual CPUs or the residual memories being larger than or equal to the preset threshold value by a first weight coefficient to obtain a first scheduling factor of each scheduling sub-server.
It should be noted that, in step S133, the scheduling sub-servers defined as saturated scheduling sub-servers do not receive tasks to be scheduled, that is, the saturated scheduling sub-servers are excluded from the target scheduling sub-servers for the tasks to be scheduled. However, when the residual CPU or residual memory of a scheduling sub-server defined as a saturated scheduling sub-server becomes greater than or equal to the preset threshold again, the "saturated" tag is removed and the sub-server is re-listed among the target scheduling sub-servers for the tasks to be scheduled.
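By way of illustration only (this sketch is not part of the original disclosure), steps S131-S134 could be realized as in the following Python sketch. The threshold value, the use of the smaller of the two remaining resources, and the value of the first weight coefficient are assumptions chosen for the example; the patent leaves these details open.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubServer:
    name: str
    remaining_cpu: float      # assumed unit: fraction of idle CPU, 0.0-1.0
    remaining_memory: float   # assumed unit: fraction of free memory, 0.0-1.0
    saturated: bool = False

def first_scheduling_factor(server: SubServer,
                            threshold: float = 0.2,
                            first_weight: float = 1.0) -> Optional[float]:
    """S131-S134: a sub-server whose residual CPU or memory falls below the
    preset threshold is marked saturated and receives no tasks; otherwise the
    residual resource is multiplied by the first weight coefficient."""
    resource = min(server.remaining_cpu, server.remaining_memory)  # assumed: use the scarcer resource
    if resource < threshold:
        server.saturated = True      # S133: excluded from the target scheduling sub-servers
        return None
    server.saturated = False         # the "saturated" tag is removed once resources recover
    return resource * first_weight   # S134: first scheduling factor
```

A sub-server whose factor is returned as None is simply skipped when the scheduling scores are computed later in step S16.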
As shown in fig. 3, step S14 specifically includes:
s141, acquiring the number of running tasks of each scheduling sub-server;
s142, calculating the total predicted completion time of the tasks running by each scheduling sub-server based on the average time of each scheduling sub-server for completing one task;
s143, generating load factors of each scheduling sub-server according to the number of the tasks which are operated by each scheduling sub-server and the total predicted completion time of the tasks which are operated by each scheduling sub-server;
and S144, multiplying the load factor of each scheduling sub-server by a second weight coefficient to obtain a second scheduling factor of each scheduling sub-server.
It should be noted that, when calculating the total predicted completion time of the tasks being run by a scheduling sub-server, the predicted completion time of each currently running task is taken to be the average time the scheduling sub-server has required to complete one task, computed over the tasks it has already completed.
It should be understood that, when the total predicted completion time of the running tasks is the same, the scheduling sub-server running more tasks has a smaller load factor; and when the number of running tasks is the same, the scheduling sub-server whose running tasks have a larger total predicted completion time has a smaller load factor.
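As an illustration of steps S141-S144 (not part of the original text), the reciprocal form below is one simple load-factor definition that satisfies the monotonicity stated above; the patent does not prescribe a specific formula, so both the formula and the value of the second weight coefficient are assumptions.

```python
def second_scheduling_factor(running_tasks: int,
                             avg_task_time: float,
                             second_weight: float = 1.0) -> float:
    """S141-S144: the total predicted completion time is the number of running
    tasks times the average historical time per task (S142); the load factor
    shrinks as either the task count or that total grows (S143); the result is
    multiplied by the second weight coefficient (S144)."""
    total_predicted_time = running_tasks * avg_task_time               # S142
    load_factor = 1.0 / (1.0 + running_tasks + total_predicted_time)   # S143: assumed reciprocal form
    return load_factor * second_weight                                 # S144
```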
As shown in fig. 4, step S15 specifically includes:
s151, acquiring a resource upper limit value and a current resource value of a relevant platform;
s152, generating a resource factor according to the resource upper limit value and the current resource value of the relevant platform;
and S153, multiplying the resource factor by the third weight coefficient to obtain a third scheduling factor of the relevant platform.
Here, examples of the relevant platform include databases, Hadoop clusters, and the like. Examples of the resource upper limit value of the relevant platform include the maximum number of shell scripts allowed on the Linux server, the limit on the number of database connections, the limit on the number of big data platform data engine connections, and so on.
It should be appreciated that the closer the current resource value of the relevant platform is to the resource upper value, the smaller the resource factor.
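The following sketch is one assumed realization of steps S151-S153. The remaining-headroom ratio used for the resource factor and the value of the third weight coefficient are illustrative choices, since the patent only requires that the factor shrink as the current resource value approaches the upper limit.

```python
def third_scheduling_factor(resource_upper_limit: float,
                            current_resource_value: float,
                            third_weight: float = 1.0) -> float:
    """S151-S153: the resource factor shrinks as the relevant platform's
    current resource value (e.g. open database connections) approaches its
    upper limit; the headroom ratio is an assumed concrete form."""
    headroom = max(resource_upper_limit - current_resource_value, 0.0)
    resource_factor = headroom / resource_upper_limit    # S152: closer to the limit -> smaller factor
    return resource_factor * third_weight                # S153: third scheduling factor
```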
As shown in fig. 5, step S16 specifically includes:
s161, obtaining the scheduling sequence of each task to be scheduled according to the priority of each task to be scheduled;
s162, multiplying the first scheduling factor by a fourth weight coefficient, adding the second scheduling factor by a fifth weight coefficient, and adding the third scheduling factor by a sixth weight coefficient to obtain scheduling scores of each scheduling sub-server;
s163, obtaining the sequence of the scheduling sub-servers for receiving the tasks to be scheduled according to the scheduling scores of the scheduling sub-servers;
and S164, scheduling each task to be scheduled to each scheduling sub-server according to the scheduling sequence of each task to be scheduled and the sequence of each scheduling sub-server receiving the tasks to be scheduled.
Here, the sum of the fourth weight coefficient, the fifth weight coefficient, and the sixth weight coefficient is 1.
It should be further noted that, when performing step S164, a task to be scheduled that ranks earlier in the scheduling sequence is preferentially scheduled to a scheduling sub-server that ranks earlier in the sequence for receiving tasks to be scheduled.
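To make steps S161-S164 concrete, the sketch below computes the scheduling score as the weighted sum of the three factors and pairs priority-ordered tasks with score-ordered sub-servers. The example weight values, the dictionary-based task and server representation, and the cycling over the sorted sub-servers when tasks outnumber them are assumptions for illustration, not requirements of the patent.

```python
def scheduling_score(f1: float, f2: float, f3: float,
                     w4: float = 0.5, w5: float = 0.3, w6: float = 0.2) -> float:
    """S162: weighted sum of the three scheduling factors; the example weights
    are assumptions and only need to sum to 1 (see claim 6)."""
    return f1 * w4 + f2 * w5 + f3 * w6

def assign_tasks(tasks: list, servers: list) -> dict:
    """S161/S163/S164: tasks are ordered by descending priority, sub-servers by
    descending scheduling score, and earlier tasks go to earlier sub-servers."""
    ordered_tasks = sorted(tasks, key=lambda t: t["priority"], reverse=True)   # S161
    ordered_servers = sorted(servers, key=lambda s: s["score"], reverse=True)  # S163
    assignment = {}
    for i, task in enumerate(ordered_tasks):                                   # S164
        assignment[task["name"]] = ordered_servers[i % len(ordered_servers)]["name"]
    return assignment
```

The design intent is that a higher score marks a sub-server with more spare resources, a lighter load, and more platform headroom, so it is offered tasks earlier.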
In the task scheduling method based on multi-factor cooperation provided in the second embodiment of the present invention, the order in which the scheduling sub-servers receive tasks to be scheduled is generated based on the residual CPU or residual memory of each scheduling sub-server, the number of tasks each is running, the total predicted completion time of those tasks, and the resource upper limit value and current resource value of the relevant platform. Together with the scheduling sequence of the tasks, this forms a complete route from the tasks to be scheduled to the scheduling sub-servers, so that highly concurrent tasks can be scheduled in time without causing a backlog, and heavily loaded scheduling sub-servers do not receive tasks to be scheduled, which avoids memory crashes.
Example 3
Referring to fig. 6, fig. 6 is a flowchart illustrating another task scheduling method based on multi-factor coordination according to an embodiment of the present invention.
Compared with the multi-factor cooperation-based task scheduling method provided by the first embodiment of the present invention, the third embodiment of the present invention has a different step flow.
As shown in fig. 6, a task scheduling method based on multi-factor coordination according to a third embodiment of the present invention includes:
s21, calling the task to be scheduled, and adding the task to be scheduled into a waiting queue;
s22, defining priority for each task to be scheduled according to the importance degree of each task to be scheduled;
s23, performing high-frequency heartbeat detection on each scheduling sub-server to obtain a dead scheduling sub-server, wherein the dead scheduling sub-server does not receive the task to be scheduled;
s24, calculating a first scheduling factor and a second scheduling factor of each scheduling sub-server and a third scheduling factor of a related platform;
and S25, scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
It should be understood that in step S23, the dead scheduling sub-servers do not receive the tasks to be scheduled, i.e., the dead scheduling sub-servers are excluded from the target scheduling sub-servers of the tasks to be scheduled.
The task scheduling method based on multi-factor cooperation provided by the third embodiment of the present invention performs high-frequency heartbeat detection on each scheduling sub-server, finds out the dead scheduling sub-servers, and excludes the dead scheduling sub-servers from the target scheduling sub-servers of the task to be scheduled, thereby providing convenience for the subsequent scheduling of the task to be scheduled and greatly shortening the scheduling time.
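A minimal sketch of the heartbeat check in step S23 is given below, assuming each scheduling sub-server exposes a TCP port that can be probed; the port number, timeout, and probing mechanism are illustrative assumptions and not specified by the patent.

```python
import socket

def detect_dead_servers(hosts: list, port: int = 8080, timeout: float = 0.5) -> list:
    """S23: high-frequency heartbeat detection.  A sub-server that does not
    answer a connection attempt within the timeout is treated as dead and
    excluded from the target scheduling sub-servers."""
    dead = []
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass                  # connection succeeded: the sub-server is alive
        except OSError:
            dead.append(host)         # no heartbeat: exclude from target sub-servers
    return dead
```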
Example 4
In order to clearly understand the task scheduling method based on multi-factor coordination provided in the embodiment of the present invention, the following describes, by way of example, the steps and flows of the task scheduling method based on multi-factor coordination with reference to the first embodiment to the third embodiment of the present invention:
s101, calling a task to be scheduled, and adding the task to be scheduled into a waiting queue;
s102, defining a priority for each task to be scheduled according to the importance degree of each task to be scheduled;
s103, performing high-frequency heartbeat detection on each scheduling sub-server to obtain a dead scheduling sub-server, wherein the dead scheduling sub-server does not receive the task to be scheduled;
s104, acquiring the residual CPU or the residual memory of each scheduling sub-server;
s105, respectively judging whether the residual CPU or the residual memory of each scheduling sub-server is lower than a preset threshold value;
s106, defining the scheduling sub-server with the residual CPU or residual memory lower than a preset threshold value as a saturated scheduling sub-server, wherein the saturated scheduling sub-server does not receive the task to be scheduled;
s107, multiplying the residual CPUs or the residual memories of the scheduling sub-servers with the residual CPUs or the residual memories being larger than or equal to the preset threshold value by a first weight coefficient to obtain a first scheduling factor of each scheduling sub-server;
s108, acquiring the number of the running tasks of each scheduling sub-server;
s109, calculating the total predicted completion time of the tasks running by each scheduling sub-server based on the average time of each scheduling sub-server for completing one task;
s110, generating load factors of the scheduling sub-servers according to the number of the tasks which are operated by the scheduling sub-servers and the total predicted completion time of the tasks which are operated by the scheduling sub-servers;
s111, multiplying the load factor of each scheduling sub-server by a second weight coefficient to obtain a second scheduling factor of each scheduling sub-server;
s112, acquiring a resource upper limit value and a current resource value of a relevant platform;
s113, generating a resource factor according to the resource upper limit value and the current resource value of the relevant platform;
s114, multiplying the resource factor by a third weight coefficient to obtain a third scheduling factor of the relevant platform;
s115, according to the priority of each task to be scheduled, obtaining the scheduling sequence of each task to be scheduled;
s116, multiplying the first scheduling factor by a fourth weight coefficient, adding the second scheduling factor by a fifth weight coefficient, and adding the third scheduling factor by a sixth weight coefficient to obtain scheduling scores of each scheduling sub-server;
s117, obtaining the sequence of the scheduling sub-servers for receiving the tasks to be scheduled according to the scheduling scores of the scheduling sub-servers;
and S118, scheduling each task to be scheduled to each scheduling sub-server according to the scheduling sequence of each task to be scheduled and the sequence of each scheduling sub-server receiving the tasks to be scheduled.
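By way of illustration only, the following end-to-end sketch strings steps S101-S118 together under simplifying assumptions: the first, second, and third weight coefficients are folded into the fourth, fifth, and sixth; the load-factor and resource-factor formulas are the same illustrative forms used above; and the task, server, and platform data structures are hypothetical. It is a sketch of the flow, not the patent's implementation.

```python
import heapq

def schedule(tasks, servers, platform, weights=(0.5, 0.3, 0.2), threshold=0.2):
    """Illustrative walk through S101-S118.  Each task is a dict with 'name'
    and 'priority'; each server is a dict with 'name', 'alive', 'remaining_cpu',
    'remaining_memory', 'running_tasks', and 'avg_task_time'; the platform is a
    dict with 'limit' and 'used'.  All shapes and values are assumptions."""
    w4, w5, w6 = weights

    # S101-S102: waiting queue ordered by priority (higher priority pops first).
    waiting = [(-t["priority"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(waiting)

    # S112-S114: third scheduling factor of the relevant platform.
    f3 = max(platform["limit"] - platform["used"], 0) / platform["limit"]

    scored = []
    for s in servers:
        if not s["alive"]:                                     # S103: dead sub-servers excluded
            continue
        resource = min(s["remaining_cpu"], s["remaining_memory"])
        if resource < threshold:                               # S104-S106: saturated sub-servers excluded
            continue
        f1 = resource                                          # S107 (first weight folded into w4)
        total_time = s["running_tasks"] * s["avg_task_time"]   # S108-S109
        f2 = 1.0 / (1.0 + s["running_tasks"] + total_time)     # S110-S111: assumed load-factor form
        score = f1 * w4 + f2 * w5 + f3 * w6                    # S116
        scored.append((score, s["name"]))
    scored.sort(reverse=True)                                  # S117: order for receiving tasks

    # S115 and S118: higher-priority tasks go to higher-scoring sub-servers first.
    assignment = {}
    i = 0
    while waiting and scored:
        _, _, task = heapq.heappop(waiting)
        assignment[task["name"]] = scored[i % len(scored)][1]
        i += 1
    return assignment
```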
Example 5
Referring to fig. 7, fig. 7 is a block diagram of a task scheduling apparatus based on multi-factor coordination according to an embodiment of the present invention.
As shown in fig. 7, corresponding to the method for scheduling a task based on multi-factor coordination according to the first embodiment of the present invention, a task scheduling apparatus 100 based on multi-factor coordination according to a fifth embodiment of the present invention includes:
the evoking module 101 is used for evoking the task to be scheduled and adding the task to be scheduled into the waiting queue;
a priority definition module 102, configured to define a priority for each task to be scheduled according to the importance degree of each task to be scheduled;
a calculating module 103, configured to calculate a first scheduling factor and a second scheduling factor of each scheduling sub-server, and a third scheduling factor of a related platform, where the first scheduling factor is related to a resource condition of the scheduling sub-server, the second scheduling factor is related to a load condition of the scheduling sub-server, and the third scheduling factor is related to a resource condition of the related platform;
and the scheduling module 104 is configured to schedule each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor, and the third scheduling factor.
Example 6
Referring to fig. 8, fig. 8 is a block diagram of a task scheduling apparatus according to an embodiment of the present invention.
As shown in fig. 8, a task scheduling apparatus 200 according to a sixth embodiment of the present invention includes:
a storage device 201 and one or more processors 202, the storage device 201 being configured to store one or more programs, wherein the one or more programs, when executed by the one or more processors 202, cause the one or more processors 202 to perform the method according to any one of the first to fourth embodiments of the present invention.
It should be noted that the task scheduling apparatus 200 provided in the present embodiment further includes a bus 203 for the communication connection between the storage device 201 and the one or more processors 202.
Example 7
Referring to fig. 9, fig. 9 is a block diagram of a computer-readable storage medium according to an embodiment of the present invention.
As shown in fig. 9, a seventh embodiment of the invention provides a computer-readable storage medium 300 having stored thereon an executable instruction 301, where the executable instruction 301 is executed to perform the method according to any one of the first to fourth embodiments of the invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital subscriber line) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk), among others.
It should be noted that, in this specification, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments reference may be made to one another. As the method embodiments are similar to the product embodiments, their description is brief, and for relevant points reference may be made to the corresponding description of the product embodiments.
It is further noted that, in the present disclosure, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined in this disclosure may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A task scheduling method based on multi-factor cooperation is characterized in that the method is used for respectively scheduling each task to be scheduled to each scheduling sub-server so as to run the task to be scheduled, and the method comprises the following steps:
the task to be scheduled is called up, and the task to be scheduled is added into a waiting queue;
defining a priority for each task to be scheduled according to the importance degree of each task to be scheduled;
calculating a first scheduling factor and a second scheduling factor of each scheduling sub-server and a third scheduling factor of a related platform, wherein the first scheduling factor is related to the resource condition of the scheduling sub-server, the second scheduling factor is related to the load condition of the scheduling sub-server, and the third scheduling factor is related to the resource condition of the related platform;
and scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
2. The multi-factor collaboration-based task scheduling method according to claim 1, wherein the calculating the first scheduling factor of each of the scheduling sub-servers specifically includes:
acquiring the residual CPU or residual memory of each scheduling sub-server;
respectively judging whether the residual CPU or the residual memory of each scheduling sub-server is lower than a preset threshold value;
defining the scheduling sub-server with the residual CPU or residual memory lower than a preset threshold value as a saturated scheduling sub-server, wherein the saturated scheduling sub-server does not receive the task to be scheduled;
and multiplying the residual CPU or the residual memory of the scheduling sub-server of which the residual CPU or the residual memory is greater than or equal to a preset threshold value by a first weight coefficient to obtain a first scheduling factor of each scheduling sub-server.
3. The multi-factor cooperation-based task scheduling method according to claim 2, wherein the calculating the second scheduling factor of each of the scheduling sub-servers specifically includes:
acquiring the number of the running tasks of each scheduling sub-server;
calculating the total predicted completion time of the tasks being run by each scheduling sub-server based on the average time each scheduling sub-server takes to complete one task;
generating a load factor of each scheduling sub-server according to the number of tasks currently run by the scheduling sub-server and the total predicted completion time of those tasks, wherein, when the total predicted completion time of the running tasks is the same, the more tasks the scheduling sub-server is running, the smaller the load factor; and when the number of running tasks is the same, the larger the total predicted completion time of the running tasks, the smaller the load factor;
and multiplying the load factor of each scheduling sub-server by a second weight coefficient to obtain a second scheduling factor of each scheduling sub-server.
4. The multi-factor collaboration-based task scheduling method according to claim 3, wherein the calculating of the third scheduling factor of the relevant platform specifically includes:
acquiring a resource upper limit value and a current resource value of the related platform;
generating a resource factor according to the resource upper limit value and the current resource value of the relevant platform, wherein the closer the current resource value of the relevant platform is to the resource upper limit value, the smaller the resource factor is;
and multiplying the resource factor by a third weight coefficient to obtain a third scheduling factor of the relevant platform.
5. The method according to claim 4, wherein the scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor, and the third scheduling factor specifically comprises:
according to the priority of each task to be scheduled, obtaining the scheduling sequence of each task to be scheduled;
multiplying the first scheduling factor by a fourth weight coefficient, the second scheduling factor by a fifth weight coefficient, and the third scheduling factor by a sixth weight coefficient, and summing the three products to obtain the scheduling score of each scheduling sub-server;
according to the scheduling score of each scheduling sub-server, obtaining the sequence of the scheduling sub-servers for receiving the tasks to be scheduled;
and scheduling each task to be scheduled to each scheduling sub-server according to the scheduling sequence of each task to be scheduled and the sequence of each scheduling sub-server receiving the task to be scheduled.
6. The multi-factor cooperation-based task scheduling method of claim 5, wherein the sum of the fourth, fifth, and sixth weight coefficients is 1.
7. The method for task scheduling based on multi-factor coordination according to claim 1, wherein after defining the priority for each task to be scheduled according to the importance of each task to be scheduled, the method further comprises:
and carrying out high-frequency heartbeat detection on each scheduling sub-server to obtain a dead scheduling sub-server, wherein the dead scheduling sub-server does not receive the task to be scheduled.
8. The multi-factor collaboration-based task scheduling method according to claim 1, wherein the evoking of the task to be scheduled specifically comprises:
calling the task to be scheduled according to preset time; or,
and calling the task to be scheduled according to a preset event.
9. A task scheduling device based on multi-factor collaboration is characterized by comprising:
the call-up module is used for calling up the task to be scheduled and adding the task to be scheduled into a waiting queue;
the priority definition module is used for defining the priority for each task to be scheduled according to the importance degree of each task to be scheduled;
a calculating module, configured to calculate a first scheduling factor and a second scheduling factor of each scheduling sub-server, and a third scheduling factor of a related platform, where the first scheduling factor is related to a resource condition of the scheduling sub-server, the second scheduling factor is related to a load condition of the scheduling sub-server, and the third scheduling factor is related to a resource condition of the related platform;
and the scheduling module is used for scheduling each task to be scheduled to each scheduling sub-server according to the priority of each task to be scheduled, the first scheduling factor, the second scheduling factor and the third scheduling factor.
10. A computer-readable storage medium having stored thereon executable instructions that, when executed, perform the method of any one of claims 1-8.
CN202010933923.8A 2020-09-08 2020-09-08 Task scheduling method, device and storage medium based on multi-factor cooperation Active CN112035236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010933923.8A CN112035236B (en) 2020-09-08 2020-09-08 Task scheduling method, device and storage medium based on multi-factor cooperation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010933923.8A CN112035236B (en) 2020-09-08 2020-09-08 Task scheduling method, device and storage medium based on multi-factor cooperation

Publications (2)

Publication Number Publication Date
CN112035236A true CN112035236A (en) 2020-12-04
CN112035236B CN112035236B (en) 2023-02-14

Family

ID=73585208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010933923.8A Active CN112035236B (en) 2020-09-08 2020-09-08 Task scheduling method, device and storage medium based on multi-factor cooperation

Country Status (1)

Country Link
CN (1) CN112035236B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567086A (en) * 2010-12-30 2012-07-11 中国移动通信集团公司 Task scheduling method, equipment and system
CN102708011A (en) * 2012-05-11 2012-10-03 南京邮电大学 Multistage load estimating method facing task scheduling of cloud computing platform
US20160098292A1 (en) * 2014-10-03 2016-04-07 Microsoft Corporation Job scheduling using expected server performance information
US9430290B1 (en) * 2015-03-31 2016-08-30 International Business Machines Corporation Determining storage tiers for placement of data sets during execution of tasks in a workflow
US20180321979A1 (en) * 2017-05-04 2018-11-08 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing a scheduler with preemptive termination of existing workloads to free resources for high priority items
CN111506398A (en) * 2020-03-03 2020-08-07 平安科技(深圳)有限公司 Task scheduling method and device, storage medium and electronic device
CN111597044A (en) * 2020-05-14 2020-08-28 Oppo广东移动通信有限公司 Task scheduling method and device, storage medium and electronic equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112463339A (en) * 2020-12-11 2021-03-09 北京浪潮数据技术有限公司 Multitask scheduling method, system, equipment and storage medium
CN113515358A (en) * 2021-04-30 2021-10-19 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113515358B (en) * 2021-04-30 2024-04-12 北京奇艺世纪科技有限公司 Task scheduling method and device, electronic equipment and storage medium
CN113986512A (en) * 2021-11-08 2022-01-28 中国人民财产保险股份有限公司 Task scheduling method and electronic equipment

Also Published As

Publication number Publication date
CN112035236B (en) 2023-02-14

Similar Documents

Publication Publication Date Title
CN112035236B (en) Task scheduling method, device and storage medium based on multi-factor cooperation
CN108256115B (en) Spark Sql-oriented HDFS small file real-time combination implementation method
CN109542935B (en) Execution method of rule engine, storage medium and server
CN110569252B (en) Data processing system and method
US11086657B2 (en) Method and system for scheduling transactions in a data system
US9251227B2 (en) Intelligently provisioning cloud information services
US8458136B2 (en) Scheduling highly parallel jobs having global interdependencies
CN111258726B (en) Task scheduling method and device
CN111190892A (en) Method and device for processing abnormal data in data backfilling
CN114816709A (en) Task scheduling method, device, server and readable storage medium
CN114489942B (en) Queue task scheduling method and system for application cluster
CN107247784B (en) Distributed transaction control method and transaction manager
US10599472B2 (en) Information processing apparatus, stage-out processing method and recording medium recording job management program
Ouyang et al. An approach for modeling and ranking node-level stragglers in cloud datacenters
JP2006221516A (en) Communication server setting value determining device, its program and its method
CN115220131B (en) Meteorological data quality inspection method and system
CN112181443A (en) Automatic service deployment method and device and electronic equipment
CN112486638A (en) Method, apparatus, device and storage medium for executing processing task
CN111290868B (en) Task processing method, device and system and flow engine
CN115373829A (en) Method, device and system for scheduling CPU (Central processing Unit) resources
Chen et al. Development of a cyber-physical-style continuous yield improvement system for manufacturing industry
CN110825493A (en) Virtual machine tuning method and device
CN118113443B (en) Task scheduling method, system, program product, device and medium
CN114945909B (en) Optimized query scheduling for resource utilization optimization
CN117742928B (en) Algorithm component execution scheduling method for federal learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant