CN116719631B - Distributed task scheduling method and device, storage medium and electronic equipment - Google Patents

Distributed task scheduling method and device, storage medium and electronic equipment

Info

Publication number
CN116719631B
CN116719631B (application number CN202311010107.XA)
Authority
CN
China
Prior art keywords
objective function
determining
preset
allocation strategy
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311010107.XA
Other languages
Chinese (zh)
Other versions
CN116719631A (en)
Inventor
徐涛
陈红阳
何军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202311010107.XA priority Critical patent/CN116719631B/en
Publication of CN116719631A publication Critical patent/CN116719631A/en
Application granted granted Critical
Publication of CN116719631B publication Critical patent/CN116719631B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources to service a request
    • G06F 9/5011 - Allocation of resources, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5027 - Allocation of resources, the resource being a machine, e.g. CPUs, Servers, Terminals
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

In the distributed task scheduling method provided by this specification, candidate allocation strategies are determined based on the tasks to be scheduled and the state parameters of the computing nodes of the computing platform, and a first objective function is determined for each candidate allocation strategy according to preset allocation constraint conditions; a second objective function is determined for each candidate allocation strategy according to whether the candidate allocation strategy meets the allocation constraint conditions; and the fitness of each candidate allocation strategy is determined from the first objective function and the second objective function, the target allocation strategy is determined through an evolutionary algorithm, and the tasks are scheduled to the computing nodes corresponding to the target allocation strategy for computation.

Description

Distributed task scheduling method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a distributed task scheduling method, a device, a storage medium, and an electronic apparatus.
Background
In recent years, with the development of Internet technology, implementing task scheduling with a distributed framework has become the norm. In a graph computing platform, different tasks are distributed to individual processors so that the processors can process their assigned tasks simultaneously, which improves task processing efficiency.
However, when processing large-scale graph computation tasks, conventional distributed task scheduling methods often focus too heavily on local optimization, resulting in load imbalance. For example, suppose the server receives 5 tasks and distributes them to computing nodes for processing. If using computing node A shortens the processing time of a task, the server may assign 4 tasks to computing node A and randomly assign the remaining task to another computing node, while the computing nodes that receive no task remain idle.
Although computing node A processes tasks quickly, it has to process several tasks, and each later task must wait until the earlier one is completed. As a result, some tasks are not handled in time while some computing nodes are not utilized effectively, resulting in load imbalance.
For this purpose, the present specification provides a distributed task scheduling method.
Disclosure of Invention
The application provides a distributed task scheduling method, a distributed task scheduling device, a storage medium and electronic equipment, so as to partially solve the problems existing in the prior art.
The application provides a distributed task scheduling method, which comprises the following steps:
Determining each task to be scheduled and the state parameters of each computing node of the distributed computing platform;
determining each candidate allocation strategy according to each task and each computing node, wherein each candidate allocation strategy comprises allocation relation between the task and the computing node;
determining the number of tasks distributed by each computing node in the candidate distribution strategy, determining the number of computing nodes with the number of the distributed tasks being larger than a preset first numerical value as a first constraint value, and determining a first sub-item of a first objective function according to the first constraint value; predicting a prediction state of each computing node when executing the task allocated by the candidate allocation strategy according to the candidate allocation strategy and the state parameter, and determining a second sub-item of the first objective function according to the prediction state and a preset allocation constraint condition; adding the first sub-item and the second sub-item of the first objective function to obtain a first objective function;
when the candidate allocation strategy does not meet a preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy and a preset optimization target; when the candidate allocation strategy meets the preset allocation constraint condition, determining the second objective function according to the candidate allocation strategy, a bias parameter of the second objective function and the preset optimization target, wherein the bias parameter is used for increasing the distance between the second objective function and a desired target;
According to a preset evolutionary algorithm, determining the fitness of each candidate allocation strategy according to the first objective function and the second objective function of each candidate allocation strategy so as to adjust each candidate allocation strategy, and, when the evolution ending condition is met, determining a target allocation strategy according to the fitness of each adjusted candidate allocation strategy;
and dispatching each task to each computing node according to the target allocation strategy for computing.
Optionally, determining each task to be scheduled specifically includes:
receiving a graph calculation request;
determining a directed acyclic graph corresponding to graph calculation to be executed according to the graph calculation request;
and determining each task to be scheduled and the dependency relationship among the tasks according to the directed acyclic graph.
Optionally, determining each candidate allocation policy according to each task and each computing node specifically includes:
and generating, according to the dependency relationships among the tasks, a plurality of candidate allocation strategies that respectively allocate the tasks to different computing nodes.
Optionally, the state parameters include at least one of: a communication resource of a computing node, an energy consumption of the computing node, a storage resource of the computing node, and a computing resource of the computing node, wherein the communication resource comprises a frequency, a bandwidth, a real-time state, and a real-time hardware parameter; the energy consumption includes power and heat loss; the storage resources comprise memory resources and external memory resources; and the computing resources include CPU resources and network resources.
Optionally, determining the second sub-term of the first objective function according to the prediction state and a preset allocation constraint condition specifically includes:
aiming at each state parameter, determining an optimization objective function of the state parameter according to a predicted state corresponding to the state parameter and a preset constraint threshold corresponding to the state parameter;
and determining a second sub-item of the first objective function according to the optimized objective function of each state parameter.
Optionally, before determining the second objective function, the method further comprises:
judging whether a candidate allocation strategy meeting a preset allocation constraint condition exists in each candidate allocation strategy;
if yes, determining the total consumption of any candidate allocation strategy meeting the preset allocation constraint condition, and taking the total consumption as a bias parameter of a second objective function;
if not, determining the preset second value as the bias parameter of the second objective function.
Optionally, determining the second objective function specifically includes:
for each candidate allocation strategy, when the candidate allocation strategy meets the preset allocation constraint condition, adding the bias parameter on the basis of the second objective function, which is set to minimize at least the total consumption for completing each task, so as to determine the second objective function;
and when the candidate allocation strategy does not meet the preset allocation constraint condition, determining the second objective function with the total consumption for completing each task minimized.
Optionally, determining the fitness of each candidate allocation policy to adjust each candidate allocation policy, until the evolution end condition is met, and determining the target allocation policy according to the fitness of each candidate allocation policy, which specifically includes:
and determining the fitness of each candidate allocation strategy and storing the fitness.
And screening a designated number of candidate allocation strategies from the candidate allocation strategies according to the adaptability of the candidate allocation strategies, and taking the designated number of candidate allocation strategies as parent populations.
And generating a reorganization allocation strategy corresponding to each candidate allocation strategy in the parent population according to the parent population and the mutation parameters.
And determining each candidate allocation strategy of the next population according to each candidate allocation strategy and each recombination allocation strategy in the parent population, and redetermining the fitness of each candidate allocation strategy of the next population until the evolution ending condition is met, and determining a target allocation strategy according to the stored fitness of each candidate allocation strategy.
Optionally, generating a reorganization allocation policy corresponding to each candidate allocation policy in the parent population according to the parent population and the mutation parameter, which specifically includes:
And determining a preset transpose matrix, an initialized normal distribution vector and variation intensity of the next generation population, and determining variation parameters.
And aiming at each candidate allocation strategy in the parent population, carrying out mutation on the average allocation strategy of the parent population according to the mutation parameters, and determining a mutation result.
And judging whether the variation result exceeds a preset parameter range.
If yes, the variation result is adjusted according to the parameter range so that it does not exceed the parameter range, and the adjusted result is used as the reorganization allocation strategy.
If not, the mutation result is used as the reorganization allocation strategy.
Wherein the average allocation policy is determined according to each candidate allocation policy of the parent population.
Optionally, before generating the reorganization allocation policy corresponding to each candidate allocation policy in the parent population according to the parent population and the mutation parameter, the method further includes:
and determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number.
And judging whether the difference value reaches a preset round number difference or not.
If yes, determining the first objective function gradient and the second objective function gradient, and adjusting the variation parameters according to the determined gradients.
Optionally, adjusting the mutation parameter according to the determined gradient specifically includes:
and when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, the variation parameters are adjusted according to preset parameters.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than or equal to the second preset value, the variation parameters are adjusted according to preset parameters.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, the variation parameter is adjusted, and the variation parameter after adjustment is consistent with the variation parameter before adjustment.
And when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, adjusting the variation parameter, wherein the variation parameter after adjustment is consistent with the variation parameter before adjustment.
Optionally, the mutation parameter at least includes: normal distribution vector and variation intensity;
according to preset parameters, the variation parameters are adjusted, which specifically comprises:
and determining a pseudo-inverse matrix according to the identity matrix.
And determining the mutation vector according to the ratio of the change of the new allocation strategy relative to the previous-generation allocation strategy to the mutation intensity corresponding to the previous-generation allocation strategy.
And determining a normal distribution vector according to the product of the pseudo-inverse matrix and the variation vector.
And adjusting the variation intensity according to the preset maximum variation intensity.
Optionally, determining the fitness of each candidate allocation policy according to the first objective function and the second objective function of each candidate allocation policy specifically includes:
and determining the recombination weight corresponding to each candidate allocation strategy in the parent population according to the position of each allocation strategy in the population.
And taking the initial weight as the weight of the second objective function, and determining the weight of the first objective function according to the weight of the second objective function, wherein the weight of the second objective function and the weight of the first objective function are normalized.
And determining the fitness of each candidate allocation strategy according to the first objective function and the second objective function of each candidate allocation strategy and the corresponding weight.
Optionally, the method further comprises:
and determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number.
And judging whether the difference value reaches a preset round number difference or not.
If yes, the weight of the second objective function is adjusted.
And when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of a second preset value, and the first objective function of the objective allocation strategy is larger than the second preset value, determining the weight of the second objective function as any weight value.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is larger than the second preset value, determining the weight of the second objective function as any weight value.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is equal to the second preset value, the weight of the second objective function is adjusted, and the weight of the second objective function after adjustment is consistent with the weight of the second objective function before adjustment.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, determining the weight of the second objective function according to the maximum weight, the initial weight, the last adjusted second objective function and the self-adaptive strength.
And when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, determining the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted second objective function and the self-adaptive strength.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the method of distributed task scheduling described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above method of distributed task scheduling when executing the program.
At least one of the above technical solutions adopted in the present specification can achieve the following beneficial effects:
In the distributed task scheduling method provided by this specification, a scheduling center determines the tasks to be scheduled and the state parameters of the computing nodes of the computing platform, determines candidate allocation strategies, and determines a first objective function according to each candidate allocation strategy and preset allocation constraint conditions; determines a second objective function according to whether the candidate allocation strategy meets the allocation constraint conditions; and, according to an evolutionary algorithm, determines the fitness of each candidate allocation strategy from the first objective function and the second objective function, adjusts the candidate allocation strategies until the evolution ending condition is met, determines the target allocation strategy according to the fitness of the adjusted candidate allocation strategies, and schedules the tasks to the computing nodes corresponding to the target allocation strategy for computation.
In this method, candidate allocation strategies that do not meet the allocation constraint conditions are not directly discarded; instead, the strong constraints are converted into objective functions, which reduces the limits the constraint conditions place on the optimization search. The candidate allocation strategies can therefore be optimized over a larger range, the accuracy of determining the target allocation strategy is improved, the utilization rate of the computing nodes in the distributed task scheduling scenario is higher, and the load is more balanced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a schematic flow chart of a distributed task scheduling provided in the present specification;
FIG. 2 is a table of references for adjusting fitness of an evolution strategy provided in the present specification;
FIG. 3 is a table of weight references for adjusting an evolutionary strategy provided herein;
FIG. 4 is a schematic flow chart of task scheduling of the computing platform provided in the present specification;
FIG. 5 is a schematic diagram of an apparatus for distributed task scheduling provided in the present specification;
FIG. 6 is a schematic diagram of the device corresponding to FIG. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present application based on the embodiments herein.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flow chart of a method for distributed task scheduling in the present specification, specifically including the following steps:
s101: and determining state parameters of each task to be scheduled and each computing node of the distributed computing platform.
Distributed task scheduling is a multi-task scheduling method in which multiple tasks are distributed to a number of different computing nodes. The specific flow is shown in FIG. 4: a graph computing platform receives task A, task B, task C and task D, where dependency relationship a exists between task A and task B, dependency relationship b between task A and task C, dependency relationship c between task B and task C, and dependency relationship d between task C and task D; whether dependency relationships a, b, c and d are of the same type is not limited in this scheme. Allocation strategies 1-9 are generated according to the received tasks, among which a target allocation strategy exists. The tasks are then distributed to computing node A, computing node B, computing node C, computing node D and computing node E according to the target allocation strategy, and the computing nodes can communicate with one another. The distribution is usually performed by the dispatching center of the distributed system, which may be a server or any other device capable of realizing the same or a similar function. For convenience of description, the following description takes a server as an example.
In one or more embodiments of the present disclosure, the server first determines each task that currently needs to be scheduled, i.e., the tasks to be scheduled. In general, a task is divided into a plurality of subtasks after being received, and the number of tasks is not limited; a plurality of tasks may also be received and distributed directly, and distributed task scheduling can be performed as long as each received task can exist independently. To determine how to allocate the tasks, the target allocation policy generally aims to balance efficiency and load across the computing nodes as much as possible so that each computing node reaches its highest working efficiency; the server should therefore have the capability of collecting the state of each computing node in order to implement the method.
Specifically, since the state parameters required by the server for distributed task scheduling differ from scenario to scenario, the state parameters to be collected also differ; in general, however, the state parameters should include at least one of: the communication resources of a computing node, the energy consumption of the computing node, the storage resources of the computing node, and the computing resources of the computing node. The communication resources include at least parameters such as frequency, bandwidth, real-time state, and real-time hardware parameters; the energy consumption includes at least parameters such as power and heat loss; the storage resources include at least memory resources and external storage resources; and the computing resources include at least CPU resources and network resources.
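As a concrete illustration of the state parameters described above, the following minimal Python sketch groups the collected per-node parameters into a single structure; the field names (frequency_mhz, power_w, free_memory_mb, etc.) are illustrative assumptions rather than names used in this specification.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    """Illustrative container for the state parameters collected from one computing node."""
    node_id: str
    # communication resources: frequency, bandwidth, real-time state, hardware parameters
    frequency_mhz: float = 0.0
    bandwidth_mbps: float = 0.0
    link_up: bool = True
    # energy consumption: power and heat loss
    power_w: float = 0.0
    heat_loss_w: float = 0.0
    # storage resources: memory and external storage
    free_memory_mb: float = 0.0
    free_disk_mb: float = 0.0
    # computing resources: CPU and network
    cpu_cores_free: int = 0
    net_util: float = 0.0

# Example: the scheduling server keeps one NodeState per computing node.
cluster_state = {
    "node-A": NodeState("node-A", frequency_mhz=2400, bandwidth_mbps=1000,
                        power_w=150, free_memory_mb=32768, cpu_cores_free=16),
    "node-B": NodeState("node-B", frequency_mhz=2000, bandwidth_mbps=1000,
                        power_w=120, free_memory_mb=16384, cpu_cores_free=8),
}
```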
Because the graph calculation requests received by the server differ, the directed acyclic graph corresponding to the graph calculation to be executed can be determined according to the corresponding graph calculation request, and the tasks to be scheduled are then determined according to the directed acyclic graph corresponding to each graph calculation. In a graph calculation request, the tasks to be scheduled are often associated with each other, and in particular the order in which tasks are executed often has a great influence. For example, task A and task B require similar computing power, but task B can only be performed based on the data obtained by task A. Under conventional allocation methods such as Shortest Job First (SJF) or Least Load First (LLF), the two tasks may be assigned simultaneously to two different computing nodes with similar computing power, or one task may be assigned while the other is being executed. In either of these allocation methods, the computing node executing task B remains waiting until the computing node executing task A completes its task, which greatly reduces task execution efficiency. Therefore, in the embodiment of the present specification, the dependency relationships between the tasks may be determined before task allocation is performed.
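To make the role of the directed acyclic graph concrete, the sketch below shows one plausible way for the server to derive the tasks to be scheduled and their dependency relationships from a DAG given as an edge list; the input format and the function name are assumptions for illustration.

```python
from collections import defaultdict

def tasks_and_dependencies(dag_edges):
    """dag_edges: iterable of (upstream_task, downstream_task) pairs of a DAG.

    Returns the set of tasks to be scheduled and, for each task, the set of
    tasks whose output it depends on (its predecessors in the DAG)."""
    tasks = set()
    depends_on = defaultdict(set)
    for upstream, downstream in dag_edges:
        tasks.update((upstream, downstream))
        depends_on[downstream].add(upstream)
    return tasks, dict(depends_on)

# Example mirroring FIG. 4: B and C depend on A, C also depends on B, D depends on C.
tasks, deps = tasks_and_dependencies([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])
print(tasks)  # {'A', 'B', 'C', 'D'}
print(deps)   # {'B': {'A'}, 'C': {'A', 'B'}, 'D': {'C'}}
```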
Further, in the embodiment of the present disclosure, the server may further determine information such as computational complexity, input data size, output data size, and estimated execution time of each task, which is used to evaluate the execution effect of each task on the distributed computing platform.
S103: and determining each candidate allocation strategy according to each task and each computing node, wherein the candidate allocation strategy comprises allocation relation between the task and the computing node.
In one or more embodiments of the present disclosure, the server first takes the tasks to be scheduled output by S101, and generates each candidate allocation policy according to the computing nodes of the distributed computing platform and the dependency relationships of the tasks to be scheduled. In each candidate allocation strategy, there is no limit on the number of tasks allocated to each computing node, no ordering requirement, and no need to satisfy the dependency relationships.
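A minimal sketch of how an initial set of candidate allocation strategies might be generated, encoding each strategy as a mapping from task to computing node; the random, unconstrained generation (no quantity limit, no ordering, dependencies not enforced) mirrors the description above, while the encoding itself is an assumption.

```python
import random

def random_candidate(tasks, nodes, rng):
    """One candidate allocation strategy: each task is assigned to some node."""
    return {task: rng.choice(nodes) for task in tasks}

def initial_population(tasks, nodes, size, seed=0):
    rng = random.Random(seed)
    return [random_candidate(tasks, nodes, rng) for _ in range(size)]

tasks = ["A", "B", "C", "D"]
nodes = ["node-1", "node-2", "node-3"]
population = initial_population(tasks, nodes, size=9)
print(population[0])  # e.g. {'A': 'node-2', 'B': 'node-1', 'C': 'node-3', 'D': 'node-1'}
```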
S105: determining the number of tasks distributed by each computing node in the candidate distribution strategy, determining the number of computing nodes with the number of the distributed tasks being larger than a preset first numerical value as a first constraint value, and determining a first sub-item of a first objective function according to the first constraint value; predicting a prediction state of each computing node when executing the task allocated by the candidate allocation strategy according to the candidate allocation strategy and the state parameter, and determining a second sub-item of the first objective function according to the prediction state and a preset allocation constraint condition; and adding the first sub-item and the second sub-item of the first objective function to obtain the first objective function.
In one or more embodiments of the present description, after determining each candidate allocation policy, the server may determine a corresponding first objective function for each candidate allocation policy. Wherein the first objective function is determined according to a preset allocation constraint.
Specifically, in this specification, the server may store allocation constraints of task scheduling in advance. The allocation constraint condition is used for constraining what tasks are allocated to what computing nodes, one or more of the allocation constraint conditions can be set, and the allocation constraint condition is not limited in the specification and can be set according to requirements. Generally, allocation constraints may be set based on human experience.
In one or more embodiments herein, the allocation constraint may limit the number of tasks that a computing node may be allocated. Assume that the allocation constraint is that only one task may be assigned to each computing node. The server may determine the number of tasks allocated to each computing node in the candidate allocation policy, and determine the number of computing nodes whose number of allocated tasks is greater than a preset first value as the first constraint value. The allocation constraint can then be expressed using the following formula:
$$\sum_{i} x_{ij} \le 1,\ \forall j,\qquad x_{ij}\in\{0,1\}$$
where $x_{ij}$ indicates whether the $i$-th task is assigned to the $j$-th computing node: $x_{ij}=1$ means the $i$-th task is assigned to the $j$-th computing node, and $x_{ij}=0$ means it is not. $\sum_i x_{ij}$ is the number of tasks allocated to the $j$-th computing node; if it is greater than 1, the candidate allocation policy does not satisfy this allocation constraint.
In one or more embodiments herein, the allocation constraint may limit the total time spent by each computing node to complete an allocated task to no more than a preset total time. The allocation constraint can then be expressed using the following formula:
$$\sum_{i} x_{ij}\,t_{ij} \le T^{\max},\ \forall j$$
where $t_{ij}$ represents the execution time of task $i$ on processor $j$, $\sum_i x_{ij}\,t_{ij}$ represents the total time taken by computing node $j$ to complete its assigned tasks, and $T^{\max}$ is the preset total duration. The preset total duration can be set as needed and is not limited by this specification.
In one or more embodiments herein, the allocation constraint may limit the amount of traffic required by each computing node to complete the allocated task to no more than a preset total amount of traffic. The allocation constraint can then be expressed using the following formula:
$$\sum_{i}\sum_{j}\sum_{k}\sum_{l} x_{ik}\,x_{jl}\,d_{ijkl} \le D^{\max}$$
where $d_{ijkl}$ represents the traffic required between computing node $k$ and computing node $l$ to transmit the data of task $i$ and task $j$ when task $i$ is allocated to computing node $k$ and task $j$ is allocated to computing node $l$, and $D^{\max}$ is the preset total traffic.
In one or more embodiments of the present description, the allocation constraint may limit the total amount of energy consumption required by each computing node to complete the allocated task to no more than a preset total energy consumption. The allocation constraint can then be expressed using the following formula:
$$\sum_{i} x_{ij}\,e_{ij} \le E^{\max}_{j},\ \forall j$$
where $e_{ij}$ represents the energy consumption of computing node $j$ executing task $i$, and $E^{\max}_{j}$ represents the maximum energy consumption allowed by computing node $j$.
In one or more embodiments of the present disclosure, the allocation constraint may limit the amount of memory required by each computing node to complete the allocated task to no more than a predetermined total memory usage. The allocation constraint can then be expressed using the following formula:
$$\sum_{i} x_{ij}\,m_{ij} \le m^{\max}_{j},\ \forall j$$
where $m_{ij}$ represents the memory usage of computing node $j$ executing task $i$, and $m^{\max}_{j}$ represents the maximum memory usage allowed by computing node $j$.
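The following sketch illustrates how the allocation constraints described above could be checked for one candidate allocation strategy. The per-task cost tables (execution time, energy, memory, traffic) and the scalar thresholds are illustrative assumptions; the specification does not prescribe a data layout, and the energy and memory limits may in practice be per node.

```python
def satisfies_constraints(alloc, exec_time, energy, memory, traffic,
                          t_max, e_max, m_max, d_max, tasks_per_node_max=1):
    """alloc: dict task -> node.  exec_time/energy/memory: dict (task, node) -> value.
    traffic: dict (task_i, task_j) -> data volume exchanged between the two tasks."""
    nodes = set(alloc.values())
    # (1) number of tasks allocated to each node
    for node in nodes:
        if sum(1 for n in alloc.values() if n == node) > tasks_per_node_max:
            return False
    # (2) total execution time, (4) energy and (5) memory per node
    for node in nodes:
        assigned = [t for t, n in alloc.items() if n == node]
        if sum(exec_time[(t, node)] for t in assigned) > t_max:
            return False
        if sum(energy[(t, node)] for t in assigned) > e_max:
            return False
        if sum(memory[(t, node)] for t in assigned) > m_max:
            return False
    # (3) total inter-node traffic caused by dependent tasks placed on different nodes
    total_traffic = sum(v for (ti, tj), v in traffic.items() if alloc[ti] != alloc[tj])
    if total_traffic > d_max:
        return False
    return True
```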
Of course, in the present specification, the above allocation constraints may be used alone or in combination, and the specific allocation constraints may be set as needed. For convenience of description, the following takes the case where all of the above allocation constraints are used together as an example, that is, the allocation constraints comprise the five constraints given above.
However, these allocation constraints are strong constraints: if the allocation policy were determined directly by them, a candidate allocation policy would no longer be considered as soon as any one constraint is not satisfied, so the search space in the subsequent optimization would be small. In practice, a candidate allocation policy that does not satisfy a certain constraint may nevertheless be better overall. For example, suppose there are allocation policy A and allocation policy B, where A completes the computing task half an hour earlier than B but does not satisfy the preset energy consumption constraint, that is, it causes a computing node to overheat for a short time, whereas B completely satisfies all constraints; it may still be preferable to select policy A and abandon policy B.
Therefore, in the present specification, the server may determine the first objective function according to the candidate allocation policy and the preset allocation constraint conditions.
In particular, among the above allocation constraints, the first one is a limitation on the number of tasks allocated to a computing node, while the other allocation constraints are limitations on the consumption of computing node resources. The server may determine the first sub-term of the first objective function based on the first allocation constraint: the server determines the number of tasks allocated to each computing node in the candidate allocation policy, determines the number of computing nodes whose number of allocated tasks is greater than the preset first value as the first constraint value, and determines the first sub-term of the first objective function from this first constraint value.
Further, based on the other allocation constraints, the server predicts, according to the candidate allocation policy and the state parameters, the predicted state of each computing node when executing the tasks allocated to it by the candidate allocation policy, and determines the second sub-term of the first objective function according to the predicted state and the preset allocation constraint conditions.
The first sub-term and the second sub-term of the first objective function are then added to obtain the first objective function.
Through the first objective function, the strong constraints of the allocation constraint conditions are converted into optimization objectives, so that the server can search in a larger search space when determining the final allocation strategy.
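A sketch of one way to assemble the first objective function of S105: the first sub-term counts computing nodes whose number of assigned tasks exceeds the preset first value, and the second sub-term accumulates, for each resource-type constraint, how far the predicted consumption exceeds its threshold. The exact penalty form is not spelled out in the available text, so the max(0, ·) violation measure used here is an assumption.

```python
def first_objective(alloc, exec_time, energy, memory,
                    t_max, e_max, m_max, first_value=1):
    nodes = set(alloc.values())
    # first sub-term: nodes holding more tasks than the preset first value
    overloaded = sum(
        1 for node in nodes
        if sum(1 for n in alloc.values() if n == node) > first_value
    )
    # second sub-term: predicted resource consumption beyond each constraint threshold
    violation = 0.0
    for node in nodes:
        assigned = [t for t, n in alloc.items() if n == node]
        violation += max(0.0, sum(exec_time[(t, node)] for t in assigned) - t_max)
        violation += max(0.0, sum(energy[(t, node)] for t in assigned) - e_max)
        violation += max(0.0, sum(memory[(t, node)] for t in assigned) - m_max)
    # first objective = first sub-term + second sub-term
    return overloaded + violation
```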
S107: when the candidate allocation strategy does not meet the preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy and a preset optimization target; and when the candidate allocation strategy meets a preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy, the bias parameter of the second objective function and the preset optimization target, wherein the bias parameter is used for increasing the distance between the second objective function and the expected target.
In one or more embodiments of the present disclosure, step S105 above converts the allocation constraint conditions that were originally used to filter allocation policies into the first objective function; however, there may be a plurality of candidate allocation policies that perform equally well on the first objective function, so a second objective function, which minimizes the total consumption for completing each task, also needs to be set. The second objective function can then be expressed using the following formula:
$$\min_{X}\ F(X)$$
where $T_j$ represents the total time spent by computing node $j$ to complete the tasks assigned to it under allocation policy $X$, and $F(X)$ represents the total time taken by the allocation policy to complete all tasks, determined from the per-node times $T_j$.
Further, since the server does not filter the candidate allocation policies determined in step S103 based on the allocation constraint conditions, the determined candidate allocation policies may all satisfy the allocation constraints, may only partially satisfy them, or may not satisfy them at all. For each candidate allocation strategy, when the candidate allocation strategy meets the preset allocation constraint conditions, the bias parameter is added on the basis of the second objective function, which is set to minimize the total consumption for completing each task, to determine the second objective function; when the candidate allocation strategy does not meet the preset allocation constraint conditions, the second objective function is determined with the total consumption for completing each task minimized. The finally determined target allocation strategy is thus optimized with the goals of minimizing the total consumption for completing each task and satisfying the allocation constraint conditions as far as possible.
Specifically, based on the second objective function set to the minimum total consumption time for completing each task in this step, the server may directly determine the second objective function for the case where it is determined in S103 that none of the candidate allocation policies satisfies the allocation constraint condition. The second objective function can then be expressed using the following formula:
$$f_2(X) = F(X)$$
where $F(X)$ denotes the total consumption taken by allocation policy $X$ to complete the tasks; this form is used when none of the candidate allocation policies satisfies the allocation constraints.
For the case where it is determined in S103 that only part of the candidate allocation policies satisfy the allocation constraints, the server may add the bias parameter of the second objective function for the candidate allocation policies that satisfy the allocation constraints, and determine the second objective function as
$$f_2(X) = F(X) + C \ \text{ if } X \text{ satisfies the allocation constraints}, \qquad f_2(X) = F(X) \ \text{ otherwise},$$
where $C$ denotes the bias parameter added to the second objective function.
for the case where it is determined in S103 that the candidate allocation policies all satisfy the allocation constraint, the server may add the bias parameter of the second objective function to the portion where the candidate allocation policies satisfy the allocation constraint, and determine the second objective function. The second objective function can then be expressed using the following formula:
i.e. +.>
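The sketch below puts the cases of the second objective function together. Here the total consumption F of a strategy is taken to be the largest per-node completion time, and the bias parameter C follows the rule given in the optional steps above (the total consumption of some constraint-satisfying strategy, or a preset second value when none exists); treating F as the per-node maximum is an assumption.

```python
def completion_time(alloc, exec_time):
    """Total consumption F: assumed here to be the largest per-node completion time."""
    per_node = {}
    for task, node in alloc.items():
        per_node[node] = per_node.get(node, 0.0) + exec_time[(task, node)]
    return max(per_node.values())

def second_objective(alloc, exec_time, feasible, bias_c):
    """feasible: whether this candidate satisfies the allocation constraints.
    bias_c: bias parameter of the second objective function."""
    f = completion_time(alloc, exec_time)
    # constraint-satisfying strategies get the bias parameter added, per S107
    return f + bias_c if feasible else f

def bias_parameter(population, exec_time, feasible_flags, second_value):
    """Total consumption of any constraint-satisfying strategy, else a preset second value."""
    for alloc, ok in zip(population, feasible_flags):
        if ok:
            return completion_time(alloc, exec_time)
    return second_value
```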
Since the candidate allocation strategies are determined in step S103 rather than on the basis of the first objective function in step S105, this allocation method uses a matrix adaptation evolution strategy (Matrix Adaptation Evolution Strategy) based on equivalent and auxiliary objectives. This strategy can take several allocation objectives into account at the same time, so that the target allocation strategy can achieve global load balancing. As the name suggests, the strategy requires calculating, for the first objective function, an equivalent matrix and an auxiliary matrix; because there are infinitely many possibilities, the equivalent matrix is difficult to quantify, so this method focuses in the subsequent steps on establishing the auxiliary matrix, which is referred to as the second objective function in the following description for ease of reading.
S109: according to a preset evolution algorithm, determining the adaptability of each candidate allocation strategy according to a first objective function and a second objective function of each candidate allocation strategy, so as to adjust each candidate allocation strategy, and determining the objective allocation strategy according to the adaptability of each candidate allocation strategy after adjustment until the evolution ending condition is met.
In one or more embodiments of the present disclosure, the server may employ a preset evolutionary algorithm to find an optimal allocation policy by determining the fitness of each candidate allocation policy and adjusting each candidate allocation policy, searching in a search space. And when the iterative process of the evolutionary algorithm meets the evolution ending condition, determining a target allocation strategy according to the determined adaptability of each candidate allocation strategy.
Specifically, for each round of the server's iterative process, first, each initial candidate allocation strategy of the round may be taken as an individual to be evolved in the population, and the fitness of each candidate allocation strategy is determined and stored according to the first objective function and the second objective function of that strategy, i.e., according to the first objective function $f_1$ obtained in S105 and the second objective function $f_2$ obtained in S107.
Secondly, a designated number of candidate allocation strategies are screened from the candidate allocation strategies according to their fitness and taken as the parent population; in the following formulas, $x_i$ denotes the $i$-th candidate allocation policy in the parent population.
Then, a reorganization allocation strategy corresponding to each candidate allocation strategy in the parent population is generated according to the parent population and the preset variation parameters. For each candidate allocation policy of the parent population, the server may determine the variation value range and the variation direction according to the preset variation parameters; that is, the server may determine the random value range of the variation parameters according to the preset initial variation intensity and the preset maximum variation intensity, and determine the random variation direction according to the preset effective population number, the preset learning rate of the search path, and the preset evolution path, thereby generating the reorganization allocation policy of the candidate allocation policy.
And finally, determining each candidate allocation strategy of the next population according to each candidate allocation strategy and each recombination allocation strategy in the parent population.
That is, for each candidate allocation policy of the parent population, the candidate allocation policy is determined, and after adjustment is performed based on the reorganization allocation policy of the candidate allocation policy, the obtained candidate allocation policy is used as the candidate allocation policy of the next population.
And repeating the iterative process until the evolution ending condition is met, and determining a target allocation strategy according to the adaptability of each stored candidate allocation strategy.
It should be noted that, in each iteration of one or more embodiments of the present disclosure, since the second objective function is set to minimize the total consumption for completing each task, and the first objective function of a candidate allocation policy is 0 when the allocation constraint conditions are satisfied, the optimization objective is to obtain candidate allocation policies with low fitness. Therefore, the server can select the allocation strategy with the lowest fitness as the target allocation strategy according to the stored fitness of each candidate allocation strategy. Alternatively, the server may sort the candidate allocation policies in ascending order of fitness, determine the candidate allocation policies whose fitness differs from that of the first-ranked candidate allocation policy by no more than a preset third value, and among them determine the candidate allocation policy whose first objective function value is 0 as the target allocation policy. That is, if the candidate allocation policy with the minimum fitness does not meet the allocation constraint conditions, the server may instead determine as the target allocation policy a candidate allocation policy that meets the allocation constraint conditions and whose fitness differs only slightly (i.e., by no more than the preset third value) from that of the candidate allocation policy with the minimum fitness.
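To tie S109 together, the following sketch shows the outer iteration: evaluate fitness, keep the best-scoring strategies as the parent population, generate offspring, and, once the end condition is met, pick the target allocation strategy, preferring a constraint-satisfying strategy whose fitness is within the preset third value of the best one. The fitness_fn, mutate_fn, feasible_fn and third_value arguments are placeholders for the pieces defined elsewhere in this description.

```python
def evolve(population, fitness_fn, mutate_fn, feasible_fn,
           parent_count, generations, third_value):
    history = []  # stored (fitness, strategy) pairs across generations
    for _ in range(generations):
        scored = sorted(((fitness_fn(p), p) for p in population), key=lambda fp: fp[0])
        history.extend(scored)
        parents = [p for _, p in scored[:parent_count]]          # screened parent population
        children = [mutate_fn(parents) for _ in range(len(population) - parent_count)]
        population = parents + children
    # target selection: lowest fitness, unless a nearby feasible strategy exists
    history.sort(key=lambda fp: fp[0])
    best_fit, best = history[0]
    for fit, cand in history:
        if fit - best_fit > third_value:
            break
        if feasible_fn(cand):
            return cand
    return best
```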
S111: and scheduling each task to each computing node according to the target allocation strategy for computing.
In one or more embodiments of the present disclosure, the server obtains the target allocation policy output by S109, and then allocates the tasks to be scheduled determined in S101 to the computing nodes determined in S101 according to the target allocation policy, thereby completing the distributed task scheduling.
Based on the distributed task scheduling method provided in FIG. 1, the server determines the tasks to be scheduled and the state parameters of the computing nodes of the computing platform, determines candidate allocation strategies, and determines a first objective function according to each candidate allocation strategy and preset allocation constraint conditions; determines a second objective function according to whether the candidate allocation strategy meets the allocation constraint conditions; and, according to the evolutionary algorithm, determines the fitness of each candidate allocation strategy from the first objective function and the second objective function, adjusts the candidate allocation strategies until the evolution ending condition is met, determines the target allocation strategy according to the fitness of the adjusted candidate allocation strategies, and schedules the tasks to the computing nodes corresponding to the target allocation strategy for computation.
In this method, candidate allocation strategies that do not meet the allocation constraint conditions are not directly discarded; instead, the strong constraints are converted into objective functions, reducing the limits the constraint conditions place on the optimization search, so the candidate allocation strategies can be optimized over a larger range and the accuracy of determining the target allocation strategy is improved; under the target allocation strategy, the utilization rate of the computing nodes in the distributed task scheduling scenario is higher and the load is more balanced.
In addition, when the candidate allocation policies of the next population are determined in step S109, after the parent population of the current iteration process is determined, the recombined parent individual is determined according to the candidate allocation policies in the parent population, so that part of the candidate allocation strategies of the next-generation population can be generated from the recombined parent individual.
Specifically, first, the server may determine the recombined parent individual of this round of the iterative process as
$$\bar{x}^{(t)} = \sum_{i=1}^{\mu} w_i^{(t)}\, x_i^{(t)},$$
where $t$ denotes the evolution generation (i.e., the number of iteration rounds), $\mu$ denotes the number of parent individuals, $x_i^{(t)}$ denotes the $i$-th individual in the parent population, and $w_i^{(t)}$ denotes the recombination weight of the $i$-th individual in the parent population at generation $t$; the recombination weight of the $i$-th individual is determined according to the position (rank) of that individual in the population.
Secondly, it is determined how to generate part of the candidate allocation strategies of the next-generation population by mutation from the recombined parent individual. The server may determine the $i$-th candidate allocation strategy of the next-generation population according to
$$x_i^{(t+1)} = \bar{x}^{(t)} + \sigma^{(t)}\, d_i^{(t)}, \qquad d_i^{(t)} = M^{(t)}\, z_i^{(t)},$$
where $d_i^{(t)}$ denotes the variation vector, $\sigma^{(t)}$ denotes the variation intensity at generation $t$, $M^{(t)}$ is the transpose matrix (initialized as the identity matrix $I$), and $z_i^{(t)}$ denotes a standard normal distribution vector.
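A numerical sketch of the recombination and mutation step just described, under the assumption that an allocation strategy is encoded as a real-valued vector (decoded back to a task-to-node assignment elsewhere): the recombined parent is the weighted mean of the parent individuals, and each offspring is sampled as mean + sigma · M · z with z drawn from a standard normal distribution.

```python
import numpy as np

def sample_offspring(parents, weights, sigma, M, n_offspring, rng):
    """parents: (mu, dim) array of parent individuals (real-valued encodings).
    weights: (mu,) recombination weights, M: (dim, dim) transformation matrix."""
    mean = weights @ parents                       # recombined parent individual
    dim = parents.shape[1]
    z = rng.standard_normal((n_offspring, dim))    # standard normal vectors z_i
    d = z @ M.T                                    # variation vectors d_i = M z_i
    return mean + sigma * d, z, d

rng = np.random.default_rng(0)
parents = rng.random((4, 6))                       # 4 parents, 6-dimensional encoding
weights = np.full(4, 0.25)                         # equal recombination weights (illustrative)
offspring, z, d = sample_offspring(parents, weights, sigma=0.3,
                                   M=np.eye(6), n_offspring=8, rng=rng)
print(offspring.shape)  # (8, 6)
```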
Further, when generating the candidate allocation policies of the next-generation population, a generated candidate allocation policy may differ greatly from the recombined parent individual. A candidate allocation policy with such a large difference is an abnormal case, which makes the "step length" of the optimization process too long and reduces efficiency. Therefore, for candidate allocation policies that exceed the parameter range, the server may also make adjustments to re-determine the candidate allocation policies of the next-generation population.
Specifically, the server may re-determine the candidate allocation strategies of the next-generation population by means of a range-keeping function, which adjusts a candidate allocation policy that exceeds the parameter range into one that does not exceed the parameter range; this is a relatively common type of function, so its description is not repeated further.
Further, if a candidate allocation policy has been adjusted in this way, the variation parameters may have been too large, and the server can restore the variation vector accordingly. The variation vector is restored as
$$d_i^{(t)} = \frac{x_i^{(t+1)} - \bar{x}^{(t)}}{\sigma^{(t)}},$$
and the normal distribution vector is adjusted as
$$z_i^{(t)} = \big(M^{(t)}\big)^{+}\, d_i^{(t)},$$
where $\big(M^{(t)}\big)^{+}$ denotes the pseudo-inverse of the transpose matrix $M^{(t)}$.
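Continuing the same vector encoding, this sketch shows the clipping of an out-of-range offspring and the corresponding restoration of the variation vector and normal distribution vector via the pseudo-inverse of M, so that the strategy's internal statistics stay consistent with the repaired individual; the clip-to-bounds repair is an assumed concrete form of the range-keeping adjustment.

```python
import numpy as np

def repair_and_restore(x, mean, sigma, M, lower, upper):
    """x: offspring vector possibly outside [lower, upper] (per-dimension bounds)."""
    x_rep = np.clip(x, lower, upper)   # keep the offspring inside the parameter range
    d = (x_rep - mean) / sigma         # restored variation vector
    z = np.linalg.pinv(M) @ d          # restored normal vector via the pseudo-inverse of M
    return x_rep, d, z

mean = np.zeros(3)
M = np.eye(3)
x = np.array([1.8, -0.2, 0.4])         # 1.8 exceeds the assumed upper bound of 1.0
x_rep, d, z = repair_and_restore(x, mean, sigma=0.5, M=M, lower=-1.0, upper=1.0)
print(x_rep)  # [ 1.  -0.2  0.4]
```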
In addition, in one or more embodiments of the present disclosure, in order to reduce the chance of getting trapped in a locally optimal solution and to improve optimization efficiency, the server may adjust the base parameters used for determining the variation parameters according to a preset period of iteration rounds.
Specifically, first, the server may screen a specified number of candidate allocation policies from among the candidate allocation policies according to the fitness of each candidate allocation policy, as a parent population.
Secondly, the server determines the difference between the iteration round in which the variation parameters were last adjusted and the current iteration round. Whether the variation parameters need to be adjusted has to be determined over several iterations; for example, the fitness of the candidate allocation policies obtained from only one round of iterative optimization cannot show whether the search is trapped at a local value. In order to adjust the variation parameters periodically, the server judges in each iteration process whether the adjustment period has been reached, so the difference in iteration rounds needs to be calculated.
And then, the server judges whether the difference value reaches the preset round number difference, if so, the server can determine that the adjustment period is reached, can determine the first objective function gradient and the second objective function gradient, and adjusts the variation parameters according to the determined gradients. If not, the variation parameters are not adjusted.
When the server adjusts the variation parameters, different adjustment modes can be adopted according to different situations.
Specifically, the different adjustment manners may refer to FIG. 2, a reference table for adjusting the fitness of the evolution strategy provided in this specification, in which how to adjust the variation parameters, namely the transpose matrix $M^{(t)}$ and the variation intensity $\sigma^{(t)}$, is determined by the change of the second objective function gradient, the change of the first objective function gradient, and the first objective function of the target allocation strategy.
When the second objective function gradient is larger than a first preset value, indicating that the search is moving faster towards the second objective, the first objective function gradient is smaller than or equal to the absolute value of a second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, indicating that the search is moving faster towards the first objective, the server adjusts the variation parameters according to preset parameters. The second objective function gradient and the first objective function gradient may each be determined by the server according to the corresponding gradient formulas.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, indicating that the search has stagnated in the first objective direction, the first objective function gradient is smaller than the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, indicating that the current population is not yet close to the feasible solution region, the server adjusts the variation parameters according to preset parameters.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, indicating that the search has stagnated in the first objective direction, the first objective function gradient is smaller than the absolute value of the second preset value, and the first objective function of the target allocation strategy is equal to the second preset value, the server adjusts the variation parameters according to preset parameters; this situation indicates that a feasible solution has been found.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, indicating that the search has stagnated in the first objective direction, and the first objective function gradient is smaller than the negative second preset value, indicating that the search is accelerating away from the feasible solution region, the server keeps the variation parameters unchanged, that is, the variation parameters after adjustment are consistent with the variation parameters before adjustment.
When the second objective function gradient is smaller than the negative first preset value, indicating that the search is moving more slowly towards the second objective, and the first objective function gradient is smaller than the absolute value of the second preset value, indicating that the search is accelerating away from the feasible solution region, the server likewise keeps the variation parameters unchanged, that is, the variation parameters after adjustment are consistent with the variation parameters before adjustment.
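The five cases above can be read as a single dispatch over the two gradients and the first objective value of the target allocation strategy. The sketch below is only a schematic rendering of the table in fig. 2 under assumed threshold semantics; the concrete update performed in the "adjust according to preset parameters" branches is left to a caller-supplied function because the patent's formulas are not reproduced here.

```python
def adjust_mutation_params(g2, g1, f1_target, eps1, eps2, params, preset_adjust):
    # Schematic of the decision table in fig. 2.
    # g2, g1        : second / first objective function gradients
    # f1_target     : first objective function value of the target allocation strategy
    # eps1, eps2    : first / second preset values (assumed positive)
    # preset_adjust : caller-supplied adjustment according to preset parameters (assumed)
    if g2 > eps1 and g1 <= abs(eps2) and f1_target > eps2:
        return preset_adjust(params)   # searching quickly towards both objectives
    if g2 <= abs(eps1) and g1 < abs(eps2) and f1_target > eps2:
        return preset_adjust(params)   # stagnating and not yet near the feasible region
    if g2 <= abs(eps1) and g1 < abs(eps2) and f1_target == eps2:
        return preset_adjust(params)   # a feasible solution has been found
    if g2 <= abs(eps1) and g1 < -eps2:
        return params                  # keep the mutation parameters unchanged
    if g2 < -eps1 and g1 < abs(eps2):
        return params                  # keep the mutation parameters unchanged
    return params
```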
In addition, in one or more embodiments of the present disclosure, since the importance of the first objective function and the second objective function may be different, when the server determines the fitness in step S109, the server may further determine the fitness of each candidate allocation policy according to the first objective function and the second objective function of each candidate allocation policy, and the corresponding weights.
The fitness can be determined more accurately by adding weights: the fitness is computed as the weighted sum of the first objective function and the second objective function of the candidate allocation strategy, where the weight of the first objective function and the weight of the second objective function are normalized; if a weight becomes negative, it is recalculated.
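A minimal sketch of such a weighted fitness, assuming the two weights are normalized to sum to one and that a weight which would become negative is simply recomputed by clipping (both assumptions made for illustration):

```python
def fitness(f1, f2, w2):
    # Weighted-sum fitness of one candidate allocation strategy.
    w2 = max(0.0, min(1.0, w2))   # a weight that would become negative is recomputed by clipping (assumed)
    w1 = 1.0 - w2                 # weights of the two objective functions are normalized
    return w1 * f1 + w2 * f2
```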
Further, the server may also adjust the weights on a periodic basis, similar to the reason for adjusting the variation parameters on a periodic basis.
In particular, different adjustment modes can be employed in different situations, with reference to fig. 3. FIG. 3 is a reference table provided in the present specification for adjusting the weights of the evolution strategy; it shows how the change in the second objective function gradient, the change in the first objective function gradient, and the change in the first objective function of the target allocation strategy determine how the weight of the second objective function is to be adjusted.
First, the server determines the difference between the iteration round number of the last adjustment mutation parameter and the current iteration round number. And then the server judges whether the difference value reaches the preset round number difference, if so, the server adjusts the weight of the second objective function, otherwise, the server determines not to adjust the weight.
When the gradient of the second objective function is larger than a first preset value, the search is faster towards the second objective function, the gradient of the first objective function is smaller than or equal to the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, the search is faster towards the first objective function, and the server determines the weight of the second objective function as any weight value.
When the gradient of the second objective function is smaller than or equal to the absolute value of the first preset value, which indicates that the search is in stagnation in the first objective direction, the gradient of the first objective function is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is larger than the second preset value, which indicates that the current population is not close to the feasible solution area, the server determines the weight of the second objective function as any weight value.
When the gradient of the second objective function is smaller than or equal to the absolute value of the first preset value, the searching is stopped in the first objective direction, the gradient of the first objective function is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is equal to the second preset value, the server adjusts the weight of the second objective function, and the weight of the second objective function after adjustment is consistent with the weight of the second objective function before adjustment.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, indicating that the search has stagnated in the first objective direction, and the first objective function gradient is smaller than the negative second preset value, indicating that the search is accelerating away from the feasible solution region, the server determines the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted weight of the second objective function and the self-adaptive strength; this helps the population accelerate its search towards the feasible solution region.
When the second objective function gradient is smaller than the negative first preset value, indicating that the search is moving more slowly towards the second objective, and the first objective function gradient is smaller than the absolute value of the second preset value, indicating that the search is accelerating away from the feasible solution region, the server likewise determines the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted weight of the second objective function and the self-adaptive strength, which again helps the population accelerate its search towards the feasible solution region.
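The weight-adjustment rules of fig. 3 can likewise be sketched as a dispatch. The random draw from an assumed preset weight range and the adaptive update in the last branch are illustrative readings of the cases above; the exact adaptive formula combining the maximum weight value, initial weight, previous weight and self-adaptive strength is an assumption, not the patent's formula.

```python
import random

def adjust_second_weight(g2, g1, f1_target, eps1, eps2,
                         w_prev, w_init, w_max, adapt_strength):
    # Schematic of the decision table in fig. 3 for the weight of the second objective function.
    if g2 > eps1 and g1 <= abs(eps2) and f1_target > eps2:
        return random.uniform(0.0, w_max)   # any value in an assumed preset weight range
    if g2 <= abs(eps1) and g1 <= abs(eps2) and f1_target > eps2:
        return random.uniform(0.0, w_max)   # population not yet near the feasible region
    if g2 <= abs(eps1) and g1 <= abs(eps2) and f1_target == eps2:
        return w_prev                       # keep the previous weight
    # Remaining cases: help the population accelerate towards the feasible region.
    # Assumed adaptive update combining maximum weight, initial weight, previous weight and strength.
    return min(w_max, w_init + adapt_strength * (w_prev - w_init) + adapt_strength)
```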
It should be noted that, in the present specification, the server adjusts the weights based on the round count in order to obtain a better target allocation policy. Adjusting the weights directly, without reference to the round count, is still feasible; it merely changes the target allocation policy that is ultimately output.
It should additionally be noted that, in this step, when the server screens a specified number of candidate allocation policies out of the candidate allocation policies to serve as the parent population, the formula uses a value of 1/3. This value is not necessarily fixed; it is a preset value, and a person skilled in the related art using the method of the present specification may adjust it according to the specific situation so as to obtain a better target allocation policy. The preset value is not specifically limited in the present specification.
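As a small sketch of this screening step, with the 1/3 share exposed as a configurable preset value:

```python
def select_parents(candidates, fitness_fn, parent_share=1/3):
    # Keep the best `parent_share` of candidates (by fitness) as the parent population;
    # the 1/3 share is a preset value and can be tuned for the scenario at hand.
    k = max(1, int(len(candidates) * parent_share))
    return sorted(candidates, key=fitness_fn, reverse=True)[:k]
```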
S111: and scheduling each task to each computing node according to the target allocation strategy for computing.
In one or more embodiments of the present specification, after the target allocation policy is output in step S109, the server allocates the tasks to be scheduled determined in step S101 to the computing nodes determined in step S101 according to the target allocation policy, thereby completing the distributed task scheduling.
Based on the same concept as the distributed task scheduling method provided in one or more embodiments of the present specification above, the present specification further provides a corresponding distributed task scheduling device, as shown in fig. 5. Fig. 5 is a schematic structural diagram of a distributed task scheduling device provided in the present specification, including:
the parameter module 200 is used for determining state parameters of each task to be scheduled and each computing node of the distributed computing platform;
a candidate allocation module 204, configured to determine, according to the tasks and the computing nodes, candidate allocation policies, where the candidate allocation policies include allocation relationships between the tasks and the computing nodes;
the first constraint module 206 determines the number of tasks allocated to each computing node in the candidate allocation policy, determines the number of computing nodes whose number of allocated tasks is greater than a preset first value as a first constraint value, and determines a first sub-item of a first objective function according to the first constraint value; predicts, according to the candidate allocation strategy and the state parameters, a predicted state of each computing node when executing the tasks allocated to it by the candidate allocation strategy, and determines a second sub-item of the first objective function according to the predicted state and a preset allocation constraint condition; and adds the first sub-item and the second sub-item of the first objective function to obtain the first objective function;
A second constraint module 208, configured to determine a second objective function according to the candidate allocation policy and a preset optimization objective when the candidate allocation policy does not satisfy a preset allocation constraint condition; when the candidate allocation strategy meets a preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy, a bias parameter of the second objective function and the preset optimization target, wherein the bias parameter is used for increasing the distance between the second objective function and a desired target;
the optimization module 209 determines the fitness of each candidate allocation policy according to a first objective function and a second objective function of each candidate allocation policy according to a preset evolution algorithm, so as to adjust each candidate allocation policy, and determines a target allocation policy according to the fitness of each adjusted candidate allocation policy until the evolution end condition is met;
and the scheduling module 210 schedules each task to each computing node for computing according to the target allocation strategy.
The present specification also provides a computer readable storage medium storing a computer program operable to perform the distributed task scheduling method provided in fig. 1 described above.
Optionally, the parameter module 200 determines each task to be scheduled, and specifically includes:
receiving a graph calculation request;
determining a directed acyclic graph corresponding to graph calculation to be executed according to the graph calculation request;
and determining each task to be scheduled and the dependency relationship among the tasks according to the directed acyclic graph.
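A minimal sketch of extracting the tasks to be scheduled and their dependencies from the directed acyclic graph carried by a graph-computation request; the edge-list representation of the DAG is an assumption for illustration.

```python
def tasks_and_dependencies(dag_edges):
    # dag_edges: iterable of (upstream_task, downstream_task) pairs taken from the DAG.
    tasks, deps = set(), {}
    for up, down in dag_edges:
        tasks.update((up, down))
        deps.setdefault(down, set()).add(up)   # a task depends on all of its upstream tasks
    return tasks, deps

tasks, deps = tasks_and_dependencies([("load", "preprocess"), ("preprocess", "train")])
```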
Optionally, the candidate allocation module 204 determines each candidate allocation policy according to each task and each computing node, which specifically includes:
and generating a plurality of candidate allocation strategies that respectively allocate the tasks to different computing nodes according to the dependency relationships among the tasks.
Optionally, the parameter module 200 includes at least: one of a communication resource of a computing node, an energy consumption of the computing node, a storage resource of the computing node, and a computing resource of the computing node, wherein the communication resource comprises a frequency, a bandwidth, a real-time state, and a real-time hardware parameter; the energy consumption includes power and heat loss; the storage resources comprise memory resources and external memory resources; the computing resources include CPU resources and network resources.
Optionally, the first constraint module 206 determines the second sub-term of the first objective function according to the prediction state and a preset allocation constraint condition, and specifically includes:
Aiming at each state parameter, determining an optimization objective function of the state parameter according to a predicted state corresponding to the state parameter and a preset constraint threshold corresponding to the state parameter;
and determining a second sub-item of the first objective function according to the optimized objective function of each state parameter.
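A small sketch of how the first objective function could be assembled from the two sub-items described above; the prediction callback, the per-state-parameter constraint thresholds and the hinge-style penalty are illustrative assumptions standing in for the patent's formulas.

```python
def first_objective(policy, nodes, first_value, predict, thresholds):
    # policy: mapping task -> node; first_value: preset first numerical value.
    # First sub-item: number of nodes whose assigned task count exceeds the preset first value.
    counts = {n: sum(1 for assigned in policy.values() if assigned == n) for n in nodes}
    first_sub = sum(1 for c in counts.values() if c > first_value)

    # Second sub-item: penalty over predicted states that violate their constraint thresholds
    # (hinge penalty and the predict() callback are assumptions for illustration).
    second_sub = 0.0
    for node in nodes:
        for name, predicted in predict(policy, node).items():
            second_sub += max(0.0, predicted - thresholds[name])

    return first_sub + second_sub
```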
Optionally, before the first constraint module 206 determines the second objective function, the method further includes:
judging whether a candidate allocation strategy meeting a preset allocation constraint condition exists in each candidate allocation strategy;
if yes, determining the total consumption of any candidate allocation strategy meeting the preset allocation constraint condition, and taking the total consumption as a bias parameter of a second objective function;
if not, determining the preset second value as the bias parameter of the second objective function.
Optionally, when the second constraint module 208 determines the second objective function, the method further includes:
for each candidate allocation strategy, when the candidate allocation strategy meets a preset allocation constraint condition, adding the bias parameter on the basis of setting a second objective function at least when the total consumption of each task is completed, and determining the second objective function;
and when the candidate allocation strategy does not meet the preset allocation constraint condition so as to complete the total consumption of each task, determining the second objective function at minimum.
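The two branches of the second objective function can be sketched as follows, with the total-consumption and constraint-check helpers assumed; the bias parameter is added only for candidates that satisfy the allocation constraints, pushing feasible candidates away from the desired target as described above.

```python
def second_objective(policy, satisfies_constraints, total_consumption, bias):
    # Candidates that satisfy the allocation constraints: total consumption plus the bias
    # parameter, which increases the distance from the desired target.
    if satisfies_constraints(policy):
        return total_consumption(policy) + bias
    # Candidates that do not satisfy the constraints: the plain total consumption of all tasks.
    return total_consumption(policy)
```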
Optionally, the optimizing module 209 determines the fitness of each candidate allocation policy, so as to adjust each candidate allocation policy, until the evolution end condition is met, and determines the target allocation policy according to the fitness of each candidate allocation policy, which specifically includes:
and determining the fitness of each candidate allocation strategy and storing the fitness.
And screening a designated number of candidate allocation strategies from the candidate allocation strategies according to the adaptability of the candidate allocation strategies, and taking the designated number of candidate allocation strategies as parent populations.
And generating a reorganization allocation strategy corresponding to each candidate allocation strategy in the parent population according to the parent population and the mutation parameters.
And determining each candidate allocation strategy of the next population according to each candidate allocation strategy and each recombination allocation strategy in the parent population, and redetermining the fitness of each candidate allocation strategy of the next population until the evolution ending condition is met, and determining a target allocation strategy according to the stored fitness of each candidate allocation strategy.
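Putting the optimization module's steps together, a highly simplified evolution loop might look like the sketch below; the mutation operator, the fitness function, the parent share and the round-budget end condition are all stand-ins for the corresponding steps of the specification rather than its exact procedure.

```python
def evolve(candidates, fitness_fn, mutate, parent_share=1/3, max_rounds=100):
    # Highly simplified evolution loop: select parents, mutate, rebuild the population,
    # and remember the best candidate seen as the target allocation strategy.
    best, best_fit = None, float("-inf")
    for _ in range(max_rounds):                   # end condition assumed to be a round budget
        scored = sorted(candidates, key=fitness_fn, reverse=True)
        if fitness_fn(scored[0]) > best_fit:
            best, best_fit = scored[0], fitness_fn(scored[0])
        k = max(1, int(len(scored) * parent_share))
        parents = scored[:k]                      # parent population
        offspring = [mutate(p) for p in parents]  # recombination/mutation allocation strategies
        candidates = parents + offspring          # next-generation population
    return best                                   # target allocation strategy
```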
Optionally, the optimizing module 209 generates a reorganization allocation policy corresponding to each candidate allocation policy in the parent population according to the parent population and the mutation parameter, which specifically includes:
And determining a preset transpose matrix, an initialized normal distribution vector and variation intensity of the next generation population, and determining variation parameters.
And aiming at each candidate allocation strategy in the parent population, carrying out mutation on the average allocation strategy of the parent population according to the mutation parameters, and determining a mutation result.
And judging whether the variation result exceeds a preset parameter range.
If yes, the variation result is adjusted according to the parameter range, so that the variation result does not exceed the parameter range, and the variation result is used as a reorganization and distribution strategy.
If not, the mutation result is used as a reorganization and distribution strategy.
Wherein the average allocation policy is determined according to each candidate allocation policy of the parent population.
Optionally, before the optimizing module 209 generates the reorganization allocation policy corresponding to each candidate allocation policy in the parent population according to the parent population and the mutation parameter, the method further includes:
and determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number.
And judging whether the difference value reaches a preset wheel number difference or not.
If yes, determining the first objective function gradient and the second objective function gradient, and adjusting the variation parameters according to the determined gradients.
Optionally, the optimizing module 209 adjusts the mutation parameter according to the determined gradient, which specifically includes:
and when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, the variation parameters are adjusted according to preset parameters.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than or equal to the two preset values, the variation parameters are adjusted according to preset parameters.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, the variation parameter is adjusted, and the variation parameter after adjustment is consistent with the variation parameter before adjustment.
And when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, adjusting the variation parameter, wherein the variation parameter after adjustment is consistent with the variation parameter before adjustment.
Optionally, in the optimizing module 209, the variation parameters include at least: a normal distribution vector and a variation intensity;
according to preset parameters, the variation parameters are adjusted, which specifically comprises:
and determining a pseudo-inverse matrix according to the identity matrix.
And determining a variation vector according to the ratio of the variation intensity corresponding to the previous generation of allocation strategy to the variation of the new allocation strategy.
And determining a normal distribution vector according to the product of the pseudo-inverse matrix and the variation vector.
And adjusting the variation intensity according to the preset maximum variation intensity.
Optionally, the optimizing module 209 determines the fitness of each candidate allocation policy according to the first objective function and the second objective function of each candidate allocation policy, which specifically includes:
and determining the recombination weight corresponding to each candidate allocation strategy in the parent population according to the position of each allocation strategy in the population.
And taking the initial weight as the weight of the second objective function, and determining the weight of the first objective function according to the weight of the second objective function, wherein the weight of the second objective function and the weight of the first objective function are normalized.
And determining the fitness of each candidate allocation strategy according to the first objective function and the second objective function of each candidate allocation strategy and the corresponding weight.
Optionally, for the optimizing module 209, the method further includes:
and determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number.
And judging whether the difference value reaches a preset wheel number difference or not.
If yes, the weight of the second objective function is adjusted.
And when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of a second preset value, and the first objective function of the objective allocation strategy is larger than the second preset value, determining the weight of the second objective function as any weight value.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is larger than the second preset value, determining the weight of the second objective function as any weight value.
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is equal to the second preset value, the weight of the second objective function is adjusted, and the weight of the second objective function after adjustment is consistent with the weight of the second objective function before adjustment.
And when the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, determining the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted weight of the second objective function and the self-adaptive strength.
And when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, determining the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted weight of the second objective function and the self-adaptive strength.
The present specification also provides a schematic structural diagram of the electronic device shown in fig. 6. At the hardware level, as shown in fig. 6, the electronic device includes a processor, an internal bus, a network interface, a memory, and a nonvolatile storage, and may of course include hardware required by other services. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs to implement the distributed task scheduling method described above with reference to fig. 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented using "logic compiler" software, which is similar to the software compiler used in program development; the source code to be compiled must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by slightly programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as relevant to see a section of the description of method embodiments.
The foregoing is merely exemplary of the present disclosure and is not intended to limit the disclosure. Various modifications and alterations to this specification will become apparent to those skilled in the art. Any modifications, equivalent substitutions, improvements, or the like, which are within the spirit and principles of the present description, are intended to be included within the scope of the claims of the present application.

Claims (16)

1. A method of distributed task scheduling, comprising:
determining state parameters of each task to be scheduled and each computing node of the distributed computing platform;
determining each candidate allocation strategy according to each task and each computing node, wherein each candidate allocation strategy comprises allocation relation between the task and the computing node;
determining the number of tasks distributed by each computing node in the candidate distribution strategy, determining the number of computing nodes with the number of the distributed tasks being larger than a preset first numerical value as a first constraint value, and determining a first sub-item of a first objective function according to the first constraint value; predicting a prediction state of each computing node when executing the task allocated by the candidate allocation strategy according to the candidate allocation strategy and the state parameter, and determining a second sub-item of the first objective function according to the prediction state and a preset allocation constraint condition; adding the first sub-item and the second sub-item of the first objective function to obtain a first objective function;
when the candidate allocation strategy does not meet the preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy and a preset optimization target; when the candidate allocation strategy meets a preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy, a bias parameter of the second objective function and the preset optimization target, wherein the bias parameter is used for increasing the distance between the second objective function and a desired target;
According to a preset evolution algorithm, determining the adaptability of each candidate allocation strategy according to a first objective function and a second objective function of each candidate allocation strategy so as to adjust each candidate allocation strategy, and determining a target allocation strategy according to the adaptability of each adjusted candidate allocation strategy when the evolution ending condition is met;
and dispatching each task to each computing node according to the target allocation strategy for computing.
2. The method according to claim 1, wherein determining each task to be scheduled comprises:
receiving a graph calculation request;
determining a directed acyclic graph corresponding to graph calculation to be executed according to the graph calculation request;
and determining each task to be scheduled and the dependency relationship among the tasks according to the directed acyclic graph.
3. The method of claim 1, wherein the status parameters include at least: one of a communication resource of a computing node, an energy consumption of the computing node, a storage resource of the computing node, and a computing resource of the computing node, wherein the communication resource comprises a frequency, a bandwidth, a real-time state, and a real-time hardware parameter; the energy consumption includes power and heat loss; the storage resources comprise memory resources and external memory resources; the computing resources include CPU resources and network resources.
4. The method according to claim 1, wherein determining the second sub-term of the first objective function based on the prediction state and a preset allocation constraint, comprises:
aiming at each state parameter, determining an optimization objective function of the state parameter according to a predicted state corresponding to the state parameter and a preset constraint threshold corresponding to the state parameter;
and determining a second sub-item of the first objective function according to the optimized objective function of each state parameter.
5. The method of claim 1, wherein prior to determining the second objective function, the method further comprises:
judging whether a candidate allocation strategy meeting a preset allocation constraint condition exists in each candidate allocation strategy;
if yes, determining the total consumption of any candidate allocation strategy meeting the preset allocation constraint condition, and taking the total consumption as a bias parameter of a second objective function;
if not, determining the preset second value as the bias parameter of the second objective function.
6. The method of claim 5, wherein determining a second objective function, the method further comprising:
for each candidate allocation strategy, when the candidate allocation strategy meets a preset allocation constraint condition, adding the bias parameter on the basis of setting a second objective function at least when the total consumption of each task is completed, and determining the second objective function;
And when the candidate allocation strategy does not meet the preset allocation constraint condition so as to complete the total consumption of each task, determining the second objective function at minimum.
7. The method of claim 1, wherein determining the fitness of each candidate allocation policy to adjust each candidate allocation policy until an end of evolution condition is met, determining a target allocation policy based on the fitness of each candidate allocation policy, specifically comprises:
determining the fitness of each candidate allocation strategy and storing the fitness;
according to the adaptability of each candidate allocation strategy, a designated number of candidate allocation strategies are screened out from each candidate allocation strategy to be used as parent population;
generating a reorganization allocation strategy corresponding to each candidate allocation strategy in the parent population according to the parent population and the mutation parameters;
and determining each candidate allocation strategy of the next population according to each candidate allocation strategy and each recombination allocation strategy in the parent population, and redetermining the fitness of each candidate allocation strategy of the next population until the evolution ending condition is met, and determining a target allocation strategy according to the stored fitness of each candidate allocation strategy.
8. The method of claim 7, wherein generating a reorganization allocation policy corresponding to each candidate allocation policy in the parent population according to the parent population and the mutation parameter, specifically comprises:
Determining a preset transpose matrix, an initialized normal distribution vector and variation intensity of the next generation population, and determining variation parameters;
aiming at each candidate allocation strategy in the parent population, carrying out mutation on the average allocation strategy of the parent population according to the mutation parameters, and determining a mutation result;
judging whether the variation result exceeds a preset parameter range;
if yes, the variation result is adjusted according to the parameter range, so that the variation result does not exceed the parameter range, and the variation result is used as a reorganization and distribution strategy;
if not, taking the mutation result as a reorganization and distribution strategy;
wherein the average allocation policy is determined according to each candidate allocation policy of the parent population.
9. The method of claim 7, wherein prior to generating a reorganization allocation strategy corresponding to each candidate allocation strategy in the parent population according to the parent population and the mutation parameters, the method further comprises:
determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number;
judging whether the difference value reaches a preset wheel number difference or not;
if yes, determining the first objective function gradient and the second objective function gradient, and adjusting the variation parameters according to the determined gradients.
10. The method of claim 9, wherein adjusting the variation parameter based on the determined gradient comprises:
when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, the variation parameters are adjusted according to preset parameters;
when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than or equal to the two preset values, the variation parameters are adjusted according to preset parameters;
when the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, the variation parameter is adjusted, and the variation parameter after adjustment is consistent with the variation parameter before adjustment;
and when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, adjusting the variation parameter, wherein the variation parameter after adjustment is consistent with the variation parameter before adjustment.
11. The method of claim 9, wherein the variation parameters comprise at least: normal distribution vector and variation intensity;
according to preset parameters, the variation parameters are adjusted, which specifically comprises:
determining a pseudo-inverse matrix according to the identity matrix;
determining a variation vector according to the ratio of the variation intensity corresponding to the previous generation of allocation strategy to the variation of the new allocation strategy;
determining a normal distribution vector according to the product of the pseudo-inverse matrix and the variation vector;
and adjusting the variation intensity according to the preset maximum variation intensity.
12. The method of claim 9, wherein determining the fitness of each candidate allocation policy according to the first objective function and the second objective function of each candidate allocation policy comprises:
determining the recombination weight corresponding to each candidate allocation strategy in the parent population according to the position of each allocation strategy in the population;
taking the initial weight as the weight of the second objective function, and determining the weight of the first objective function according to the weight of the second objective function, wherein the weight of the second objective function and the weight of the first objective function are normalized;
And determining the fitness of each candidate allocation strategy according to the first objective function and the second objective function of each candidate allocation strategy and the corresponding weight.
13. The method of claim 12, wherein the method further comprises:
determining the difference value between the iteration round number of the last adjustment variation parameter and the current iteration round number;
judging whether the difference value reaches a preset wheel number difference or not;
if yes, adjusting the weight of the second objective function;
when the second objective function gradient is larger than a first preset value, the first objective function gradient is smaller than or equal to the absolute value of a second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, randomly determining the weight of the second objective function in a preset weight value range;
when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the target allocation strategy is larger than the second preset value, randomly determining the weight of the second objective function in a preset weight range;
when the second objective function gradient is smaller than or equal to the absolute value of the first preset value, the first objective function gradient is smaller than or equal to the absolute value of the second preset value, and the first objective function of the objective allocation strategy is equal to the second preset value, the weight of the second objective function is adjusted, and the weight of the second objective function after adjustment is consistent with the weight of the second objective function before adjustment;
When the second objective function gradient is smaller than or equal to the absolute value of the first preset value and the first objective function gradient is smaller than the negative second preset value, determining the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted second objective function weight and the self-adaptive strength;
and when the second objective function gradient is smaller than the negative first preset value and the first objective function gradient is smaller than the absolute value of the second preset value, determining the weight of the second objective function according to the maximum weight value, the initial weight, the last adjusted second objective function weight and the self-adaptive strength.
14. An apparatus for distributed task scheduling, comprising:
the parameter module is used for determining state parameters of each task to be scheduled and each computing node of the distributed computing platform;
the candidate allocation module is used for determining each candidate allocation strategy according to each task and each computing node, wherein the candidate allocation strategy comprises allocation relation between the task and the computing node;
the first constraint module is used for determining the number of tasks allocated to each computing node in the candidate allocation policy, determining the number of computing nodes with the number of the distributed tasks being larger than a preset first numerical value as a first constraint value, and determining a first sub-item of a first objective function according to the first constraint value; predicting a prediction state of each computing node when executing the task allocated by the candidate allocation strategy according to the candidate allocation strategy and the state parameter, and determining a second sub-item of the first objective function according to the prediction state and a preset allocation constraint condition; adding the first sub-item and the second sub-item of the first objective function to obtain a first objective function;
The second constraint module is used for determining a second objective function according to the candidate allocation strategy and a preset optimization target when the candidate allocation strategy does not meet the preset allocation constraint condition; when the candidate allocation strategy meets a preset allocation constraint condition, determining a second objective function according to the candidate allocation strategy, a bias parameter of the second objective function and the preset optimization target, wherein the bias parameter is used for increasing the distance between the second objective function and a desired target;
the optimization module is used for determining the fitness of each candidate allocation strategy according to a first objective function and a second objective function of each candidate allocation strategy according to a preset evolution algorithm so as to adjust each candidate allocation strategy until the evolution ending condition is met, and determining the target allocation strategy according to the fitness of each adjusted candidate allocation strategy;
and the scheduling module schedules each task to each computing node for computing according to the target allocation strategy.
15. A computer storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method according to any of the preceding claims 1-13.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-13 when executing the program.
CN202311010107.XA 2023-08-11 2023-08-11 Distributed task scheduling method and device, storage medium and electronic equipment Active CN116719631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311010107.XA CN116719631B (en) 2023-08-11 2023-08-11 Distributed task scheduling method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311010107.XA CN116719631B (en) 2023-08-11 2023-08-11 Distributed task scheduling method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN116719631A CN116719631A (en) 2023-09-08
CN116719631B true CN116719631B (en) 2024-01-09

Family

ID=87866570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311010107.XA Active CN116719631B (en) 2023-08-11 2023-08-11 Distributed task scheduling method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116719631B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108880663A (en) * 2018-07-20 2018-11-23 大连大学 Incorporate network resource allocation method based on improved adaptive GA-IAGA
CN110851247A (en) * 2019-10-12 2020-02-28 华东师范大学 Cost optimization scheduling method for constrained cloud workflow
CN110929960A (en) * 2019-12-12 2020-03-27 支付宝(杭州)信息技术有限公司 Policy selection optimization method and device
CN111182582A (en) * 2019-12-30 2020-05-19 东南大学 Multitask distributed unloading method facing mobile edge calculation
CN111258743A (en) * 2020-02-17 2020-06-09 武汉轻工大学 Cloud task scheduling method, device, equipment and storage medium based on discrete coding
CN113835894A (en) * 2021-09-28 2021-12-24 南京邮电大学 Intelligent calculation migration method based on double-delay depth certainty strategy gradient
CN114461386A (en) * 2021-12-30 2022-05-10 科大讯飞股份有限公司 Task allocation method and task allocation device
CN115168017A (en) * 2022-09-08 2022-10-11 天云融创数据科技(北京)有限公司 Task scheduling cloud platform and task scheduling method thereof
CN116451585A (en) * 2023-04-25 2023-07-18 南京航空航天大学 Adaptive real-time learning task scheduling method based on target detection model
CN116521380A (en) * 2023-07-05 2023-08-01 之江实验室 Resource self-adaptive collaborative model training acceleration method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311358B2 (en) * 2015-07-10 2019-06-04 The Aerospace Corporation Systems and methods for multi-objective evolutionary algorithms with category discovery

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dual-QoS-constrained multi-objective workflow scheduling in a cloud environment; Xue Qingshui; Computer Engineering and Design; Vol. 40 (No. 08); full text *
Multi-objective distribution route optimization of fresh agricultural products based on an improved genetic algorithm; Cao Qian; Shao Juping; Sun Yan'an; Industrial Engineering (No. 01); full text *
Multi-objective global optimization model and algorithm for integrated service chains; Wu Yingbo; Wang Xu; Liu Xin; Journal of Chongqing University (No. 08); full text *

Also Published As

Publication number Publication date
CN116719631A (en) 2023-09-08

Similar Documents

Publication Publication Date Title
CN111953758B (en) Edge network computing unloading and task migration method and device
CN107770088B (en) Flow control method and device
JP6716149B2 (en) Blockchain-based data processing method and apparatus
CN108681484B (en) Task allocation method, device and equipment
CN111930486B (en) Task selection data processing method, device, equipment and storage medium
CN109391680B (en) Timed task data processing method, device and system
Ghafouri et al. A budget constrained scheduling algorithm for executing workflow application in infrastructure as a service clouds
CN109002357B (en) Resource allocation method and device and Internet of things system
CN108920183B (en) Service decision method, device and equipment
CN116225669B (en) Task execution method and device, storage medium and electronic equipment
Alboaneen et al. Glowworm swarm optimisation based task scheduling for cloud computing
CN116719631B (en) Distributed task scheduling method and device, storage medium and electronic equipment
CN112596898A (en) Task executor scheduling method and device
CN116932175B (en) Heterogeneous chip task scheduling method and device based on sequence generation
CN115964181B (en) Data processing method and device, storage medium and electronic equipment
CN113032119A (en) Task scheduling method and device, storage medium and electronic equipment
CN116304212A (en) Data processing system, method, equipment and storage medium
CN116781532A (en) Optimization mapping method of service function chains in converged network architecture and related equipment
CN113826078A (en) Resource scheduling and information prediction method, device, system and storage medium
CN113177632B (en) Model training method, device and equipment based on pipeline parallelism
Jamali et al. A new method of cloud-based computation model for mobile devices: energy consumption optimization in mobile-to-mobile computation offloading
KR101558807B1 (en) Processor scheduling method for the cooperation processing between host processor and cooperation processor and host processor for performing the method
Ullah et al. Task priority-based cached-data prefetching and eviction mechanisms for performance optimization of edge computing clusters
Meriam et al. Multiple QoS priority based scheduling in cloud computing
CN116384472A (en) Data processing system, method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant