CN111782626A - Task allocation method and device, distributed system, electronic device and medium - Google Patents

Task allocation method and device, distributed system, electronic device and medium

Info

Publication number
CN111782626A
CN111782626A (application number CN202010822709.5A)
Authority
CN
China
Prior art keywords
node
nodes
weight
change
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010822709.5A
Other languages
Chinese (zh)
Inventor
杨毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
ICBC Technology Co Ltd
Original Assignee
ICBC Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ICBC Technology Co Ltd
Priority to CN202010822709.5A
Publication of CN111782626A
Legal status: Pending (Current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/18: File system types
    • G06F16/182: Distributed file systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083: Techniques for rebalancing the load in a distributed system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063: Operations research, analysis or management
    • G06Q10/0631: Resource planning, allocation, distributing or scheduling for enterprises or organisations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Software Systems (AREA)
  • Strategic Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure provides a task allocation method for a distributed system. The distributed system includes M nodes, where M is an integer greater than or equal to 1, and the method includes the following steps: allocating tasks to the M nodes based on an initial weight of each of the M nodes; determining a change weight of each node while the M nodes execute the tasks, where the change weight of each node is determined based on its respective running information; and allocating tasks to the M nodes based on the change weight of each node. The present disclosure also provides a distributed system, a task allocation apparatus, an electronic device, and a computer-readable storage medium.

Description

Task allocation method and device, distributed system, electronic device and medium
Technical Field
The present disclosure relates to the field of distributed technologies, and more particularly, to a task allocation method, a distributed system, a task allocation apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of computer and network technologies, the computing tasks of various services keep growing and computing demands keep increasing. To improve computing speed, a distributed system can be used to execute the computing tasks. A distributed system can include multiple servers, each of which can serve as a node; when multiple tasks need to be executed, the nodes can execute the tasks in parallel, which greatly improves the task execution speed.
In implementing the disclosed concept, the inventors found that there are at least the following problems in the related art:
at present, the weight (authority) of a node is generally assigned manually in advance. For example, a fixed weight is set for a node according to the hardware configuration of its server, and a server with a higher configuration is given a higher weight so that it is assigned more tasks during task allocation. However, with fixed weights, time-consuming tasks or large numbers of tasks tend to be assigned to the node with the higher fixed weight, which may cause that node to run overloaded.
Disclosure of Invention
In view of the above, the present disclosure provides a task allocation method, a distributed system, a task allocation apparatus, an electronic device, and a computer-readable storage medium.
One aspect of the present disclosure provides a task allocation method for a distributed system, where the distributed system includes M nodes, where M is an integer greater than or equal to 1, and the method includes: distributing tasks to the M nodes based on the initial weight of each node in the M nodes; determining a change weight of each node in the process of executing tasks by the M nodes, wherein the change weight of each node is determined based on respective running information; and distributing tasks to the M nodes based on the change weight of each node.
According to an embodiment of the present disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node includes: and in the process of executing tasks by the M nodes, re-determining the change weight of each node every preset time.
According to an embodiment of the present disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node includes: performing the following for each node: acquiring actual values of N operation reference indexes in the operation information and N coefficients corresponding to the N operation reference indexes respectively, wherein N is an integer greater than or equal to 1; and calculating the change weight based on the actual values of the N running reference indexes and the N coefficients.
According to an embodiment of the present disclosure, the N operation reference indicators include: at least one of a network transmission performance index, a central processing unit computing capacity index, a disk read-write performance index, a graphics processor computing capacity index and a downtime frequency index.
According to an embodiment of the present disclosure, the method further comprises: obtaining a standard weight of each node in the M nodes; determining a weight stability interval of each node based on the standard weight of each node; determining a node with a change weight exceeding a corresponding weight stable interval from the M nodes as a target node; and adjusting the operation condition of the target node so that the change weight of the target node is positioned in the corresponding weight stable interval range.
According to an embodiment of the present disclosure, the obtaining the standard weight of each of the M nodes includes: obtaining a standard weight of each node based on user input; or calculating to obtain the standard weight of each node based on the standard values of the N operating reference indexes of each node and the N coefficients respectively corresponding to the N operating reference indexes, wherein N is an integer greater than or equal to 1.
Another aspect of the present disclosure provides a distributed system, comprising: the node comprises a first node and P second nodes, wherein the first node and each second node have respective initial weights, and P is an integer greater than or equal to 1; the first node is used for distributing tasks to the first node and the P second nodes based on the initial weights of the first node and the P second nodes; the first node and each second node are used for determining respective change weight based on respective operation information in the process of executing the task; the first node is further configured to assign tasks to the first node and the P second nodes based on the change weights of the first node and the P second nodes.
Another aspect of the present disclosure provides a task allocation apparatus for a distributed system, where the distributed system includes M nodes, M being an integer greater than or equal to 1, and the apparatus includes: a first distribution module, configured to distribute tasks to the M nodes based on respective initial weights of the M nodes; a weight determining module, configured to determine a change weight of each node in the M nodes based on operation information of each node in the M nodes in a process of executing a task by the M nodes; and a second allocating module, configured to allocate tasks to the M nodes based on the change weight of each node.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
According to the embodiment of the present disclosure, the weight of each node is re-determined according to its running condition during task execution, and subsequent task allocation is performed according to the changed weight. In this way, the weight (authority) of a node can change in real time with its running condition, without depending on manual settings or on hardware configuration alone; scheduling is made as fair as possible, the fairness of node competition in the distributed system is improved, and the stability and resource utilization of the distributed system are improved.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates an exemplary system architecture to which a task allocation method may be applied, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a task assignment method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow diagram of a task assignment method according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a distributed system according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a block diagram of a task allocation apparatus according to an embodiment of the present disclosure; and
FIG. 6 schematically shows a block diagram of an electronic device adapted to implement a task assignment method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibility of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", "B", or "A and B".
The embodiment of the disclosure provides a task allocation method, which is used for a distributed system, wherein the distributed system comprises M nodes, and M is an integer greater than or equal to 1. The task allocation comprises the following steps: tasks are assigned to the M nodes based on the initial weight of each of the M nodes. And determining the change weight of each node in the process of executing the tasks by the M nodes, wherein the change weight of each node is determined based on the respective running information. Tasks are assigned to the M nodes based on the change weight of each node.
Fig. 1 schematically illustrates an exemplary system architecture 100 to which a task allocation method may be applied, according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include a first node 101 and second nodes 102, 103, and 104, etc., where each node may be implemented as a server, and the first node 101 and the second nodes 102, 103, and 104 may form a distributed cluster, which may also be referred to as a distributed system hereinafter.
According to the embodiment of the present disclosure, each node in the distributed cluster can perform computing tasks or IO tasks, where an IO task includes a network transmission task or a disk read-write task. Some nodes in the distributed network may also have specific roles; for example, some nodes may function as gateways and provide functions such as load balancing, rate limiting, forwarding, and circuit breaking for downstream services.
The distributed cluster in the embodiment of the present disclosure may be maintained based on a consistency algorithm such as the Paxos or Raft algorithm, or may adopt a master-slave distributed network that synchronizes the state of each node in real time. The main purpose is to make the distributed network self-consistent: the nodes hold a unified consensus on each node's state without depending on a third-party monitoring service. In the distributed cluster, one node may be designated as a leader node and the other nodes as follower nodes; the first node 101 may be, for example, the leader node, and the second nodes 102, 103, and 104 may be, for example, follower nodes. The first node 101 and the second nodes 102, 103, and 104 may all receive tasks. After the second nodes 102, 103, and 104 receive tasks, they may send the tasks to the first node 101; the first node 101 may place the tasks in the cluster task queue, and then take tasks from the task queue and distribute them to the nodes according to the weight of each node.
According to the embodiment of the present disclosure, after receiving a task, the first node or a second node may attach its header information to the task and then synchronize the task to the cluster task queue. After the task is executed, the node that executed it can return the execution result to the receiving node according to the header information, and the receiving node then feeds the result back to the task sender.
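The following Python sketch is illustrative only and is not part of the disclosure; the Task structure, the origin_node field, and the result mailboxes are assumed names. It shows how header information attached when a task is received allows the execution result to be routed back to the receiving node.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Task:
    payload: str
    origin_node: str  # header information: the node that received the task

cluster_queue: Queue = Queue()  # task queue synchronized across the cluster
result_boxes: dict = {}         # per-node mailboxes for returned results

def receive_task(node_id: str, payload: str) -> None:
    """A node receives a task, attaches its header info, and syncs it to the cluster queue."""
    cluster_queue.put(Task(payload=payload, origin_node=node_id))

def execute_and_return(executor_id: str) -> None:
    """An executing node runs a task and returns the result to the node that received it."""
    task = cluster_queue.get()
    result = f"{task.payload} executed by {executor_id}"
    result_boxes.setdefault(task.origin_node, Queue()).put(result)

receive_task("node102", "K1")         # a second node receives a task
execute_and_return("node104")         # another node executes it and routes the result back
print(result_boxes["node102"].get())  # the receiving node forwards this to the task sender
```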
The task allocation method provided by the embodiment of the present disclosure may be executed by the first node 101, for example. Accordingly, the task allocation apparatus provided by the embodiments of the present disclosure may be generally disposed in the first node 101. The task allocation method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster different from the first node 101 and capable of communicating with the first node 101 and the second nodes 102, 103, and 104. Accordingly, the task allocation device provided by the embodiment of the present disclosure may also be disposed in a server or a server cluster different from the first node 101 and capable of communicating with the first node 101 and the second nodes 102, 103, and 104.
It should be understood that the number of first nodes and second nodes in fig. 1 is merely illustrative. There may be any number of first nodes and second nodes, as desired for implementation.
Fig. 2 schematically shows a flow chart of a task allocation method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, tasks are allocated to the M nodes based on the initial weight Wi of each of the M nodes.
For example, the M nodes may include the first node 101 and the second nodes 102, 103, and 104 described above. The initial weight Wi of a node may refer to the weight of the node at a certain time A. If time A is before the cluster receives tasks, the initial weight of the node may be, for example, the standard weight of the node, which is determined based on the node's hardware configuration. If the cluster receives tasks at a time B after time A, the tasks may be distributed according to the initial weight of each node.
For example, the task queue includes a plurality of subtasks K1 to K20, the initial weights of the first node 101 and the second nodes 102, 103, and 104 are, for example, 2, 3, 2, and 3, respectively, when allocating tasks, the subtasks may be sequentially taken out from the task queue and allocated to each node according to the weight ratio of the node, for example, the subtasks K1 and K2 may be allocated to the first node 101 for processing, the subtasks K3 to K5 may be allocated to the second node 102 for processing, the subtasks K6 and K7 may be allocated to the second node 103 for processing, and the subtasks K8 to K10 may be allocated to the second node 104 for processing.
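For illustration, a minimal Python sketch of this weight-proportional allocation is given below, assuming the example weights and subtask names above; it is a sketch of one possible round-based policy, not a definitive implementation of the disclosed method.

```python
from itertools import cycle

def allocate_by_weight(subtasks, weights):
    """Deal subtasks out of the queue in proportion to each node's weight."""
    assignment = {node: [] for node in weights}
    it = iter(subtasks)
    for node in cycle(weights):
        # take up to `weight` subtasks for this node in the current round
        batch = [task for _, task in zip(range(weights[node]), it)]
        if not batch:
            break
        assignment[node].extend(batch)
    return assignment

subtasks = [f"K{i}" for i in range(1, 11)]
weights = {"node101": 2, "node102": 3, "node103": 2, "node104": 3}
print(allocate_by_weight(subtasks, weights))
# {'node101': ['K1', 'K2'], 'node102': ['K3', 'K4', 'K5'],
#  'node103': ['K6', 'K7'], 'node104': ['K8', 'K9', 'K10']}
```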
In operation S220, during the task execution of the M nodes, a change weight Wc of each node is determined, wherein the change weight Wc of each node is determined based on the respective running information.
For example, during task execution, the operating parameters of a node may change; for instance, its network transmission performance or processor computing performance may change. At a certain time C after time B, in operation S220, the weight may be re-determined according to the node's operation information after the change, to obtain the change weight.
In the embodiment of the present disclosure, the change weight of each node may be determined by the corresponding node according to its real-time operation information; for example, the first node 101 and the second nodes 102, 103, and 104 may each calculate their own change weight according to their respective operation information, and the second nodes 102, 103, and 104 may then transmit their change weights to the first node 101. In another embodiment of the present disclosure, the first node 101 may determine the change weight of each node; for example, the second nodes 102, 103, and 104 may send their respective operation information to the first node 101, and the first node 101 calculates the change weight of the first node 101 and of each second node.
In operation S230, tasks are allocated to the M nodes based on the change weight of each node.
For example, when the subsequent subtasks need to be allocated, the tasks may be allocated according to the changed weight of each node. For example, if it is determined at time D that the change weights of the first node 101 and the second nodes 102, 103, and 104 are 1, 2, 1, and 2, respectively, then when performing the subsequent sub-task assignment, the sub-task K11 may be assigned to the first node 101 for processing, the sub-tasks K12 and K13 may be assigned to the second node 102 for processing, the sub-task K14 may be assigned to the second node 103 for processing, and the sub-tasks K15 and K16 may be assigned to the second node 104 for processing.
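Continuing the illustration, the allocate_by_weight sketch given after operation S210 can simply be called again with the change weights determined at time D:

```python
remaining = [f"K{i}" for i in range(11, 17)]
change_weights = {"node101": 1, "node102": 2, "node103": 1, "node104": 2}
print(allocate_by_weight(remaining, change_weights))
# {'node101': ['K11'], 'node102': ['K12', 'K13'],
#  'node103': ['K14'], 'node104': ['K15', 'K16']}
```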
According to the embodiment of the present disclosure, the weight of each node is re-determined according to its running condition during task execution, and subsequent task allocation is performed according to the changed weight. In this way, the weight (authority) of a node can change in real time with its running condition, without depending on manual settings or on hardware configuration alone; scheduling is made as fair as possible, the fairness of node competition in the distributed system is improved, and the stability and resource utilization of the distributed system are improved.
According to the embodiment of the disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node comprises: and in the process of executing tasks by the M nodes, the change weight of each node is determined again at intervals of preset time.
For example, since the operating condition of a node may change at any time, the change weight may be re-determined every predetermined period; the period may be set relatively short so that the weight is effectively updated in real time.
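As a sketch only, such periodic re-determination could be driven by a background loop like the one below; the interval value and the collect_running_info/compute_change_weight helpers are hypothetical names, not interfaces defined in the disclosure.

```python
import threading
import time

def refresh_weights(nodes, weights, interval_s=5.0):
    """Re-determine each node's change weight every `interval_s` seconds."""
    def loop():
        while True:
            for node in nodes:
                info = node.collect_running_info()                     # hypothetical helper
                weights[node.name] = node.compute_change_weight(info)  # hypothetical helper
            time.sleep(interval_s)
    threading.Thread(target=loop, daemon=True).start()
```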
According to the embodiment of the disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node comprises the following operations for each node: (1) acquiring actual values of N operation reference indexes in the operation information and N coefficients respectively corresponding to the N operation reference indexes, wherein N is an integer greater than or equal to 1; (2) and calculating to obtain the change weight based on the actual values and the N coefficients of the N operation reference indexes.
According to an embodiment of the present disclosure, the N operation reference indicators may include: at least one of a network transmission performance index, a central processing unit computing capacity index, a disk read-write performance index, a graphics processor computing capacity index and a downtime frequency index.
According to the embodiment of the present disclosure, the network transmission performance may refer to network IO performance, and the unit of the network IO performance index ns may be set to Mb/s. If the ns value of a node is 500, the node's network device can send and receive 500 Mb of network data per second.
According to the embodiment of the present disclosure, the central processing unit computing capability may refer to CPU computing power, and the unit of the CPU computing capability index cs may be set to instructions per second. If the cs value of a node is 10000, the node's CPU can process 10000 instructions per second.
According to the embodiment of the present disclosure, the disk read-write performance may refer to disk IO performance, and the unit of the disk IO performance index ds may be set to Gb/s. If the ds value of a node is 150, the node's disk can read and write 150 Gb of data per second.
According to the embodiment of the present disclosure, the graphics processor computing capability may refer to GPU computing power, and the unit of the GPU computing capability index gs may be set to TFLOPS, where one TFLOPS is one trillion floating-point operations per second. If the gs value of a node is 3, the node's GPU can perform 3 trillion single-precision floating-point operations per second.
According to the embodiment of the present disclosure, the downtime frequency index ms may be expressed as the node going down once every ms days; if the ms value of a node is 300, the node goes down on average once every 300 days.
According to an embodiment of the present disclosure, when there are multiple operation reference indexes, a coefficient may be set for each index according to its importance, and the sum of the coefficients may be 1. For example, taking operation reference indexes including a network IO performance index ns, a CPU computing capability index cs, a disk IO performance index ds, and a downtime frequency index ms as an example, the coefficient nc corresponding to the network IO performance index ns may be set to 0.35; the coefficient cc corresponding to the CPU computing capability index cs may be set to 0.2; the coefficient dc corresponding to the disk IO performance index ds may be set to 0.05; and the coefficient mc corresponding to the downtime frequency index ms may be set to 0.4, so that the sum of the coefficients nc + cc + dc + mc is 1.
According to the embodiment of the present disclosure, during the operation of each node, the actual values of the node's operation reference indexes may be obtained, and the change weight Wc may then be calculated using the following formula (1):
Wc = (ns × nc + cs × cc + ds × dc + ms × mc) / number of indexes    (1)
According to the embodiment of the present disclosure, the change weight is obtained by weighting the operation reference indexes, such as the CPU computing capability index, the disk IO performance index, and the downtime frequency index, with the coefficient corresponding to each index, so the change weight can be determined accurately. By setting the coefficients according to the importance of each performance index, the influence of an important index (such as the downtime frequency index ms) on the weight can be increased, and the amount of tasks allocated to a node whose value of an important index is low can be reduced.
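A minimal Python rendering of formula (1), using the example index values and coefficients above, might look as follows; the function name and the use of raw index values without normalization are assumptions made only for illustration.

```python
def change_weight(indexes: dict, coefficients: dict) -> float:
    """Wc = (ns*nc + cs*cc + ds*dc + ms*mc) / number of indexes, per formula (1)."""
    assert abs(sum(coefficients.values()) - 1.0) < 1e-9, "coefficients should sum to 1"
    weighted_sum = sum(indexes[name] * coefficients[name] for name in indexes)
    return weighted_sum / len(indexes)

# Actual values of one node's operation reference indexes (from the examples above)
indexes = {"ns": 500, "cs": 10000, "ds": 150, "ms": 300}
coefficients = {"ns": 0.35, "cs": 0.2, "ds": 0.05, "ms": 0.4}
print(change_weight(indexes, coefficients))  # (175 + 2000 + 7.5 + 120) / 4 = 575.625
```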
FIG. 3 schematically shows a flow chart of a task assignment method according to another embodiment of the present disclosure.
As shown in fig. 3, according to an embodiment of the present disclosure, the task allocation method may include operations S310 to S340 in addition to the above-described operations S210 to S230.
In operation S310, a standard weight Ws of each of the M nodes is obtained.
According to an embodiment of the present disclosure, obtaining the standard weight Ws of each of the M nodes includes: obtaining a standard weight of each node based on user input; or calculating to obtain the standard weight of each node based on the standard values of the N operating reference indexes of each node and N coefficients respectively corresponding to the N operating reference indexes, wherein N is an integer greater than or equal to 1.
For example, a user may set a standard weight for each node according to the hardware configuration information of each node, and a node with a higher hardware configuration may have a higher standard weight, for example. Or, the standard weight of each node may be obtained through calculation, the hardware configuration information of each node may include standard values (also referred to as theoretical values) of indexes such as a network transmission performance index, a central processing unit calculation capability index, a disk read-write performance index, a graphics processor calculation capability index, and a downtime frequency index, and then the standard values and coefficients of the corresponding indexes may be substituted into the formula (1) to obtain the standard weight through calculation.
In operation S320, a weight stabilization interval of each node is determined based on the standard weight Ws of each node.
For example, for each node, the interval [n × Ws, Ws] may be used as the weight stability interval of the node, where n may be, for example, a number between 0.6 and 1. If the weight of the node is within the weight stability interval, the node can be considered to be running stably.
In operation S330, a node, of which the change weight exceeds the corresponding weight stability interval, is determined as a target node from among the M nodes.
For example, if the change weight of the second node 102 shown in fig. 1 exceeds the weight stable interval of the second node 102, for example, the value of the network IO performance index of the second node 102 is low or the value of the downtime frequency index is low, the second node 102 may be a target object to be adjusted.
In operation S340, the operation condition of the target node is adjusted such that the changed weight of the target node is within the corresponding weight stability interval.
For example, a target node that needs to be adjusted and an operation condition thereof may be output to a user to notify the user to adjust the operation condition of the target node, and if the value of the network IO performance index of the second node 102 is low, the network condition of the second node 102 may be checked, a cause may be found, a fault may be eliminated, and finally the weight of the second node 102 may be stabilized within the weight stabilization interval range.
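The monitoring step of operations S320 and S330 can be sketched as below; the choice n = 0.8 and the function names are illustrative assumptions, not part of the disclosure.

```python
def stability_interval(standard_weight: float, n: float = 0.8) -> tuple:
    """Weight stability interval [n * Ws, Ws] for a node with standard weight Ws."""
    return (n * standard_weight, standard_weight)

def find_target_nodes(change_weights: dict, standard_weights: dict) -> list:
    """Return the nodes whose change weight falls outside their stability interval."""
    targets = []
    for node, wc in change_weights.items():
        low, high = stability_interval(standard_weights[node])
        if not (low <= wc <= high):
            targets.append(node)  # candidate for operation-condition adjustment
    return targets

# A node whose change weight has dropped well below its standard weight is flagged
print(find_target_nodes({"node102": 300.0}, {"node102": 575.625}))  # ['node102']
```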
According to the embodiment of the disclosure, stable operation of the cluster can be ensured by setting the weight stable interval, monitoring the weight change of each node, and adjusting the nodes with weights exceeding the weight stable interval.
Another aspect of the embodiments of the present disclosure also provides a distributed system.
Fig. 4 schematically illustrates a schematic diagram of a distributed system according to an embodiment of the disclosure.
As shown in fig. 4, the distributed system 400 includes a first node 410 and P second nodes 420, the first node and each second node having a respective initial weight, P being an integer greater than or equal to 1. The first node is used for distributing tasks to the first node and the P second nodes based on the initial weights of the first node and the P second nodes. The first node and each second node are used for determining respective change weight based on respective operation information in the process of executing the task; the first node is further configured to assign tasks to the first node and the P second nodes based on the change weights of the first node and the P second nodes.
According to the embodiment of the present disclosure, the distributed network may be maintained based on a consistency algorithm such as Paxos or Raft, or as a master-slave distributed network that synchronizes the state of each node in real time. The purpose is to reach a unified consensus on the state of each node without relying on a third-party monitoring service, so that the distributed network is self-consistent and the whole cluster does not become unavailable because a third-party monitoring service fails.
According to the embodiment of the present disclosure, the weight of each node is re-determined according to its running condition during task execution, and subsequent task allocation is performed according to the changed weight. In this way, the weight (authority) of a node can change in real time with its running condition, without depending on manual settings or on hardware configuration alone; scheduling is made as fair as possible, the fairness of node competition in the distributed system is improved, and the stability and resource utilization of the distributed system are improved.
According to the embodiment of the disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node comprises: and in the process of executing tasks by the M nodes, the change weight of each node is determined again at intervals of preset time.
According to an embodiment of the disclosure, determining the respective alteration weights based on the respective operational information comprises: acquiring actual values of N operation reference indexes in the operation information and N coefficients respectively corresponding to the N operation reference indexes, wherein N is an integer greater than or equal to 1; and calculating to obtain the change weight based on the actual values and the N coefficients of the N operation reference indexes.
According to an embodiment of the present disclosure, the N operation reference indexes include: at least one of a network transmission performance index, a central processing unit computing capacity index, a disk read-write performance index, a graphics processor computing capacity index and a downtime frequency index.
According to an embodiment of the disclosure, each second node is further configured to: obtaining a standard weight; and determining a weight stable interval based on the standard weight. The first node is further configured to: and determining a node with the change weight exceeding the corresponding weight stable interval from the first node and the P second nodes as a target node, and informing a user of adjusting the operation condition of the target node so as to enable the change weight of the target node to be positioned in the range of the corresponding weight stable interval.
According to an embodiment of the present disclosure, obtaining the standard weight includes: obtaining a standard weight based on user input; or calculating to obtain the standard weight of the node based on the standard values of the N operating reference indexes of the node and N coefficients respectively corresponding to the N operating reference indexes, wherein N is an integer greater than or equal to 1.
Another aspect of the embodiments of the present disclosure further provides a task allocation apparatus, which is used for a distributed system, where the distributed system includes M nodes, and M is an integer greater than or equal to 1.
Fig. 5 schematically shows a block diagram of a task assigning apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus 500 includes a first assignment module 510, a weight determination module 520, and a second assignment module 530.
The first assignment module 510 is configured to assign tasks to the M nodes based on their respective initial weights.
The weight determining module 520 is configured to determine a change weight of each node based on the operation information of each node in the M nodes during the task execution of the M nodes.
The second assignment module 530 is configured to assign tasks to the M nodes based on the change weight of each node.
According to the embodiment of the present disclosure, the weight of each node is re-determined according to its running condition during task execution, and subsequent task allocation is performed according to the changed weight. In this way, the weight (authority) of a node can change in real time with its running condition, without depending on manual settings or on hardware configuration alone; scheduling is made as fair as possible, the fairness of node competition in the distributed system is improved, and the stability and resource utilization of the distributed system are improved.
According to the embodiment of the disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node comprises: and in the process of executing tasks by the M nodes, the change weight of each node is determined again at intervals of preset time.
According to the embodiment of the disclosure, in the process of executing tasks by the M nodes, determining the change weight of each node comprises: performing the following for each node: acquiring actual values of N operation reference indexes in the operation information and N coefficients respectively corresponding to the N operation reference indexes, wherein N is an integer greater than or equal to 1; and calculating to obtain the change weight based on the actual values and the N coefficients of the N operation reference indexes.
According to an embodiment of the present disclosure, the N operation reference indexes include: at least one of a network transmission performance index, a central processing unit computing capacity index, a disk read-write performance index, a graphics processor computing capacity index and a downtime frequency index.
According to the embodiment of the present disclosure, the change weight is obtained by weighting the operation reference indexes, such as the CPU computing capability index, the disk IO performance index, and the downtime frequency index, with the coefficient corresponding to each index, so the change weight can be determined accurately. By setting the coefficients according to the importance of each performance index, the influence of an important index (such as the downtime frequency index ms) on the weight can be increased, and the amount of tasks allocated to a node whose value of an important index is low can be reduced.
According to an embodiment of the present disclosure, the task assigning apparatus may further include a standard weight module, an interval determining module, a target node module, and an adjusting module. The standard weight module is used for obtaining the standard weight of each node in the M nodes; the interval determining module is used for determining a weight stable interval of each node based on the standard weight of each node; the target node module is used for determining a node with a change weight exceeding a corresponding weight stable interval from the M nodes as a target node; the adjusting module is used for adjusting the running condition of the target node so that the change weight of the target node is located in the corresponding weight stable interval range.
According to an embodiment of the present disclosure, obtaining the standard weight of each of the M nodes includes: obtaining a standard weight of each node based on user input; or calculating to obtain the standard weight of each node based on the standard values of the N operating reference indexes of each node and N coefficients respectively corresponding to the N operating reference indexes, wherein N is an integer greater than or equal to 1.
According to the embodiment of the disclosure, stable operation of the cluster can be ensured by setting the weight stable interval, monitoring the weight change of each node, and adjusting the nodes with weights exceeding the weight stable interval.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first allocating module 510, the weight determining module 520, the second allocating module 530, the standard weight module, the interval determining module, the target node module and the adjusting module may be combined and implemented in one module/unit/sub-unit, or any one of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least part of the functionality of one or more of these modules/units/sub-units may be combined with at least part of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to an embodiment of the present disclosure, at least one of the first allocating module 510, the weight determining module 520, the second allocating module 530, the standard weight module, the interval determining module, the target node module, and the adjusting module may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or in a suitable combination of any of them. Alternatively, at least one of the first assignment module 510, the weight determination module 520, the second assignment module 530, the standard weight module, the interval determination module, the target node module, and the adjustment module may be implemented at least in part as a computer program module that, when executed, may perform corresponding functions.
It should be noted that, the task allocation apparatus portion in the embodiment of the present disclosure corresponds to the task allocation method portion in the embodiment of the present disclosure, and the description of the task allocation apparatus portion specifically refers to the task allocation method portion, which is not described herein again.
Another aspect of the embodiments of the present disclosure also provides an electronic device, including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the task allocation method as described above.
Fig. 6 schematically shows a block diagram of an electronic device adapted to implement the above described method according to an embodiment of the present disclosure. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, an electronic device 600 according to an embodiment of the present disclosure includes a processor 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. Processor 601 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 601 may also include onboard memory for caching purposes. Processor 601 may include a single processing unit or multiple processing units for performing different actions of a method flow according to embodiments of the disclosure.
In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are stored. The processor 601, the ROM602, and the RAM 603 are connected to each other via a bus 604. The processor 601 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM602 and/or RAM 603. It is to be noted that the programs may also be stored in one or more memories other than the ROM602 and RAM 603. The processor 601 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 600 may also include an input/output (I/O) interface 605, which is also connected to the bus 604. The electronic device 600 may further include one or more of the following components connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program, when executed by the processor 601, performs the above-described functions defined in the system of the embodiments of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, a computer-readable storage medium may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM602 and/or RAM 603 described above and/or one or more memories other than the ROM602 and RAM 603.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or combinations of features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or combinations are not expressly recited in the present disclosure. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit or teaching of the present disclosure. All such combinations and/or associations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A task allocation method is used for a distributed system, the distributed system comprises M nodes, M is an integer greater than or equal to 1, and the method comprises the following steps:
distributing tasks to the M nodes based on the initial weight of each node in the M nodes;
determining a change weight of each node in the process of executing tasks by the M nodes, wherein the change weight of each node is determined based on respective operation information; and
distributing tasks to the M nodes based on the change weight of each node.
2. The method of claim 1, wherein said determining said change weight for each node during task execution by said M nodes comprises:
and in the process of executing tasks by the M nodes, re-determining the change weight of each node every preset time.
3. The method of claim 1 or 2, wherein: the determining the change weight of each node during the task execution of the M nodes includes:
performing the following for each node:
acquiring actual values of N operation reference indexes in the operation information and N coefficients corresponding to the N operation reference indexes respectively, wherein N is an integer greater than or equal to 1;
and calculating the change weight based on the actual values of the N running reference indexes and the N coefficients.
4. The method of claim 3, wherein:
the N operational reference indicators include: at least one of a network transmission performance index, a central processing unit computing capacity index, a disk read-write performance index, a graphics processor computing capacity index and a downtime frequency index.
5. The method of claim 1, further comprising:
obtaining a standard weight of each node in the M nodes;
determining a weight stability interval of each node based on the standard weight of each node;
determining a node with a change weight exceeding a corresponding weight stable interval from the M nodes as a target node;
and adjusting the operation condition of the target node so that the change weight of the target node is positioned in the corresponding weight stable interval range.
6. The method of claim 1, wherein the obtaining the standard weight of each of the M nodes comprises:
obtaining the standard weight of each node based on user input; or
calculating the standard weight of each node based on standard values of the N operation reference indexes of each node and N coefficients respectively corresponding to the N operation reference indexes, wherein N is an integer greater than or equal to 1.
7. A distributed system, comprising:
a first node and P second nodes, wherein the first node and each second node have respective initial weights, and P is an integer greater than or equal to 1;
wherein the first node is configured to distribute tasks to the first node and the P second nodes based on the initial weights of the first node and the P second nodes;
the first node and each second node are configured to determine their respective change weights based on their respective operation information while executing the tasks; and
the first node is further configured to distribute tasks to the first node and the P second nodes based on the change weights of the first node and the P second nodes.
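By way of non-limiting illustration of claim 7 (and the periodic re-determination of claim 2), the following single-process Python sketch simulates a first node that periodically collects a change weight from itself and from P second nodes and redistributes tasks accordingly; the node names, the value P = 3, and the random weight-update rule are illustrative assumptions.

# Minimal sketch: periodic weight collection and redistribution (simulated).
import random

class Node:
    def __init__(self, name, initial_weight):
        self.name = name
        self.weight = initial_weight  # starts at the initial weight

    def report_change_weight(self):
        # Stand-in for deriving a change weight from real operation information.
        self.weight = max(0.1, self.weight * random.uniform(0.8, 1.2))
        return self.weight

def distribute(tasks, nodes):
    weights = [n.weight for n in nodes]
    return {t: random.choices(nodes, weights=weights, k=1)[0].name for t in tasks}

if __name__ == "__main__":
    first = Node("first-node", 2.0)
    seconds = [Node(f"second-node-{i}", 1.0) for i in range(3)]  # P = 3
    cluster = [first] + seconds
    for round_no in range(2):            # two scheduling rounds ("preset time")
        for n in cluster:
            n.report_change_weight()
        print(round_no, distribute([f"task-{i}" for i in range(5)], cluster))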
8. A task allocation apparatus for a distributed system, the distributed system comprising M nodes, M being an integer greater than or equal to 1, the apparatus comprising:
a first distribution module configured to distribute tasks to the M nodes based on respective initial weights of the M nodes;
a weight determination module configured to determine, while the M nodes execute tasks, a change weight of each of the M nodes based on the operation information of that node; and
a second distribution module configured to distribute tasks to the M nodes based on the change weight of each node.
9. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-6.
10. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to carry out the method of any one of claims 1 to 6.
CN202010822709.5A 2020-08-14 2020-08-14 Task allocation method and device, distributed system, electronic device and medium Pending CN111782626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010822709.5A CN111782626A (en) 2020-08-14 2020-08-14 Task allocation method and device, distributed system, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010822709.5A CN111782626A (en) 2020-08-14 2020-08-14 Task allocation method and device, distributed system, electronic device and medium

Publications (1)

Publication Number Publication Date
CN111782626A true CN111782626A (en) 2020-10-16

Family

ID=72762864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010822709.5A Pending CN111782626A (en) 2020-08-14 2020-08-14 Task allocation method and device, distributed system, electronic device and medium

Country Status (1)

Country Link
CN (1) CN111782626A (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100146122A1 (en) * 2007-12-26 2010-06-10 Symantec Corporation Balanced Consistent Hashing for Distributed Resource Management
WO2011151569A1 (en) * 2010-06-01 2011-12-08 Bull Sas Method of pseudo-dynamic routing in a cluster comprising static communication links and computer program implementing this method
CN104270402A (en) * 2014-08-25 2015-01-07 浪潮电子信息产业股份有限公司 Adaptive data loading method for heterogeneous cluster storage
CN108667878A (en) * 2017-03-31 2018-10-16 北京京东尚科信息技术有限公司 Server load balancing method and device, storage medium, electronic equipment
CN107229511A (en) * 2017-05-11 2017-10-03 东软集团股份有限公司 Cluster task equalization scheduling method, device, storage medium and electronic equipment
CN109814987A (en) * 2017-11-20 2019-05-28 北京京东尚科信息技术有限公司 Task processing method, system, electronic equipment and computer-readable medium
CN108809848A (en) * 2018-05-28 2018-11-13 北京奇艺世纪科技有限公司 Load-balancing method, device, electronic equipment and storage medium
CN108762924A (en) * 2018-05-28 2018-11-06 郑州云海信息技术有限公司 A kind of method, apparatus and computer readable storage medium of load balancing
CN109144689A (en) * 2018-06-29 2019-01-04 华为技术有限公司 Method for scheduling task, device and computer program product
CN108958942A (en) * 2018-07-18 2018-12-07 郑州云海信息技术有限公司 A kind of distributed system distribution multitask method, scheduler and computer equipment
CN109120715A (en) * 2018-09-21 2019-01-01 华南理工大学 Dynamic load balancing method under a kind of cloud environment
CN109710407A (en) * 2018-12-21 2019-05-03 浪潮电子信息产业股份有限公司 Distributed system real-time task scheduling method, device, equipment and storage medium
CN109617826A (en) * 2018-12-29 2019-04-12 南京航空航天大学 A kind of storm dynamic load balancing method based on cuckoo search
CN110149395A (en) * 2019-05-20 2019-08-20 华南理工大学 One kind is based on dynamic load balancing method in the case of mass small documents high concurrent
CN110597626A (en) * 2019-08-23 2019-12-20 第四范式(北京)技术有限公司 Method, device and system for allocating resources and tasks in distributed system
CN110795233A (en) * 2019-09-18 2020-02-14 北京你财富计算机科技有限公司 Distributed resource allocation method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIONG, JIANBO: "Research on the Improvement of the Load Balancing Algorithm of the FastDFS Distributed File System", China Master's Theses Full-text Database (Master), Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112139A (en) * 2021-04-07 2021-07-13 上海联蔚盘云科技有限公司 Cloud platform bill processing method and equipment
CN114826882A (en) * 2022-04-26 2022-07-29 中煤科工集团重庆智慧城市科技研究院有限公司 Communication adaptation method and system applied to smart city

Similar Documents

Publication Publication Date Title
US10572290B2 (en) Method and apparatus for allocating a physical resource to a virtual machine
CN109726005B (en) Method, server system and computer readable medium for managing resources
US10936377B2 (en) Distributed database system and resource management method for distributed database system
US11095531B2 (en) Service-aware serverless cloud computing system
WO2022111453A1 (en) Task processing method and apparatus, task allocation method, and electronic device and medium
KR101553650B1 (en) Apparatus and method for load balancing in multi-core system
CN111782626A (en) Task allocation method and device, distributed system, electronic device and medium
CN114787830A (en) Machine learning workload orchestration in heterogeneous clusters
CN114327843A (en) Task scheduling method and device
US10733022B2 (en) Method of managing dedicated processing resources, server system and computer program product
CN112749002A (en) Method and device for dynamically managing cluster resources
CN113986497B (en) Queue scheduling method, device and system based on multi-tenant technology
KR20210095687A (en) Scheduling method and apparatus, electronic device and recording medium
CN112925616A (en) Task allocation method and device, storage medium and electronic equipment
CN114138428A (en) SLO (Simultaneous task oriented) guaranteeing method, device, node and storage medium for multi-priority tasks
CN114116173A (en) Method, device and system for dynamically adjusting task allocation
CN115640113A (en) Multi-plane flexible scheduling method
CN109842665B (en) Task processing method and device for task allocation server
WO2022111466A1 (en) Task scheduling method, control method, electronic device and computer-readable medium
CN115775199A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN114896070A (en) GPU resource allocation method for deep learning task
CN111694670B (en) Resource allocation method, apparatus, device and computer readable medium
CN115700482A (en) Task execution method and device
CN114546630A (en) Task processing method and distribution method, device, electronic equipment and medium
CN113377539A (en) Processing method and device for realizing load balance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210120

Address after: 100140, 55, Fuxing Avenue, Xicheng District, Beijing

Applicant after: INDUSTRIAL AND COMMERCIAL BANK OF CHINA

Applicant after: ICBC Technology Co.,Ltd.

Address before: 071700 unit 111, 1st floor, building C, enterprise office area, xiong'an Civic Service Center, Rongcheng County, xiong'an District, China (Hebei) pilot Free Trade Zone, Hebei Province

Applicant before: ICBC Technology Co.,Ltd.
