CN114281537A - Resource allocation method, device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114281537A (application CN202111578914.2A)
- Authority
- CN
- China
- Prior art keywords
- task
- execution queue
- tasks
- resource pool
- task execution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The application relates to a resource allocation method, a device, an electronic device and a storage medium, wherein the resource allocation method comprises the following steps: determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue; and allocating resources of the fixed resource pool or resources of the elastic resource pool to the tasks in the task execution queue according to a target resource amount and the total amount of the resources of the fixed resource pool, wherein the target resource amount is the resource amount required for executing the tasks in the task execution queue. According to the method and the device, the tasks in the task cache queue can be added into the task execution queue in time according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue, so that congestion caused by excessive tasks is reduced, resources of different resource pools are reasonably distributed for the tasks in the task execution queue, and task processing efficiency is improved.
Description
Technical Field
The present application relates to the field of resource allocation technologies, and in particular, to a resource allocation method and apparatus, an electronic device, and a storage medium.
Background
In the prior art, tasks initiated by users are executed in the order in which they are received, and resources are allocated to each received task according to its maximum resource demand. When resources are limited, receiving a large number of tasks at once may cause task congestion, eventually resulting in task failures due to execution timeouts.
Disclosure of Invention
The application provides a resource allocation method, a resource allocation apparatus, an electronic device, and a storage medium, to solve the prior-art problem that receiving a large number of tasks at one time blocks the tasks and ultimately causes them to fail due to execution timeout.
In a first aspect, the present application provides a resource allocation method, including:
determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue;
and allocating resources of the fixed resource pool or resources of the elastic resource pool to the tasks in the task execution queue according to a target resource amount and the total amount of the resources of the fixed resource pool, wherein the target resource amount is the resource amount required for executing the tasks in the task execution queue.
Optionally, the determining whether to add the task in the task buffer queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold, and the task number of the task execution queue includes:
if the number of the tasks in the task execution queue is smaller than the maximum task number threshold value, adding the tasks in the task cache queue to the task execution queue;
and if the number of the tasks in the task execution queue is greater than or equal to the maximum task number threshold value, not adding the tasks to the task execution queue.
Optionally, the determining whether to add the task in the task buffer queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold, and the task number of the task execution queue includes:
if the number of tasks in the task execution queue is smaller than a fixed resource pool task number threshold value, adding the tasks in the task cache queue to the task execution queue;
if the number of tasks in the task execution queue is greater than or equal to the fixed resource pool task number threshold and less than the maximum task number threshold, adding the high-priority tasks in the task cache queue to the task execution queue;
and if the number of the tasks in the task execution queue is greater than or equal to the maximum task number threshold value, not adding the tasks to the task execution queue.
Optionally, before the adding the high-priority task in the task buffer queue to the task execution queue, the method further includes:
and for any task in the task cache queue, determining the priority of the task according to the identity of the publisher of the task, wherein the priority comprises a high priority and a low priority.
Optionally, the allocating resources of the fixed resource pool or resources of the elastic resource pool to the task in the task execution queue according to the target resource amount and the total resource amount of the fixed resource pool includes:
if the target resource amount is less than or equal to the total resource amount of the fixed resource pool, allocating resources in the fixed resource pool to the tasks in the task execution queue;
if the target resource amount is larger than the total resource amount of the fixed resource pool, allocating resources in the fixed resource pool or resources in the elastic resource pool to the tasks in the task execution queue according to the priority of the tasks in the task execution queue, wherein the resource amount in the elastic resource pool is determined according to the target resource amount and the total resource amount of the fixed resource pool.
Optionally, the allocating, according to the priority of the task in the task execution queue, a resource in a fixed resource pool or a resource in an elastic resource pool to the task in the task execution queue includes:
for any task in the task execution queue, performing the following operations:
if the priority of any task is high, allocating resources in the elastic resource pool to any task;
and if the priority of any task is low, allocating the resources in the fixed resource pool to any task.
Optionally, the method further comprises:
adjusting the state identifier of the task added to the task execution queue from the first identifier to a second identifier; the first identification represents that the task is not pushed to the task execution queue, and the second identification represents that the task is pushed to the task execution queue;
and adjusting the state identifier of the executed task into a third identifier, wherein the third identifier represents that the task is executed.
In a second aspect, the present application provides a resource allocation apparatus, including:
the determining module is used for determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue;
and the allocation module is used for allocating resources of the fixed resource pool or resources of the elastic resource pool to the tasks in the task execution queue according to a target resource amount and the total amount of the resources of the fixed resource pool, wherein the target resource amount is the amount of the resources required for executing the tasks in the task execution queue.
In a third aspect, the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the resource allocation method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the resource allocation method according to any one of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the resource allocation method provided by the embodiment of the application, the tasks in the task cache queue are added into the task execution queue in time according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue, so that congestion caused by excessive tasks is reduced, resources of different resource pools are reasonably allocated for the tasks in the task execution queue, and the task processing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a schematic flowchart of a resource allocation method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a resource allocation process according to an embodiment of the present application;
fig. 3 is a schematic diagram of a resource allocation apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To solve the prior-art problem that receiving a large number of tasks at one time blocks the tasks and ultimately causes them to fail due to execution timeout, an embodiment of the present application provides a resource allocation method applied to a processor, where the processor may be located in any device, such as a server.
As shown in fig. 1, the resource allocation method includes steps 101 to 102:
step 101: and determining whether to add the tasks in the task cache queue to the task execution queue or not according to the threshold value of the number of the tasks in the fixed resource pool, the threshold value of the maximum number of the tasks and the number of the tasks in the task execution queue.
The fixed resource pool task number threshold is the maximum number of tasks that can be supported by the resources in the fixed resource pool. The maximum task count threshold may be a fixed value set in advance.
Illustratively, the fixed resource pool task number threshold may be denoted by taskExpandThreshold, and the maximum task number threshold may be denoted by taskMaxCount.
The task execution queue may be a message queue (mq), and the tasks in the queue are executed sequentially.
Illustratively, the task execution queue includes task a, and then task b is added to the task execution queue, and in the task execution queue, task a is executed first, and then task b is executed.
In a possible implementation manner, if the number of tasks in the task execution queue is smaller than the maximum task number threshold, the tasks in the task buffer queue are added to the task execution queue; if the number of tasks in the task execution queue is greater than or equal to the maximum task number threshold, no task is added to the task execution queue. The number of tasks in the task execution queue is the real-time number of tasks in the queue.
Illustratively, the number of tasks in the task execution queue may be represented by currentTask.
Specifically, if the number of tasks in the task execution queue is less than the maximum task number threshold, adding the tasks in the task buffer queue to the task execution queue.
Illustratively, the maximum task number threshold is 100 and the number of tasks in the task execution queue is currentTask. If currentTask >= 100, no task is added to the task execution queue; if currentTask < 100, tasks in the task buffer queue are added to the task execution queue.
Specifically, in the process of adding the tasks in the task buffer queue to the task execution queue, the number of the tasks added to the task execution queue is at least one. It is noted that the number of tasks added to the task execution queue is less than or equal to the difference between the maximum number of tasks threshold and the number of tasks in the task execution queue.
For example, if currentTask < 100, (100 - currentTask) tasks in the task buffer queue may be added to the task execution queue at one time. Alternatively, if currentTask < 100, the tasks in the task buffer queue are added to the task execution queue one by one: each time a task is added, whether currentTask < 100 still holds is checked; if so, the next task in the task buffer queue is added to the task execution queue, and if not, no further task is added to the task execution queue.
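A minimal sketch of the one-by-one transfer described above (function names and the in-memory queue representation are illustrative; the threshold value 100 is the example value from the text):

```python
from collections import deque

TASK_MAX_COUNT = 100  # maximum task number threshold (taskMaxCount in the examples)

def fill_execution_queue(buffer_queue, execution_queue):
    """Move buffered tasks into the execution queue one by one,
    re-checking currentTask < TASK_MAX_COUNT before each move."""
    while buffer_queue and len(execution_queue) < TASK_MAX_COUNT:
        execution_queue.append(buffer_queue.popleft())

buffer_queue = deque(f"task-{i}" for i in range(120))
execution_queue = deque()
fill_execution_queue(buffer_queue, execution_queue)
# at most (TASK_MAX_COUNT - currentTask) tasks are moved:
print(len(execution_queue), len(buffer_queue))  # 100 20
```

The batch variant, adding (100 - currentTask) tasks at once, is equivalent here because each move only grows the execution queue by one.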
In another possible implementation, the fixed resource pool task number threshold is also taken into account. If the number of tasks in the task execution queue is smaller than the fixed resource pool task number threshold, the tasks in the task buffer queue are added to the task execution queue; if the number of tasks in the task execution queue is greater than or equal to the fixed resource pool task number threshold and less than the maximum task number threshold, the high-priority tasks in the task buffer queue are added to the task execution queue; and if the number of tasks in the task execution queue is greater than or equal to the maximum task number threshold, no task is added to the task execution queue. The number of tasks in the task execution queue is the real-time number of tasks in the queue.
Illustratively, the fixed resource pool task number threshold is 40, the maximum task number threshold is 100, and the number of tasks in the task execution queue is currentTask. If currentTask >= 100, no task is added to the task execution queue; if 40 <= currentTask < 100, the high-priority tasks in the task buffer queue are added to the task execution queue; if currentTask < 40, the tasks in the task buffer queue are added to the task execution queue.
Specifically, if the number of tasks in the task execution queue is smaller than the threshold of the number of tasks in the fixed resource pool, adding the tasks in the task cache queue to the task execution queue. And if the task number of the task execution queue is greater than or equal to the fixed resource pool task number threshold and less than the maximum task number threshold, adding the high-priority tasks in the task cache queue to the task execution queue.
The specific process of adding the task in the task buffer queue to the task execution queue can be referred to above.
In addition, before the task with high priority in the task cache queue is added to the task execution queue, the priority of any task in the task cache queue is determined according to the identity of the publisher of the task, wherein the priority comprises high priority and low priority.
In one possible implementation, when a task is added to a task cache queue, the priority of the task is determined according to the identity of the publisher of the task.
Specifically, the above identity is used to indicate that the publisher of the task is a vip (very important person) user or a general user. Illustratively, the identity includes a first identity and a second identity. The first identity is used for indicating that the task publisher is a VIP user, and the second identity is used for indicating that the task publisher is a common user. If the identity corresponding to the task is the first identity, the issuer of the task is the VIP user, and the priority of the task is determined to be high priority; and if the identity corresponding to the task is the second identity, the publisher of the task is a common user, and the priority of the task is determined to be low.
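The identity-to-priority mapping can be sketched as follows; the identity values are assumed placeholder names for the first and second identity identifiers, which the patent does not specify concretely:

```python
# assumed placeholder values for the two identity identifiers
VIP_IDENTITY = "first"       # publisher is a VIP user
ORDINARY_IDENTITY = "second" # publisher is an ordinary user

def task_priority(publisher_identity):
    """A VIP publisher's task gets high priority; any other publisher's gets low."""
    return "high" if publisher_identity == VIP_IDENTITY else "low"

print(task_priority(VIP_IDENTITY))       # high
print(task_priority(ORDINARY_IDENTITY))  # low
```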
Illustratively, TA is a task issued (or proposed) by VIP user A, TB is a task issued by VIP user B, and TC is a task issued by general user C; the fixed resource pool task number threshold taskExpandThreshold is 40, the maximum task number threshold taskMaxCount is 100, and the number of tasks queued for execution in the task execution queue mq is currentTask. When currentTask >= 100, no task is put into mq; when 40 <= currentTask < 100, TA and TB are put into mq; when currentTask < 40, TA, TB, and TC are put into mq.
For example, the task in the embodiment of the present application may be a task provided by a user for a building construction drawing to determine a position of a door window in the building construction drawing.
It should be noted that, the above process of adding the task in the task buffer queue to the task execution queue may be to add a task with a high priority in the task buffer queue to the task execution queue. In addition, by differentiating the priority and preferentially executing the tasks proposed by the VIP users, the situation that the execution speed of the tasks proposed by the VIP users is slow due to the fact that resource pool resources are occupied by a large number of tasks proposed by common users can be avoided.
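The two-threshold admission rule of this implementation can be sketched as a single predicate; function and parameter names are illustrative, and the default thresholds 40 and 100 come from the example above:

```python
def may_enter_execution_queue(current_task, priority,
                              task_expand_threshold=40, task_max_count=100):
    """Return True if a buffered task with the given priority may be added
    to an execution queue that currently holds current_task tasks."""
    if current_task >= task_max_count:
        return False               # queue full: admit nothing
    if current_task >= task_expand_threshold:
        return priority == "high"  # between the thresholds: VIP tasks only
    return True                    # below the fixed-pool threshold: admit any task

print(may_enter_execution_queue(30, "low"))    # True
print(may_enter_execution_queue(60, "low"))    # False
print(may_enter_execution_queue(60, "high"))   # True
print(may_enter_execution_queue(100, "high"))  # False
```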
In a possible implementation manner, the task buffer queue and the task execution queue are located in the same database table; to distinguish the tasks in the two queues and avoid repeatedly executing the same task, a state identifier is added to each task in the database table. The state identifier comprises a first identifier and a second identifier: the first identifier indicates that the task has not been pushed to the task execution queue, and the second identifier indicates that the task has been pushed to the task execution queue.
Specifically, the state identifier of the task added to the task execution queue is adjusted from the first identifier to the second identifier.
For example, the first identifier may be "not pushed", that is, a task whose state identifier is the first identifier has not been pushed (or added) to the task execution queue; the second identifier may be "pushed", that is, a task whose state identifier is the second identifier has been pushed to the task execution queue.
In another possible implementation manner, the task buffer queue, the task execution queue, and the already executed tasks are located in the same database table, and in order to distinguish the task buffer queue, the task execution queue, and the already executed tasks and avoid repeatedly executing the same task, a state identifier is added to the task in the database table. The state identifier includes a first identifier, a second identifier, and a third identifier, the first identifier and the second identifier are introduced as described above, and the third identifier indicates that the task has been executed and ended.
For example, the third identifier may be "executed", that is, a task whose state identifier is the third identifier has finished executing.
Specifically, the state identifier of the task added to the task execution queue is adjusted from the first identifier to the second identifier, and the state identifier of the task that has completed execution (i.e., completed execution) is adjusted to the third identifier.
That is to say, after a task proposed by a user is acquired, the task is added to a task cache queue, and a state identifier, that is, a first identifier is added to the task. And subsequently, if the task is added to the task execution queue, the state identifier of the task is adjusted to be the second identifier. And finally, if the task is finished, adjusting the state identifier of the task to be a third identifier.
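The three-state lifecycle can be sketched with an enum; the string values are the illustrative labels used above, and modelling a table row as a dict is a simplification for the sketch:

```python
from enum import Enum

class TaskState(Enum):
    NOT_PUSHED = "not pushed"  # first identifier: still in the buffer queue
    PUSHED = "pushed"          # second identifier: moved to the execution queue
    EXECUTED = "executed"      # third identifier: execution has finished

# one row of the shared database table, modelled as a dict for the sketch
task_row = {"id": 1, "state": TaskState.NOT_PUSHED}
task_row["state"] = TaskState.PUSHED    # set when the task enters the execution queue
task_row["state"] = TaskState.EXECUTED  # set when the task finishes executing
print(task_row["state"].value)  # executed
```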
It should be noted that, through the above process, both the number of tasks the resource pool can support and the number of tasks in the task execution queue are considered: whenever the resource pool can support tasks beyond those already in the task execution queue, tasks in the task buffer queue are added to the task execution queue, so that the tasks proposed by users are executed gradually, in batches. Decoupling the task buffer queue from the task execution queue in this way handles highly concurrent tasks, using the peak-shaving and valley-filling property of queues to solve the task blocking problem caused by high concurrency.
In one possible implementation, the above step 101 may be performed periodically. Illustratively, every 5 s the number of tasks in the task buffer queue and the number of tasks in the task execution queue are checked, step 101 is executed, and, under the corresponding circumstances, tasks in the task buffer queue are added to the task execution queue according to the flow above.
Specifically, step 101 described above may be performed periodically by the open-source job scheduling component xxl-job.
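A plain loop can stand in for the periodic trigger in a sketch; in production a scheduler such as xxl-job would drive the check, and the 5 s interval is the example value above:

```python
import time

def run_periodically(check_and_fill, interval_seconds=5, rounds=3):
    """Invoke step 101 (check queue sizes and move buffered tasks)
    once every interval_seconds, for a bounded number of rounds."""
    for _ in range(rounds):
        check_and_fill()
        time.sleep(interval_seconds)

calls = []
run_periodically(lambda: calls.append(1), interval_seconds=0, rounds=2)
print(len(calls))  # 2
```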
Step 102: and allocating the resources of the fixed resource pool or the resources of the elastic resource pool to the tasks in the task execution queue according to the target resource amount and the total resource amount of the fixed resource pool.
The target resource amount is the resource amount required for executing the tasks in the task execution queue. In addition, the resource amount in the elastic resource pool is determined according to the target resource amount and the total resource amount of the fixed resource pool.
In one possible implementation, if the target resource amount is less than or equal to the total resource amount of the fixed resource pool, allocating resources in the fixed resource pool to the tasks in the task execution queue; and if the target resource amount is larger than the total resource amount of the fixed resource pool, allocating the resources in the fixed resource pool or the resources in the elastic resource pool to the tasks in the task execution queue according to the priorities of the tasks in the task execution queue.
Specifically, when the target resource amount is greater than the total resource amount of the fixed resource pool, the resource of the elastic resource pool is applied. It should be noted that the resource amount of the elastic resource pool to be applied is the difference between the target resource amount and the total resource amount of the fixed resource pool.
Specifically, in the process of allocating resources in the fixed resource pool or resources in the elastic resource pool to the tasks in the task execution queue according to the priorities of the tasks in the task execution queue, for any task in the task execution queue, the following operations are performed: if the priority of any task is high, allocating resources in the elastic resource pool to any task; and if the priority of any task is low, allocating the resources in the fixed resource pool to any task.
That is, when there is a task with a high priority in the task execution queue, the elastic resource pool is requested for resources according to the requirement of the task with the high priority, or the amount of resources in the elastic resource pool is adjusted, and then the elastic resource pool adjusts the amount of resources contained therein in response to the request, and allocates the resources required by the task with the high priority to the task with the high priority. And sending a corresponding request to the fixed resource pool according to the requirement of the low-priority task so as to distribute the resources in the fixed resource pool to the low-priority task.
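Putting the two rules together, the allocation step can be sketched as follows; the field names and dict representation are assumptions, and per the text the elastic pool is sized to the shortfall between the target resource amount and the fixed pool's total:

```python
def allocate_resources(tasks, fixed_pool_total):
    """tasks: list of {"name", "priority", "demand"} dicts.
    Returns (pool assignment per task, elastic pool amount to request)."""
    target = sum(t["demand"] for t in tasks)  # target resource amount
    if target <= fixed_pool_total:
        # the fixed pool suffices for every task in the execution queue
        return {t["name"]: "fixed" for t in tasks}, 0
    # elastic pool is sized to the shortfall (target minus fixed pool total)
    elastic_amount = target - fixed_pool_total
    assignment = {t["name"]: "elastic" if t["priority"] == "high" else "fixed"
                  for t in tasks}
    return assignment, elastic_amount

tasks = [{"name": "TA", "priority": "high", "demand": 6},
         {"name": "TC", "priority": "low", "demand": 5}]
print(allocate_resources(tasks, fixed_pool_total=8))
# ({'TA': 'elastic', 'TC': 'fixed'}, 3)
```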
It should be noted that, through this process, resources in the fixed resource pool and/or the elastic resource pool can be allocated to tasks in the task execution queue according to resource requirements of the tasks in the task execution queue, and the elastic expansion and contraction of machine resources (i.e., the elastic resource pool) is utilized to accurately control resource usage, thereby reducing unnecessary resource consumption, reducing cost, and ensuring processing efficiency of the tasks. When the resource amount of the fixed resource pool is insufficient, namely the target resource amount is larger than the total resource amount of the fixed resource pool, the processing efficiency of the task with higher priority can be ensured by allocating the resources in the elastic resource pool to the task with higher priority in the task execution queue.
In addition, if the resources that one server can provide are regarded as one resource pool, then in the prior art resources are usually allocated to tasks proposed by different users through different servers, that is, different tasks are served in a physically isolated manner. This increases the complexity of device maintenance and wastes resources whenever a single fixed resource pool could have served all users' tasks. Compared with the prior art, the present application allocates resources in the elastic resource pool to high-priority tasks in the task execution queue only when the resource amount of the fixed resource pool is insufficient, that is, when the target resource amount is larger than the total resource amount of the fixed resource pool, which ensures a high resource utilization rate and avoids unnecessary resource waste.
For determining the priority of the task in the task execution queue, reference may be made to the above determining manner of the priority of the task in the task buffer queue, and details are not described here again.
In one possible implementation manner, when a task in the task buffer queue is added to the task execution queue, the state identifier of that task is adjusted from the first identifier to the second identifier; the first identifier represents that the task has not been pushed to the task execution queue, and the second identifier represents that the task has been pushed to the task execution queue. The state identifier of an executed task is adjusted to the third identifier, which represents that the task has finished executing.
It should be noted that, through the above process, the tasks in the task buffer queue can be added to the task execution queue in time according to the fixed resource pool task number threshold, the maximum task number threshold, and the number of tasks in the task execution queue, so that congestion caused by too many tasks is reduced, resources of different resource pools are reasonably allocated to the tasks in the task execution queue, and task processing efficiency is improved.
For example, a schematic diagram of the resource allocation flow may be as shown in fig. 2. VIP user A, VIP user B, and general user C propose tasks through application software or the like; after the tasks proposed by the users (i.e., VIP user A, VIP user B, and general user C) are received, they are cached in a task buffer queue in a database table. At this time, the tasks in the task buffer queue, in the order in which they were proposed, are TA1, TB, TC1, TC2, and TA2, where TA1 and TA2 are tasks proposed by VIP user A, TB is a task proposed by VIP user B, and TC1 and TC2 are tasks proposed by general user C. Subsequently, TA1, TA2, and TB are added to the task execution queue by the task scheduler following the procedure in step 101 above. Finally, resources in the fixed resource pool or the elastic resource pool are allocated to the tasks in the task execution queue, and the corresponding tasks are executed with the allocated resources. Here, taskThreshold: 50 in the figure indicates that the fixed resource pool task number threshold, i.e., taskExpandThreshold, is 50.
As shown in fig. 3, an embodiment of the present application provides a resource allocation apparatus, which includes a determining module 301 and an allocating module 302.
The determining module 301 is configured to determine whether to add a task in the task buffer queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold, and the task number of the task execution queue.
An allocating module 302, configured to allocate resources of the fixed resource pool or resources of the elastic resource pool to the task in the task execution queue according to a target resource amount and a total resource amount of the fixed resource pool, where the target resource amount is a resource amount required for executing the task in the task execution queue.
As shown in fig. 4, the embodiment of the present application provides an electronic device, which includes a processor 401, a communication interface 402, a memory 403, and a communication bus 404, where the processor 401, the communication interface 402, and the memory 403 complete mutual communication via the communication bus 404;
a memory 403 for storing a computer program;
in an embodiment of the present application, the processor 401 is configured to implement the steps of the resource allocation method provided in any one of the foregoing method embodiments when executing the program stored in the memory 403.
The present application also provides a computer readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the steps of the resource allocation method provided in any one of the foregoing method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing descriptions are merely exemplary embodiments of the present invention, provided to enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method for resource allocation, the method comprising:
determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue;
and allocating resources of the fixed resource pool or resources of the elastic resource pool to the tasks in the task execution queue according to a target resource amount and the total amount of the resources of the fixed resource pool, wherein the target resource amount is the resource amount required for executing the tasks in the task execution queue.
2. The method according to claim 1, wherein the determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold, and the task number of the task execution queue comprises:
if the number of tasks in the task execution queue is less than the maximum task number threshold, adding the tasks in the task cache queue to the task execution queue;
and if the number of tasks in the task execution queue is greater than or equal to the maximum task number threshold, not adding tasks to the task execution queue.
3. The method according to claim 1, wherein the determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold, and the task number of the task execution queue comprises:
if the number of tasks in the task execution queue is less than the fixed resource pool task number threshold, adding the tasks in the task cache queue to the task execution queue;
if the number of tasks in the task execution queue is greater than or equal to the fixed resource pool task number threshold and less than the maximum task number threshold, adding the high-priority tasks in the task cache queue to the task execution queue;
and if the number of tasks in the task execution queue is greater than or equal to the maximum task number threshold, not adding tasks to the task execution queue.
4. The method according to claim 3, wherein before the adding the high-priority tasks in the task cache queue to the task execution queue, the method further comprises:
and for any task in the task cache queue, determining the priority of the task according to the identity of the publisher of the task, wherein the priority comprises a high priority and a low priority.
5. The method according to claim 4, wherein the allocating resources of the fixed resource pool or resources of the elastic resource pool to the task in the task execution queue according to the target resource amount and the total resource amount of the fixed resource pool comprises:
if the target resource amount is less than or equal to the total resource amount of the fixed resource pool, allocating resources in the fixed resource pool to the tasks in the task execution queue;
if the target resource amount is greater than the total resource amount of the fixed resource pool, allocating resources in the fixed resource pool or resources in the elastic resource pool to the tasks in the task execution queue according to the priorities of the tasks in the task execution queue, wherein the resource amount in the elastic resource pool is determined according to the target resource amount and the total resource amount of the fixed resource pool.
6. The method according to claim 5, wherein the allocating resources in a fixed resource pool or resources in an elastic resource pool for the tasks in the task execution queue according to the priorities of the tasks in the task execution queue comprises:
for any task in the task execution queue, performing the following operations:
if the priority of the task is high, allocating resources in the elastic resource pool to the task;
and if the priority of the task is low, allocating resources in the fixed resource pool to the task.
7. The method of any one of claims 1-6, wherein the method further comprises:
adjusting the state identifier of a task added to the task execution queue from a first identifier to a second identifier, wherein the first identifier represents that the task has not been pushed to the task execution queue, and the second identifier represents that the task has been pushed to the task execution queue;
and adjusting the state identifier of an executed task to a third identifier, wherein the third identifier represents that the task has been executed.
8. A resource allocation apparatus, characterized in that the resource allocation apparatus comprises:
the determining module is used for determining whether to add the tasks in the task cache queue to the task execution queue according to the fixed resource pool task number threshold, the maximum task number threshold and the task number of the task execution queue;
and the allocation module is used for allocating resources of the fixed resource pool or resources of the elastic resource pool to the tasks in the task execution queue according to a target resource amount and the total amount of the resources of the fixed resource pool, wherein the target resource amount is the amount of the resources required for executing the tasks in the task execution queue.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the resource allocation method of any one of claims 1 to 7 when executing a program stored on a memory.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the resource allocation method according to any one of claims 1 to 7.
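The pool-selection rule of claims 5 and 6 can be sketched in the same spirit. Integer resource amounts, the `demand` field, and the function name `allocate` are assumptions made for the illustration, not part of the claims.

```python
def allocate(execution_queue, fixed_pool_total):
    """Choose a resource pool for each task in the execution queue.

    Returns (assignment, elastic_pool_size). The target resource amount is
    the sum of per-task demands; when it exceeds the fixed pool total, the
    elastic pool is sized to the shortfall (claim 5), and high-priority
    tasks draw on the elastic pool while low-priority tasks stay on the
    fixed pool (claim 6).
    """
    target = sum(task["demand"] for task in execution_queue)
    if target <= fixed_pool_total:
        # The fixed pool alone covers the demand; no elastic resources needed.
        return {task["id"]: "fixed" for task in execution_queue}, 0
    elastic_pool_size = target - fixed_pool_total
    assignment = {
        task["id"]: "elastic" if task["priority"] == "high" else "fixed"
        for task in execution_queue
    }
    return assignment, elastic_pool_size
```

For instance, two tasks demanding 3 and 4 units both fit a fixed pool of 10 units, whereas against a fixed pool of 5 units the high-priority task is shifted onto a 2-unit elastic pool.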
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111578914.2A CN114281537A (en) | 2021-12-22 | 2021-12-22 | Resource allocation method, device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114281537A true CN114281537A (en) | 2022-04-05 |
Family
ID=80874277
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111578914.2A (Pending) | Resource allocation method, device, electronic equipment and storage medium | 2021-12-22 | 2021-12-22
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114281537A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||