CN114490081A - Resource scheduling method and device and electronic equipment - Google Patents

Resource scheduling method and device and electronic equipment

Info

Publication number
CN114490081A
CN114490081A (application number CN202210135047.3A)
Authority
CN
China
Prior art keywords
big data
pmem
data component
resources
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210135047.3A
Other languages
Chinese (zh)
Inventor
宋文豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210135047.3A priority Critical patent/CN114490081A/en
Publication of CN114490081A publication Critical patent/CN114490081A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022 Mechanisms to release resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention relates to a resource scheduling method, a resource scheduling device and electronic equipment. The method comprises the following steps: periodically acquiring the usage of the PMEM resources corresponding to each big data component among all big data components; after a resource scheduling allocation request sent by a first big data component is received, determining allocable PMEM resources according to the usage, acquired in the current period, of the PMEM resources corresponding to each big data component other than the first big data component; and allocating the allocable PMEM resources to the task queue corresponding to the first big data component to execute the tasks to be executed. By this method, each big data component can be guaranteed reasonable resources to support task execution. When the resources of some big data components are insufficient, the PMEM resources pre-configured for other big data components can be flexibly scheduled by the resource scheduling system for use by the big data components with insufficient resources, achieving reasonable distribution of all PMEM resources.

Description

Resource scheduling method and device and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a resource scheduling method, a resource scheduling device and electronic equipment.
Background
The big data platform integrates a number of commonly used big data components covering storage, scheduling and computing, such as the Hadoop Distributed File System (HDFS), the Hadoop database (HBase), the distributed computing engine Spark and the distributed in-memory database Redis, and touches every aspect of the big data field.
Currently, when Persistent Memory (PMEM) resources are allocated for big data components, there are generally the following two modes:
The first mode: the PMEM resources in a cluster node are used exclusively by the task of the big data component that is currently being executed. In this case, although the execution efficiency of the big data component currently executing the task can be greatly improved, the tasks of other components cannot be executed; moreover, because the PMEM resources are used by only one task, a large amount of resources is often left idle, causing resource waste.
The second mode: the PMEM resources in the cluster nodes are shared by the tasks of all big data components, and PMEM resources in different proportions are allocated to different types of big data components. This may result in some types of big data components having insufficient PMEM resources while other types of big data components waste PMEM resources.
In summary, the prior art lacks a scheme for reasonably scheduling PMEM resources that ensures, as far as possible, that reasonable resources can be allocated to all types of big data components while avoiding resource waste.
Disclosure of Invention
The application provides a resource scheduling method, a resource scheduling device and electronic equipment, which are used for solving the technical problems in the prior art that reasonable resources cannot be allocated to all types of big data components as far as possible and that resource waste cannot be avoided.
In a first aspect, the present application provides a resource scheduling method, including:
periodically acquiring the use condition of the PMEM resource of the persistent memory corresponding to each big data component in all the big data components;
after a resource scheduling allocation request sent by a first big data component is received, determining distributable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period;
and allocating the allocable PMEM resources to a task queue corresponding to the first big data component to execute the task to be executed, wherein the first big data component is any one of all the big data components.
In a second aspect, the present application provides an apparatus for scheduling resources, the apparatus comprising:
the acquisition module is used for periodically acquiring the use condition of the PMEM resource of the persistent memory corresponding to each big data component in all the big data components and receiving a resource scheduling allocation request sent by each big data component;
the processing module is used for determining distributable PMEM resources according to the acquired use condition of the PMEM resources corresponding to each big data component except the first big data component in the current period after receiving a resource scheduling allocation request sent by the first big data component;
and the allocation module is used for allocating the allocable PMEM resources to the task queue corresponding to the first big data component so as to execute the task to be executed, wherein the first big data component is any one of all the big data components.
In a third aspect, an electronic device is provided, where the electronic device carries a resource scheduling system, and includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the resource scheduling method according to any one of the embodiments of the first aspect when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the resource scheduling method as defined in any one of the embodiments of the first aspect.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
according to the method provided by the embodiment of the application, the use condition of the PMEM resource corresponding to each big data component in all the big data components is periodically acquired. After a resource scheduling allocation request sent by a first big data component is received, determining distributable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period. And then, allocating the allocable PMEM resources to a task queue corresponding to the first big data component for executing the task to be executed. By the method, each big data component can be guaranteed to have reasonable PMEM resources to support the execution of the task when the task is executed. Meanwhile, when the PMEM resources of some big data components are insufficient, the PMEM resources pre-configured by other big data components can be flexibly scheduled by the resource scheduling system to be used by the big data components with insufficient resources so as to complete the execution of tasks. Therefore, the situation that the big data component is insufficient for PMEM resources can be avoided as much as possible, the situation that the PMEM resources are wasted can be avoided, and all PMEM resources are reasonably distributed.
Drawings
Fig. 1 is a schematic flowchart of a resource scheduling method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another resource scheduling method according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of another resource scheduling method according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of another resource scheduling method according to an embodiment of the present invention;
fig. 5 is a schematic flowchart of another resource scheduling method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For the convenience of understanding of the embodiments of the present invention, the following description will be further explained with reference to specific embodiments, which are not to be construed as limiting the embodiments of the present invention.
To solve the technical problems mentioned in the background art, an embodiment of the present application provides a resource scheduling method, specifically referring to fig. 1, where fig. 1 is a schematic flow chart of a resource scheduling method provided in an embodiment of the present invention.
Before the steps of the method are introduced, the application scenario to which the method is applicable is described first. The method may be applied in server clusters, each comprising a plurality of servers (nodes). Each normally working node may carry the same PMEM devices and carry the tasks to be executed corresponding to all big data components, such as Spark tasks, HBase tasks and Redis tasks. Each node is allocated task queues for executing tasks, and different task queues are pre-configured with PMEM resources in different proportions according to the type of big data component. Of course, when the PMEM device on any node fails, the resource scheduling system can learn of this and inform the corresponding big data component to allocate its tasks to other nodes for execution.
In one example, assume that the PMEM resource on each node is 100T and there are 10 nodes in total, so the total resource is 1000T. The types of big data components include Spark, HBase and Redis. Of the 100T of resources on each node, 40 percent is allocated to Spark, 30 percent to HBase, and the remaining 30 percent to Redis. The total amount of PMEM resources obtained by Spark is then 400T, and the other two big data components obtain 300T each.
The above is the allocation mode of the total resources and of the PMEM device resources on each node, that is, the amount of PMEM resources that each type of big data component can occupy after being pre-configured in advance. However, this configuration is not fixed: when the PMEM resources of a certain big data component are idle at a certain time or within a certain time period, and other big data components happen to need more PMEM resources to execute tasks, the resource scheduling system can flexibly schedule and allocate the PMEM resources, so as to ensure that most or even all big data components have reasonable PMEM resources to execute tasks and to avoid wasting PMEM resources as much as possible.
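As an illustration of the pre-configured allocation in the example above, the following is a minimal sketch; the variable names, data structures and helper function are assumptions added for illustration and are not part of the patent.
```python
# Illustrative only: numbers come from the example above (100T per node, 10 nodes, 40/30/30).
NODE_PMEM_TB = 100          # PMEM capacity per node
NODE_COUNT = 10             # cluster has 10 nodes, 1000T in total

# Pre-configured per-node allocation ratios for each big data component type.
PRECONFIGURED_RATIO = {"Spark": 0.40, "HBase": 0.30, "Redis": 0.30}

def preconfigured_totals():
    """Total PMEM pre-configured for each component across the whole cluster."""
    cluster_total = NODE_PMEM_TB * NODE_COUNT
    return {name: cluster_total * ratio for name, ratio in PRECONFIGURED_RATIO.items()}

if __name__ == "__main__":
    # -> {'Spark': 400.0, 'HBase': 300.0, 'Redis': 300.0}
    print(preconfigured_totals())
```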
Hereinafter, how to implement resource scheduling will be described in detail, and specifically referring to fig. 1, the method is executed by a resource scheduling system, and the method specifically includes the following steps:
and step 110, periodically acquiring the use condition of the PMEM resource corresponding to each big data component in all the big data components.
Step 120, after receiving the resource scheduling allocation request sent by the first big data component, determining allocable PMEM resources according to the usage of the PMEM resources corresponding to each big data component except the first big data component, which has been acquired in the current period.
Specifically, an operator may configure a set of device information acquisition scripts on each node in advance and set a timed task. Each node can then periodically collect the usage of the PMEM resources corresponding to each big data component it carries and write the collection result into a MYSQL table.
The resource scheduling system can periodically read the use condition of the PMEM resource from MYSQL through a pre-configured collected data reading interface, and the use condition can comprise the use amount of the resource. Of course, the amount of PMEM resources remaining for each big data component may also be included.
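The following is a minimal sketch of such a per-node acquisition script, assuming the pymysql client, a hypothetical MYSQL table pmem_usage(node, component, used_tb, total_tb, ts) and a placeholder read_pmem_usage() helper; the table layout, column names and helper are assumptions for illustration, not the patented implementation.
```python
import time
import pymysql

def read_pmem_usage():
    # Placeholder: in practice this would query the node's PMEM devices
    # per big data component (the values below are made up for illustration).
    return [("Spark", 35.0, 40.0), ("HBase", 28.5, 30.0), ("Redis", 10.2, 30.0)]

def collect_once(node_name, conn):
    # Write one sample per component into the assumed pmem_usage table.
    with conn.cursor() as cur:
        for component, used_tb, total_tb in read_pmem_usage():
            cur.execute(
                "INSERT INTO pmem_usage (node, component, used_tb, total_tb, ts) "
                "VALUES (%s, %s, %s, %s, NOW())",
                (node_name, component, used_tb, total_tb),
            )
    conn.commit()

if __name__ == "__main__":
    conn = pymysql.connect(host="mysql-host", user="monitor",
                           password="***", database="scheduler")
    while True:                      # the timed task; 30 s matches the example period
        collect_once("node-01", conn)
        time.sleep(30)
```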
In an alternative example, the period is, for example, 30 seconds. Considering that each task typically executes for, for example, about half an hour, an acquisition period of 30 seconds is sufficient to monitor the PMEM resource usage of the different big data components.
If the usage includes the remaining amount of PMEM resources for each big data component, the resource scheduling system may directly determine the allocable PMEM resources according to the remaining amount of PMEM resources for each big data component.
If the usage includes the total PMEM capacity of each big data component and the resource usage amount, the resource scheduling system may also determine the available PMEM resources for each big data component according to the total capacity and the resource usage amount.
Alternatively, if the usage only includes the resource usage amount of each big data component, the total resource capacity for each big data component may be configured in advance in the resource scheduling system, or the total number of PMEM devices, the capacity of each PMEM device, the configuration ratio of each big data component, and the like may be configured; the allocable PMEM resources for each big data component can then also be computed.
The available PMEM resources can be determined by any method, and therefore, the specific method is not limited herein.
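As one possible illustration of this step, the sketch below computes the allocable PMEM from the collected usage, assuming the usage is available as a dictionary of per-component totals and used amounts; the structure and function names are assumptions for illustration only.
```python
def allocable_pmem(usage, requesting_component):
    """Sum the spare PMEM of every big data component except the requester."""
    spare = 0.0
    for component, stats in usage.items():
        if component == requesting_component:
            continue
        spare += max(stats["total_tb"] - stats["used_tb"], 0.0)
    return spare

usage = {
    "Spark": {"used_tb": 320.0, "total_tb": 400.0},
    "HBase": {"used_tb": 300.0, "total_tb": 300.0},
    "Redis": {"used_tb": 250.0, "total_tb": 300.0},
}
print(allocable_pmem(usage, "HBase"))   # 130.0 (TB spare outside HBase)
```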
After receiving the resource scheduling allocation request sent by the first big data component, the allocable PMEM resources can be determined according to any one of the above manners.
Step 130, allocating the allocable PMEM resources to the task queue corresponding to the first big data component for executing the task to be executed.
Wherein the first big data component is any one of all big data components.
Specifically, after the distributable PMEM resource is obtained, the distributable PMEM resource is immediately distributed to a task queue corresponding to the first big data component to execute a corresponding task.
According to the resource scheduling method provided by the embodiment of the invention, the service condition of PMEM resources corresponding to each big data component in all big data components is periodically acquired. After a resource scheduling allocation request sent by a first big data component is received, determining distributable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period. And then, allocating the allocable PMEM resources to a task queue corresponding to the first big data component for executing the task to be executed. By the method, each big data component can be guaranteed to have reasonable PMEM resources to support the execution of the task when the task is executed. Meanwhile, when the PMEM resources of some big data components are insufficient, the PMEM resources pre-configured by other big data components can be flexibly scheduled by the resource scheduling system to be used by the big data components with insufficient resources so as to complete the execution of tasks. Therefore, the situation that the big data component is insufficient for PMEM resources can be avoided as much as possible, the situation that the PMEM resources are wasted can be avoided, and all PMEM resources are reasonably distributed.
Optionally, on the basis of the foregoing embodiment, considering that the allocable PMEM resource is a resource allocated from task queues corresponding to other big data components, after the allocable PMEM resource is allocated to a task queue corresponding to a first big data component for executing a task to be executed, the method further includes the following method steps, which are specifically shown in fig. 2.
Step 210, after it is determined that the task queue corresponding to the first big data component executes the task to be executed, the distributable PMEM resource is recycled.
Step 220, the distributable PMEM resources are distributed to the task queues corresponding to the big data components to which the distributable PMEM resources originally belong.
Whether the task queue corresponding to the first big data component has finished executing the task to be executed may be determined in either of the following ways:
firstly, after a task queue corresponding to a first big data component executes a task to be executed, response information of the task execution completion is automatically reported.
Secondly, whether the task queue corresponding to the first big data component completes the task is determined according to the use condition of the PMEM resource corresponding to each big data component fed back periodically (after the task queue corresponding to the big data component completes the task, the PMEM resource is released automatically).
In any way, as long as the task queue corresponding to the first big data component is determined to complete the task, the PMEM resource previously allocated to (lent to) the first big data component is recycled, and then the resource is allocated to (returned to) the task queue corresponding to the original big data component again.
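A minimal sketch of this recycle-and-return flow is given below, assuming a simple in-memory record of borrowed resources; the Loan structure and the queue bookkeeping are illustrative assumptions rather than the patented mechanism.
```python
from dataclasses import dataclass

@dataclass
class Loan:
    lender: str        # component whose queue the PMEM was taken from
    borrower: str      # component whose queue received the PMEM
    amount_tb: float

class LoanBook:
    def __init__(self):
        self.loans = []

    def record(self, lender, borrower, amount_tb):
        self.loans.append(Loan(lender, borrower, amount_tb))

    def recycle(self, borrower, queues):
        """Called once the borrower's queue reports (or usage shows) task completion."""
        for loan in [l for l in self.loans if l.borrower == borrower]:
            queues[loan.borrower] -= loan.amount_tb     # reclaim from borrower's queue
            queues[loan.lender] += loan.amount_tb       # return to the original queue
            self.loans.remove(loan)

queues = {"Spark": 360.0, "HBase": 340.0, "Redis": 300.0}   # 40T lent from Spark to HBase
book = LoanBook(); book.record("Spark", "HBase", 40.0)
book.recycle("HBase", queues)
print(queues)    # back to {'Spark': 400.0, 'HBase': 300.0, 'Redis': 300.0}
```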
In a specific example, the Spark task has high real-time requirements and a short duration, so the task queue allocated to it is a high-priority task queue and its corresponding PMEM resources are 40%-50% of the whole PMEM device. HBase has relatively high real-time query requirements and is designated a normal-priority queue, with 30%-40% of the resources of the whole PMEM device; Redis, as a memory database, has a continuous demand for memory and PMEM resources, so it can be allocated a low-priority queue with 20%-30% of the resources of the whole PMEM device.
When HBase finds that its allocated resources are insufficient while executing a task, it may send a resource scheduling allocation request to the resource scheduling system. After receiving the request, the resource scheduling system determines whether allocable PMEM resources exist according to the usage of the PMEM resources of the big data components other than HBase acquired in the current period.
When it is determined that allocable PMEM resources exist, for example when both Spark and Redis have sufficient margin, the Spark task queue can be checked first: since the Spark task has high real-time requirements, a short duration and the highest proportion of PMEM resources, its resource usage is examined, and if there is a margin, the PMEM resources of that task queue can be scheduled directly to the HBase task queue.
Of course, if the PMEM resources corresponding to the Spark component are not enough to cover the PMEM resources to be allocated that the HBase component requires, the remaining PMEM resources of multiple components may also be combined so as to support the task executed by the HBase component.
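A minimal sketch of this borrowing decision is given below: the donor with the largest pre-configured share (Spark in the example) is checked first, and spare PMEM from further components is stacked until the request is covered. The data structures, donor order and numbers are assumptions for illustration.
```python
def plan_borrowing(spare_by_component, requester, needed_tb, donor_order):
    """Return a list of (donor, amount) pairs plus any uncovered shortfall."""
    plan, remaining = [], needed_tb
    for donor in donor_order:
        if donor == requester or remaining <= 0:
            continue
        take = min(spare_by_component.get(donor, 0.0), remaining)
        if take > 0:
            plan.append((donor, take))
            remaining -= take
    return plan, remaining     # remaining > 0 means allocable resources are insufficient

spare = {"Spark": 30.0, "Redis": 25.0}
plan, shortfall = plan_borrowing(spare, "HBase", 45.0, donor_order=["Spark", "Redis"])
print(plan, shortfall)   # [('Spark', 30.0), ('Redis', 15.0)] 0.0
```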
Optionally, the resource scheduling allocation request includes PMEM resources to be allocated, so that whether allocable PMEM resources are sufficient may be determined according to the PMEM resources to be allocated.
Optionally, on the basis of any of the above embodiments, the method may further include the following.
When it is determined that there is no available PMEM resource according to the usage of the PMEM resource corresponding to each big data component except the first big data component acquired in the current cycle, the method may further include the following steps, referring to the method embodiment shown in fig. 3 in particular.
And step 310, suspending the allocation of PMEM resources to the task queue corresponding to the first big data component.
And step 320, after the service condition of the PMEM resource corresponding to each big data component in all the big data components is obtained in the next period, determining the distributable PMEM resource.
And step 330, allocating the allocable PMEM resources to a task queue corresponding to the first big data component.
Specifically, as in the embodiment of fig. 1, each period is, for example, 30 seconds, so the execution of tasks is not greatly affected. Therefore, allocation of PMEM resources to the task queue corresponding to the first big data component can be temporarily suspended, and the allocable PMEM resources are determined after the usage of the PMEM resources corresponding to each big data component among all big data components is acquired in the next period. The specific acquisition conditions are as described above and are not repeated here.
And allocating the allocable PMEM resources to the task queue corresponding to the first big data component. The execution situation is the same as above, and is not described in detail here.
Of course, there may still be no allocable resources in the next cycle. A cycle threshold may therefore be set, that is, as long as allocable PMEM resources can be acquired within the preset number of cycles, the allocable PMEM resources are allocated to the task queue corresponding to the first big data component.
In another alternative embodiment, when no allocable PMEM resources have been scheduled to the first big data component even after the preset number of cycles, the method may further include:
and issuing a second control instruction to the first big data assembly, wherein the second control instruction is used for indicating the first big data assembly and executing the task to be executed in the task queue in the actual memory corresponding to the first big data assembly.
That is, when there are no allocable PMEM resources within the preset number of cycles, this indicates that a peak period of task processing is under way and there are no spare resources for the first big data component to call; the first big data component is therefore required to execute the tasks to be executed in its task queue in its corresponding actual memory.
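A minimal sketch of this suspend-and-retry behaviour with a cycle threshold is shown below, falling back to actual memory (the second control instruction case) when nothing can be scheduled within the preset number of cycles; the helper functions, threshold value and return values are assumptions for illustration.
```python
import time

CYCLE_SECONDS = 30       # matches the example acquisition period
MAX_CYCLES = 5           # preset cycle threshold (an assumption)

def schedule_with_retry(requester, needed_tb, get_usage, allocable_pmem):
    for cycle in range(MAX_CYCLES):
        usage = get_usage()                       # usage collected in this cycle
        spare = allocable_pmem(usage, requester)
        if spare >= needed_tb:
            return {"action": "allocate_pmem", "amount_tb": needed_tb}
        if cycle < MAX_CYCLES - 1:
            time.sleep(CYCLE_SECONDS)             # suspend allocation until the next cycle
    # Peak period: no spare PMEM within the preset cycles, fall back to actual memory.
    return {"action": "use_actual_memory"}
```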
Optionally, in addition to the case where the allocable PMEM resources are sufficient to support the task corresponding to the first big data component and the case where there are no allocable PMEM resources, it may also be determined that allocable PMEM resources exist but are not sufficient to cover the resources that the first big data component lacks for the task to be executed. In this case, in addition to the method steps of the above embodiments, the method may further include the following steps, as shown in fig. 4:
step 410, comparing the size between allocable PMEM resources and PMEM resources to be allocated.
Wherein, the PMEM resource to be allocated is carried in the resource scheduling request.
And step 420, when the distributable PMEM resource is determined to be smaller than the to-be-distributed PMEM resource, determining a resource quantity difference value between the distributable PMEM resource and the to-be-distributed PMEM resource.
Step 430, a first control command is issued to the first big data component.
The first control instruction is used for indicating the first big data component to execute the task to be executed corresponding to the resource quantity difference value in the actual memory corresponding to the first big data component.
That is, the allocable resources are still allocated to the first big data component so that it executes the corresponding tasks with them, while the remaining tasks need to be executed in the actual memory corresponding to the first big data component.
Specifically, executing a task here actually means storing data either in actual memory or in the PMEM, so that the stored data can be read later to perform other operations.
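A minimal sketch of this partial-allocation case is shown below: the resource amount difference between the allocable PMEM and the requested PMEM is executed in actual memory (the first control instruction case). The field names are assumptions for illustration.
```python
def split_allocation(allocable_tb, requested_tb):
    """Split a request between PMEM and actual memory when PMEM is insufficient."""
    if allocable_tb >= requested_tb:
        return {"pmem_tb": requested_tb, "actual_memory_tb": 0.0}
    shortfall = requested_tb - allocable_tb        # the resource amount difference
    return {"pmem_tb": allocable_tb, "actual_memory_tb": shortfall}

print(split_allocation(30.0, 45.0))   # {'pmem_tb': 30.0, 'actual_memory_tb': 15.0}
```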
Further optionally, on the basis of any of the above embodiments, there may be a special case where, for allocation of resource scheduling, the following operations may also be performed, specifically referring to fig. 5, where fig. 5 illustrates a flowchart of another allocation method for resource scheduling.
Specifically, on the basis of the above embodiment, before determining assignable PMEM resources according to the usage of PMEM resources corresponding to each big data component except the first big data component that has been acquired in the current cycle, the method further includes:
at step 510, the type of the first big data component is determined.
Step 520, when the type of the first big data component is determined to be a first preset type, allocable PMEM resources are no longer determined for the first big data component, and a third control instruction is issued to the first big data component.
The third control instruction is used for instructing the first big data component to execute the tasks to be executed in the task queue in the actual memory corresponding to the first big data component. The first preset type is a big data component whose proportion of the whole resources of the PMEM device is higher than a preset threshold.
In one specific example, as introduced above, the Spark task has high real-time requirements and a short duration, and the PMEM resources it occupies are already the highest proportion of the entire PMEM device. Therefore, when the resources required by the Spark task are insufficient, the resource scheduling system does not schedule the PMEM resources of other big data components for it to execute its corresponding task; instead, it directly issues a third control instruction instructing the first big data component to execute the tasks to be executed in its task queue in its corresponding actual memory.
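A minimal sketch of this type check is given below: a component of the first preset type (its pre-configured share of the whole PMEM device exceeds a threshold, as Spark does in the example) is not given scheduled PMEM and is instead instructed to run its queued tasks in actual memory. The threshold value and ratios are assumptions for illustration.
```python
FIRST_TYPE_THRESHOLD = 0.40      # assumed preset threshold on the PMEM share
PRECONFIGURED_RATIO = {"Spark": 0.45, "HBase": 0.35, "Redis": 0.20}

def handle_request(requester):
    # First preset type: share of the whole PMEM device is above the threshold.
    if PRECONFIGURED_RATIO.get(requester, 0.0) > FIRST_TYPE_THRESHOLD:
        return "third_control_instruction: run queued tasks in actual memory"
    return "continue: determine allocable PMEM resources"

print(handle_request("Spark"))   # no PMEM scheduled for the highest-share component
print(handle_request("HBase"))   # proceeds to normal scheduling
```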
Further optionally, on the basis of any of the above embodiments, there may be a special case where, for allocation of resource scheduling, the following operations may also be performed:
before determining allocable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period, the method further comprises the following steps:
determining the type of each big data component, so that when the type of a second big data component is determined to be a second preset type, the PMEM resources allocable by the second big data component are no longer counted. The second preset type is, for example, a big data component whose proportion of the whole resources of the PMEM device is lower than a second preset threshold and whose task occupation time is higher than a preset time threshold.
In one specific example, since Redis is a memory database whose demand for memory and PMEM resources is continuous, the PMEM resources of Redis should be scheduled to other big data components for executing tasks as little as possible. Therefore, when determining the allocable PMEM resources according to the usage of the PMEM resources corresponding to each big data component except the first big data component acquired in the current period, big data components of this type are excluded.
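A minimal sketch of excluding second-preset-type components (a small pre-configured share combined with a long task occupation time, such as Redis as a memory database) when computing the allocable PMEM is shown below; the thresholds, field names and numbers are assumptions for illustration.
```python
SECOND_TYPE_SHARE_MAX = 0.30      # share of the whole PMEM device below this ...
SECOND_TYPE_TIME_MIN_S = 3600     # ... and task occupation time above this (seconds)

def is_second_type(share, occupation_time_s):
    return share < SECOND_TYPE_SHARE_MAX and occupation_time_s > SECOND_TYPE_TIME_MIN_S

def allocable_excluding_second_type(usage, requester):
    spare = 0.0
    for component, s in usage.items():
        if component == requester or is_second_type(s["share"], s["occupation_time_s"]):
            continue   # second-preset-type components are not counted as donors
        spare += max(s["total_tb"] - s["used_tb"], 0.0)
    return spare

usage = {
    "Spark": {"share": 0.45, "occupation_time_s": 1200, "used_tb": 350.0, "total_tb": 400.0},
    "Redis": {"share": 0.20, "occupation_time_s": 86400, "used_tb": 100.0, "total_tb": 200.0},
}
print(allocable_excluding_second_type(usage, "HBase"))   # 50.0; Redis's spare is not counted
```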
The above describes embodiments of the resource scheduling method provided by the present application; other embodiments of the resource scheduling provided by the present application are described below.
Fig. 6 is a schematic structural diagram of a resource scheduling apparatus according to an embodiment of the present invention, where the apparatus includes: an acquisition module 601, a processing module 602, and a distribution module 603.
An obtaining module 601, configured to periodically obtain a use condition of a persistent memory PMEM resource corresponding to each big data component in all big data components, and receive a resource scheduling allocation request sent by each big data component;
the processing module 602 is configured to, after receiving a resource scheduling allocation request sent by a first big data component, determine allocable PMEM resources according to a usage situation of PMEM resources corresponding to each big data component except the first big data component, which has been acquired in a current period;
the allocating module 603 is configured to allocate an allocable PMEM resource to a task queue corresponding to a first big data component for executing a task to be executed, where the first big data component is any one of all big data components.
Optionally, the apparatus further comprises: a recovery module 604;
the processing module 602 is further configured to determine whether a task queue corresponding to the first big data component has executed a task to be executed;
a recycling module 604, configured to recycle distributable PMEM resources after it is determined that the task queue corresponding to the first big data component completes execution of the task to be executed;
the allocating module 603 is further configured to allocate the allocable PMEM resource to a task queue corresponding to a big data component to which the allocable PMEM resource originally belongs.
Optionally, the allocating module 603 is further configured to suspend allocating PMEM resources to the task queue corresponding to the first big data component when it is determined that there is no allocable PMEM resource according to the usage of the PMEM resources corresponding to each big data component except the first big data component that has been acquired in the current period;
the processing module 602 is further configured to determine assignable PMEM resources after the next period acquiring module 601 acquires the use conditions of the PMEM resources corresponding to each big data component in all big data components;
the allocating module 603 is further configured to allocate the allocable PMEM resource to a task queue corresponding to the first big data component.
Optionally, the resource scheduling allocation request includes PMEM resources to be allocated;
the device also includes: a sending module 605;
the processing module 602 is further configured to determine a resource amount difference between the allocable PMEM resource and the PMEM resource to be allocated when it is determined that the allocable PMEM resource is smaller than the PMEM resource to be allocated;
the sending module 605 is further configured to issue a first control instruction to the first big data component, where the first control instruction is used to instruct the first big data component to execute the task to be executed corresponding to the resource amount difference in the actual memory corresponding to the first big data component.
Optionally, the sending module 605 is further configured to, when there is still no available PMEM resource after exceeding the preset period, issue a second control instruction to the first big data component, where the second control instruction is used to indicate the first big data component, and execute the task to be executed in the task queue in the actual memory corresponding to the first big data component.
Optionally, the processing module 602 is further configured to determine a type of the first big data component; when the type of the first big data component is determined to be a first preset type, determining distributable PMEM resources for the first big data component no longer;
The sending module 605 is further configured to issue a third control instruction to the first big data component, where the third control instruction is used for instructing the first big data component to execute the tasks to be executed in the task queue in the actual memory corresponding to the first big data component.
Optionally, the processing module 602 is further configured to determine a type of each big data component, so that PMEM resources allocable by the second big data component are not counted any more when the type of the second big data component is determined to be the second preset type.
The functions executed by each component in the resource scheduling apparatus provided in the embodiment of the present invention have been described in detail in any of the above method embodiments, and therefore, are not described herein again.
The resource scheduling device provided by the embodiment of the invention periodically obtains the use condition of PMEM resources corresponding to each big data component in all big data components. After a resource scheduling allocation request sent by a first big data component is received, determining distributable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period. And then, allocating the allocable PMEM resources to a task queue corresponding to the first big data component for executing the task to be executed. By the method, each big data component can be guaranteed to have reasonable PMEM resources to support the execution of the task when the task is executed. Meanwhile, when the PMEM resources of some big data components are insufficient, the PMEM resources pre-configured by other big data components can be flexibly scheduled by the resource scheduling system to be used by the big data components with insufficient resources so as to complete the execution of tasks. Therefore, the situation that the big data component is insufficient for PMEM resources can be avoided as much as possible, the situation that the PMEM resources are wasted can be avoided, and all PMEM resources are reasonably distributed.
As shown in fig. 7, an embodiment of the present application provides an electronic device, where the electronic device carries a resource scheduling system as mentioned in any of the above embodiments, and includes a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete communication with each other through the communication bus 114.
A memory 113 for storing a computer program;
in an embodiment of the present application, when the processor 111 is configured to execute the program stored in the memory 113, the method for scheduling resources provided in any one of the foregoing method embodiments is implemented, including:
periodically acquiring the use condition of the PMEM resource of the persistent memory corresponding to each big data component in all the big data components;
after a resource scheduling and allocating request sent by a first big data component is received, determining allocable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period;
and allocating the allocable PMEM resources to a task queue corresponding to the first big data component to execute the task to be executed, wherein the first big data component is any one of all the big data components.
Optionally, allocating the assignable PMEM resource to the task queue corresponding to the first big data component, so as to execute the task to be executed, further including: and when the task queue corresponding to the first big data component is determined to finish the task to be executed, recovering the distributable PMEM resource and distributing the distributable PMEM resource to the task queue corresponding to the big data component to which the distributable PMEM resource originally belongs.
Optionally, when determining that there is no distributable PMEM resource according to the usage of the PMEM resource corresponding to each big data component except the first big data component that has been acquired in the current cycle, the method further includes:
suspending the allocation of PMEM resources to the task queue corresponding to the first big data component;
after the use condition of PMEM resources corresponding to each big data component in all big data components is obtained in the next period, determining distributable PMEM resources;
and allocating the allocable PMEM resources to the task queue corresponding to the first big data component.
Optionally, the resource scheduling allocation request includes PMEM resources to be allocated;
when determining that the allocable PMEM resources are smaller than the PMEM resources to be allocated, after allocating the allocable PMEM resources to the task queue corresponding to the first big data component, the method further includes:
determining a resource quantity difference value between the distributable PMEM resource and the PMEM resource to be distributed;
and issuing a first control instruction to the first big data component, wherein the first control instruction is used for indicating the first big data component to execute the task to be executed corresponding to the resource quantity difference value in the actual memory corresponding to the first big data component.
Optionally, when there is no available PMEM resource after exceeding the preset period, the method further includes:
and issuing a second control instruction to the first big data assembly, wherein the second control instruction is used for indicating the first big data assembly and executing the task to be executed in the task queue in the actual memory corresponding to the first big data assembly.
Optionally, before determining assignable PMEM resources according to the usage of the PMEM resources corresponding to each big data component except the first big data component that has been acquired in the current cycle, the method further includes:
determining a type of the first big data component;
when the type of the first big data component is determined to be a first preset type, determining distributable PMEM resources for the first big data component no longer; and issuing a third control instruction to the first big data component, wherein the third control instruction is used for indicating the first big data component and executing the task to be executed in the task queue in the actual memory corresponding to the first big data component.
Optionally, before determining assignable PMEM resources according to the usage of the PMEM resources corresponding to each big data component except the first big data component that has been acquired in the current cycle, the method further includes:
and determining the type of each big data component, so that PMEM resources allocable by the second big data component are not counted when the type of the second big data component is determined to be the second preset type.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the resource scheduling method provided in any one of the foregoing method embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for resource scheduling, the method being performed by a resource scheduling system, the method comprising:
periodically acquiring the use condition of the PMEM resource of the persistent memory corresponding to each big data component in all the big data components;
after a resource scheduling allocation request sent by a first big data component is received, determining distributable PMEM resources according to the acquired use condition of PMEM resources corresponding to each big data component except the first big data component in the current period;
and allocating the allocable PMEM resource to a task queue corresponding to the first big data component to execute a task to be executed, wherein the first big data component is any one of all big data components.
2. The method according to claim 1, wherein after the allocating the allocable PMEM resource to the task queue corresponding to the first big data component for executing the task to be executed, the method further comprises:
and when the task queue corresponding to the first big data component is determined to finish the task to be executed, recovering the distributable PMEM resource and distributing the distributable PMEM resource to the task queue corresponding to the big data component to which the distributable PMEM resource belongs.
3. The method of claim 1, wherein when determining that no available PMEM resources are available according to the usage of the PMEM resources corresponding to each big-data component except the first big-data component that has been obtained in the current cycle, the method further comprises:
suspending allocation of PMEM resources to a task queue corresponding to the first big data component;
after the use condition of PMEM resources corresponding to each big data component in all big data components is obtained in the next period, determining distributable PMEM resources;
and allocating the allocable PMEM resources to the task queue corresponding to the first big data component.
4. A method according to any of claims 1-3, characterized in that the resource scheduling allocation request includes PMEM resources to be allocated;
when it is determined that the allocable PMEM resources are smaller than the to-be-allocated PMEM resources, after allocating the allocable PMEM resources to the task queue corresponding to the first big data component, the method further includes:
determining a resource amount difference value between the allocable PMEM resource and the PMEM resource to be allocated;
and issuing a first control instruction to the first big data component, wherein the first control instruction is used for instructing the first big data component to execute the task to be executed corresponding to the resource amount difference value in the actual memory corresponding to the first big data component.
5. The method of claim 3, wherein when there are still no available PMEM resources after the predetermined period is exceeded, the method further comprises:
and issuing a second control instruction to the first big data assembly, wherein the second control instruction is used for indicating the first big data assembly and executing the task to be executed in the task queue in the actual memory corresponding to the first big data assembly.
6. The method according to any one of claims 1 to 3, wherein before determining allocable PMEM resources according to the acquired PMEM resources usage of each big data component except the first big data component in the current period, the method further comprises:
determining a type of the first big data component;
when the type of the first big data component is determined to be a first preset type, determining allocable PMEM resources for the first big data component no longer;
and issuing a third control instruction to the first big data component, wherein the third control instruction is used for indicating the first big data component and executing the task to be executed in the task queue in an actual memory corresponding to the first big data component.
7. The method according to any one of claims 1 to 3, wherein before determining allocable PMEM resources according to the acquired PMEM resources usage of each big data component except the first big data component in the current period, the method further comprises:
determining the type of each big data component, so that PMEM resources allocable by a second big data component are not counted when the type of the second big data component is determined to be a second preset type.
8. An apparatus for scheduling resources, the apparatus comprising:
the acquisition module is used for periodically acquiring the use condition of the PMEM resource of the persistent memory corresponding to each big data component in all the big data components and receiving a resource scheduling allocation request sent by each big data component;
the processing module is used for determining distributable PMEM resources according to the acquired use condition of the PMEM resources corresponding to each big data component except the first big data component in the current period after receiving a resource scheduling and allocating request sent by the first big data component;
and the allocation module is used for allocating the allocable PMEM resources to the task queue corresponding to the first big data component so as to execute the task to be executed, wherein the first big data component is any one of all the big data components.
9. An electronic device is characterized in that the electronic device bears the resource scheduling system and comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor for implementing the steps of the resource scheduling method according to any one of claims 1 to 7 when executing a program stored in a memory.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the resource scheduling method according to any one of claims 1 to 7.
CN202210135047.3A 2022-02-11 2022-02-11 Resource scheduling method and device and electronic equipment Pending CN114490081A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210135047.3A CN114490081A (en) 2022-02-11 2022-02-11 Resource scheduling method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210135047.3A CN114490081A (en) 2022-02-11 2022-02-11 Resource scheduling method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN114490081A true CN114490081A (en) 2022-05-13

Family

ID=81479472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210135047.3A Pending CN114490081A (en) 2022-02-11 2022-02-11 Resource scheduling method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114490081A (en)

Similar Documents

Publication Publication Date Title
CN112199194B (en) Resource scheduling method, device, equipment and storage medium based on container cluster
CN106406983B (en) Task scheduling method and device in cluster
US8458712B2 (en) System and method for multi-level preemption scheduling in high performance processing
CN106293893B (en) Job scheduling method and device and distributed system
US11372678B2 (en) Distributed system resource allocation method, apparatus, and system
CN112052068A (en) Method and device for binding CPU (central processing unit) of Kubernetes container platform
CN111309440B (en) Method and equipment for managing and scheduling multiple types of GPUs
CN107168777B (en) Method and device for scheduling resources in distributed system
CN107515781B (en) Deterministic task scheduling and load balancing system based on multiple processors
WO2019170011A1 (en) Task allocation method and device, and distributed storage system
CN107430526B (en) Method and node for scheduling data processing
CN113886069A (en) Resource allocation method and device, electronic equipment and storage medium
CN112866314B (en) Method for switching slave nodes in distributed master-slave system, master node device and storage medium
CN105022668A (en) Job scheduling method and system
CN106775975B (en) Process scheduling method and device
CN111580951B (en) Task allocation method and resource management platform
CN111625339A (en) Cluster resource scheduling method, device, medium and computing equipment
CN114625533A (en) Distributed task scheduling method and device, electronic equipment and storage medium
CN111143063B (en) Task resource reservation method and device
CN114490081A (en) Resource scheduling method and device and electronic equipment
CN112114958A (en) Resource isolation method, distributed platform, computer device, and storage medium
CN115878910A (en) Line query method, device and storage medium
WO2021013124A1 (en) Method and device for managing automated testing resources
CN114418282A (en) Station scene management method, device, equipment and computer program product
CN116450328A (en) Memory allocation method, memory allocation device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination