CN111694672B - Resource allocation method, task submission method, device, electronic equipment and medium - Google Patents

Resource allocation method, task submission method, device, electronic equipment and medium

Info

Publication number
CN111694672B
CN111694672B (application CN202010538197.XA)
Authority
CN
China
Prior art keywords
granularity
resource
task
computing resource
computing
Prior art date
Legal status
Active
Application number
CN202010538197.XA
Other languages
Chinese (zh)
Other versions
CN111694672A (en)
Inventor
李亚坤
张云尧
师锐
Current Assignee
Douyin Vision Co Ltd
Original Assignee
Douyin Vision Co Ltd
Priority date
Filing date
Publication date
Application filed by Douyin Vision Co Ltd
Priority to CN202010538197.XA
Publication of CN111694672A
Application granted
Publication of CN111694672B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU], to service a request
    • G06F 9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44505 - Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/4451 - User profiles; Roaming
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5077 - Logical partitioning of resources; Management or configuration of virtualized resources
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present disclosure provide a resource allocation method, a task submission method, an apparatus, an electronic device, and a computer readable medium. One embodiment of the method comprises: receiving a task request submitted by a client, the task request including a resource acquisition request for a task to be processed; in response to the resource acquisition request including a computing resource demand based on a first granularity, converting it into a computing resource demand based on a second granularity; and determining a node for executing the task to be processed based on the available resource information of each resource node and the second-granularity computing resource demand, and allocating second-granularity computing resources to the task to be processed so that the node executes it. This implementation supports users in configuring computing resources at different granularities, avoids resource waste, and improves overall resource utilization.

Description

Resource allocation method, task submission method, device, electronic equipment and medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular, to a resource allocation method, a task submission method, an apparatus, an electronic device, and a computer readable medium.
Background
For large-scale clusters, a resource management platform can be used to manage and schedule resources in a unified way. The scheduling system supports scheduling of various resources, including computing resources (e.g., CPUs) and memory/storage resources. In the prior art, the scheduling unit of the CPU (Central Processing Unit) is an integer number of virtual cores. Virtual cores are configured in a fixed proportional relationship to real physical cores. Once that proportion is determined, it is difficult to change because of existing historical tasks. Moreover, for newly submitted tasks, if the granularity of a virtual core is too coarse, resources are wasted and overall resource utilization suffers.
Disclosure of Invention
This part of the disclosure introduces concepts in a simplified form that are described further in the detailed description below. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a resource allocation method, apparatus, electronic device, and computer readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a resource allocation method, including: receiving a task request submitted by a client, where the task request includes a resource acquisition request for a task to be processed, the resource acquisition request includes a computing resource demand based on a first granularity or a computing resource demand based on a second granularity, the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; in response to the resource acquisition request including a computing resource demand based on the first granularity, converting it into a computing resource demand based on the second granularity; and determining a node for executing the task to be processed based on the available resource information of each resource node and the second-granularity computing resource demand, and allocating second-granularity computing resources to the task to be processed so that the node executes it.
In a second aspect, some embodiments of the present disclosure provide a task submission method, including: displaying at least two selectable granularities, where the at least two granularities include a first granularity and a second granularity, the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; in response to detecting a user's selection among the at least two granularities, determining the selected granularity as the resource granularity in the resource requirements of the task to be processed; receiving a user-entered resource demand quantity expressed in that resource granularity; and submitting a task request based on the resource granularity and the resource demand quantity.
In a third aspect, some embodiments of the present disclosure provide a resource allocation apparatus, including: a first receiving unit configured to receive a task request submitted by a client, where the task request includes a resource acquisition request for a task to be processed, the resource acquisition request includes a computing resource demand based on a first granularity or a computing resource demand based on a second granularity, the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; a conversion unit configured to, in response to the resource acquisition request including a computing resource demand based on the first granularity, convert it into a computing resource demand based on the second granularity; and a computing resource allocation unit configured to determine the node for executing the task to be processed from the available resource information of each resource node and the second-granularity computing resource demand, and allocate second-granularity computing resources to the task to be processed so that the node executes it.
In a fourth aspect, some embodiments of the present disclosure provide a task submission apparatus, including: a display unit configured to display at least two selectable granularities, the at least two granularities including a first granularity and a second granularity, the first granularity being the original granularity, the second granularity being a newly added granularity, and the second granularity being smaller than the first granularity; a granularity determining unit configured to, in response to detecting a user's selection among the at least two granularities, determine the selected granularity as the resource granularity in the resource requirements of the task to be processed; a second receiving unit configured to receive a user-entered resource demand quantity expressed in that resource granularity; and a submitting unit configured to submit a task request based on the resource granularity and the resource demand quantity.
In a fifth aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method of any of the above.
In a sixth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements any of the methods described above.
One of the above embodiments of the present disclosure has the following advantageous effects: users are supported in configuring computing resources at different granularities. On this basis, the coarser first-granularity computing resource demand is converted into a second-granularity computing resource demand, unifying the two granularities without loss of precision. In addition, because the newly added second granularity is smaller than the original first granularity, resource waste is avoided for tasks with small resource requirements and overall resource utilization is improved.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of one application scenario of a resource allocation method according to some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of a resource allocation method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of a resource allocation method according to the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of a resource allocation apparatus according to the present disclosure;
FIG. 5 is a flow chart of some embodiments of a task submission method according to the present disclosure;
FIG. 6 is a schematic structural diagram of some embodiments of a task submission apparatus according to the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an", and "a plurality of" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram 100 of one application scenario of a resource allocation method according to some embodiments of the present disclosure.
The resource allocation method provided by some embodiments of the present disclosure may be performed by a server. The server may be hardware or software. When the server is hardware, it may be any of various electronic devices, including but not limited to a smart phone, a tablet computer, an e-book reader, a vehicle-mounted terminal, and the like. When the server is software, it may be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules, for example to provide distributed services, or as a single piece of software or software module. No specific limitation is imposed here.
In the context of the present application, the executing entity of the resource allocation method may be the computing device 101. The computing device 101 may be hardware or software. When the computing device 101 is hardware, it may be any of various electronic devices, including but not limited to a management and scheduling server, a smart phone, a tablet computer, an e-book reader, a vehicle-mounted terminal, and the like. When the computing device 101 is software, it may be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules, for example to provide distributed services, or as a single piece of software or software module. No specific limitation is imposed here. As an example, the computing device 101 may be YARN (Yet Another Resource Negotiator), which supports scheduling various tasks or jobs in a large-scale cluster.
The computing device 101 may first receive a task request 102 submitted by a client. Wherein the task request 102 comprises a resource acquisition request 103 for a task to be processed. The resource acquisition request 103 includes a computing resource requirement based on a first granularity or a computing resource requirement based on a second granularity. In practice, when a user submits a task, any granularity can be selected from the first granularity and the second granularity according to the requirement for configuration. The first granularity may be an existing scheduling unit, where the computing resource is exemplified by a CPU. On this basis, a second granularity may be added.
In this application scenario, the first granularity and the second granularity are, respectively, one virtual core and one thousandth of a virtual core. Taking the case where the resource acquisition request 103 includes a computing resource demand 104 based on the first granularity, the computing resource demand 104 is "1", i.e., one virtual core is required. The computing device 101 may convert the computing resource demand 104 based on the first granularity into a computing resource demand 105 based on the second granularity. Since the first granularity is 1000 times the second granularity, the computing resource demand 104 may be multiplied by 1000, yielding the computing resource demand 105, i.e., "1000".
On this basis, a node performing the task to be processed is determined based on the available resource information 106 of each resource node and based on the computing resource demand 105 of the second granularity, and computing resources 107 of the second granularity are allocated to the task to be processed to cause the node to perform the task to be processed. For example, a node with an available resource number greater than 1000 may be selected among the resource nodes, and the selected node may be determined as the node performing the task to be processed. In addition, 1000 thousandths of virtual cores may be allocated to the task to be processed to enable the node to perform the task to be processed using these computing resources.
With continued reference to fig. 2, a flow 200 of some embodiments of a resource allocation method according to the present disclosure is shown. The resource allocation method comprises the following steps:
step 201, a task request submitted by a client is received.
In some embodiments, the executing entity of the resource allocation method (e.g., YARN) may receive task requests submitted by clients. Taking YARN as an example, YARN includes a global ResourceManager (RM) and per-node NodeManagers (NM). A user may submit a task, for example an application, to YARN through a client. The RM allocates a first Container for the application and communicates with the corresponding NM, asking it to start the application's ApplicationMaster (AM) in this Container. The AM first registers with the RM, so that the user can check the application's running state directly through the RM; the AM then applies for resources and monitors their use until the run finishes. During this process the AM applies for and receives resources from the RM over an RPC protocol in a polling manner. Once the AM has obtained resources, it communicates with the corresponding NM, asking it to start the task. After setting up an execution environment for the task (including environment variables, binary programs, and so on), the NM writes the task start command into a script and starts the task by running that script. Each task reports its state and progress to the AM over an RPC protocol, so that the AM can track the running state of each task at any time and restart a task when it fails. After the application has finished running, the AM deregisters with the RM and shuts itself down.
In some embodiments, the task request includes a resource acquisition request for a task to be processed. The task to be processed may be various types of tasks, for example, running a certain application. The resource acquisition request may include a computing resource requirement. Optionally, storage resource requirements may also be included.
In some embodiments, the computing resources may be CPUs, GPUs, and the like. Taking YARN as an example, the scheduling unit of the CPU, that is, the original first granularity, is generally an integer number of virtual cores. A virtual core is configured in a proportional relationship to real physical cores; for example, a single virtual core may represent n physical cores (n being a real number greater than 0). For example, the first granularity may be VCores = 1 virtual core. On this basis a second granularity is added, so that users are supported in configuring computing resources at different granularities. Typically, the second granularity is smaller than the first granularity, so that tasks with finer-grained requirements can be supported. Alternatively, the second granularity may be one thousandth of a virtual core, e.g., VCores-Milli = 1/1000 of a virtual core. The second granularity thus differs from the first by orders of magnitude, and tasks whose resource demand is on the order of thousandths of a virtual core can be accommodated. In addition, in combination with the first granularity, the supported resource allocation range is greatly widened (in theory, any resource requirement above one thousandth of a virtual core can be allocated).
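As a minimal illustration of the two granularities described above, the following Java sketch represents a CPU demand that may be expressed in either whole virtual cores or milli-vcores. The class and method names (CpuDemand, asMilliVCores) are assumptions for exposition and are not part of YARN's actual resource-request API.

```java
public final class CpuDemand {
    // Assumed ratio between the first and second granularity (1 vcore = 1000 milli-vcores).
    public static final int MILLI_PER_VCORE = 1000;

    private final long amount;      // requested quantity in the chosen granularity
    private final boolean isMilli;  // true -> second granularity (milli-vcores)

    private CpuDemand(long amount, boolean isMilli) {
        this.amount = amount;
        this.isMilli = isMilli;
    }

    public static CpuDemand ofVCores(long vcores) { return new CpuDemand(vcores, false); }
    public static CpuDemand ofMilliVCores(long milli) { return new CpuDemand(milli, true); }

    /** Normalizes any request to the finer, second granularity. */
    public long asMilliVCores() {
        return isMilli ? amount : amount * MILLI_PER_VCORE;
    }

    public static void main(String[] args) {
        System.out.println(CpuDemand.ofVCores(1).asMilliVCores());        // 1000
        System.out.println(CpuDemand.ofMilliVCores(250).asMilliVCores()); // 250
    }
}
```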
In some alternative implementations of some embodiments, the second granularity is determined based on the resources actually used by a plurality of historical tasks. Specifically, the resources actually used by a plurality of historical tasks over a historical time period can be counted, and the second granularity determined from the result. For example, if statistics show that more than 30% of historical tasks actually used less than one hundredth of a virtual core, the second granularity may be set to one thousandth of a virtual core.
In response to the resource acquisition request including the first granularity-based computing resource requirement, the first granularity-based computing resource requirement is converted to the second granularity-based computing resource requirement, step 202.
In some embodiments, in response to including a first granularity-based computing resource requirement in the resource acquisition request, the execution body may convert the first granularity-based computing resource requirement to a second granularity-based computing resource requirement. In practice, the first granularity of computing resource requirements and the second granularity of computing resource requirements need to be unified because they differ in granularity. The larger first granularity computing resource requirement is converted to obtain the second granularity computing resource requirement, so that the unification of different granularities is realized under the condition that the precision is not affected.
In some embodiments, the computing resource requirements based on the first granularity may be converted to computing resource requirements based on the second granularity in various ways.
In some alternative implementations of some embodiments, the computing resource demand based on the second granularity may be derived from the computing resource demand based on the first granularity and a transformation multiple between the first granularity and the second granularity.
In some embodiments, as an example, the conversion may also combine an adjustment coefficient with the transformation multiple. Specifically, the computing resource demand based on the first granularity is multiplied by the transformation multiple to obtain an intermediate result, which is then multiplied by the adjustment coefficient to obtain the computing resource demand based on the second granularity. The specific value of the adjustment coefficient can be set according to actual needs.
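A minimal sketch of the conversion just described follows, assuming a transformation multiple of 1000 (milli-vcores per virtual core) and an adjustment coefficient of 1.0; both values and the class name are illustrative assumptions rather than fixed parts of the method.

```java
public final class GranularityConverter {
    private final long transformMultiple;        // second-granularity units per first-granularity unit
    private final double adjustmentCoefficient;  // optional scaling applied to the transformed result

    public GranularityConverter(long transformMultiple, double adjustmentCoefficient) {
        this.transformMultiple = transformMultiple;
        this.adjustmentCoefficient = adjustmentCoefficient;
    }

    /** Converts a first-granularity demand into a second-granularity demand. */
    public long toSecondGranularity(long firstGranularityDemand) {
        long transformed = firstGranularityDemand * transformMultiple; // apply the transformation multiple
        return Math.round(transformed * adjustmentCoefficient);        // then apply the adjustment coefficient
    }

    public static void main(String[] args) {
        GranularityConverter converter = new GranularityConverter(1000, 1.0);
        System.out.println(converter.toSecondGranularity(2)); // 2 virtual cores -> 2000 milli-vcores
    }
}
```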
In step 203, a node executing the task to be processed is determined based on the available resource information of each resource node and based on the computing resource requirement of the second granularity, and computing resources of the second granularity are allocated to the task to be processed so that the node executes the task to be processed.
In some embodiments, the executing entity may determine the node that executes the task to be processed based on the available resource information of each resource node and the second-granularity computing resource demand, and allocate second-granularity computing resources to the task so that the node executes it. Specifically, as an example, a node whose amount of available resources is greater than or equal to the second-granularity computing resource demand may be selected from among the resource nodes and determined as the node that executes the task. As yet another example, several nodes may be selected to execute the task jointly, where the sum of their available resources is greater than or equal to the second-granularity computing resource demand. In addition, depending on the actual situation, second-granularity computing resources are allocated to the task to be processed. For example, computing resources that fully meet the second-granularity demand may be allocated. As another example, in some scenarios, if the task to be processed has a low priority, higher-priority tasks may be satisfied first; in that case the task may be allocated slightly fewer computing resources than its second-granularity demand.
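The node-selection step can be pictured with the hypothetical sketch below. The NodeInfo record and the greedy multi-node strategy are assumptions used only to illustrate the two examples above (a single node that covers the demand, or several nodes whose combined capacity covers it); they are not the claimed scheduler.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public final class NodeSelector {
    /** Available second-granularity computing resources reported by one resource node. */
    public record NodeInfo(String name, long availableMilliVCores) {}

    /** Picks a single node whose available resources cover the demand, if one exists. */
    public static NodeInfo pickSingleNode(List<NodeInfo> nodes, long demandMilli) {
        return nodes.stream()
                .filter(n -> n.availableMilliVCores() >= demandMilli)
                .findFirst()
                .orElse(null);
    }

    /** Otherwise, greedily collects nodes until their combined capacity covers the demand. */
    public static List<NodeInfo> pickMultipleNodes(List<NodeInfo> nodes, long demandMilli) {
        List<NodeInfo> sorted = new ArrayList<>(nodes);
        sorted.sort(Comparator.comparingLong(NodeInfo::availableMilliVCores).reversed());
        List<NodeInfo> chosen = new ArrayList<>();
        long total = 0;
        for (NodeInfo node : sorted) {
            chosen.add(node);
            total += node.availableMilliVCores();
            if (total >= demandMilli) {
                return chosen;
            }
        }
        return List.of(); // the cluster cannot satisfy the demand at the moment
    }

    public static void main(String[] args) {
        List<NodeInfo> nodes = List.of(new NodeInfo("node-1", 600), new NodeInfo("node-2", 800));
        System.out.println(pickSingleNode(nodes, 1000));    // null: no single node has 1000 available
        System.out.println(pickMultipleNodes(nodes, 1000)); // node-2 and node-1 together provide 1400
    }
}
```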
In some optional implementations of some embodiments, the resource acquisition request further includes storage resource requirement information, and the method further includes: allocating storage resources for the task to be processed based on the storage resource requirement information.
In some alternative implementations of some embodiments, the method further comprises: adjusting the second granularity based on historical tasks completed over multiple rounds of processing to obtain a new second granularity; and converting computing resource demands based on the first granularity or on the pre-adjustment second granularity into computing resource demands based on the new second granularity. In practice, the executing entity may process multiple tasks according to steps 201-203. On this basis, the second granularity can be adjusted based on the historical tasks completed in those rounds, yielding a new second granularity. For example, suppose the original second granularity is one hundredth of a virtual core. Statistics over one year of historical tasks show that 30% of the tasks used about one thousandth of a virtual core, so the original second granularity wastes resources; the second granularity is therefore adjusted to one thousandth of a virtual core. This enables dynamic adjustment of the second granularity.
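The following sketch shows one way such a dynamic adjustment could be implemented, mirroring the 30% example above. The threshold policy, units, and method names are illustrative assumptions, not the claimed procedure.

```java
import java.util.List;

public final class GranularityTuner {
    /**
     * Returns the finer granularity (in virtual cores) when at least `threshold` of the
     * historical tasks actually used less than one unit of the current second granularity;
     * otherwise keeps the current second granularity.
     */
    public static double adjustSecondGranularity(List<Double> historicalUsageVCores,
                                                 double currentGranularityVCores,
                                                 double finerGranularityVCores,
                                                 double threshold) {
        long wasteful = historicalUsageVCores.stream()
                .filter(usage -> usage < currentGranularityVCores)
                .count();
        double ratio = (double) wasteful / historicalUsageVCores.size();
        return ratio >= threshold ? finerGranularityVCores : currentGranularityVCores;
    }

    public static void main(String[] args) {
        // 3 of 8 tasks (37.5%) used less than the current 0.01-vcore unit, so the second
        // granularity shrinks from one hundredth to one thousandth of a virtual core.
        List<Double> usage = List.of(0.001, 0.001, 0.002, 0.05, 0.2, 0.8, 1.5, 3.0);
        System.out.println(adjustSecondGranularity(usage, 0.01, 0.001, 0.30)); // 0.001
    }
}
```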
The methods provided by some embodiments of the present disclosure support users in configuring computing resources at different granularities. On this basis, the coarser first-granularity computing resource demand is converted into a second-granularity computing resource demand, unifying the two granularities without loss of precision. In addition, because the newly added second granularity is smaller than the original first granularity, resource waste is avoided for tasks with small resource requirements and overall resource utilization is improved.
With further reference to fig. 3, a flow 300 of further embodiments of a resource allocation method is shown. The flow 300 of the resource allocation method comprises the steps of:
step 301, a task request submitted by a client is received.
In response to the resource acquisition request including the first granularity-based computing resource requirement, the first granularity-based computing resource requirement is converted to the second granularity-based computing resource requirement, step 302.
In step 303, the node executing the task to be processed is determined based on the available resource information of each resource node and based on the computing resource requirement of the second granularity, and the computing resource of the second granularity is allocated to the task to be processed so that the node executes the task to be processed.
In some embodiments, the specific implementation of steps 301 to 303 and the technical effects thereof may refer to those embodiments corresponding to fig. 2, and are not described herein.
Step 304, obtaining historical configuration information of the periodic task, wherein the historical configuration information comprises historical computing resource requirements based on a first granularity.
In some embodiments, the executing entity of the resource allocation method may obtain historical configuration information of a periodic task. The historical configuration information includes a historical computing resource demand based on the first granularity. A periodic task is a task that is executed repeatedly over a period of time; here it may be a task whose resource request was submitted before the second granularity was introduced, which is why its historical configuration information contains a computing resource demand based on the first granularity. After the second granularity is added, steps 304-306 are performed whenever the periodic task needs to run.
In some alternative implementations of some embodiments, the periodic tasks include historical periodic tasks that have already run to completion and periodic tasks that have not yet been executed. The historical computing resource demand based on the first granularity is either the computing resource demand in the resource acquisition request corresponding to the periodic task as received from the client, or the actual resource demand of the periodic task obtained by monitoring and analyzing, in real time, the resources actually used by the historical periodic tasks.
Step 305 converts the historical computing resource requirements into periodic task computing resource requirements based on a second granularity.
In some embodiments, the execution body may convert the historical computing resource requirements into periodic task computing resource requirements based on a second granularity. The specific conversion method may refer to step 302, and will not be described herein.
Step 306, allocating computing resources of a second granularity for the periodic task based on the periodic task computing resource requirements.
In some embodiments, the specific implementation of the allocation in step 306 may refer to step 303, which is not described herein.
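The periodic-task path (steps 304 to 306) can be pictured with the small sketch below. The HistoricalConfig record and the fixed transformation multiple are illustrative assumptions used only to show how a stored first-granularity requirement is carried into the new, finer granularity.

```java
public final class PeriodicTaskHandler {
    private static final long MILLI_PER_VCORE = 1000; // assumed transformation multiple

    /** Historical configuration of a periodic task, recorded before the second granularity existed. */
    public record HistoricalConfig(String taskName, long firstGranularityVCores) {}

    /** Step 305: converts the stored first-granularity requirement into milli-vcores. */
    public static long periodicDemandInMilliVCores(HistoricalConfig config) {
        return config.firstGranularityVCores() * MILLI_PER_VCORE;
    }

    public static void main(String[] args) {
        HistoricalConfig daily = new HistoricalConfig("daily-report", 2);
        // Step 306 would then allocate this many second-granularity units to the periodic task.
        System.out.println(periodicDemandInMilliVCores(daily)); // 2000
    }
}
```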
As can be seen from FIG. 3, compared with the embodiments corresponding to FIG. 2, this flow adds processing for periodic tasks, ensuring that the execution of periodic tasks is not affected by the newly added second granularity.
With further reference to fig. 4, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of a resource allocation apparatus, which correspond to those method embodiments shown in fig. 2, and which are particularly applicable in various electronic devices.
As shown in FIG. 4, the resource allocation apparatus 400 of some embodiments includes: a first receiving unit 401, a conversion unit 402, and a computing resource allocation unit 403. The first receiving unit 401 is configured to receive a task request submitted by a client, where the task request includes a resource acquisition request for a task to be processed, the resource acquisition request includes a computing resource demand based on a first granularity or a computing resource demand based on a second granularity, the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity. The conversion unit 402 is configured to, in response to the resource acquisition request including a computing resource demand based on the first granularity, convert it into a computing resource demand based on the second granularity. The computing resource allocation unit 403 is configured to determine the node that executes the task to be processed from the available resource information of each resource node and the second-granularity computing resource demand, and allocate second-granularity computing resources to the task to be processed so that the node executes it.
In an alternative implementation of some embodiments, the second granularity is one thousandth of a virtual core.
In an alternative implementation of some embodiments, the second granularity is determined based on a plurality of historical task actual usage resources.
In alternative implementations of some embodiments, the apparatus 400 may further include: an adjusting unit. The adjusting unit is configured to adjust the second granularity based on the historical tasks completed by the plurality of processes to obtain a new second granularity; the computing resource requirements based on the first granularity or the pre-adjustment second granularity are converted to computing resource requirements based on the new second granularity.
In an alternative implementation of some embodiments, the resource acquisition request further includes storage resource requirement information, and the apparatus 400 may further include a storage resource allocation unit configured to allocate storage resources for the task to be processed based on the storage resource requirement information.
In alternative implementations of some embodiments, the conversion unit 402 may be configured to derive the computing resource demand based on the second granularity from the computing resource demand based on the first granularity and the transformation multiple between the first granularity and the second granularity.
In alternative implementations of some embodiments, the apparatus 400 may further include an acquisition unit configured to obtain historical configuration information of a periodic task, the historical configuration information including a historical computing resource demand based on the first granularity. The conversion unit 402 may be further configured to convert the historical computing resource demand into a periodic-task computing resource demand based on the second granularity, and the computing resource allocation unit 403 is further configured to allocate second-granularity computing resources to the periodic task based on that demand.
In alternative implementations of some embodiments, the periodic tasks include historical periodic tasks that have already run to completion and periodic tasks that have not yet been executed. The historical computing resource demand based on the first granularity is either the computing resource demand in the resource acquisition request corresponding to the periodic task as received from the client, or the actual resource demand of the periodic task obtained by monitoring and analyzing, in real time, the resources actually used by the historical periodic tasks.
In these embodiments, users are supported in configuring computing resources at different granularities. On this basis, the coarser first-granularity computing resource demand is converted into a second-granularity computing resource demand, unifying the two granularities without loss of precision. In addition, because the newly added second granularity is smaller than the original first granularity, resource waste is avoided for tasks with small resource requirements and overall resource utilization is improved.
With continued reference to fig. 5, a flow 500 of some embodiments of a task submission method according to the present disclosure is illustrated. The task submitting method comprises the following steps:
step 501, displaying at least two granularities supporting selection, wherein the at least two granularities comprise a first granularity and a second granularity, the first granularity is an original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity.
In some embodiments, the executing entity of the task submission method may be a client. A user may submit a task to be processed through the client, and during submission the user may also enter resource requirements. For computing resources, the executing entity may display at least two selectable granularities, so that the user can choose between them by a click operation or the like.
Step 502: in response to detecting a user's selection among the at least two granularities, the selected granularity is determined as the resource granularity in the resource requirements of the task to be processed.
In some embodiments, in response to detecting a user's selection operation among at least two granularities, the execution body may determine the selected granularity as a resource granularity among resource requirements of the task to be processed.
In step 503, a user input of a resource demand quantity based on a resource granularity is received.
In some embodiments, after selecting the resource granularity, the user may enter the number of resource requirements based on the resource granularity.
Step 504, submitting a task request based on the resource granularity and the number of resource demands based on the resource granularity.
In some embodiments, the executing entity may submit the task request based on the resource granularity and the resource demand quantity; the task request may include both the resource granularity and the resource demand quantity.
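To make the flow concrete, here is a hypothetical client-side sketch of steps 501 to 504. The TaskRequest shape, the enum names, and the submit stub are assumptions for illustration rather than the actual client protocol.

```java
public final class TaskSubmissionClient {
    /** The two selectable granularities: whole virtual cores and milli-vcores. */
    enum Granularity { VCORES, VCORES_MILLI }

    /** Task request carrying the selected granularity and the demand expressed in it. */
    record TaskRequest(Granularity granularity, long amount) {}

    /** Placeholder: a real client would send the request to the resource management platform. */
    static void submit(TaskRequest request) {
        System.out.println("Submitting " + request.amount() + " unit(s) at granularity " + request.granularity());
    }

    public static void main(String[] args) {
        // Steps 501-502: the user picks one of the displayed granularities.
        Granularity chosen = Granularity.VCORES_MILLI;
        // Step 503: the user enters the demand in that granularity (a quarter of a virtual core here).
        long amount = 250;
        // Step 504: the task request is submitted with both fields.
        submit(new TaskRequest(chosen, amount));
    }
}
```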
In these embodiments, the task request is submitted based on resource requirements entered by the user, which supports flexible, user-driven configuration of resource requirements.
With continued reference to fig. 6, as an implementation of the method shown in fig. 5, the present disclosure provides some embodiments of a task submission apparatus, corresponding to those method embodiments shown in fig. 5, which may find particular application in a variety of electronic devices.
As shown in fig. 6, the task submission apparatus 600 of some embodiments includes: a display unit 601, a granularity determining unit 602, a second receiving unit 603 and a submitting unit 604. Wherein the display unit 601 is configured to display at least two granularities supporting the selection, the at least two granularities comprising a first granularity and a second granularity, the first granularity being an original granularity, the second granularity being a newly added granularity, the second granularity being smaller than the first granularity; a granularity determining unit 602 configured to determine, in response to detecting a selection operation of a user among at least two granularities, the selected granularity as a resource granularity in the task resource requirements to be processed; the second receiving unit 603 is configured to receive a resource demand amount based on a resource granularity, which is input by a user; the submitting unit 604 is configured to submit the task request based on the resource granularity and the number of resource demands based on the resource granularity.
The specific implementation of the display unit 601, the granularity determining unit 602, the second receiving unit 603, and the submitting unit 604 in the task submitting apparatus 600 and the technical effects thereof may refer to the corresponding embodiment of fig. 5, and will not be described herein again.
In these embodiments, the task request is submitted based on resource requirements entered by the user, which supports flexible, user-driven configuration of resource requirements.
Referring now to fig. 7, a schematic diagram of an electronic device 700 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 7 is only one example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 700 may include a processing means (e.g., a central processor, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
In general, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 shows an electronic device 700 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 7 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications device 709, or from storage 708, or from ROM 702. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing means 701.
It should be noted that the computer readable medium according to some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a task request submitted by a client, wherein the task request comprises a resource acquisition request aiming at a task to be processed, and the resource acquisition request comprises a computing resource demand based on a first granularity or a computing resource demand based on a second granularity, the first granularity is an original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; responsive to the resource acquisition request including a first granularity-based computing resource demand, converting the first granularity-based computing resource demand to a second granularity-based computing resource demand; and determining a node for executing the task to be processed based on the available resource information of each resource node and the demand of the computing resources based on the second granularity, and distributing the computing resources with the second granularity for the task to be processed so as to enable the node to execute the task to be processed. Or alternatively
Receiving resource requirements for a task to be processed, wherein the resource requirements comprise required resource granularity and resource requirement quantity based on the resource granularity, and the resource requirements are input by a user; based on the resource requirements, a task request is submitted.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a receiving unit, a converting unit, a generating unit, and a computing resource allocation unit. The names of these units do not constitute a limitation on the unit itself in some cases, and for example, the receiving unit may also be described as "a unit that receives a resource acquisition request for a task to be processed".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
According to one or more embodiments of the present disclosure, there is provided a resource allocation method including: receiving a task request submitted by a client, wherein the task request comprises a resource acquisition request aiming at a task to be processed, and the resource acquisition request comprises a computing resource demand based on a first granularity or a computing resource demand based on a second granularity, the first granularity is an original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; responsive to the resource acquisition request including a first granularity-based computing resource requirement, converting the first granularity-based computing resource requirement to a second granularity-based computing resource requirement; and determining a node for executing the task to be processed based on the available resource information of each resource node and the computing resource demand based on the second granularity, and distributing the computing resource with the second granularity for the task to be processed so that the node executes the task to be processed.
According to one or more embodiments of the present disclosure, the second granularity is one thousandth of a virtual core.
According to one or more embodiments of the present disclosure, the second granularity is determined based on a plurality of historical task actual usage resources.
According to one or more embodiments of the present disclosure, the method further comprises: adjusting the second granularity based on historical tasks completed over multiple rounds of processing to obtain a new second granularity; and converting computing resource demands based on the first granularity or on the pre-adjustment second granularity into computing resource demands based on the new second granularity.
According to one or more embodiments of the present disclosure, the resource acquisition request further includes storage resource requirement information, and the method further comprises: allocating storage resources for the task to be processed based on the storage resource requirement information.
According to one or more embodiments of the present disclosure, converting the computing resource demand based on the first granularity into the computing resource demand based on the second granularity comprises: deriving the computing resource demand based on the second granularity from the computing resource demand based on the first granularity and the transformation multiple between the first granularity and the second granularity.
According to one or more embodiments of the present disclosure, the method further comprises: acquiring historical configuration information of a periodic task, where the historical configuration information includes a historical computing resource demand based on the first granularity; converting the historical computing resource demand into a periodic-task computing resource demand based on the second granularity; and allocating second-granularity computing resources to the periodic task based on that demand.
According to one or more embodiments of the present disclosure, the periodic tasks include historical periodic tasks that have already run to completion and periodic tasks that have not yet been executed. The historical computing resource demand based on the first granularity is either the computing resource demand in the resource acquisition request corresponding to the periodic task as received from the client, or the actual resource demand of the periodic task obtained by monitoring and analyzing, in real time, the resources actually used by the historical periodic tasks.
According to one or more embodiments of the present disclosure, there is provided a task submission method including: displaying at least two granularities that support selection, the at least two granularities including a first granularity and a second granularity, where the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; in response to detecting a selection operation of a user among the at least two granularities, determining the selected granularity as the resource granularity in the resource requirement of the task to be processed; receiving a resource demand quantity, input by the user, based on the resource granularity; and submitting a task request based on the resource granularity and the resource demand quantity based on the resource granularity.
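Again purely as an illustration, since the disclosure does not prescribe any particular client interface, the sketch below outlines the client-side steps in Python; submit_task, choose, read_amount and send are hypothetical callables standing in for the user interface and the request channel.

    GRANULARITIES = ("virtual core", "one thousandth of a virtual core")  # first and second granularity

    def submit_task(choose, read_amount, send) -> dict:
        # 1. display the granularities that support selection
        options = list(GRANULARITIES)
        # 2. determine the granularity selected by the user as the resource granularity
        granularity = choose(options)
        # 3. receive the resource demand quantity entered at that granularity
        amount = read_amount(granularity)
        # 4. submit a task request carrying the resource granularity and the demand quantity
        request = {"granularity": granularity, "amount": amount}
        send(request)
        return request

Under these assumptions, choosing the second granularity and entering 1500 would submit a request for 1500 thousandths of a virtual core, i.e. the equivalent of 1.5 virtual cores.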
According to one or more embodiments of the present disclosure, there is provided a resource allocation apparatus including: a first receiving unit configured to receive a task request submitted by a client, the task request including a resource acquisition request for a task to be processed, and the resource acquisition request including a computing resource requirement based on a first granularity or a computing resource requirement based on a second granularity, where the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; a conversion unit configured to, in response to the resource acquisition request including the computing resource requirement based on the first granularity, convert the computing resource requirement based on the first granularity into a requirement based on the second granularity; and a computing resource allocation unit configured to determine, based on the available resource information of each resource node and the computing resource requirement based on the second granularity, a node for executing the task to be processed, and to allocate computing resources at the second granularity to the task to be processed so that the node executes the task to be processed.
According to one or more embodiments of the present disclosure, the second granularity is one thousandth of a virtual core.
According to one or more embodiments of the present disclosure, the second granularity is determined based on the resources actually used by a plurality of historical tasks.
In accordance with one or more embodiments of the present disclosure, the apparatus may further include an adjusting unit. The adjusting unit is configured to adjust the second granularity based on a plurality of historical tasks that have been executed to completion, to obtain a new second granularity, so that a computing resource requirement based on the first granularity, or based on the second granularity before adjustment, is converted into a computing resource requirement based on the new second granularity.
According to one or more embodiments of the present disclosure, the resource acquisition request further includes storage resource requirement information, and the apparatus may further include a storage resource allocation unit configured to allocate storage resources to the task to be processed based on the storage resource requirement information.
According to one or more embodiments of the present disclosure, the conversion unit may be configured to obtain the computing resource requirement based on the second granularity from the computing resource requirement based on the first granularity and the transformation multiple between the first granularity and the second granularity.
In accordance with one or more embodiments of the present disclosure, the apparatus may further include an acquisition unit configured to acquire historical configuration information of a periodic task, the historical configuration information including a historical computing resource requirement based on the first granularity. The conversion unit may be further configured to convert the historical computing resource requirement into a periodic task computing resource requirement based on the second granularity, and the computing resource allocation unit may be further configured to allocate computing resources at the second granularity to the periodic task based on the periodic task computing resource requirement.
According to one or more embodiments of the present disclosure, the periodic tasks include historical periodic tasks that have been executed to completion and periodic tasks that have not yet been executed. The historical computing resource requirement based on the first granularity is either the computing resource requirement in the resource acquisition request corresponding to the periodic task received from the client, or the actual resource requirement of the periodic task obtained by monitoring and analyzing, in real time, the resources actually used by the historical periodic task.
According to one or more embodiments of the present disclosure, there is provided a task submitting apparatus including: a display unit configured to display at least two granularities that support selection, the at least two granularities including a first granularity and a second granularity, where the first granularity is the original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity; a granularity determining unit configured to, in response to detecting a selection operation of a user among the at least two granularities, determine the selected granularity as the resource granularity in the resource requirement of the task to be processed; a second receiving unit configured to receive a resource demand quantity, input by the user, based on the resource granularity; and a submitting unit configured to submit a task request based on the resource granularity and the resource demand quantity based on the resource granularity.
According to one or more embodiments of the present disclosure, the second receiving unit may be further configured to: display at least two granularities that support selection; and in response to detecting a selection operation of the user among the at least two granularities, determine the selected granularity as the granularity of the resource in the resource requirement.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
According to one or more embodiments of the present disclosure, a computer readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements a method as described in any of the above.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are replaced with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A resource allocation method, comprising:
receiving a task request submitted by a client, wherein the task request comprises a resource acquisition request for a task to be processed, and the resource acquisition request comprises a computing resource requirement based on a first granularity or a computing resource requirement based on a second granularity, wherein the first granularity is an original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity;
the response to the resource obtaining request including the computing resource requirement based on the first granularity, converting the computing resource requirement based on the first granularity into the computing resource requirement based on the second granularity, specifically includes: obtaining the demand of the computing resource based on the second granularity according to the demand of the computing resource based on the first granularity and the transformation multiple between the first granularity and the second granularity;
and determining, based on the available resource information of each resource node and the computing resource requirement based on the second granularity, a node for executing the task to be processed, and allocating computing resources at the second granularity to the task to be processed so that the node executes the task to be processed.
2. The method of claim 1, wherein the second granularity is one thousandth of a virtual core.
3. The method of claim 1, wherein the second granularity is determined based on resources actually used by a plurality of historical tasks.
4. The method of claim 1, wherein the method further comprises:
adjusting the second granularity based on a plurality of historical tasks that have been executed to completion, to obtain a new second granularity; and
converting a computing resource requirement based on the first granularity, or based on the second granularity before adjustment, into a computing resource requirement based on the new second granularity.
5. The method of claim 1, wherein the resource acquisition request further includes storage resource requirement information; and
the method further comprises:
allocating storage resources to the task to be processed based on the storage resource requirement information.
6. The method of claim 1, wherein the method further comprises:
acquiring historical configuration information of a periodic task, wherein the historical configuration information comprises historical computing resource requirements based on a first granularity;
converting the historical computing resource requirements into periodic task computing resource requirements based on the second granularity; and
allocating computing resources at the second granularity to the periodic task based on the periodic task computing resource requirements.
7. The method of claim 6, wherein the periodic tasks include historical periodic tasks that have been executed to completion and periodic tasks that have not yet been executed; and the historical computing resource requirement based on the first granularity is a computing resource requirement in a resource acquisition request corresponding to the periodic task received from a client, or an actual resource requirement of the periodic task obtained by monitoring and analyzing, in real time, resources actually used by the historical periodic task.
8. A resource allocation apparatus, comprising:
the first receiving unit is configured to receive a task request submitted by a client, wherein the task request comprises a resource acquisition request for a task to be processed, and the resource acquisition request comprises a computing resource requirement based on a first granularity or a computing resource requirement based on a second granularity, the first granularity is an original granularity, the second granularity is a newly added granularity, and the second granularity is smaller than the first granularity;
a conversion unit configured to, in response to the resource acquisition request including the computing resource requirement based on the first granularity, convert the computing resource requirement based on the first granularity into the requirement based on the second granularity, and specifically configured to: obtain the computing resource requirement based on the second granularity from the computing resource requirement based on the first granularity and the transformation multiple between the first granularity and the second granularity; and
a computing resource allocation unit configured to determine, based on the available resource information of each resource node and the computing resource requirement based on the second granularity, a node for executing the task to be processed, and to allocate computing resources at the second granularity to the task to be processed so that the node executes the task to be processed.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 7.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-7.
CN202010538197.XA 2020-06-12 2020-06-12 Resource allocation method, task submission method, device, electronic equipment and medium Active CN111694672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010538197.XA CN111694672B (en) 2020-06-12 2020-06-12 Resource allocation method, task submission method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111694672A CN111694672A (en) 2020-09-22
CN111694672B true CN111694672B (en) 2023-04-25

Family

ID=72480824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010538197.XA Active CN111694672B (en) 2020-06-12 2020-06-12 Resource allocation method, task submission method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111694672B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433388B (en) * 2023-06-09 2023-09-12 中信证券股份有限公司 Data storage resource partitioning method, device, electronic equipment and computer medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426545A (en) * 2010-10-27 2012-04-25 微软公司 Reactive load balancing for distributed systems
CN102902587A (en) * 2011-07-28 2013-01-30 中国移动通信集团四川有限公司 Distribution type task scheduling method, distribution type task scheduling system and distribution type task scheduling device
CN103383653A (en) * 2012-05-02 2013-11-06 中国科学院计算技术研究所 Method and system for managing and dispatching cloud resource
WO2019237347A1 (en) * 2018-06-15 2019-12-19 富士通株式会社 Method and device for allocating and receiving resources and communication system
CN110914805A (en) * 2017-07-12 2020-03-24 华为技术有限公司 Computing system for hierarchical task scheduling

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9886328B2 (en) * 2016-03-11 2018-02-06 Intel Corporation Flexible binding of tasks to target resources


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant