Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIGS. 1 and 2 are schematic diagrams illustrating an application scenario in which the resource allocation method of some embodiments of the present disclosure may be applied.
As shown in FIG. 1, a computing device 101 may first obtain, in response to a change in the resource demand of at least one native service in a cluster, the usable resources 102 of a set of running computing tasks (computing task 1 through computing task 6) in the cluster. In the present application scenario, the at least one native service includes "service 1" and "service 2", and the resource demand of "service 2" changes. Then, in response to the used resources 103 of the set of running computing tasks being larger than the usable resources 102, the computing device 101 calculates the weight of each running computing task in the set. In the present application scenario, the used resources 103 are 55M, which is larger than the usable resources 102 of 50M. As an example, the content indicated by reference numeral 104 is the weight of "computing task 5", whose value is "2". Finally, as shown in FIG. 2, the computing device 101 may terminate a first number of the running computing tasks based on the weights, so that the updated used resources meet the resource demand change. In the present application scenario, the first number is "1", and the terminated task is "computing task 5", which has the smallest weight. The updated used resources 201 are 46M, and the resource demand change is satisfied.
The computing device 101 may be hardware or software. When implemented as hardware, it may be a distributed cluster composed of a plurality of servers or electronic devices, or a single server or a single electronic device. When implemented as software, it may be multiple pieces of software or software modules (for example, for providing distributed services), or a single piece of software or software module. This is not particularly limited herein.
It should be understood that the number of computing devices 101 in FIG. 1 or FIG. 2 is merely illustrative. There may be any number of computing devices 101, as desired for implementation.
With continued reference to FIG. 3, a flow 300 of some embodiments of a resource allocation method according to the present disclosure is shown. The resource allocation method includes the following steps:
Step 301, in response to a change in the resource demand of at least one native service in a cluster, acquiring the usable resources of the set of running computing tasks in the cluster.
In some embodiments, the running computing task may be any task, other than a native service, on the execution body of the resource allocation method.

In some embodiments, the usable resources may be the difference between the total resources of the execution body of the resource allocation method and the resources occupied by the native services.
In some optional implementations of some embodiments, the execution body may first obtain the used resources of the set of running computing tasks, then obtain the reserved buffer resources and the allocable free resources, and finally determine the difference between the sum of the allocable free resources and the used resources and the reserved buffer resources as the usable resources. With these implementations, by introducing the reserved buffer resources, it is unnecessary to terminate any running computing task in the set of running computing tasks when the change in the resource demand of the native services does not exceed the reserved buffer resources, which avoids the waste of computing resources caused by frequent scheduling.
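The computation of the usable resources described above can be sketched as follows. The function name and the use of megabytes as a unit are illustrative assumptions, not part of the disclosed embodiments.

```python
def usable_resources(used: float, allocable_free: float, reserved_buffer: float) -> float:
    """Usable resources = (allocable free resources + used resources) - reserved buffer.

    All quantities are assumed to be in the same unit (e.g., megabytes).
    """
    return (allocable_free + used) - reserved_buffer


# Example: 20M allocable free, 40M already used, 10M held back as buffer -> 50M usable.
print(usable_resources(used=40.0, allocable_free=20.0, reserved_buffer=10.0))  # 50.0
```

Holding back the reserved buffer directly in this formula is what allows small demand fluctuations of the native services to be absorbed without triggering any task termination.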
As an example, the allocable free resources may be the unallocated resources.

In some optional implementations of some embodiments, the allocable free resources may also be the sum of the unallocated resources and the unused resources among the resources occupied by computing tasks in a stable running state. With these implementations, the resources of the execution body are fully utilized by treating the unused portion of the resources occupied by stably running computing tasks as allocable free resources.

As an example, the computing task in a stable running state may be a preset type of native service.

Optionally, the computing task in a stable running state may further include: a native service whose occupied-resource fluctuation within a preset time length after starting is smaller than a preset threshold, and/or a native service whose running time length exceeds a time-length threshold. The time-length threshold is determined based on the average fluctuation amplitude of the resources occupied by the native services. With these implementations, native services in a stable running state can be determined more accurately.
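The stability test described above can be sketched as follows. The sampling of occupied resources, the fluctuation measure (max minus min over the observation window), and the function names are illustrative assumptions; the disclosure only specifies the two criteria and their and/or combination.

```python
def is_stable_native_service(
    occupied_samples: list[float],   # resources occupied during the observation window
    fluctuation_threshold: float,    # preset threshold on the fluctuation
    running_duration: float,         # how long the service has been running
    duration_threshold: float,       # time-length threshold
) -> bool:
    """A native service is considered to be in a stable running state if its
    occupied-resource fluctuation within the preset window is below the preset
    threshold, and/or its running time exceeds the time-length threshold."""
    fluctuation = max(occupied_samples) - min(occupied_samples)
    return fluctuation < fluctuation_threshold or running_duration > duration_threshold


def duration_threshold_from_fluctuation(avg_fluctuation: float, scale: float = 60.0) -> float:
    """Hypothetical mapping: the disclosure states only that the time-length
    threshold is determined from the average fluctuation amplitude; a simple
    proportional rule is assumed here."""
    return scale * avg_fluctuation
```

Services whose resources fluctuate more would then need a longer observed runtime before being treated as stable.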
Step 302, in response to the used resources of the set of running computing tasks being larger than the usable resources, calculating the weight of each running computing task in the set of running computing tasks.

In some embodiments, the weight of a computing task may be determined by at least one of: the size of the resources occupied by the computing task, the elapsed running time of the computing task, and the total estimated running time of the computing task.

In some optional implementations of some embodiments, the execution body may further calculate, for each running computing task, the weight of the running computing task based on preset weights of one or more of: task priority, start-up duration, resource usage rate, and task type. These attributes of a computing task are easy to obtain, while accurately characterizing the importance of the corresponding computing task. As an example, the task type may indicate whether the computing task is an AM (Application Master).

In some embodiments, the weight of a computing task may be a weighted sum of the values of its attributes.
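The weighted-sum computation above can be sketched as follows. The attribute names, their numeric encodings, and the preset weights are illustrative assumptions chosen for the example.

```python
def task_weight(attributes: dict[str, float], preset_weights: dict[str, float]) -> float:
    """Weight of a running computing task: the sum of each attribute value
    multiplied by its preset weight. Attributes without a preset weight
    contribute nothing."""
    return sum(value * preset_weights.get(name, 0.0) for name, value in attributes.items())


w = task_weight(
    {"priority": 1.0, "startup_duration": 0.5, "resource_usage": 0.2, "is_am": 1.0},
    {"priority": 2.0, "startup_duration": 1.0, "resource_usage": 1.0, "is_am": 5.0},
)
# 1*2 + 0.5*1 + 0.2*1 + 1*5, i.e. approximately 7.7
```

Giving the AM attribute a large preset weight, as in this example, would make Application Master tasks unlikely to be selected for termination.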
In some embodiments, the first number may be a preset number.
In some embodiments, the first number may also be determined based on the used resources and the usable resources. For example, the ratio of the difference between the used resources and the usable resources to the average resources occupied by a running computing task may be determined as the first number.
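The determination of the first number can be sketched as follows. Rounding the ratio up, and returning zero when there is no overshoot, are illustrative assumptions; the disclosure specifies only the ratio itself.

```python
import math


def first_number(used: float, usable: float, avg_occupied: float) -> int:
    """Number of running computing tasks to terminate: the overshoot of the
    used resources over the usable resources, divided by the average resources
    occupied per running computing task (rounded up here as an assumed
    rounding choice)."""
    overshoot = used - usable
    if overshoot <= 0:
        return 0
    return math.ceil(overshoot / avg_occupied)


# With 55M used and 50M usable (as in the scenario of FIG. 1), and roughly
# 9M occupied per task on average, a single task is terminated.
print(first_number(55.0, 50.0, 9.0))  # 1
```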
Step 303, terminating a first number of the running computing tasks based on the weights, so that the updated used resources meet the resource demand change.

In some embodiments, the execution body may terminate the first number of running computing tasks by marking the resources they occupy as unused.

The method provided by some embodiments of the present disclosure achieves adaptive deployment of jobs, so that resources are fully utilized without affecting the running of the native services.
With further reference to FIG. 4, a flow 400 of further embodiments of the resource allocation method is illustrated. The flow 400 of the resource allocation method includes the following steps:

Step 401, in response to a change in the resource demand of at least one native service in the cluster, acquiring the used resources of the set of running computing tasks.

Step 402, acquiring the allocable free resources and the reserved buffer resources of a plurality of physical machines in the cluster.
In some embodiments, the allocable free resources may be unallocated resources.
In some optional implementations of some embodiments, the allocable free resources may also be the sum of the unallocated resources and the unused resources among the resources occupied by computing tasks in a stable running state. With these implementations, the resources of the execution body are fully utilized.

In some embodiments, the computing task in a stable running state may be a preset type of native service.

In some optional implementations of some embodiments, the computing task in a stable running state may further include: a native service whose occupied-resource fluctuation within a preset time length after starting is smaller than a preset threshold, and/or a native service that has been running longer than a first duration. The first duration is determined based on the average fluctuation amplitude of the resources occupied by the at least one native service. With these implementations, native services in a stable running state can be determined more accurately.
In some embodiments, the reserved buffer resource may be a preset value.
In some optional implementations of some embodiments, the reserved buffer resources may also be directly proportional to the average fluctuation amplitude of the free resources, with the ratio being a preset value. These implementations make the size of the reserved buffer resources more reasonable.
Step 403, determining the difference between the sum of the allocable free resources and the used resources and the reserved buffer resources as the usable resources.
Step 404, in response to the used resources of the set of running computing tasks being greater than the usable resources, calculating, for each running computing task, the weight of the running computing task based on preset weights of one or more of: task priority, start-up duration, resource usage rate, and task type.

In some embodiments, the execution body may determine, as the weight of the running computing task, the sum of the products of the value of each of at least one target attribute of the running computing task and the preset weight of that target attribute.

In some embodiments, the execution body may further determine, as the weight of the running computing task, the product of the value of each of the at least one target attribute of the running computing task and the preset weight of that target attribute.

Step 405, terminating a first number of the running computing tasks based on the weights, so that the updated used resources meet the resource demand change.

In some embodiments, according to actual needs, the execution body may, as an example, terminate the running computing tasks with the highest weights in the set of running computing tasks in turn.

In some embodiments, according to actual needs, the execution body may, as an example, instead terminate the running computing tasks with the smallest weights in the set of running computing tasks in turn.
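The weight-ordered selection in either direction can be sketched as follows. Representing tasks as a name-to-weight mapping is an illustrative assumption.

```python
def select_tasks_to_terminate(
    task_weights: dict[str, float],
    first_number: int,
    smallest_weight_first: bool = True,
) -> list[str]:
    """Pick the first_number running computing tasks to terminate, ordered by
    weight. Both orders described in the embodiments above are supported via
    the smallest_weight_first flag."""
    ordered = sorted(task_weights, key=task_weights.get, reverse=not smallest_weight_first)
    return ordered[:first_number]


# Reproduces the scenario of FIGS. 1 and 2: "computing task 5" has the smallest
# weight (2) and is the single task terminated.
weights = {"task1": 8, "task2": 6, "task3": 7, "task4": 5, "task5": 2, "task6": 9}
print(select_tasks_to_terminate(weights, 1))  # ['task5']
```

Setting `smallest_weight_first=False` selects the highest-weight tasks instead, matching the alternative embodiment.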
As can be seen from FIG. 4, compared with the embodiments corresponding to FIG. 3, the flow 400 of the resource allocation method in the embodiments corresponding to FIG. 4 details the steps of determining the usable resources and of calculating the weight of each running computing task based on preset weights of one or more of task priority, start-up duration, resource usage rate, and task type. Thus, by introducing the reserved buffer resources, the solutions described in these embodiments need not terminate any running computing task in the set of running computing tasks when the change in the resource demand of the native services does not exceed the reserved buffer resources, which avoids the waste of computing resources caused by frequent scheduling. Determining the sum of the unallocated resources and the unused resources among the resources occupied by stably running computing tasks as the allocable free resources makes full use of the resources of the execution body. And by introducing the weights of the computing tasks, resources can be retained by the relatively more important computing tasks.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a resource allocation apparatus. These apparatus embodiments correspond to the method embodiments shown in FIG. 3, and the apparatus may be applied to various electronic devices.

As shown in FIG. 5, the resource allocation apparatus 500 of some embodiments includes: an acquisition unit 501, a calculation unit 502, and a termination unit 503. The acquisition unit 501 is configured to acquire, in response to a change in the resource demand of at least one native service in a cluster, the usable resources of the set of running computing tasks in the cluster; the calculation unit 502 is configured to calculate the weight of each running computing task in the set of running computing tasks in response to the used resources of the set of running computing tasks being greater than the usable resources; the termination unit 503 is configured to terminate a first number of the running computing tasks based on the weights, so that the updated used resources meet the resource demand change.

In an optional implementation of some embodiments, the acquisition unit 501 is further configured to: acquire the used resources of the set of running computing tasks; acquire the allocable free resources and the reserved buffer resources of a plurality of physical machines in the cluster; and determine the difference between the sum of the allocable free resources and the used resources and the reserved buffer resources as the usable resources.

In an optional implementation of some embodiments, the allocable free resources include the unallocated resources and the unused resources among the resources occupied by computing tasks in a stable running state.
In an alternative implementation of some embodiments, the reserved buffer resources are determined based on the fluctuation amplitude of the allocable free resources described above.
In an optional implementation of some embodiments, the computing task in a stable running state includes: a computing task whose actually-used-resource fluctuation within a preset time length after starting is smaller than a preset threshold, and/or a computing task whose running time length exceeds the time-length threshold.

In an optional implementation of some embodiments, the time-length threshold is determined based on the average fluctuation amplitude of the resources actually used by the computing tasks.

In an optional implementation of some embodiments, the calculation unit 502 is further configured to: for each running computing task, calculate the weight of the running computing task based on preset weights of one or more of: task priority, start-up duration, resource usage rate, and task type.

It will be understood that the units recited in the apparatus 500 correspond to the respective steps of the method described with reference to FIG. 3. Thus, the operations, features, and advantages described above for the method also apply to the apparatus 500 and the units it includes, and are not repeated here.
Referring now to FIG. 6, a block diagram of an electronic device (e.g., the computing device of FIG. 1) 600 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 602 or a program loaded from a storage device 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: responding to the resource demand change of at least one native service in the cluster, and acquiring the usable resources of the running computing task set in the cluster; in response to the used resources of the running computing task set being larger than the available resources, calculating the weight of each running computing task in the running computing task set; based on the weights, terminating a first number of the executed computing tasks so that the updated used resources meet the resource demand change.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a calculation unit, and a termination unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires the usable resources of the set of running computing tasks".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
According to one or more embodiments of the present disclosure, there is provided a resource allocation method including: responding to the resource demand change of at least one native service in the cluster, and acquiring the usable resources of the running computing task set in the cluster; in response to the used resources of the running computing task set being larger than the available resources, calculating the weight of each running computing task in the running computing task set; based on the weights, terminating a first number of the executed computing tasks so that the updated used resources meet the resource demand change.
According to one or more embodiments of the present disclosure, acquiring available resources of a set of computing tasks already running in the cluster includes: acquiring the used resources of the running computing task set; acquiring allocable idle resources and reserved buffer resources of a plurality of physical machines in a cluster; and determining the difference between the sum of the allocable free resources and the used resources and the reserved buffer resources as the usable resources.
According to one or more embodiments of the present disclosure, the allocable free resources include the unallocated resources and the unused resources among the resources occupied by computing tasks in a stable running state.

According to one or more embodiments of the present disclosure, the reserved buffer resources are determined based on the fluctuation amplitude of the allocable free resources.

According to one or more embodiments of the present disclosure, the computing task in a stable running state includes: a computing task whose actually-used-resource fluctuation within a preset time length after starting is smaller than a preset threshold, and/or a computing task whose running time length exceeds the time-length threshold.

According to one or more embodiments of the present disclosure, the time-length threshold is determined based on the average fluctuation amplitude of the resources actually used by the computing tasks.

According to one or more embodiments of the present disclosure, calculating the weight of each running computing task in the set of running computing tasks further includes: for each running computing task, calculating the weight of the running computing task based on preset weights of one or more of: task priority, start-up duration, resource usage rate, and task type.
According to one or more embodiments of the present disclosure, there is provided a resource allocation apparatus including: the acquisition unit is configured to respond to the resource demand change of at least one native service in the cluster, and acquire usable resources of a running calculation task set in the cluster; a computing unit configured to compute a weight of each executed computing task in the set of executed computing tasks in response to a used resource of the set of executed computing tasks being greater than the available resource; a terminating unit configured to terminate a first number of the executed computing tasks based on the weight so that the updated used resources satisfy the resource demand change.
According to one or more embodiments of the present disclosure, the acquisition unit is further configured to: acquire the used resources of the set of running computing tasks; acquire the allocable free resources and the reserved buffer resources of a plurality of physical machines in the cluster; and determine the difference between the sum of the allocable free resources and the used resources and the reserved buffer resources as the usable resources.

According to one or more embodiments of the present disclosure, the allocable free resources include the unallocated resources and the unused resources among the resources occupied by computing tasks in a stable running state.

According to one or more embodiments of the present disclosure, the reserved buffer resources are determined based on the fluctuation amplitude of the allocable free resources.

According to one or more embodiments of the present disclosure, the computing task in a stable running state includes: a computing task whose actually-used-resource fluctuation within a preset time length after starting is smaller than a preset threshold, and/or a computing task whose running time length exceeds the time-length threshold.

According to one or more embodiments of the present disclosure, the time-length threshold is determined based on the average fluctuation amplitude of the resources actually used by the computing tasks.

According to one or more embodiments of the present disclosure, the calculation unit is further configured to: for each running computing task, calculate the weight of the running computing task based on preset weights of one or more of: task priority, start-up duration, resource usage rate, and task type.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any above.
According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as any one of the above.
The foregoing description presents only the preferred embodiments of the present disclosure and illustrates the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.