CN113590326B - Service resource scheduling method and device
- Publication number: CN113590326B (application number CN202110878336.8A)
- Authority: CN (China)
- Prior art keywords: service resource, user, service, type, estimated value
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F9/5022—Mechanisms to release resources
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Abstract
The disclosure provides a service resource scheduling method, relating to the field of computer technology and in particular to cloud computing. The specific implementation scheme is as follows: in response to receiving a scheduling request for a service resource from a user, determining a current estimated value of the service resource, wherein the scheduling request includes the type of the service resource and the user's expected estimated value for the service resource; allocating a service resource of the type to the user when the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and periodically re-determining the current estimated value of the service resource of the type at a preset time interval, and releasing the service resource allocated to the user when the expected estimated value is smaller than the re-determined current estimated value.
Description
Technical Field
The present disclosure relates to the field of computer technology, in particular to cloud computing, and specifically to a service resource scheduling method and device.
Background
On a cloud computing platform, a user may request computing service resources, such as cloud virtual machines, billed by the hour or by the second. However, billing by fixed time units does not exploit the high scalability of many user workloads: service resource utilization stays low and the user's cost of using service resources is excessive, which becomes a bottleneck when users build large-scale application systems on the cloud computing platform.
Accordingly, there is a need for a method and apparatus that can increase the utilization of service resources on a cloud computing platform and reduce the cost of using the service resources for a user.
Disclosure of Invention
The disclosure provides a service resource scheduling method and device.
According to an aspect of the present disclosure, there is provided a service resource scheduling method, including:
determining a current estimated value of a service resource in response to receiving a scheduling request for the service resource from a user, wherein the scheduling request includes a type of the service resource and the user's expected estimated value for the service resource;
allocating a service resource of the type to the user when the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and
periodically re-determining the current estimated value of the service resource of the type at a preset time interval, and releasing the service resource allocated to the user when the expected estimated value is smaller than the re-determined current estimated value.
According to another aspect of the present disclosure, there is provided a service resource scheduling apparatus, including:
a determining module, configured to determine a current estimated value of a service resource in response to receiving a scheduling request for the service resource from a user, wherein the scheduling request includes a type of the service resource and the user's expected estimated value for the service resource;
an allocation module, configured to allocate a service resource of the type to the user when the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and
a first release module, configured to periodically re-determine the current estimated value of the service resource of the type at a preset time interval, and to release the service resource allocated to the user when the expected estimated value is smaller than the re-determined current estimated value.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method according to an embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform a method according to an embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method according to embodiments of the present disclosure.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a service resource scheduling method according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating one example of a process of using a service resource on a cloud computing platform according to an embodiment of the present disclosure;
FIG. 3 is a diagram illustrating one example of the relationship between expected estimates, current estimates and service resource operation according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating one example of a service resource scheduling process according to an embodiment of the present disclosure;
FIG. 5 is a diagram showing one example of a service resource release process according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a service resource scheduling apparatus according to an embodiment of the present disclosure;
FIG. 7 is a diagram of one example of an interaction process to create a service resource instance according to an embodiment of the present disclosure;
FIG. 8 is a diagram of one example of an interaction process to release service resource instances according to an embodiment of the present disclosure; and
FIG. 9 illustrates a schematic block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
A user may use a cloud computing platform to provide stateless application services. Such applications include, for example, elastic and scalable web site services, image rendering, big data analysis and massively parallel computing, which are highly distributed, scalable and fault tolerant. When a user implements such an application by requesting computing service resources on the cloud computing platform on an hourly or per-second basis, the high scalability of the application is not fully exploited, so the service resource utilization rate is low and the user's cost of using the service resources is excessive.
The present disclosure provides a service resource scheduling method and apparatus that work as follows: in response to receiving a scheduling request for a service resource from a user, a current estimated value of the service resource is determined, the scheduling request including the type of the service resource and the user's expected estimated value for the service resource; a service resource of the type is allocated to the user if the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and the current estimated value of the service resource of the type is periodically re-determined at a preset time interval, the service resource allocated to the user being released when the expected estimated value is smaller than the re-determined current estimated value. In this way, the high scalability of certain types of applications can be fully exploited, the utilization of service resources on the cloud computing platform is improved, and the user's cost of using service resources is reduced.
Fig. 1 is a flowchart of a service resource scheduling method 100 according to an embodiment of the present disclosure. A service resource scheduling method 100 according to an embodiment of the present disclosure is described below with reference to fig. 1.
In step S110, a current estimate of the service resource is determined in response to receiving a scheduling request for the service resource from the user. The scheduling request includes the type of service resource and an expected estimate of the service resource by the user.
The service resource may be a resource on a cloud computing platform for computing services, such as a cloud virtual machine. The type of service resource may include, for example, a physical area in which the service resource is located, a machine model of the service resource, a predetermined usage period of the service resource, and the like. A user may make a scheduling request to a cloud computing platform when he wishes to use a service resource on the cloud computing platform. The user may specify in the scheduling request the type of service resource requested by the user and the expected estimate of that type of service resource by the user. The expected estimate is an estimate of the service resource expected by the user. The user may specify the expected estimated value based on an estimate of the cost of the service resource itself, a historical estimated value of the service resource used by the user itself, and the like.
In determining the current estimated value of a service resource, a respective estimated value may be determined for each type of service resource. For a given type of service resource, when the number of idle service resources is large, the current estimated value may be set low, which improves the utilization of idle service resources and reduces the user's cost of using them. When the number of idle service resources is small, the current estimated value may be set high.
In step S120, in the case where the expected estimated value is greater than or equal to the current estimated value and the number of currently free service resources of the type is greater than or equal to the predetermined number, the service resources of the type are allocated to the user.
If the expected estimated value specified by the user is greater than or equal to the current estimated value determined in step S110, it indicates that the user is able to accept the current estimated value. If the number of currently idle service resources is greater than or equal to a predetermined number, which serves as a threshold number, it indicates that sufficient service resources are currently available. Thus, when both conditions are satisfied at the same time, a service resource of the type can be allocated to the user. For example, in one example, the current estimated value may be 20% of the predetermined highest estimated value of the service resource. The predetermined highest estimated value of a service resource is a preset upper bound on its estimated value, and may be equal to the estimated value that applies when use of the service resource is requested on an hourly or per-second basis. For example, if the predetermined highest estimated value of a cloud virtual machine of a specific specification is 100 yuan per hour, the current estimated value may be 20% of that, i.e., 20 yuan per hour. In one example, the threshold number may be one quarter of the total number of service resources. For example, if the total number of cloud virtual machines of a particular specification is 1000, the threshold number may be one quarter of that, namely 250.
In step S130, the current estimated value of the service resource of the type is periodically re-determined at preset time intervals, and the service resource allocated to the user is released when the expected estimated value is smaller than the re-determined current estimated value.
After the service resource is allocated to the user, the current estimated value of the service resource of the type can be updated periodically as other users' use of this type of service resource continues to change, so that the current estimated value reflects the current usage in real time. For example, as more and more users use this type of service resource, it becomes increasingly scarce, and the current estimated value of this type of service resource needs to be raised so that it is only allocated to users with higher expected estimated values; the service resource is thereby allocated reasonably. Conversely, as fewer and fewer users use this type of service resource, it becomes increasingly abundant, and the current estimated value needs to be lowered so that this type of service resource can be fully allocated to users with lower expected estimated values, improving resource utilization and reducing the users' cost. When the user's expected estimated value is smaller than the re-determined current estimated value, the service resource allocated to the user may be released. Thus, when demand for the service resource is high, service resources being used by users with smaller expected estimated values can be preempted by other users with larger expected estimated values, achieving real-time and reasonable scheduling of resources.
According to the dynamic scheduling method for service resources on a cloud computing platform disclosed by the embodiments of the present disclosure, the utilization rate of service resources can be effectively improved and users' usage costs can be reduced.
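As an aid to reading steps S110 to S130, the following is a minimal Python sketch of the allocate-and-release logic described above. It is illustrative only: the names Scheduler, ResourcePool, Allocation and the injected estimate_fn are assumptions made for this sketch rather than the patent's implementation; estimate_fn stands in for the current-estimate calculation given later in equations (1) to (3).

```python
# Illustrative sketch (not the patent's actual code) of the allocate/release
# decisions in steps S110-S130. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Allocation:
    user: str
    resource_type: str
    expected_estimate: float

@dataclass
class ResourcePool:
    total: Dict[str, int]                                       # total resources per type
    running: Dict[str, List[Allocation]] = field(default_factory=dict)

    def idle_count(self, rtype: str) -> int:
        return self.total[rtype] - len(self.running.get(rtype, []))

class Scheduler:
    def __init__(self, pool: ResourcePool, threshold: int,
                 estimate_fn: Callable[[ResourcePool, str], float]):
        self.pool = pool
        self.threshold = threshold      # the "predetermined number" of idle resources
        self.estimate_fn = estimate_fn  # current-estimate calculation, see equations (1)-(3)

    def try_allocate(self, user: str, rtype: str, expected: float) -> Optional[Allocation]:
        current = self.estimate_fn(self.pool, rtype)                               # step S110
        if expected >= current and self.pool.idle_count(rtype) >= self.threshold:  # step S120
            alloc = Allocation(user, rtype, expected)
            self.pool.running.setdefault(rtype, []).append(alloc)
            return alloc
        return None

    def periodic_check(self, alloc: Allocation) -> bool:
        """Step S130: run at every preset interval; returns True if the allocation was released."""
        current = self.estimate_fn(self.pool, alloc.resource_type)
        if alloc.expected_estimate < current:
            self.pool.running[alloc.resource_type].remove(alloc)
            return True
        return False
```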
In an exemplary embodiment, periodically re-determining the current estimated value of the service resource of the type at a preset time interval may specifically include the following operations: after the service resource of the type is allocated to the user, judging whether the time for which the service resource has belonged to the user has reached a preset duration; and, once that time reaches the preset duration, periodically re-determining the current estimated value of the service resource of the type at the preset time interval.
The preset duration is equivalent to a guard period set for the user, which improves reliability and stability when the user uses the service resource. That is, after the service resource is allocated to the user, a preset period of time, serving as a guard period, elapses during which the service resource is not released. In one exemplary embodiment, the preset duration is, for example, one hour, but it is not limited thereto and may be any duration set according to the user's needs. Within the preset duration, the service resource is not released no matter how its current estimated value fluctuates, which guarantees the user a minimum period of use of the service resource and improves its availability.
In one exemplary embodiment, the service resource may remain allocated to the user when the expected estimated value is greater than or equal to the re-determined current estimated value.
If the user's expected estimated value is greater than or equal to the re-determined current estimated value, this means that the expected estimated value is still sufficiently high even when the current estimated value fluctuates due to new scheduling requests from other users. The user can therefore continue to use the service resource. This ensures that users with higher expected estimated values can run on the service resource stably, improving the availability of the service resource.
In one exemplary embodiment, in determining the current estimate of a type of service resource, the current estimate of the service resource may be determined based on a predetermined highest estimate of the type of service resource, the number of currently operating type of service resources, the total number of type of service resources, and an expected estimate of the user to whom the currently operating type of service resource is assigned.
As described above, the predetermined highest estimated value of a service resource is the preset upper bound on its estimated value, and may be equal to the estimated value that applies when use of the service resource is requested on an hourly or per-second basis. The current estimated value of the service resource decreases as the number of currently running service resources of this type decreases, potentially down to about one tenth of the highest estimated value. The number of currently running service resources of the type is the number of service resources of the type currently in use. The expected estimated values of the users to whom the currently running service resources of the type are allocated are the values those users specified in their scheduling requests; when these values are high, it indicates that current users can accept a higher estimated value, so the current estimated value of the service resource may be raised accordingly.
In this way, the current estimated value of the service resource can be reasonably determined, so that the service resource can be reasonably allocated in real time.
In one exemplary embodiment, the current estimated value may be quantitatively calculated by the following equation (1):

f = α·k + β·(n/m)·(Σ_{i=1}^{n} x_i)/n    (1)

where f represents the current estimated value, α and β represent weight coefficients, k represents the predetermined highest estimated value of the type of service resource, n represents the number of currently running service resources of the type, m represents the total number of service resources of the type, and x_i represents the expected estimated value corresponding to the i-th currently running service resource of the type, i being a positive integer less than or equal to n. In this equation, the weight coefficients α and β are set in advance. The term (Σ_{i=1}^{n} x_i)/n represents the average expected estimated value of the n currently running service resources of the type, and the term n/m reflects the current degree of shortage of this type of service resource.
It can be seen that the larger the average expected estimated value of the currently running service resources, and the scarcer the service resources, the larger the current estimated value.
According to the above equation (1), the current estimated value of the service resource can be calculated quickly and appropriately.
In one exemplary embodiment, the current estimated value may also be quantitatively calculated by the following equation (2):

f = 0.1·k + 0.9·(n/m)·(Σ_{i=1}^{n} x_i)/n    (2)

where f represents the current estimated value, k represents the predetermined highest estimated value of the type of service resource, n represents the number of currently running service resources of the type, m represents the total number of service resources of the type, and x_i represents the expected estimated value corresponding to the i-th currently running service resource of the type, i being a positive integer less than or equal to n.
As shown in equation (2), preferably α = 0.1 and β = 0.9 in equation (1). It can be seen that when service resources are very abundant, the current estimated value f can be as low as about one tenth of k, so the user's cost of using service resources at such times is very low. As more users use the service resources and the users' expected estimated values rise, the current estimated value f increases, so the scheduling of service resources is adjusted in real time and the resources used by users with low expected estimated values are released.
As one example of a user-specified expected estimated value, the user may set the expected estimated value to track the current estimated value calculated by the platform in real time; this is the lowest expected estimated value at which the user can still obtain resources, so in this case the user can use the resources at the lowest cost. As another example, the user may specify a random value between the current estimated value and the predetermined highest estimated value as the expected estimated value. As yet another example, the user may specify the predetermined highest estimated value as the expected estimated value; since the highest possible expected estimated value is specified, the user can be sure of reliably obtaining the resources. As yet another example, the user may specify the highest upper limit of the expected estimated value that he is willing to accept, which may be any value between the current estimated value and the predetermined highest estimated value.
According to the above equation (2), the current estimated value of the service resource can be calculated quickly and appropriately, and the calculated value is guaranteed to fall within the reasonable range [0.1·k, k]. In addition, as multiple users make scheduling requests and specify expected estimated values, the current estimated value calculated according to equation (2) fluctuates. However, the current estimated value calculated according to equation (2) does not fluctuate drastically regardless of how users specify their expected estimated values, that is, whether a user specifies the lowest expected estimated value, a random expected estimated value, the highest expected estimated value or some other expected estimated value. Users therefore need not worry that service resources they have already obtained will be frequently released by drastic changes in the current estimated value, which improves the availability of the service resources.
The second term on the right-hand side of equation (2) may further be multiplied by an interference factor δ to reflect its influence on the current estimated value, giving equation (3) below:

f = 0.1·k + 0.9·δ·(n/m)·(Σ_{i=1}^{n} x_i)/n    (3)

The interference factor represents factors other than those above that determine the current estimated value. δ may take values in the range from 0 to 1, and its default value may be set to 1. For example, if the cloud computing platform wishes to allocate more resources, the interference factor δ may be set to a smaller value; conversely, if the cloud computing platform wishes to keep more resources idle, δ may be set to a larger value.
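To illustrate how the current estimated value reconstructed in equations (1) to (3) behaves, the following small sketch evaluates it directly. The function name and the sample figures other than the 100-yuan-per-hour highest estimate (taken from the earlier example) are assumptions.

```python
# Illustrative computation of the current estimated value per equations (1)-(3).
# Variable names mirror the text: k, n, m, x_i, alpha, beta, delta.
from typing import Sequence

def current_estimate(k: float, expected_of_running: Sequence[float], m: int,
                     alpha: float = 0.1, beta: float = 0.9, delta: float = 1.0) -> float:
    """k: predetermined highest estimated value of the type;
    expected_of_running: expected estimated values x_1..x_n of the n currently running resources;
    m: total number of resources of the type; delta: interference factor (default 1)."""
    n = len(expected_of_running)
    if n == 0:
        return alpha * k                    # abundant resources: about one tenth of k
    average = sum(expected_of_running) / n  # average expected estimate of running resources
    shortage = n / m                        # degree of shortage of this type
    return alpha * k + beta * delta * shortage * average

# Example with k = 100 yuan/hour, 250 of 1000 resources running, each with an
# assumed expected estimate of 40 yuan/hour:
print(current_estimate(100.0, [40.0] * 250, 1000))  # 10 + 0.9 * 0.25 * 40 = 19.0 yuan/hour
```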
In one exemplary embodiment, the type of a service resource includes at least one of: the physical area corresponding to the service resource; the machine model corresponding to the service resource; and the predetermined usage period of the service resource. By classifying service resources into different types, operations such as calculating the current estimated value can be tailored to the inherent characteristics of each type of service resource. That is, the equation used to calculate the current estimated value, the manner in which resources are allocated, and so on may differ for each type of service resource, so each type of service resource can be allocated flexibly and efficiently. A possible representation of such a type is sketched below.
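A service resource type combining these attributes might be represented as in the following sketch; the field names and example values are hypothetical and not taken from the patent.

```python
# Hypothetical representation of a service resource type, combining the three
# attributes listed above. Field names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceResourceType:
    physical_area: str   # e.g. "region-a"
    machine_model: str   # e.g. "8-vCPU-32GB cloud virtual machine"
    usage_period: str    # predetermined usage period, e.g. "hourly"

# Resources of the same type share one pool and one current estimated value,
# so the frozen (hashable) dataclass can serve directly as a dictionary key.
rtype = ServiceResourceType("region-a", "8-vCPU-32GB cloud virtual machine", "hourly")
```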
In one exemplary embodiment, to provide early warning to the user, the user is notified before releasing the service resources used by the user, so that the user can save or transfer important data in time, or find alternative service resources. The notification may be in the form of generating and transmitting a notification message, but is not limited thereto. The service resources currently operated by the user are released after a predetermined period of time from the generation of the notification message. In this way, abrupt deactivation and data loss of service resources used by the user are avoided, and the availability of the service resources is improved.
In one exemplary embodiment, the number of currently idle service resources of the type is re-determined while the user is running the service resource, and if the re-determined number of idle service resources of the type is less than the predetermined number, the service resource currently run by the user is released. In this way, the number of currently idle service resources on the cloud computing platform is kept at no less than the predetermined number, ensuring a minimum pool of idle service resources and improving the availability of service resources.
In one exemplary embodiment, after the service resources are allocated for use by the user, the cloud computing platform records the expected estimate of the user for use in later recalculating the current estimate of the service resources.
In one exemplary embodiment, the cost of a user using a service resource is determined based on the length of a time period the user uses the service resource and a current estimate determined by the cloud computing platform in real-time during the user's use of the service resource, rather than an expected estimate specified by the user. Thus, the user can use the resource at a lower cost, thereby improving the utilization rate of the service resource and reducing the use cost of the user.
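To make this billing rule concrete, the sketch below accumulates the cost from the current estimated values observed at each interval rather than from the user's expected estimated value; the interval length and the sample series are assumed values for illustration.

```python
# Illustrative billing sketch: cost is accumulated from the current estimated
# values observed over the usage period, not from the user's expected estimate.
from typing import Sequence

def usage_cost(interval_hours: float, observed_estimates: Sequence[float]) -> float:
    """observed_estimates: current estimated value re-determined at each interval."""
    return sum(e * interval_hours for e in observed_estimates)

# Example: three 5-minute intervals (1/12 hour each) during which the current
# estimated value was 20, 20 and 25 yuan/hour -> (20 + 20 + 25) / 12 ≈ 5.42 yuan.
print(round(usage_cost(1 / 12, [20.0, 20.0, 25.0]), 2))
```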
Those skilled in the art will appreciate that the processes described in this disclosure as being performed by a cloud computing platform may also be performed by other computers, servers, etc. outside of the cloud computing platform.
Fig. 2 is a diagram illustrating one example of a process of using service resources on a cloud computing platform according to an embodiment of the present disclosure. One example 200 of a process of using a service resource on a cloud computing platform according to an embodiment of the present disclosure is described in detail below with reference to fig. 2.
The usage process example 200 begins at 201. At 202, a user requests scheduling of a service resource. At 203, the cloud computing platform determines whether both condition 1 and condition 2 are satisfied. Condition 1 means that the expected estimated value specified by the user in the scheduling request is greater than or equal to the current estimated value of the service resource. Condition 2 means that the number of currently idle service resources of the type on the cloud computing platform is greater than or equal to the predetermined number. If at least one of conditions 1 and 2 is not satisfied, the process stays at 203, i.e., it keeps checking whether conditions 1 and 2 are satisfied. If both condition 1 and condition 2 are satisfied, the scheduling request succeeds and the platform allocates the service resource to the user at 204. During the guard period, the user uses the service resource stably. An example with a guard period of 1 hour is shown here, but the length of the guard period is not limited thereto and may be any suitable length. If the user finishes at some point during the guard period, the service resource may be actively released at 206. If the user actively releases the service resource, the process ends at 212. If the user has not released the service resource by the end of the guard period, the process proceeds to 207. At 207, the process waits 5 minutes. It is then checked at 208 whether conditions 1 and 2 are still met; the current estimated value in condition 1 and the number of currently idle service resources in condition 2 change constantly over time. In this example the conditions are checked every 5 minutes, but the cycle time is not limited to 5 minutes and may be any suitable cycle time. If both condition 1 and condition 2 continue to be met, the process proceeds to 209 and the user continues to use the service resource. If at least one of conditions 1 and 2 is not met, the process proceeds to 211, the platform releases the service resource the user is using, and the process ends at 212.
During continued use of the resources, if the user actively releases the service resources at 210, the process ends at 212. If the user does not actively release the service resource, the process returns to 207 and after 5 minutes it again checks if conditions 1 and 2 are met.
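To make the timing of this flow concrete, the following sketch walks the same decisions over simulated 5-minute ticks. The function name run_usage_flow and the sample value series are assumptions used only to illustrate the guard-period-then-recheck behaviour.

```python
# Illustrative walk-through of the FIG. 2 flow with simulated time.
def run_usage_flow(expected, current_estimates, idle_counts, threshold,
                   guard_period_ticks=12):
    """current_estimates / idle_counts: values observed at each 5-minute tick."""
    for tick, (current, idle) in enumerate(zip(current_estimates, idle_counts)):
        cond1 = expected >= current            # condition 1
        cond2 = idle >= threshold              # condition 2
        if tick < guard_period_ticks:
            continue                           # guard period (204-205): never released here
        if not (cond1 and cond2):              # 208 -> 211: the platform releases the resource
            return f"released at tick {tick}"
    return "still running"

# Guard period of 1 hour = 12 five-minute ticks; the resource is released at the
# first post-guard tick at which either condition fails.
print(run_usage_flow(0.3, [0.25] * 12 + [0.28, 0.35], [500] * 14, 250))  # released at tick 13
```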
In one exemplary embodiment, the user may cancel the scheduling request or modify the scheduling request at any time during the period from the time the scheduling request is made by the user to the time the service resource is allocated by the platform, so that the user can flexibly and conveniently request the scheduling of the service resource.
When a user makes a scheduling request, first, the type of service resource to be requested is selected, which is the same as in the case where service resources are requested to be used on an hourly or per second basis. The user then specifies the expected estimate for the selected type. The user may choose to specify the desired estimate in a number of ways. For example, one way is for the user to specify a highest expected estimate that he is willing to accept, which may be between the current estimate of the type of service resource and a predetermined highest estimate, e.g. may be specified as 80% of the predetermined highest estimate. For example, another way is for the user to specify an automatic dynamic expectation estimate. For example, the user may specify that the expected estimate is consistent with the current estimate in real time, such that when the current estimate increases or decreases, the expected estimate increases or decreases accordingly, thereby enabling stable use of service resources and enjoying lower cost of service resources.
Fig. 3 is a diagram illustrating one example of expected estimates and the relationship between current estimates and service resource operation according to an embodiment of the present disclosure. An example of expected estimates and the relationship between current estimates and service resource operation according to an embodiment of the present disclosure is described below with reference to fig. 3.
As shown at 301 in FIG. 3, the predetermined highest estimated value of the service resource is 1. At 1:05, the user makes a scheduling request with an expected estimated value of 0.3, as shown at 302. At this time, as shown at 303, the current estimated value of the service resource is 0.4; the user's expected estimated value is smaller than the current estimated value, so the service resource is not allocated to the user. Starting at 1:10, the current estimated value becomes 0.3, no longer above the expected estimated value, so the service resource is allocated to the user. During the period from 1:10 to 1:45, the expected estimated value remains greater than or equal to the current estimated value, as shown at 304-311, so the service resource remains in use by the user. At 1:50, the current estimated value becomes greater than the expected estimated value, as shown at 312, so the service resource used by the user is released. In this example, the current estimated value is re-determined every 5 minutes.
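The same narrative can be replayed with a few lines of Python. The current-estimate values between 1:10 and 1:45 are assumed here (the description only says they stay at or below 0.3), and the timeline itself is illustrative.

```python
# Worked example following the FIG. 3 narrative; intermediate values assumed.
expected = 0.3
timeline = {              # time -> current estimated value (re-determined every 5 minutes)
    "1:05": 0.4, "1:10": 0.3, "1:15": 0.28, "1:45": 0.3, "1:50": 0.35,
}
allocated = False
for time, current in timeline.items():
    if not allocated:
        allocated = expected >= current        # 1:05 -> waiting, 1:10 -> allocated
        print(time, "allocated" if allocated else "waiting")
    elif expected < current:                   # 1:50 -> released (as shown at 312)
        print(time, "released")
        allocated = False
    else:
        print(time, "running")                 # 1:15 ... 1:45 (as shown at 304-311)
```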
Fig. 4 is a diagram illustrating one example of a service resource scheduling process according to an embodiment of the present disclosure. An example of a service resource scheduling procedure according to an embodiment of the present disclosure is described below with reference to fig. 4.
As shown at 404 in fig. 4, a user 401 makes a scheduling request for a service resource to a cloud computing platform. After receiving the scheduling request, the condition judgment service 402 makes a condition judgment, i.e., judges whether or not the condition 1 and the condition 2 are satisfied simultaneously, as shown in 405. Condition 1 means that the expected estimate specified by the user 401 in the scheduling request is greater than or equal to the current estimate of the service resource. Condition 2 means that the number of service resources of the type currently idle on the cloud computing platform is greater than or equal to a predetermined number. In the case where both the condition 1 and the condition 2 are satisfied, the condition judgment service 402 determines that it is possible to allocate the service resource to the user 401, updates the current estimated value with the expected estimated value of the user 401 as shown by 406, and notifies the resource management service 403 of the allocation of the resource as shown by 407. After allocating the service resources to the user 401, the resource management service 403 transmits an allocation completion response to the condition judgment service 402 as shown at 408. After receiving the allocation completion response, the condition judgment service 402 notifies the user 401 of the success of allocation as shown in 409. After receiving the notification of successful allocation, the user 401 starts to use the allocated service resources.
Fig. 5 is a diagram illustrating one example of a service resource release procedure according to an embodiment of the present disclosure. An example of a service resource release procedure according to an embodiment of the present disclosure is described below with reference to fig. 5.
As shown at 504 in FIG. 5, the condition judgment service 502 sends the resource management service 503 a request to acquire the resources that are outside the guard period. A resource outside the guard period is a service resource whose running time has exceeded the guard period. In response to the request, the resource management service 503 sends the condition judgment service 502 a resource information response containing information about all resources currently outside the guard period, as shown at 505. As shown at 506, the condition judgment service 502 judges, based on this information, whether each resource outside the guard period satisfies condition 1 and condition 2 described above. If a resource outside the guard period fails to satisfy at least one of condition 1 and condition 2, the condition judgment service 502 notifies the corresponding user 501 that the resource is about to be released, as shown at 507. Then, as shown at 508, the condition judgment service 502 notifies the resource management service 503 to release the resource. As shown at 509, the resource management service 503 sends a release completion response to the condition judgment service 502 after releasing the resource. Finally, as shown at 510, the condition judgment service 502 notifies the corresponding user 501 that the resource has been released.
FIG. 6 is a schematic diagram of a service resource scheduling apparatus according to an embodiment of the present disclosure. A service resource scheduling apparatus 600 according to an embodiment of the present disclosure is described below with reference to FIG. 6. The service resource scheduling apparatus 600 includes a determining module 610, an allocation module 620 and a first release module 630.
The determination module 610 is configured to determine a current estimate of the service resource in response to receiving a scheduling request for the service resource from the user, wherein the scheduling request includes a type of the service resource and an expected estimate of the service resource by the user.
The allocation module 620 is configured to allocate the type of service resource to the user in case the expected estimate is greater than or equal to the current estimate and the number of currently free type of service resources is greater than or equal to the predetermined number.
The first release module 630 is configured to periodically re-determine the current estimate of the type of service resource at preset time intervals and release the service resource allocated to the user when the expected estimate is less than the re-determined current estimate.
According to the apparatus 600 for dynamic scheduling of service resources on a cloud computing platform, the utilization rate of service resources can be improved and users' usage costs can be reduced.
Fig. 7 is a diagram of one example of an interaction process to create a service resource instance according to an embodiment of the present disclosure. An example 700 of an interaction process to create a service resource instance according to an embodiment of the present disclosure is described below with reference to fig. 7.
In this example of the interaction process for creating a service resource instance, first, in step S710, the user 701 selects the service resource that he wishes to run (for example, a cloud virtual machine of a certain specification) and makes a scheduling request to the console layer 702. The scheduling request includes the type of the service resource and the user's expected estimated value for the service resource (for example, 20 yuan per hour).
In step S711, the console layer 702, upon receiving a scheduling request of the user 701, requests creation of a corresponding request item from the item manager 703 based on the scheduling request.
In step S712, upon receiving the request from the console layer 702, the item manager 703 creates a corresponding request item, which is in a pending state immediately after creation, and replies to the console layer 702 with a message indicating that the request item has been created.
In step S713, the item manager 703 requests the matcher 704 to perform item matching. This step may be performed once per preset time interval, which may be set to 10 seconds, for example, but is not limited thereto. The length of the preset time interval can be set as needed, so that pending items (i.e., user scheduling requests that have not yet been processed) are handled promptly without exceeding the processing capacity of the system.
In step S714, for each item in the pending state, the matcher 704 determines whether the user's expected estimated value corresponding to the item is greater than or equal to the current estimated value and whether the number of currently idle instances is greater than or equal to the predetermined number.
For the items satisfying the above conditions, in step S715 the matcher 704 notifies the item manager 703 to lock the corresponding items. Since multiple matchers 704 may run concurrently in an actual implementation to increase processing speed, locking is requested through the item manager so that several matchers 704 cannot process the same item at the same time.
In step S716, upon receiving the notification from the matcher 704, the item manager 703 locks the corresponding item and sends the matcher 704 a message indicating that locking succeeded.
In step S717, upon receiving the message indicating that locking succeeded from the item manager 703, the matcher 704 determines that the state of the corresponding item can be updated from the pending state to the to-be-created state.
In step S718, the matcher 704 sends the item manager 703 a message indicating that the state of the corresponding item is to be updated from pending to to-be-created.
In step S719, the item manager 703 updates the state of the corresponding item from pending to to-be-created and sends an item execution command for the corresponding item to the item executor 705.
In step S720, upon receiving the item execution command for the corresponding item from the item manager 703, the item executor 705 requests the back-end instance manager 706 to create a service resource instance for the corresponding item.
In step S721, the back-end instance manager 706 allocates a service resource for the corresponding item from the currently idle service resources, creates a service resource instance for the corresponding item, and sends the name of the created service resource instance to the item executor 705.
In step S722, the item executor 705 sends the name of the created service resource instance to the item manager 703, so that the item manager 703 can bind the corresponding item to the name of the created service resource instance.
In step S723, the back-end instance manager 706 sends the user 701 a message indicating that the service resource instance has been created successfully.
In step S724, the back-end instance manager 706 sends the updated inventory count (i.e., the number of currently idle service resources) to the matcher 704; alternatively, the matcher 704 may actively query the back-end instance manager 706 for the updated inventory count. In this way, when the number of currently idle service resources decreases because service resources have been allocated, the matcher 704 learns the updated number and can make correct judgments later.
In step S725, the item manager 703 sends the created-resource record to the current estimate determiner 707, so that the current estimate determiner 707 can re-determine the current estimated value based on the user's expected estimated value for the corresponding item.
In step S726, the item manager 703 sends a notification message to the resource manager 708, notifying the resource manager 708 to update the state of the service resource instance allocated to the corresponding item to the running state.
Finally, in step S727, the item manager 703 updates the state of the corresponding item to completed.
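Steps S713 to S719 of this handshake can be sketched in simplified form as follows. The class and method names, the in-memory threading.Lock, and the state strings are assumptions for illustration; in the system described above the item manager, matcher and executors are separate services communicating via messages.

```python
# Simplified sketch of the FIG. 7 matching loop (steps S713-S719).
import threading

class ItemManager:
    def __init__(self):
        self.items = {}   # item_id -> {"state", "type", "expected", "lock"}

    def create_item(self, item_id, rtype, expected):              # S712
        self.items[item_id] = {"state": "pending", "type": rtype,
                               "expected": expected, "lock": threading.Lock()}

    def try_lock(self, item_id):                                  # S715/S716
        return self.items[item_id]["lock"].acquire(blocking=False)

    def update_state(self, item_id, state):                       # S719, S727
        self.items[item_id]["state"] = state

class Matcher:
    def __init__(self, item_manager, current_estimate_fn, idle_count_fn, threshold):
        self.im = item_manager
        self.current_estimate_fn = current_estimate_fn
        self.idle_count_fn = idle_count_fn
        self.threshold = threshold

    def match_once(self):                                         # run every preset interval (S713)
        for item_id, item in self.im.items.items():
            if item["state"] != "pending":
                continue
            ok = (item["expected"] >= self.current_estimate_fn(item["type"])   # S714
                  and self.idle_count_fn(item["type"]) >= self.threshold)
            if ok and self.im.try_lock(item_id):                  # S715/S716: avoid double handling
                self.im.update_state(item_id, "to be created")    # S717-S719
                # The item executor then asks the back-end instance manager to
                # create the instance (S720-S722) and the item is marked
                # completed once the instance is bound (S727).
```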
Fig. 8 is a diagram of one example of an interaction process to release service resource instances according to an embodiment of the present disclosure. An example 800 of an interaction process to release service resource instances according to an embodiment of the present disclosure is described below with reference to fig. 8.
In this example of the interaction process for releasing service resource instances, first, in step S810, the matcher 804 obtains from the resource manager 802 the service resource instances whose running time since start-up has reached the preset duration. The preset duration is, for example, one hour. As described above, a service resource instance that has been running longer than the preset duration has passed the guard period and may be preempted by other users. Step S810 may be performed once per preset time interval, which may be set to 10 seconds, for example, but is not limited thereto. The length of the preset time interval can be set as needed, so that running service resource instances are processed promptly without exceeding the processing capacity of the system.
In step S811, the matcher 804 redetermines the current estimated value of the service resource of the type for each acquired service resource instance, and determines that the service resource instance is to be preempted by other users when the expected estimated value of the corresponding item is smaller than the redetermined current estimated value.
In step S812, the matcher 804 sends a notification message to the user 801 that the service resource is about to be preempted, so that the user saves or transfers important data in time, or finds an alternative service resource.
In step S813, the matcher 804 sends a stop-instance notification to the instance initiator 805.
In step S814, the instance initiator 805 stops the corresponding instance and notifies the resource manager 802 to update the resource status of the corresponding instance from the running state to the suspended state.
In step S815, the instance initiator 805 notifies the back-end instance manager 803 of the update of the stock number.
In step S816, the back-end instance manager 803 notifies the matcher 804 of the updated inventory count, so that the matcher 804 learns the updated number and can make correct judgments later.
In step S817, the back-end instance manager 803 sends a message indicating that the resources have been released to the instance initiator 805.
Finally, in step S818, the instance initiator 805 sends a resource release notification to the user 801.
In addition, the current estimate determiner 806 may perform step S819 once per preset time interval, in which it cleans up created-resource records it no longer needs, e.g., marks them as deleted, thereby ensuring that storage space is not exhausted by the accumulation of too many unneeded records.
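The release flow above can likewise be condensed into a small sketch. The class RunningInstance, the helper release_check, and the sample figures are assumptions; a real implementation would notify users and the instance initiator via messages rather than print statements.

```python
# Simplified sketch of the FIG. 8 release flow (steps S810-S817).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RunningInstance:
    user: str
    resource_type: str
    expected_estimate: float
    running_minutes: int

def release_check(instances: List[RunningInstance],
                  current_estimate_fn: Callable[[str], float],
                  guard_period_minutes: int = 60) -> List[RunningInstance]:
    """Return the instances to release: past the guard period and with an
    expected estimated value below the re-determined current estimated value."""
    to_release = []
    for inst in instances:
        if inst.running_minutes < guard_period_minutes:       # still in the guard period (S810)
            continue
        current = current_estimate_fn(inst.resource_type)     # S811
        if inst.expected_estimate < current:
            print(f"notify {inst.user}: resource about to be preempted")   # S812
            to_release.append(inst)                           # S813-S817: stop instance, update stock
    return to_release

# Example with an assumed re-determined current estimated value of 0.35:
instances = [RunningInstance("user-a", "region-a/hourly", 0.3, 90),
             RunningInstance("user-b", "region-a/hourly", 0.5, 90),
             RunningInstance("user-c", "region-a/hourly", 0.3, 20)]
print([i.user for i in release_check(instances, lambda t: 0.35)])  # ['user-a']
```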
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product, which improve the utilization of service resources and reduce the use cost of users by allocating and releasing the service resources according to supply and demand conditions.
Fig. 9 shows a schematic block diagram of an example electronic device 900 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the apparatus 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Various components in device 900 are connected to I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, or the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908 such as a magnetic disk, an optical disk, or the like; and a communication unit 909 such as a network card, modem, wireless communication transceiver, or the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the various methods and processes described above, such as the service resource scheduling method. For example, in some embodiments, the method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more steps of the service resource scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the above-described method by any other suitable means (e.g., by means of firmware). The device 900 may be, for example, a device of the cloud computing platform, or any device located inside or outside the cloud computing platform; the device 900 is not limited to these examples as long as the above method can be implemented.
Various implementations of the systems and techniques described here may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), a middleware component (e.g., an application server), a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (19)
1. A service resource scheduling method, comprising:
determining a current estimated value of a service resource in response to receiving a scheduling request for the service resource from a user, wherein the scheduling request includes a type of the service resource and an expected estimated value of the service resource by the user;
allocating the service resource of the type to the user in a case where the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and
periodically re-determining the current estimated value of the service resource of the type according to a preset time interval, and releasing the service resource allocated to the user when the expected estimated value is smaller than the re-determined current estimated value;
wherein said determining the current estimated value of said service resource comprises calculating said current estimated value by a formula (presented as an image in the original claim), wherein f represents the current estimated value, α and β represent weight coefficients, k represents a predetermined highest estimated value of the service resources of the type, n represents the number of service resources of the type currently running, m represents the total number of service resources of the type, x_i represents the expected estimated value corresponding to the i-th currently running service resource of the type, and i is a positive integer less than or equal to n.
2. The method of claim 1, wherein periodically re-determining the current estimate of the type of service resource at a preset time interval comprises:
after the service resource of the type is allocated to the user, judging whether a duration for which the service resource has belonged to the user reaches a preset duration; and
in a case where the duration for which the service resource has belonged to the user reaches the preset duration, periodically re-determining the current estimated value of the service resource of the type according to the preset time interval.
3. The method of claim 2, wherein the method further comprises:
allowing the service resource to continue to belong to the user when the expected estimated value is greater than or equal to the re-determined current estimated value.
4. The method of claim 1, wherein the determining the current estimated value of the service resource comprises:
determining the current estimated value of the service resource based on a predetermined highest estimated value of the service resource of the type, the number of service resources of the type currently running, the total number of service resources of the type, and the expected estimated values of the users to whom the currently running service resources of the type are allocated.
5. The method of claim 1, further comprising:
allowing the service resource to continue to belong to the user when the expected estimated value is greater than or equal to the re-determined current estimated value.
6. The method of claim 1, wherein α=0.1 and β=0.9.
7. The method of any of claims 1 to 6, wherein the type of service resource comprises at least one of:
a physical area corresponding to the service resource; a machine model corresponding to the service resource; a predetermined period of use of the service resource.
8. The method of any one of claims 1 to 6, further comprising:
generating a notification message for notifying the user that the service resource is about to be released; and
releasing the service resource currently run by the user after a preset time period has elapsed since the generation of the notification message.
9. The method of any one of claims 1 to 6, further comprising:
re-determining the number of currently idle service resources of the type while the user runs the service resource; and
releasing the service resource currently run by the user in a case where the re-determined number of idle service resources of the type is smaller than the predetermined number.
10. A service resource scheduling apparatus comprising:
a determining module, configured to determine a current estimated value of a service resource in response to receiving a scheduling request for the service resource from a user, wherein the scheduling request includes a type of the service resource and an expected estimated value of the service resource by the user;
an allocation module, configured to allocate the service resource of the type to the user in a case where the expected estimated value is greater than or equal to the current estimated value and the number of currently idle service resources of the type is greater than or equal to a predetermined number; and
a first release module, configured to periodically re-determine the current estimated value of the service resource of the type according to a preset time interval, and to release the service resource allocated to the user when the expected estimated value is smaller than the re-determined current estimated value;
wherein the determining module comprises: a first calculation unit, configured to calculate the current estimated value by a formula (presented as an image in the original claim), wherein f represents the current estimated value, α and β represent weight coefficients, k represents a predetermined highest estimated value of the service resources of the type, n represents the number of service resources of the type currently running, m represents the total number of service resources of the type, x_i represents the expected estimated value corresponding to the i-th currently running service resource of the type, and i is a positive integer less than or equal to n.
11. The apparatus of claim 10, wherein the first release module comprises:
the judging unit is used for judging whether the duration of the service resource belonging to the user reaches a preset duration after the service resource of the type is distributed to the user; and
and the redetermining unit is used for periodically redetermining the current estimated value of the type of service resource according to a preset time interval under the condition that the duration of the service resource belonging to the user reaches the preset duration.
12. The apparatus of claim 11, wherein the apparatus further comprises:
a maintenance module, configured to cause the service resource to continue to belong to the user when the expected estimated value is greater than or equal to the re-determined current estimated value.
13. The apparatus of claim 10, wherein the determining module comprises: a determining unit, configured to determine the current estimated value of the service resource based on a predetermined highest estimated value of the service resource of the type, the number of service resources of the type currently running, the total number of service resources of the type, and the expected estimated values of the users to whom the currently running service resources of the type are allocated.
14. The apparatus of claim 10, wherein the apparatus further comprises:
a maintenance module, configured to cause the service resource to continue to belong to the user when the expected estimated value is greater than or equal to the re-determined current estimated value.
15. The apparatus of claim 10, wherein α=0.1 and β=0.9.
16. The apparatus of any of claims 10 to 15, further comprising:
and the notification module is used for generating a notification message, wherein the notification message is used for notifying the user that the service resource is about to be released, and releasing the service resource currently operated by the user after a preset time period from the generation of the notification message.
17. The apparatus of any of claims 10 to 15, further comprising:
a second release module, configured to re-determine the number of currently idle service resources of the type while the user runs the service resource, and to release the service resource currently run by the user in a case where the re-determined number of idle service resources of the type is smaller than the predetermined number.
18. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
19. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
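As a companion to the sketch in the description above, and again purely as an illustration rather than the patentee's implementation, the following Python fragment shows how the periodic re-determination, notification, and release steps recited in claims 1-3, 8, and 9 might fit together. It reuses the hypothetical ResourcePool and Allocation types from that sketch; notify_user, rebalance_pool, enforce_idle_floor, and grace_period_s are invented names, and the claims do not specify which running resource is released first when the idle count falls below the predetermined number.

```python
# Illustrative sketch only; continues the hypothetical ResourcePool / Allocation
# example from the description. The release-ordering policy in
# enforce_idle_floor() is an assumption, not claim language.
import time


def notify_user(user_id: str, message: str) -> None:
    # Hypothetical stand-in for the notification message of claims 8 and 16.
    print(f"[notify {user_id}] {message}")


def rebalance_pool(pool: "ResourcePool", grace_period_s: float = 0.0) -> None:
    """Re-determine the current estimated value and release every allocation
    whose expected estimated value is now below it (claims 1-3), notifying the
    user and waiting a preset period before the release (claim 8)."""
    new_estimate = pool.current_estimate()
    for allocation in [a for a in pool.running if a.expected_estimate < new_estimate]:
        notify_user(allocation.user_id, "service resource is about to be released")
        if grace_period_s:
            time.sleep(grace_period_s)  # preset time period after the notification
        pool.running.remove(allocation)


def enforce_idle_floor(pool: "ResourcePool") -> None:
    """Release running resources while the re-determined number of idle
    resources of the type is below the predetermined number (claim 9); here the
    allocation with the lowest expected estimated value is released first."""
    while pool.idle < pool.min_idle and pool.running:
        victim = min(pool.running, key=lambda a: a.expected_estimate)
        notify_user(victim.user_id, "service resource is about to be released")
        pool.running.remove(victim)
```

In practice rebalance_pool would be driven by a timer at the preset time interval of claim 1, after the preset duration of claim 2 has elapsed; a user whose expected estimated value still meets or exceeds the re-determined value simply keeps the resource, as in claims 3 and 5.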
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110878336.8A CN113590326B (en) | 2021-07-30 | 2021-07-30 | Service resource scheduling method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110878336.8A CN113590326B (en) | 2021-07-30 | 2021-07-30 | Service resource scheduling method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113590326A CN113590326A (en) | 2021-11-02 |
CN113590326B true CN113590326B (en) | 2024-02-02 |
Family
ID=78253521
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110878336.8A Active CN113590326B (en) | 2021-07-30 | 2021-07-30 | Service resource scheduling method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113590326B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016195709A1 (en) * | 2015-06-05 | 2016-12-08 | Hewlett Packard Enterprise Development Lp | Pricing of cloud resources |
CN106789118A (en) * | 2016-11-28 | 2017-05-31 | 上海交通大学 | Cloud computing charging method based on service-level agreement |
US10057185B1 (en) * | 2015-09-21 | 2018-08-21 | Amazon Technologies, Inc. | User-initiated activation of multiple interruptible resource instances |
CN109426550A (en) * | 2017-08-23 | 2019-03-05 | 阿里巴巴集团控股有限公司 | The dispatching method and equipment of resource |
CN110503534A (en) * | 2019-08-27 | 2019-11-26 | 山东大学 | Cloud computing service resource dynamic dispatching method and system based on price expectation |
CN111078390A (en) * | 2018-10-19 | 2020-04-28 | 阿里巴巴集团控股有限公司 | Server resource allocation method and device |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109298936B (en) * | 2018-09-11 | 2021-05-18 | 华为技术有限公司 | Resource scheduling method and device |
- 2021-07-30: Application CN202110878336.8A filed in China (CN); granted as patent CN113590326B (status: Active).
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016195709A1 (en) * | 2015-06-05 | 2016-12-08 | Hewlett Packard Enterprise Development Lp | Pricing of cloud resources |
US10057185B1 (en) * | 2015-09-21 | 2018-08-21 | Amazon Technologies, Inc. | User-initiated activation of multiple interruptible resource instances |
CN106789118A (en) * | 2016-11-28 | 2017-05-31 | 上海交通大学 | Cloud computing charging method based on service-level agreement |
CN109426550A (en) * | 2017-08-23 | 2019-03-05 | 阿里巴巴集团控股有限公司 | The dispatching method and equipment of resource |
CN111078390A (en) * | 2018-10-19 | 2020-04-28 | 阿里巴巴集团控股有限公司 | Server resource allocation method and device |
CN110503534A (en) * | 2019-08-27 | 2019-11-26 | 山东大学 | Cloud computing service resource dynamic dispatching method and system based on price expectation |
Non-Patent Citations (2)
Title |
---|
Analysis of pricing strategies for Amazon's spot cloud services; Li Xuefei; Li Zheng; Zhang He; Rong Guoping; Journal of Chinese Computer Systems (小型微型计算机系统), No. 06; full text *
Decision method for overbooking in a multi-instance cloud computing resource market; Chen Donglin; Yao Mengdi; Deng Guohua; Journal of Computer Applications (计算机应用), No. 01; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113590326A (en) | 2021-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112559182B (en) | Resource allocation method, device, equipment and storage medium | |
CN115794337B (en) | Resource scheduling method, device, cloud platform, equipment and storage medium | |
CN112860974A (en) | Computing resource scheduling method and device, electronic equipment and storage medium | |
CN111857992B (en) | Method and device for allocating linear resources in Radosgw module | |
CN114840323A (en) | Task processing method, device, system, electronic equipment and storage medium | |
EP4177745A1 (en) | Resource scheduling method, electronic device, and storage medium | |
CN112486642B (en) | Resource scheduling method, device, electronic equipment and computer readable storage medium | |
CN111190719B (en) | Method, device, medium and electronic equipment for optimizing cluster resource allocation | |
CN111796933A (en) | Resource scheduling method, device, storage medium and electronic equipment | |
CN114911598A (en) | Task scheduling method, device, equipment and storage medium | |
CN115629865A (en) | Deep learning inference task scheduling method based on edge calculation | |
CN112887407B (en) | Job flow control method and device for distributed cluster | |
CN113986497A (en) | Queue scheduling method, device and system based on multi-tenant technology | |
CN113590326B (en) | Service resource scheduling method and device | |
CN116048791B (en) | Regulation and control method and device of test node, electronic equipment and storage medium | |
JP2023126142A (en) | Resource control method and apparatus for function computing, device, and medium | |
CN115269145A (en) | High-energy-efficiency heterogeneous multi-core scheduling method and device for offshore unmanned equipment | |
CN115801687A (en) | Flow balancing method and device, electronic equipment and storage medium | |
CN110457130B (en) | Distributed resource elastic scheduling model, method, electronic equipment and storage medium | |
CN114610575B (en) | Method, apparatus, device and medium for calculating updated peak value of branch | |
CN115391042B (en) | Resource allocation method and device, electronic equipment and storage medium | |
CN116149798B (en) | Virtual machine control method and device of cloud operating system and storage medium | |
CN113285833B (en) | Method and device for acquiring information | |
CN113347016B (en) | Virtualization network function migration method based on resource occupation and time delay sensitivity | |
CN113946414A (en) | Task processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||