CN116671086A - Method and system for optimizing utilization constraints for large-scale resource allocation - Google Patents
- Publication number
- CN116671086A (application number CN202180083212.9A)
- Authority
- CN
- China
- Prior art keywords
- decision variables
- sub
- target
- values
- equations
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/565—Conversion or adaptation of application format or content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/566—Grouping or aggregating service requests, e.g. for unified processing
Abstract
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining optimal resource allocation in a large-scale system. An example method includes: obtaining a plurality of resource requests for resources hosted by a plurality of host computer devices; and constructing a target and one or more constraints for assigning the plurality of resource requests to the plurality of host computer devices, wherein the one or more constraints include one or more inequalities over a plurality of decision variables. The example method further includes: converting the one or more inequalities into one or more equations over the plurality of decision variables; incorporating the one or more equations into the target to obtain a new target; dividing the new target into a plurality of sub-targets; and generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables.
Description
Technical Field
The present disclosure relates generally to systems and methods for large-scale resource allocation subject to various constraints.
Background
The resource management platform may allow a resource owner to provide resources in response to requests, subject to various constraints. The resources to be provided may be associated with negative indicators (e.g., risk, cost) and positive indicators (e.g., interest, royalties, recurring payments, other types of revenue). This means that when the resource owner satisfies a resource request (by providing the resource), it gains some benefit (e.g., interest, payment) but also bears some risk/cost. Determining the optimal resource allocation requires maximizing the overall positive indicators while keeping the negative indicators within certain limits. This is challenging because the decision variables are enormous in number. For example, a resource management platform of reasonable size may simultaneously support thousands of resource hosts (owners) and serve tens of millions of borrowers. Determining the optimal resource allocation plan for such a platform requires determining the values of billions of decision variables (the number of hosts times the number of borrowers). Worse still, finding the optimal allocation is an NP-hard problem. Thus, it is impractical to use standard optimization techniques to determine which resource request should be allocated to, or serviced by, which host.
Existing resource management platforms determine solutions to the large-scale resource allocation problem with simple divide-and-conquer approaches, which divide the borrowers into several blocks so as to make the objective function separable. However, these methods rest on the assumptions that the borrower blocks are homogeneous and that the constraints can be evenly distributed among the blocks. In practical applications, the homogeneity of the borrower blocks generally does not hold. Thus, forcing the objective function to be separable may leave the final solution far from optimal.
The present disclosure describes a solution based on the alternating direction method of multipliers (ADMM) for finding a near-optimal solution to the large-scale constrained resource allocation problem, to address the challenges described above.
Disclosure of Invention
Various embodiments in this specification may include systems, methods, and non-transitory computer-readable media for determining optimal resource allocation with various constraints in a large-scale system.
According to one aspect, a method is provided for determining an optimal solution for large-scale resource allocation using various constraints. The method may include: obtaining, by a computer device, a plurality of resource requests for resources hosted by a plurality of host computer devices; constructing, by the computer device, a target for assigning a plurality of resource requests to a plurality of host computer devices, wherein the target comprises a plurality of decision variables, each of the decision variables indicating whether to assign a resource request to a host computer device for service, and one or more constraints comprising one or more inequalities of the plurality of decision variables; converting, by the computer device, one or more inequalities in the one or more constraints into one or more equations for the plurality of decision variables; incorporating, by the computer device, the one or more equations into the target to obtain a new target; dividing, by the computer device, the new target into a plurality of sub-targets; generating, by the computer device, a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables; and sending, by the computer device, instructions to the plurality of host computer devices to perform a plurality of resource requests according to the values of the plurality of decision variables.
In some embodiments, each of the plurality of parallel processing tasks includes an iteration of the alternating direction method of multipliers (ADMM) to solve a respective sub-target.
In some embodiments, generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables comprises: generating, by the computer device, an aggregate task to aggregate results of the plurality of parallel processing tasks to obtain values of the plurality of decision variables.
In some embodiments, before sending the instructions to the plurality of host computer devices, the method further comprises: determining, by the computer device, whether the values of the plurality of decision variables converge; and in response to determining that the values of the plurality of decision variables converge, performing the sending of the instructions.
In some embodiments, converting the one or more inequalities in the one or more constraints into one or more equations for the plurality of decision variables comprises: for each of the one or more inequalities, adding an auxiliary variable to the left-hand side of the inequality to turn it into an equation, wherein the left-hand side of each inequality comprises a product of a matrix and the plurality of decision variables.
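As an illustration of this auxiliary-variable idea (a minimal sketch, not the patent's implementation; the matrix A, vector b, and feasible point x below are invented toy values), an inequality constraint of the form A·x ≤ b can be turned into the equality A·x + s = b by adding a non-negative slack variable s:

```python
import numpy as np

def add_slack(A, b, x):
    """Convert A @ x <= b into the equality A @ x + s = b by introducing
    a non-negative slack (auxiliary) variable s on the left-hand side."""
    s = b - A @ x  # the slack absorbs the gap in each inequality
    assert np.all(s >= 0), "x must satisfy the original inequality"
    return s

# Toy example: 2 inequality constraints over 3 decision variables.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.5, 1.0])
x = np.array([0.5, 0.5, 0.25])  # a feasible assignment
s = add_slack(A, b, x)
# A @ x + s == b now holds exactly, turning the inequality into an equation.
```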
In some embodiments, each of the plurality of sub-targets comprises one or more Lagrangian multipliers, an auxiliary variable, and a subset of the plurality of decision variables. Each of the plurality of parallel processing tasks implements an iterative process comprising: updating the values of the subset of the plurality of decision variables based on the one or more Lagrangian multipliers and the auxiliary variable; updating the auxiliary variable based on the updated values of the subset of the plurality of decision variables and the one or more Lagrangian multipliers; and updating the one or more Lagrangian multipliers based on the updated values of the subset of the plurality of decision variables and the updated auxiliary variable.
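The three-step iterative process above can be sketched on a toy quadratic objective (a hedged illustration only: the objective 0.5·||x − c||², the penalty weight rho, and all numeric values are assumptions, not the patent's formulation). Each loop updates the decision variables, then the auxiliary slack variable, then the scaled Lagrange multipliers:

```python
import numpy as np

def admm_qp(A, b, c, rho=1.0, iters=300):
    """Minimise 0.5*||x - c||^2 subject to A @ x <= b via the three-step
    ADMM loop: (1) update decision variables x, (2) update the auxiliary
    slack variable s, (3) update the scaled Lagrange multipliers u."""
    m, n = A.shape
    x, s, u = np.zeros(n), np.zeros(m), np.zeros(m)
    lhs = np.eye(n) + rho * A.T @ A  # constant system matrix for the x-update
    for _ in range(iters):
        # x-update: minimise 0.5*||x-c||^2 + (rho/2)*||A@x + s - b + u||^2
        x = np.linalg.solve(lhs, c + rho * A.T @ (b - s - u))
        # s-update: projection keeps the slack non-negative
        s = np.maximum(0.0, b - A @ x - u)
        # u-update: accumulate the equality-constraint residual
        u = u + A @ x + s - b
    return x

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])  # unconstrained optimum violates x1 + x2 <= 1
x = admm_qp(A, b, c)
# The iterates approach [0.5, 0.5], the projection of c onto x1 + x2 <= 1.
```

On this toy problem, each iteration shrinks the error geometrically, illustrating the convergence behavior ADMM provides on small constrained problems.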
In some embodiments, incorporating the one or more equations into the target to obtain a new target includes: for each of the one or more equations, adding a factor to the target that includes a square difference between a left-hand side and a right-hand side of the equation, wherein the left-hand side of each equation includes a product of the matrix and the plurality of decision variables and the right-hand side of each equation includes a constant.
In some embodiments, generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets includes: under a MapReduce programming framework, a plurality of mapping tasks are generated to solve the plurality of sub-targets in parallel.
In some embodiments, each of the plurality of mapping tasks includes a quadratic programming process.
In some embodiments, generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets includes: multiple threads are generated on one or more Graphics Processing Units (GPUs) to solve the multiple sub-targets in parallel.
In some embodiments, the one or more constraints include one or more risk constraints configured by a plurality of host computer devices.
In some embodiments, each of the plurality of sub-targets includes a subset of the plurality of decision variables, and each of the plurality of mapping tasks determines a subset of the respective sub-targets.
According to other embodiments, a system for determining an optimal solution for large scale resource allocation using various constraints is configured with instructions executable by one or more processors to cause the system to perform the method of any of the preceding embodiments.
According to other embodiments, a non-transitory computer-readable storage medium is configured with instructions executable by one or more processors to cause the one or more processors to perform the method of any of the preceding embodiments.
The embodiments disclosed herein have one or more technical effects. Large-scale resource allocation with constraints can be expressed as an optimization problem with a linear or nonlinear objective function. Existing divide-and-conquer solutions can solve optimization problems with linear objective functions, but not those with nonlinear objective functions. This is because the nonlinearity makes it impossible to separate the optimization problem into smaller problems (e.g., the nonlinear objective function includes cross-borrower terms, in which a decision variable of one borrower multiplies a decision variable of another borrower, and thus cannot be broken down into smaller objective functions). In some embodiments, the method is applicable to large-scale resource allocation problems that may be expressed with either a linear or a nonlinear objective function. For example, the objective function may be transformed by eliminating all cross-borrower terms using a well-designed matrix, thereby making the decision variables remaining in the objective function separable. In some embodiments, after converting the objective function, the objective function is split into a plurality of mapping tasks using a parallel processing framework such as MapReduce; the mapping tasks solve the optimization problem in parallel, and their results are then aggregated via a reduction task to obtain the final values of the decision variables. These decision variables may then be used to determine which borrower is assigned to which resource host for service (e.g., provision of the requested resource). In some embodiments, the conversion of the objective function generates a new objective function that conforms to a format solvable by the alternating direction method of multipliers (ADMM). The new objective function may then be decomposed into a plurality of sub-objective functions, which are processed respectively by the plurality of mapping tasks described above.
ADMM is then implemented within the mapping tasks, each of which solves its sub-objective iteratively through a plurality of loops. The iterative process ends when the decision-variable values of each mapping task converge. With ADMM, the convergence of the decision variables can be guaranteed; that is, by applying ADMM, a near-optimal solution for large-scale resource allocation with constraints can always be generated.
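A convergence test of the kind used to end the iterative loop might look as follows (a sketch under assumptions: the relative-change criterion and tolerance are illustrative choices; production ADMM implementations typically monitor primal and dual residuals instead):

```python
import numpy as np

def converged(x_new, x_old, tol=1e-8):
    """Stop iterating once successive decision-variable values stop moving
    (relative-change test; an illustrative stand-in for residual checks)."""
    return np.linalg.norm(x_new - x_old) <= tol * max(1.0, np.linalg.norm(x_old))

# Demo: the contraction x <- 0.5*x + 0.5 converges to the fixed point 1.0.
x_old = np.array([0.0])
steps = 0
while True:
    x_new = 0.5 * x_old + 0.5
    steps += 1
    if converged(x_new, x_old):
        break
    x_old = x_new
```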
These and other features of the systems, methods, and non-transitory computer readable media disclosed herein, together with the methods of operation and functions of the related structural elements, as well as the combination of parts and economies of manufacture, will become more apparent upon reading the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, and wherein like reference numerals designate corresponding parts in the different drawings. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the invention.
Drawings
FIG. 1 illustrates an example system for large-scale resource allocation using constraints in accordance with various embodiments.
Fig. 2 illustrates an example system diagram for large-scale resource allocation with constraints based on ADMM, in accordance with various embodiments.
FIG. 3 illustrates an example parallel processing workflow for large-scale resource allocation with constraints based on ADMM, in accordance with various embodiments.
Fig. 4 illustrates an example method for large-scale resource allocation with constraints based on ADMM, in accordance with various embodiments.
FIG. 5 illustrates a block diagram of an example computer system apparatus for large-scale resource allocation with constraints based on ADMM, in accordance with various embodiments.
FIG. 6 illustrates an example computing device that may be used to implement any of the embodiments described herein.
Detailed Description
The techniques disclosed herein include determining an optimal or near-optimal resource allocation plan for a large-scale platform. Platforms such as e-commerce platforms, cloud service providers, or ride-pooling and ride-hailing platforms may need to process resource allocation every day. The definition of "resources" may vary from one practical application to another. For example, a server, computer cluster, or cloud service may provide computing or storage resources for client devices. In this case, resource allocation may involve allocating a certain number of processors, a certain amount of memory space, or a certain number of virtual machines to different client devices. As another example, a ride-hailing platform may need to balance supply and demand in real time between different regions. Resource allocation in this case may involve making order-assignment and/or vehicle-repositioning decisions. As yet another example, a platform providing loan services may host thousands of banks or financial institutions to serve millions of borrowers. In this case, resource allocation may involve allocating different borrowers to different banks or financial institutions for loan services.
In these resource allocation scenarios, resources are typically associated with positive factors (also referred to as revenue levels) and negative factors (also referred to as risk levels or cost levels) for the respective resource providers. For example, if the resource is cloud storage of digital data, providing such storage (e.g., by a cloud storage system) may be associated with a revenue level (e.g., fees paid to the cloud storage provider for storing the data) and a risk level (e.g., the risk the cloud storage provider bears, and the associated cost of implementing security measures to prevent data leakage) for the resource provider. As another example, if the resource is a loan, providing the loan may be associated with a profit level (e.g., interest level) and a risk level (e.g., risk of default) for the supplier. Furthermore, different resource providers may have different resource constraints. For example, a cloud service provider may have limited types of processors or storage devices (spinning-disk or solid-state drives) and thus may only provide certain types of computing or storage resources. As another example, different loan providers may have different risk constraints. The techniques disclosed in this specification may provide a way to determine an optimal or near-optimal resource allocation plan that allocates resource requests among resource providers to (1) maximize an overall objective and (2) satisfy the various constraints of each resource provider. The definition of the overall objective may vary by application, such as overall performance improvement, the total amount of resources allocated by the resource providers to the resource borrowers, positive feedback from the resource borrowers, etc.
For ease of description, the embodiments disclosed herein are described by way of example in terms of a loan service platform, where the resource to be allocated may refer to a loan and the resource host may refer to a supplier (e.g., a bank or financial institution). Each loan offered by a supplier may be associated with a risk level, a benefit level, etc. It will be obvious to a person skilled in the art that the same idea can be applied to any other suitable scenario requiring resource allocation.
Two metrics are commonly used to measure the balance of risk and gain among suppliers: the Mean Absolute Percentage Error (MAPE) and the Population Stability Index (PSI). Assume that the platform-level (e.g., global-level) risk distribution is given by a function q(x) and that the risk distribution of an individual supplier j is given by a function p(x), where x is one of the predefined risk levels, and q and p are distribution functions that may be defined explicitly or implicitly. The MAPE may then be determined by:

MAPE = E(|p(x) − q(x)| / q(x)),

where E(·) represents the expected value (or average value).
In this specification, it is assumed that one loan corresponds to one borrower, i.e., there is a one-to-one mapping between loans and borrowers. This assumption may reasonably be extended to cases where one borrower takes out multiple loans, or where one loan is associated with multiple borrowers. However, these extended cases can easily be transformed so that the original assumption still applies. For example, when one borrower is associated with multiple loans, the borrower behind each loan may be treated as a separate individual (even though these refer to the same person). Thus, in this specification, assigning a "loan" may mean assigning the "borrower" of that loan.
FIG. 1 illustrates an example system 100 for large-scale resource allocation using constraints in accordance with various embodiments. The components of system 100 are intended to be illustrative. Depending on the implementation, system 100 may include additional components, fewer components, or alternative components. It should be appreciated that although only one computing device is illustrated in FIG. 1, any number of computing devices may be included in system 100. Computing system 102 may be implemented in one or more networks (e.g., an enterprise network), one or more terminals, one or more servers (e.g., server 105), or one or more clouds. Server 105 may include hardware or software that manages access to centralized resources or services in a network. The cloud may include clusters of servers and other devices distributed in a network.
In some embodiments, the example system 100 may include a computing system 102, a computing device 104, and a server 105, and the computing system 102 may communicate with suppliers 110 and borrowers 120 through their respective computing devices. The computing system 102 can be understood as a platform that includes an online service interface and an offline (e.g., back-end) computing system. The computing device 104 may be associated with the computing system 102 by providing computing power. The server 105 may be associated with the computing system 102 by providing storage and/or computing capabilities. In some embodiments, the computing device 104 may be implemented on or as various devices such as a cell phone, tablet, server, desktop computer, or notebook computer. The computing system 102 may communicate with the computing device 104 and other computing devices. Communication between devices may occur over the internet, through a local network (e.g., a local area network), or through direct communication (e.g., Bluetooth™, radio frequency, infrared), etc.
In some embodiments, computing system 102 may include an acquisition component 112, a construction component 114, a conversion component 116, and a parallel processing component 118. Computing system 102 may include one or more processors (e.g., digital processors, analog processors, digital circuits designed to process information, central processing units, graphics processing units, microcontrollers or microprocessors, analog circuits designed to process information, state machines, and/or other mechanisms for electronically processing information) and one or more memories (e.g., persistent memory, temporary memory, non-transitory computer-readable storage medium). The one or more memories may be configured with instructions executable by the one or more processors. The processor may be configured to perform various operations by compiling machine-readable instructions stored in the memory. Computing system 102 may install appropriate software (e.g., a platform program, etc.) and/or hardware (e.g., a wired, wireless connection, etc.) to access other components in system 100.
In some embodiments, the acquisition component 112 in the computing system 102 may be configured to obtain, from the borrowers 120, a plurality of resource requests for resources hosted by the providers 110 (e.g., a plurality of host computer devices). The computing system 102 needs to determine a resource allocation plan that allocates the plurality of resource requests to the providers 110 for execution. Each of the providers 110 may then provide resources in response to its allocated resource requests.
In some embodiments, the construct component 114 in the computing system 102 may be configured to construct a target and one or more constraints for assigning multiple resource requests to multiple host computer devices. The target may include a plurality of decision variables, each decision variable indicating whether a resource request is allocated to the host computer device for service, and the one or more constraints may include one or more inequalities of the plurality of decision variables. In the context of loan services, there are different objective functions depending on the implementation. Different targets may be defined according to the interests of the platform. For example, based on the above description of MAPE, the provider 110 (e.g., bank j) may have one of the following objective functions that need to be minimized:
Risk MAPE of bank j, where Σ_i a_i x_ij ≈ D_j;
Risk MAPE of bank j at risk level m, where D_jm = P_m (D_j + B_j) and Σ_i a_i x_ij ≈ D_j;
Balance change of bank j: Σ_i a_i x_ij.
Wherein the symbols are defined as follows:
a_i: loan balance of borrower i;
x_ij ∈ [0, 1]: whether borrower i is assigned to bank j;
x_i: the vector of x_i′j with i′ = i;
x_j: the vector of x_ij′ with j′ = j;
r_i: default risk of borrower i;
σ_im ∈ {0, 1}: whether the default risk level of borrower i is m;
B_j: current balance of bank j;
BR_j: current borrower-risk-weighted balance of bank j;
B_jm: current balance of bank j at risk level m;
D_j: target balance change of bank j;
r̄: average default risk of all users;
P_m: proportion of all user balances at risk level m.
The above objective functions can each be expressed as an objective function with constraints. The goal and constraints corresponding to the "risk MAPE of bank j at level m" may formally be expressed as equation (1), where m represents an index over target metrics such as MAPE and/or PSI, u and W represent parameter matrices, and v_j represents the maximum number of borrowers 120 that supplier j may serve simultaneously. The last constraint indicates that a borrower 120 can only be assigned to one supplier 110.
In some cases, to represent the mathematical objective in a computer system, the absolute value in the objective function may be converted into a quadratic objective function with constraints, wherein u and W represent parameter matrices. The above quadratic objective function for the "risk MAPE of bank j at level m" can be expressed as an objective function with constraints, denoted equation (2), in which:
m is the index over the target metrics;
n is the index over the inequality constraints; and
k is the index over the equality constraints.
In some embodiments, the conversion component 116 can be configured to convert the one or more constraints by converting one or more inequalities into one or more equations over the plurality of decision variables, and to incorporate the one or more equations into the target to obtain a new target. Here, an "equation" refers to a relationship asserting that two quantities (or, more generally, two mathematical expressions) have the same value or represent the same mathematical object, while an "inequality" refers to an unequal comparison between two numbers or other mathematical expressions. The purpose of the conversion is to bring the objective function into a format solvable by the ADMM algorithm. Before conversion, the absolute-value and square operators in the objective function produce cross-borrower terms such as x_ij · x_(i+1)j. For example, if x_j is a two-dimensional vector [x_1, x_2], then [x_1, x_2]² = [x_1 x_1, x_1 x_2, x_2 x_1, x_2 x_2], where the terms x_1 x_2 and x_2 x_1 are the cross-borrower terms. In some embodiments, the cross-borrower terms in the objective function may be eliminated by converting the inequality constraints into equality constraints and incorporating the equality constraints into the objective function.
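The cross-borrower terms can be seen numerically: squaring a sum of decision variables yields exactly the entries of their outer product, including the inseparable cross terms (the toy values below are illustrative):

```python
import numpy as np

x = np.array([2.0, 3.0])  # two borrowers' decision variables
# (x1 + x2)^2 = x1*x1 + x1*x2 + x2*x1 + x2*x2 -- the outer-product entries.
outer = np.outer(x, x)
cross = outer[0, 1] + outer[1, 0]  # the inseparable x1*x2 cross terms
total = x.sum() ** 2
# total equals the sum of ALL outer-product entries, cross terms included,
# which is why an objective with |.| or (.)^2 over sums of decision
# variables cannot be split per borrower without the conversion step.
```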
In some embodiments, the parallel processing component 118 may be configured to divide the target into a plurality of sub-targets and to generate a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain the values of the plurality of decision variables. Because the conversion performed by the conversion component 116 eliminates the cross-borrower terms, the resulting objective function is separable and can be solved using various parallel processing frameworks. In some embodiments, the MapReduce framework may be employed due to its simplicity and compatibility with the format of the objective function. For example, the converted objective function may be divided into a plurality of sub-objective functions, each comprising a subset of the decision variables. Each sub-objective function may be solved by creating a mapping task to determine the values of the corresponding subset of decision variables. After all mapping tasks finish, the values generated by the mapping tasks may be aggregated by creating a reduction task.
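Once the objective is separable, the map/reduce split can be sketched as follows (a minimal illustration with an assumed per-block objective 0.5·||x − c||² + (lam/2)·||x||², chosen because it has a closed-form block solution; a real deployment would run the mapping tasks on a MapReduce cluster rather than a list comprehension):

```python
import numpy as np

lam = 1.0  # illustrative regularisation weight (an assumption, not from the patent)

def map_task(c_block):
    """Mapping task: solve one sub-target in closed form.
    Minimising 0.5*||x - c||^2 + (lam/2)*||x||^2 gives x = c / (1 + lam)."""
    return c_block / (1.0 + lam)

def reduce_task(partials):
    """Reduction task: aggregate per-block solutions into the full
    decision-variable vector."""
    return np.concatenate(partials)

c = np.arange(6, dtype=float)
sub_targets = np.array_split(c, 3)                  # divide the target into sub-targets
partials = [map_task(blk) for blk in sub_targets]   # mapping tasks (parallel in practice)
x = reduce_task(partials)                           # aggregate the results
```

Because the objective is fully separable across blocks, solving each sub-target independently and concatenating the results recovers the global optimum.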
In some embodiments, the values of the decision variables may indicate which request (or borrower 120) should be assigned to which provider 110 for service or fulfillment. Here, the provider 110 may fulfill the request by providing the requested service. The computing system 102 may send instructions directly to the providers 110 to provide services based on the values of the decision variables.
FIG. 2 is an exemplary system diagram for ADMM-based large-scale resource allocation with constraints, in accordance with various embodiments. The system 200 in FIG. 2 is for illustration purposes only and may include more, fewer, or alternative components depending on the implementation.
In some embodiments, system 200 is configured to solve the large-scale resource allocation problem with constraints 202. The presentation of the resource allocation problem may vary from use case to use case. Exemplary use cases include allocation of computing and/or storage resources to customers by cloud service providers, allocation of loan resources to borrowers by loan service platforms, supply and demand management in ride-hailing platforms, warehouse management across different geographic areas in e-commerce platforms, and the like. Here, "large-scale" means the problem includes a large number of decision variables (e.g., each decision variable represents a match between a resource borrower and a resource provider), so brute-force approaches (e.g., exhaustive search, even on the most advanced and powerful computers) cannot generate an optimal solution in a reasonable time.
In some embodiments, large-scale resource allocation using constraints 202 may be expressed first as an optimization problem that includes an objective function and one or more constraints. In the context of loan servicing, example objective functions and constraints are described in equation 1. The objective function may include an objective to maximize (e.g., total interest gain) or minimize (e.g., total risk or MAPE), and the constraint may include various thresholds (e.g., maximum number of borrowers that can be serviced, maximum risk tolerance) and limits (e.g., one borrower can only be serviced by one supplier).
In some embodiments, the system 200 may include an objective function converter 210 to convert the initially formulated objective function according to the constraints. The objective function converter 210 may be implemented in a variety of programming languages. The purpose of the conversion is to generate a new objective function and a new set of constraints that conform to a format the ADMM algorithm can solve. In some embodiments, the conversion may include two steps: converting the inequalities in the constraints into equations; and incorporating the equations into the objective function to obtain a new/converted target.
For example, the objective functions and corresponding constraints in equations (1) and (2) can be generalized to equation (3) below:

min f(x)
s.t. Ax = b
     Cx ≤ d
     1^T x_i = 1, for each i    (3)
where f(x) represents the objective function (e.g., MAPE) to be minimized, A and C represent different parameter matrices, x represents the decision-variable matrix with x_i representing the decision variables of the ith resource provider, and b and d represent quantized thresholds or limits. To apply ADMM to solve equation (3) above, in some embodiments, the first step of the conversion may include converting the inequality in the constraints into one or more equations. For example, a positive auxiliary variable may be added to the left-hand side (LHS) of the inequality. In some embodiments, the last constraint (e.g., 1^T x_i = 1) can be incorporated into the objective function such that violating the constraint penalizes the objective function infinitely. For example, equation (3) above can be converted into equation (4) below:
min f(x) + Σ_i I_i(1^T x_i - 1) + Π(ξ)
s.t. Ax = b
     Cx + ξ = d    (4)
where ξ represents a positive auxiliary variable, Π represents a non-differentiable penalty function, and I represents a parameter matrix. In this conversion, the inequality Cx ≤ d in equation (3) is converted into the equation Cx + ξ = d in equation (4).
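A minimal numeric sketch of this first conversion step (the matrices and values below are illustrative, not from the patent): the inequality Cx ≤ d becomes the equation Cx + ξ = d by introducing a nonnegative auxiliary (slack) variable:

```python
import numpy as np

C = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([4.0, 5.0])
x = np.array([1.0, 2.0])            # a feasible point: Cx <= d holds
xi = d - C @ x                      # auxiliary variable; nonnegative here
print(np.all(xi >= 0))              # feasibility implies xi >= 0
print(np.allclose(C @ x + xi, d))   # the equality form holds exactly
```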
In some embodiments, in the second step of the objective function conversion, the equations resulting from the inequality conversion in the constraints may be incorporated into the objective function to generate an augmented Lagrangian function, which may also be referred to as the new objective function or the output of the objective function converter 210. In some embodiments, generating the augmented Lagrangian function may include adding a positive definite matrix that depends on the decision variables to the objective function to eliminate the cross-product terms of the decision variables and one or more Lagrangian multipliers therein. After elimination of the cross-product terms, the resulting augmented Lagrangian function is separable and may be processed in parallel. A detailed example is described with reference to FIG. 3.
In some embodiments, the generated augmented Lagrangian function may be solved iteratively and in parallel in the ADMM-based parallel processing system 220. The system 220 may employ various parallel processing frameworks such as Hadoop MapReduce, CUDA, Spark, MPI, and the like. For example, the augmented Lagrangian function may be divided into a plurality of sub-objective functions, each comprising a subset of the decision variables. Solving the augmented Lagrangian function may include multiple iterations of parallel processing. Assuming a MapReduce framework is used in system 220, during each iteration, multiple mapping tasks may be created to solve the multiple sub-objective functions separately. That is, each mapping task determines the values of the corresponding subset of decision variables. A reduce task may then aggregate all values generated by the mapping tasks. Even though the decision-variable values obtained during one iteration may be locally optimal (optimal for each sub-objective) rather than globally optimal, they may be used as a baseline for the next iteration. The mapping tasks are described in detail below.
FIG. 3 illustrates an example parallel processing workflow for ADMM-based large-scale resource allocation with constraints, in accordance with various embodiments. The workflow 300 in FIG. 3 is for illustration purposes only and may include more, fewer, or alternative steps depending on the implementation. The steps in the workflow 300 may be performed in various orders or in parallel.
As described with reference to FIG. 2, after conversion, the objective function for large-scale resource allocation with constraints is separable and can be solved using parallel processing. For simplicity, the MapReduce framework is used in FIG. 3 as an example to illustrate how the objective function is solved based on ADMM. Herein, "solving" an objective function for large-scale resource allocation with constraints refers to iteratively seeking convergent values of the decision variables in the objective function. In some embodiments, multiple iterations of parallel processing may be performed.
In some embodiments, each mapping task may implement an ADMM-based iteration. In each iteration, multiple mapping tasks and a reduce task may be created to solve the objective function in parallel, where each mapping task solves for a portion/subset of the decision variables (e.g., a corresponding sub-objective) while the reduce task aggregates the outputs of the mapping tasks.
For example, a mapping task may begin at step 305, followed by: in step 310, updating values of a subset of the plurality of decision variables based on the one or more Lagrangian multipliers and the auxiliary variable added during the conversion of the objective function (see the description with reference to FIG. 2); in step 320, updating the auxiliary variable according to the updated values of the subset of the plurality of decision variables and the one or more Lagrangian multipliers; and in step 330, updating the one or more Lagrangian multipliers based on the updated values of the subset of the plurality of decision variables and the updated auxiliary variable. That is, the decision variables, the auxiliary variable, and the Lagrangian multipliers in each sub-objective function are updated iteratively.
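The three update steps 310-330 can be sketched end-to-end on a toy problem of our own (in scaled-multiplier form, omitting the equality constraint Ax = b and the sum-to-one constraint for simplicity): minimize (1/2)‖x - t‖² subject to Cx ≤ d, whose optimum is min(t, d) elementwise when C is the identity:

```python
import numpy as np

t = np.array([3.0, 1.0])   # unconstrained minimizer of (1/2)||x - t||^2
C = np.eye(2)
d = np.array([2.0, 2.0])   # upper bounds: the optimum is min(t, d) = [2, 1]
tau = 1.0                  # pre-configured penalty hyper-parameter
x, xi, w = np.zeros(2), np.zeros(2), np.zeros(2)
for _ in range(100):
    # step 310: minimize the augmented Lagrangian in x (a small linear solve)
    x = np.linalg.solve(np.eye(2) + tau * C.T @ C,
                        t + tau * C.T @ (d - xi - w))
    # step 320: update the auxiliary variable (projection onto xi >= 0)
    xi = np.maximum(0.0, d - C @ x - w)
    # step 330: update the (scaled) Lagrangian multiplier
    w = w + C @ x + xi - d
print(np.round(x, 6))  # approaches [2. 1.]
```

Note that this sketch collapses one mapping task's iterative loop into a single process; in the patented scheme many such loops would run in parallel, one per subset of decision variables.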
In some embodiments, in each iteration, the decision-variable subset may be updated at step 310 based on the following equation:

x^{k+1} = argmin_x f(x) + (ρ/2)‖Ax - b + v^k‖^2 + (τ/2)‖Cx + ξ^k - d + w^k‖^2 + (1/2)(x - x^k)^T P (x - x^k)
s.t. 1^T x_i = 1    (5)

where x^k, x^{k+1} represent the values of the decision variables at the kth and (k+1)th iterations, v, w represent the Lagrangian multipliers, ξ represents the auxiliary variable, ρ, τ represent pre-configured hyper-parameters, and P represents a positive semi-definite matrix. Referring back to equations (3) and (4) described with reference to FIG. 2, the inequality Cx ≤ d in equation (3) is converted into the equation Cx + ξ = d in equation (4) (the first step of the conversion), and Cx + ξ = d is then incorporated into the new target in equation (5) as the squared difference between its left-hand and right-hand sides, e.g., ‖Cx + ξ - d‖^2 (the second step of the conversion).
in some embodiments, if f (x) in equation (5) is nonlinear, f () can be performed at x k To convert it to a linear function. In some embodiments, if f (x) in formula (5) is linear, reference is made to x, x k And the last term of P may be configured to effectively eliminate cross-debit terms among other terms. As shown in the above equation, the value of the decision variable for the k+1th iteration is determined based on the lagrangian multiplier and the auxiliary variable for the k iteration.
In some embodiments, in each iteration, in step 320, the auxiliary variable may be updated based on the following equation:

ξ^{k+1} = max(0, d - Cx^{k+1} - w^k)

where x^{k+1} represents the values of the decision variables at the (k+1)th iteration, and w^k represents one of the Lagrangian multipliers at the kth iteration. As indicated by the above equation, the value of the auxiliary variable at the (k+1)th iteration is determined from the Lagrangian multiplier at the kth iteration and the values of the decision variables at the (k+1)th iteration.
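A one-line numeric sketch of this update (illustrative values; the scaled-multiplier, nonnegative-slack form is an assumption consistent with the multiplier updates below):

```python
import numpy as np

C = np.array([[1.0, 1.0]])
d = np.array([3.0])
x_next = np.array([1.0, 1.0])   # decision variables at iteration k+1
w_k = np.array([0.5])           # multiplier at iteration k
# Minimizing the augmented term in xi subject to xi >= 0 reduces to a
# projection onto the nonnegative orthant: xi_{k+1} = max(0, d - C x - w).
xi_next = np.maximum(0.0, d - C @ x_next - w_k)
print(xi_next)  # [0.5]
```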
In some embodiments, in each iteration, the Lagrangian multipliers may be updated in step 330 based on the following equations:

v^{k+1} = v^k + Ax^{k+1} - b
w^{k+1} = w^k + Cx^{k+1} + ξ^{k+1} - d

where x^{k+1} represents the values of the decision variables at the (k+1)th iteration, v^k, w^k represent the Lagrangian multipliers at the kth iteration, and v^{k+1}, w^{k+1} represent the Lagrangian multipliers at the (k+1)th iteration. As indicated by the above equations, the values of the Lagrangian multipliers at the (k+1)th iteration are determined from the value of the auxiliary variable at the (k+1)th iteration and the values of the decision variables at the (k+1)th iteration.
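A small numeric sketch of these multiplier updates (illustrative values chosen by us; the multipliers stay unchanged when both equality constraints are already satisfied):

```python
import numpy as np

A = np.eye(2); b = np.array([1.0, 2.0])
C = np.eye(2); d = np.array([3.0, 3.0])
x_next = np.array([1.0, 2.0])    # satisfies Ax = b exactly
xi_next = np.array([2.0, 1.0])   # chosen so that Cx + xi = d exactly
v_k = np.zeros(2); w_k = np.zeros(2)
v_next = v_k + A @ x_next - b             # residual of Ax = b (zero here)
w_next = w_k + C @ x_next + xi_next - d   # residual of Cx + xi = d (zero)
print(v_next, w_next)  # [0. 0.] [0. 0.]
```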
In some embodiments, in step 340, the values of the decision variables, the Lagrangian multipliers, and the auxiliary variable obtained at the (k+1)th iteration may be compared with the corresponding values at the kth iteration to determine their convergence. If any of these values (e.g., the decision variables, the Lagrangian multipliers, or the auxiliary variable) has not converged, the mapping task loops back and continues iterating. If all values have converged, the mapping task ends at step 345 (step 346 represents the end of another mapping task). In this context, "convergence" is determined based on a difference threshold: if the difference between the values across iterations is below the threshold, the values are determined to have converged.
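The difference-threshold convergence test can be sketched as a small helper (the name and tolerance are ours):

```python
def has_converged(prev_values, curr_values, tol=1e-6):
    # Converged when every value changed by no more than the threshold.
    return all(abs(p - c) <= tol for p, c in zip(prev_values, curr_values))

print(has_converged([1.0, 2.0], [1.0 + 1e-9, 2.0]))  # True
print(has_converged([1.0, 2.0], [1.1, 2.0]))         # False
```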
After all mapping tasks have been processed in parallel, e.g., at steps 345 and 346, a reduce task may be created at step 350 to obtain a candidate solution for large-scale resource allocation with constraints. It is a "candidate" because the values of the decision variables are locally optimal (from the point of view of each sub-objective) and not necessarily globally optimal (from the point of view of the overall target). To determine whether the candidate is sufficiently optimal, the aggregated values of the decision variables may be compared with those of the previous iteration at step 360. If the values have not converged, a new MapReduce iteration is performed. If the values have converged, they are taken as the optimal solution 362 to the large-scale resource allocation problem. In some embodiments, the optimal solution 362 may indicate a match between resource providers and resource borrowers or acquirers that achieves an optimal return (e.g., a minimum MAPE value).
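The reduce step and the outer convergence check can be sketched as follows (toy values; helper structure is ours):

```python
# Aggregate per-mapping-task outputs into one candidate decision vector,
# then compare it against the previous outer iteration's aggregate.
previous = [1.0, 2.0, 3.0]
map_outputs = [[1.0], [2.0 + 1e-9], [3.0]]           # one list per map task
candidate = [v for out in map_outputs for v in out]  # reduce: concatenate
delta = max(abs(p - c) for p, c in zip(previous, candidate))
is_optimal = delta < 1e-6  # if True, candidate is taken as the solution
print(is_optimal)  # True
```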
FIG. 4 illustrates an example method 400 for ADMM-based large-scale resource allocation with constraints, in accordance with various embodiments. The method 400 may be performed by an apparatus, device, or system. The method 400 may be performed by one or more components of the arrangement shown in FIG. 1, such as the computing system 102 and the computing device 104. Depending on the implementation, method 400 may include additional, fewer, or alternative steps, which may be performed in a different order or in parallel.
Block 410 includes obtaining, by a computer device, a plurality of resource requests for resources hosted by a plurality of host computer devices.
Block 420 includes constructing, by the computer device, a target and one or more constraints for assigning the plurality of resource requests to the plurality of host computer devices, wherein the target includes a plurality of decision variables, each decision variable indicating whether to assign a resource request to a host computer device for service, and the one or more constraints include one or more inequalities of the plurality of decision variables. In some embodiments, the one or more constraints include one or more risk constraints configured by the plurality of host computer devices.
Block 430 includes converting, by the computer device, the one or more inequalities in the one or more constraints into one or more equations of the plurality of decision variables. In some embodiments, converting the one or more inequalities into one or more equations of the plurality of decision variables comprises: for each of the one or more inequalities, adding an auxiliary variable to the left-hand side of the inequality so that it becomes an equation, wherein the left-hand side of each inequality comprises a product of a matrix and the plurality of decision variables. In some embodiments, each of the plurality of sub-targets comprises: one or more Lagrangian multipliers, the auxiliary variable, and a subset of the plurality of decision variables. And each of the plurality of parallel processing tasks implements an iterative process comprising: updating values of the subset of the plurality of decision variables based on the one or more Lagrangian multipliers and the auxiliary variable; updating the auxiliary variable based on the updated values of the subset of the plurality of decision variables and the one or more Lagrangian multipliers; and updating the one or more Lagrangian multipliers based on the updated values of the subset of the plurality of decision variables and the updated auxiliary variable.
Block 440 includes incorporating, by the computer device, the one or more equations into the target to obtain a new target. In some embodiments, incorporating the one or more equations into the target to obtain the new target includes: for each of the one or more equations, a factor is added to the target that includes a square difference between a left-hand side and a right-hand side of the equation, wherein the left-hand side of each equation includes a product of the matrix and the plurality of decision variables and the right-hand side of each equation includes a constant.
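A hedged numeric sketch of this incorporation step (the example f, penalty weight, and values are ours): for each equation Ax = b, a squared-difference factor is added to the target:

```python
import numpy as np

rho = 2.0                                    # penalty weight (assumed)
A = np.array([[1.0, 1.0]]); b = np.array([2.0])
f = lambda x: float(x @ x)                   # illustrative original target
new_target = lambda x: f(x) + (rho / 2) * float(np.sum((A @ x - b) ** 2))
x_feasible = np.array([1.0, 1.0])            # Ax = b holds: no penalty
print(new_target(x_feasible) == f(x_feasible))  # True
print(new_target(np.zeros(2)))                  # 4.0: violation is penalized
```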
Block 450 includes dividing, by the computer device, the new target into a plurality of sub-targets. In some embodiments, each of the plurality of sub-targets includes a subset of the plurality of decision variables, and each of a plurality of mapping tasks determines the values of the subset of decision variables of a respective sub-target.
Block 460 includes generating, by the computer device, a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables. In some embodiments, each of the plurality of parallel processing tasks includes an iteration of the alternating direction method of multipliers (ADMM) to solve a respective sub-target. In some embodiments, generating the plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain the values of the plurality of decision variables includes: generating, by the computer device, an aggregation task to aggregate results of the plurality of parallel processing tasks to obtain the values of the plurality of decision variables. In some embodiments, generating the plurality of parallel processing tasks corresponding to the plurality of sub-targets includes: generating, under the MapReduce programming framework, multiple mapping tasks to solve the multiple sub-targets in parallel. In some embodiments, each of the plurality of mapping tasks includes a quadratic programming process. In some embodiments, generating the plurality of parallel processing tasks corresponding to the plurality of sub-targets includes: generating multiple threads on one or more graphics processing units (GPUs) to solve the multiple sub-targets in parallel.
Block 470 includes sending, by the computer device, instructions to the plurality of host computer devices to execute the plurality of resource requests based on the values of the plurality of decision variables.
In some embodiments, the method 400 may further comprise, prior to sending the instructions to the plurality of host computer devices: determining, by the computer device, whether the values of the plurality of decision variables have converged; and in response to the values of the plurality of decision variables having converged, performing the sending of the instructions.
FIG. 5 illustrates a block diagram of an example computer system apparatus 500 for ADMM-based large-scale resource allocation with constraints, in accordance with various embodiments. The components of computer system 500 shown below are intended to be illustrative. Depending on the implementation, computer system 500 may include additional, fewer, or alternative components.
Computer system 500 may be an example of an implementation of one or more components of computing system 102. The processes and methods illustrated in fig. 1-4 may be implemented by computer system 500. Computer system 500 may include one or more processors and one or more non-transitory computer-readable storage media (e.g., one or more memories) coupled to the one or more processors and configured with instructions executable by the one or more processors to cause a system or device (e.g., a processor) to perform the above-described method (e.g., method 400). Computer system 500 may include various units/modules corresponding to instructions (e.g., software instructions).
In some embodiments, computer system 500 may be referred to as an apparatus for determining an optimal allocation for borrowing requests. The apparatus may include an acquisition module 510, a construction module 520, a conversion module 530, a merging module 540, a partitioning module 550, a parallel processing module 560, and a transmission module 570. In some embodiments, the acquisition module 510 may be configured to obtain a plurality of resource requests for resources hosted by a plurality of host computer devices. The construction module 520 may be configured to construct a target for allocating the plurality of resource requests to the plurality of host computer devices, wherein the target comprises a plurality of decision variables, each of the decision variables indicating whether to allocate a resource request to a host computer device for service, and one or more constraints comprising one or more inequalities of the plurality of decision variables. The conversion module 530 may be configured to convert the one or more inequalities in the one or more constraints into one or more equations of the decision variables. The merging module 540 may be configured to merge the one or more equations into the target to obtain a new target. The partitioning module 550 may be configured to partition the new target into a plurality of sub-targets. The parallel processing module 560 may be configured to generate a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables. The transmission module 570 may be configured to send instructions to the plurality of host computer devices to execute the plurality of resource requests according to the values of the plurality of decision variables.
FIG. 6 illustrates an example computing device that may be used to implement any of the embodiments described herein. Computing device 600 may be used to implement one or more components of the systems and methods shown in fig. 1-5. Computing device 600 may include a bus 602 or other communication mechanism for communicating information, and one or more hardware processors 604 coupled with bus 602 for processing information. The hardware processor 604 may be, for example, one or more general purpose microprocessors.
Computing device 600 may also include a main memory 606, such as a random access memory (RAM), cache, and/or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 604. When such instructions are stored in a storage medium accessible to processor 604, computing device 600 is rendered a special-purpose machine customized to perform the operations specified in the instructions. Main memory 606 may include non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks. Volatile media may include dynamic memory. Common forms of media may include, for example, RAM, DRAM, PROM, EPROM, FLASH-EPROM, NVRAM, any other memory chip or cartridge, or networked versions of the same.
Computing device 600 may implement the techniques described herein using custom hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which, in combination with the computing device, may make or program computing device 600 into a special-purpose machine. According to one embodiment, the techniques herein are performed by computing device 600 in response to processor 604 executing one or more sequences of one or more instructions stored in main memory 606. Such instructions may be read into main memory 606 from another storage medium, such as storage device 609. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. For example, the processes/methods disclosed herein may be implemented by computer program instructions stored in main memory 606. When these instructions are executed by processor 604, they may perform the steps shown in the corresponding figures and described above. In alternative embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions.
Computing device 600 also includes a communication interface 610 coupled to bus 602. The communication interface 610 may provide bi-directional data communication over one or more network links connected to one or more networks. As one example, communication interface 610 may be a local area network (LAN) card providing a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented.
The performance of certain operations may be distributed among different processors, residing not just in a single machine, but rather being deployed on multiple machines. In some embodiments, the processor or processor-implemented engine may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processor or processor-implemented engine may be distributed across multiple geographic locations.
Each of the processes, methods, and algorithms described in the preceding sections may be implemented in code modules executed by one or more computer systems or computer processors, including computer hardware, and may be fully or partially automated. The processes and algorithms may be partially or fully implemented in dedicated circuitry.
When the functions disclosed herein are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. The specific aspects (in whole or in part) or aspects of the contributions to the art disclosed herein may be embodied in the form of software products. The software product may be stored in a storage medium including instructions that cause a computing device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of a method of an embodiment of the application. The storage medium may include a flash drive, portable hard disk, ROM, RAM, magnetic disk, optical disk, another medium that can store program code, or any combination thereof.
Particular embodiments also provide a system comprising a processor and a non-transitory computer readable storage medium storing instructions executable by the processor to cause the system to perform operations corresponding to steps in any of the methods of the embodiments disclosed above. Particular embodiments also provide a non-transitory computer-readable storage medium configured with instructions executable by one or more processors to cause the one or more processors to perform operations corresponding to steps in any of the methods of the embodiments disclosed above.
Embodiments disclosed herein may be implemented by a cloud platform, server, or group of servers (hereinafter collectively referred to as "service systems") that interact with clients. The client may be a terminal device, or a client that the user registers on the platform, where the terminal device may be a mobile terminal, a Personal Computer (PC), or any device that may install a platform application.
The various features and methods described above may be used independently of each other or in various combinations. All possible combinations and subcombinations are within the scope of this disclosure. Furthermore, in some implementations, some of the methods or process blocks may be omitted. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states associated therewith may be performed in other suitable order. For example, the described blocks or states may be performed in an order different than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. Example blocks or states may be performed serially, in parallel, or in some other manner. Blocks or states may be added to or deleted from the disclosed example embodiments. The example systems and components described herein may differ from the described configuration. For example, elements may be added, deleted, or rearranged as compared to the disclosed embodiments.
In this specification, multiple instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more individual operations may be performed concurrently and nothing requires that the operations be performed in the order illustrated. Structures and functions presented as separate components in the example configuration may be implemented as a combined structure or component. Similarly, structures and functions presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the subject matter herein.
As used herein, the term "or" is inclusive and not exclusive, unless explicitly specified otherwise or the context indicates otherwise. Thus, herein, "A, B, or C" means "A, B, C, A and B, A and C, B and C, or A, B, and C," unless explicitly stated otherwise or the context indicates otherwise. The terms "comprising" or "including" denote the presence of the subsequently stated features but do not exclude the addition of other features. Conditional language, such as "can," "may," or "might," is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps, unless expressly stated otherwise or otherwise understood in the context of use. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required by one or more embodiments, or that one or more embodiments must include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included in or are to be performed in any particular embodiment.
While the subject matter has been summarized with reference to specific example embodiments, various modifications and alterations may be made to these embodiments without departing from the broader scope of the disclosure. Embodiments of the subject matter may be referred to herein, individually or collectively, by the term "application" merely for convenience, without intending to voluntarily limit the scope of this application to any single disclosure or concept if more than one is in fact disclosed.
The embodiments described herein have been described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description is, therefore, not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
Claims (20)
1. A computer-implemented method, comprising:
obtaining, by a computer device, a plurality of resource requests for resources hosted by a plurality of host computer devices;
constructing, by the computer device, a target and one or more constraints to allocate the plurality of resource requests to the plurality of host computer devices, wherein the target comprises a plurality of decision variables, each of the decision variables indicating whether to allocate a resource request to a host computer device for service, and the one or more constraints comprise one or more inequalities of the plurality of decision variables;
Converting, by the computer device, the one or more inequalities in the one or more constraints into one or more equations for the plurality of decision variables;
incorporating, by the computer device, the one or more equations into the target to obtain a new target;
dividing, by the computer device, the new target into a plurality of sub-targets;
generating, by the computer device, a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables; and
and sending, by the computer device, instructions to the plurality of host computer devices to execute the plurality of resource requests according to the values of the plurality of decision variables.
2. The method of claim 1, wherein each of the plurality of parallel processing tasks comprises an iteration of the alternating direction method of multipliers (ADMM) to solve a respective sub-objective.
3. The method of claim 1, wherein the generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables comprises:
generating, by the computer device, an aggregate task to aggregate results of the plurality of parallel processing tasks to obtain the values of the plurality of decision variables.
4. The method of claim 1, wherein sending the instructions to the plurality of host computer devices further comprises:
determining, by the computer device, whether values of the plurality of decision variables converge; and
performing the sending of the instructions in response to determining that the values of the plurality of decision variables have converged.
5. The method of claim 1, wherein converting the one or more inequalities into one or more equations for the plurality of decision variables comprises:
for an inequality of the one or more inequalities, adding an auxiliary variable to the left-hand side of the inequality to convert the inequality into an equation, wherein the left-hand side of the inequality comprises a product of a matrix and the plurality of decision variables.
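In standard optimization terms, the conversion in claim 5 is the slack-variable construction; writing the constraint matrix as $A$ and the constant right-hand side as $\mathbf{b}$:

```latex
% Inequality constraint over the decision variables x:
A\mathbf{x} \le \mathbf{b}
\quad\Longleftrightarrow\quad
A\mathbf{x} + \mathbf{s} = \mathbf{b}, \qquad \mathbf{s} \ge \mathbf{0},
% where s is the nonnegative auxiliary (slack) variable
% added to the left-hand side of the inequality.
```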
6. The method of claim 5, wherein,
each sub-target of the plurality of sub-targets comprises: one or more multipliers, the auxiliary variable, and a subset of the plurality of decision variables, and
each of the plurality of parallel processing tasks implements an iterative process comprising:
updating values of the subset of the plurality of decision variables based on the one or more multipliers and the auxiliary variable;
updating the auxiliary variable according to the updated values of the subset of the plurality of decision variables and the one or more multipliers; and
updating the one or more multipliers according to the updated values of the subset of the plurality of decision variables and the updated auxiliary variable.
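The three updates in claim 6 follow the usual order of scaled-form ADMM. A minimal single-block sketch, assuming a continuous relaxation of the decision variables and a simple quadratic target (the matrices, the target, and the scaled multiplier `u` are illustrative assumptions, not the claimed system):

```python
import numpy as np

# Toy sub-problem: minimize 0.5 * ||x - c||^2 subject to A x <= b,
# rewritten with a slack variable s >= 0 as the equation  A x + s = b.
# x stands for a subset of the decision variables, u for the scaled multiplier y / rho.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])
rho = 1.0

x = np.zeros(2)
s = np.zeros(1)
u = np.zeros(1)
I = np.eye(2)

for _ in range(200):
    # 1) update the decision variables given the multipliers and the auxiliary variable
    x = np.linalg.solve(I + rho * A.T @ A, c + rho * A.T @ (b - s - u))
    # 2) update the auxiliary (slack) variable given the new x and the multipliers
    s = np.maximum(0.0, b - A @ x - u)
    # 3) update the multipliers given the new x and the updated auxiliary variable
    u = u + A @ x + s - b
```

With `A = [1, 1]` and `b = 1`, the iteration drives `x` toward `(0.5, 0.5)`, the projection of `c` onto the constraint set, illustrating the convergence check of claim 4.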
7. The method of claim 1, wherein incorporating the one or more equations into the target to obtain a new target comprises:
for each of the one or more equations, adding to the target a term comprising the squared difference between a left-hand side and a right-hand side of the equation, wherein the left-hand side of each of the equations comprises a product of a matrix and the plurality of decision variables and the right-hand side of each of the equations comprises a constant.
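Written out, the new target of claim 7 is the familiar quadratic-penalty form. The penalty weight $\rho_k$ is an assumed notation, and the slack variables of claim 5 are taken as folded into the decision vector $\mathbf{x}$:

```latex
% Original target f(x); equations A_k x = b_k obtained from the inequalities.
\tilde{f}(\mathbf{x}) \;=\; f(\mathbf{x})
\;+\; \sum_{k} \frac{\rho_k}{2}\,
\bigl\lVert A_k \mathbf{x} - \mathbf{b}_k \bigr\rVert_2^2
% When the sub-targets carry the multipliers of claim 6, the corresponding
% linear terms y_k^T (A_k x - b_k) are added as well (augmented Lagrangian).
```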
8. The method of claim 1, wherein the generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets comprises:
generating, under a MapReduce programming framework, a plurality of mapping tasks to solve the plurality of sub-targets in parallel.
9. The method of claim 8, wherein each of the plurality of mapping tasks comprises a quadratic programming process.
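Claims 3, 8, and 9 together describe a map/aggregate pattern: mapping tasks solve sub-targets (e.g., by a quadratic programming process) in parallel, and an aggregation task combines the results. A minimal stand-in using Python's standard library (a thread pool in place of a MapReduce runtime; all names hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_subtarget(t):
    # Stand-in for one mapping task: solve the one-variable quadratic program
    #   minimize (v - t)^2 + v^2,  whose closed-form solution is v = t / 2.
    return t / 2.0

def aggregate(results):
    # Stand-in for the aggregation task: collect the per-sub-target values of
    # the decision-variable subsets into one result.
    return list(results)

sub_targets = [1.0, 2.0, 3.0, 4.0]
with ThreadPoolExecutor() as pool:
    values = aggregate(pool.map(solve_subtarget, sub_targets))
# values == [0.5, 1.0, 1.5, 2.0]
```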
10. The method of claim 1, wherein generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets comprises:
creating a plurality of threads on one or more graphics processing units (GPUs) to solve the plurality of sub-targets in parallel.
11. The method of claim 1, wherein the one or more constraints comprise one or more risk constraints configured by the plurality of host computer devices.
12. The method of claim 1, wherein,
each sub-target of the plurality of sub-targets comprises a subset of the plurality of decision variables, and
each of the plurality of parallel processing tasks determines values for the subset of the respective sub-target.
13. A system comprising one or more processors and one or more non-transitory computer-readable storage media coupled with the one or more processors and configured with instructions executable by the one or more processors to cause the system to perform operations comprising:
obtaining a plurality of resource requests for resources hosted by a plurality of host computer devices;
constructing a target and one or more constraints for assigning the plurality of resource requests to the plurality of host computer devices, wherein the target comprises a plurality of decision variables, each of the decision variables indicating whether to assign a resource request to a host computer device for service, and the one or more constraints comprise one or more inequalities of the plurality of decision variables;
converting the one or more inequalities in the one or more constraints to one or more equations for the plurality of decision variables;
incorporating the one or more equations into the target to obtain a new target;
dividing the new target into a plurality of sub-targets;
generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables; and
sending instructions to the plurality of host computer devices to execute the plurality of resource requests according to the values of the plurality of decision variables.
14. The system of claim 13, wherein converting the one or more inequalities into one or more equations for the plurality of decision variables comprises:
for an inequality of the one or more inequalities, adding an auxiliary variable to the left-hand side of the inequality to convert the inequality into an equation, wherein the left-hand side of the inequality comprises a product of a matrix and the plurality of decision variables.
15. The system of claim 14, wherein,
each sub-target of the plurality of sub-targets comprises: one or more multipliers, the auxiliary variable, and a subset of the plurality of decision variables, and
each of the plurality of parallel processing tasks implements an iterative process comprising:
updating values of the subset of the plurality of decision variables based on the one or more multipliers and the auxiliary variable;
updating the auxiliary variable according to the updated values of the subset of the plurality of decision variables and the one or more multipliers; and
updating the one or more multipliers according to the updated values of the subset of the plurality of decision variables and the updated auxiliary variable.
16. A non-transitory computer-readable storage medium configured with instructions executable by one or more processors to cause the one or more processors to perform operations comprising:
obtaining a plurality of resource requests for resources hosted by a plurality of host computer devices;
constructing a target and one or more constraints for assigning the plurality of resource requests to the plurality of host computer devices, wherein the target comprises a plurality of decision variables, each of the decision variables indicating whether to assign a resource request to a host computer device for service, and the one or more constraints comprise one or more inequalities of the plurality of decision variables;
converting the one or more inequalities in the one or more constraints to one or more equations for the plurality of decision variables;
incorporating the one or more equations into the target to obtain a new target;
dividing the new target into a plurality of sub-targets;
generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables; and
sending instructions to the plurality of host computer devices to execute the plurality of resource requests according to the values of the plurality of decision variables.
17. The storage medium of claim 16, wherein converting the one or more inequalities into one or more equations for the plurality of decision variables comprises:
for an inequality of the one or more inequalities, adding an auxiliary variable to the left-hand side of the inequality to convert the inequality into an equation, wherein the left-hand side of the inequality comprises a product of a matrix and the plurality of decision variables.
18. The storage medium of claim 17, wherein,
each sub-target of the plurality of sub-targets comprises: one or more multipliers, the auxiliary variable, and a subset of the plurality of decision variables, and
each of the plurality of parallel processing tasks implements an iterative process comprising:
updating values of the subset of the plurality of decision variables based on the one or more multipliers and the auxiliary variable;
updating the auxiliary variable according to the updated values of the subset of the plurality of decision variables and the one or more multipliers; and
updating the one or more multipliers according to the updated values of the subset of the plurality of decision variables and the updated auxiliary variable.
19. The storage medium of claim 16, wherein generating a plurality of parallel processing tasks corresponding to the plurality of sub-targets to obtain values of the plurality of decision variables comprises:
generating an aggregation task to aggregate results of the plurality of parallel processing tasks to obtain the values of the plurality of decision variables.
20. The storage medium of claim 16, wherein incorporating the one or more equations into the target to obtain a new target comprises:
for each of the one or more equations, adding to the target a term comprising the squared difference between a left-hand side and a right-hand side of the equation, wherein the left-hand side of each of the equations comprises a product of a matrix and the plurality of decision variables and the right-hand side of each of the equations comprises a constant.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/092484 WO2022236501A1 (en) | 2021-05-08 | 2021-05-08 | Method and system for optimizing large-scale resource allocation with constraints |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116671086A (en) | 2023-08-29 |
Family
ID=84027842
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180083212.9A Pending CN116671086A (en) | 2021-05-08 | 2021-05-08 | Method and system for optimizing large-scale resource allocation with constraints |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116671086A (en) |
WO (1) | WO2022236501A1 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150006433A1 (en) * | 2013-03-15 | 2015-01-01 | C4Cast.Com, Inc. | Resource Allocation Based on Available Predictions |
CN110213363B (en) * | 2019-05-30 | 2020-12-22 | 华南理工大学 | Cloud resource dynamic allocation system and method based on software defined network |
WO2020143850A2 (en) * | 2020-04-13 | 2020-07-16 | Alipay (Hangzhou) Information Technology Co., Ltd. | Method and system for optimizing allocation of borrowing requests |
- 2021-05-08 CN CN202180083212.9A patent/CN116671086A/en active Pending
- 2021-05-08 WO PCT/CN2021/092484 patent/WO2022236501A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2022236501A1 (en) | 2022-11-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |