CN114581160B - Resource allocation method, distributed computing system and equipment - Google Patents

Resource allocation method, distributed computing system and equipment

Info

Publication number
CN114581160B
CN114581160B
Authority
CN
China
Prior art keywords
optimization
node
constraint
target
decision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210481627.8A
Other languages
Chinese (zh)
Other versions
CN114581160A (en)
Inventor
简道红
沈文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210481627.8A priority Critical patent/CN114581160B/en
Publication of CN114581160A publication Critical patent/CN114581160A/en
Application granted granted Critical
Publication of CN114581160B publication Critical patent/CN114581160B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0207Discounts or incentives, e.g. coupons or rebates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/06Asset management; Financial planning or analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Mathematical Physics (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Mathematics (AREA)
  • Technology Law (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Pure & Applied Mathematics (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of this specification provide a resource allocation method, a distributed computing system, and a device. When resources are allocated, a nonlinear target optimization model can be constructed based on the resource allocation optimization problem. In the process of solving the target optimization model iteratively, the gradient of each decision variable is determined based on the optimization results of the decision variables determined in the previous iteration, and the gradients are then used to convert the nonlinear target optimization model into a linear model for solving. Therefore, when the ADMM algorithm or a similar algorithm is used to solve a nonlinear optimization model constructed from a resource allocation problem, the model can be solved directly, without first being converted into a linear model from a business perspective, and the precision of the solved result is improved.

Description

Resource allocation method, distributed computing system and equipment
Technical Field
The embodiments of the present disclosure relate to the field of data processing technologies, and in particular, to a resource allocation method, a distributed computing system, and a device.
Background
For optimization problems in many business scenarios (such as resource allocation scenarios), an optimization model can be constructed and solved to obtain the optimal result of each decision variable in the optimization problem. In some business scenarios, the business problem involved is a nonlinear programming problem, so the constructed optimization model is a nonlinear optimization model. To solve such a nonlinear optimization model, a user is generally required to first convert it into an approximately linear optimization model from a business perspective and then solve it. Because the nonlinear optimization model is approximated from a business perspective, the optimization target differs from that of the original nonlinear optimization model, resulting in a loss of accuracy in the solution.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present specification provide a resource allocation method, a distributed computing system, and a device.
According to a first aspect of embodiments of the present specification, there is provided a resource allocation method for allocating a target number of resources to be allocated to a plurality of resource recipients if a constraint condition is satisfied, so as to maximize benefits obtained by the plurality of resource recipients using the allocated resources, the method including:
acquiring a constructed nonlinear target optimization model; the optimization goal of the target optimization model is to maximize the profit, variables in the target optimization model include decision variables and target variables introduced when the constraint condition is incorporated into the target optimization model, and the decision variables represent the number of resources allocated to each resource receiver;
iteratively executing the following steps until a preset condition is reached, and obtaining an optimization result of each decision variable in the target optimization model, so that a user performs resource allocation based on the obtained optimization result:
determining gradients corresponding to the decision variables and constraint errors corresponding to the constraint conditions based on the optimization results of the decision variables determined in the previous iteration, updating the target optimization model by using the optimization results of the target variables determined according to the constraint errors, converting the updated target optimization model into a linear model based on the gradients, and determining the optimization results of the decision variables in the linear model to serve as the optimization results of the decision variables in the current iteration.
According to a second aspect of embodiments of the present specification, there is provided a distributed computing system, the distributed computing system including a master node and a plurality of working nodes, the distributed computing system being configured to determine an optimization result of decision variables in a plurality of non-linear target optimization models constructed based on an original optimization problem, each target optimization model including a part of the decision variables of the original optimization problem and target variables introduced when constraints of the original optimization problem are incorporated in the target optimization model, each working node corresponding to one target optimization model;
the main node and the working node are used for iteratively executing the following steps to obtain an optimization result of each decision variable in the target optimization model:
the main node is used for acquiring the optimization result of each decision variable determined by each working node in the previous iteration, and, for each working node, forwarding to that working node the optimization results of the decision variables determined by the other working nodes; the main node is further used for determining a constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by each working node in the previous iteration, determining an optimization result of the target variable based on the constraint error, and sending the optimization result of the target variable to each working node;
the working node is used for receiving the optimization results of the decision variables determined by the other working nodes and sent by the main node, determining the gradient of the decision variables in the target optimization model corresponding to the working node based on the received optimization results of the decision variables, updating the target optimization model by using the received optimization results of the target variables, converting the updated target optimization model into a linear model by using the gradient, and determining the optimization results of the decision variables in the linear model as the optimization results of the decision variables in the current iteration.
A third aspect according to embodiments of the present specification provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of the first aspect when executing the program.
The beneficial effects of this application are as follows. When resources are allocated, a nonlinear target optimization model can be constructed based on the resource allocation optimization problem; the variables in the target optimization model include decision variables, which represent the quantity of resources allocated to each resource recipient, and target variables introduced when the constraint condition is incorporated into the target optimization model. When the decision variables in the target optimization model are solved iteratively, the gradient corresponding to each decision variable and the constraint error corresponding to each constraint condition can both be determined based on the optimization results of the decision variables determined in the previous iteration; the optimization result of the target variable is then determined from the constraint error, and the target optimization model is updated with it. The updated target optimization model can then be converted into a linear model based on the gradients, and the optimization results of the decision variables in the linear model are taken as the optimization results of the decision variables in the current iteration. These iteration steps are repeated until a preset condition is reached, at which point the iteration stops and the currently determined optimization results of the decision variables are returned to the user as the final solution, so that the user can allocate resources based on them. In this way, when solving the nonlinear target optimization model, each iteration determines the gradients of the decision variables from the optimization results of the previous iteration and uses them to convert the nonlinear target optimization model into a linear model for solving. Therefore, when the ADMM algorithm or a similar algorithm is used to solve the nonlinear optimization model, the model is solved directly, without first being converted into a linear model from a business perspective, and the precision of the solved result is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the embodiments of the specification and, together with the description, serve to explain the principles of the embodiments of the specification.
Fig. 1 is a schematic diagram illustrating a resource allocation method according to an exemplary embodiment of the present disclosure.
FIG. 2 is a schematic diagram of a distributed computing system shown in an exemplary embodiment of the present description.
FIG. 3 is a schematic diagram of a distributed computing system shown in an exemplary embodiment of the present description.
FIG. 4 is a logical block diagram illustrating the architecture of a computing device in accordance with an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the examples of this specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the embodiments of the specification, as detailed in the appended claims.
The terminology used in the embodiments of the present specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the present specification. As used in the specification examples and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present specification to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the embodiments herein. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
For optimization problems in many business scenarios, an optimization model can be constructed and solved to obtain the optimal result of each decision variable. For example, in a resource allocation scenario, limited resources generally need to be allocated to a plurality of resource recipients, and each resource recipient can create profit by using the allocated resources; therefore, a resource allocation manner needs to be determined such that, under the corresponding limiting conditions, the sum of the profits created by the resource recipients based on the allocated resources is the highest. For this resource allocation problem, the resources allocated to each resource recipient can be taken as decision variables and maximizing the total profit as the optimization target to construct an optimization model, with the limiting conditions that must be followed during resource allocation serving as the constraint conditions of the optimization model. The optimization model can then be solved to determine the optimization result of each decision variable.
In some scenarios, the original optimization model constructed by a user from a business problem involves a large amount of data and may contain hundreds of millions of decision variables, so a conventional solver either cannot solve it or solves it very slowly. In such cases, the optimization goal of the original optimization model can be split into multiple sub-goals by a specific algorithm, and the sub-goals are then solved in parallel. This accelerates the solution of complex optimization models with many decision variables and improves processing efficiency. When the original optimization model is solved with such algorithms, it is generally first converted into a target optimization model of a specified form, which is then solved by the algorithm.
Take the ADMM (Alternating Direction Method of Multipliers) algorithm as an example. The ADMM algorithm can be used to solve decomposable convex optimization problems and is suitable for large-scale optimization: the original optimization problem is equivalently decomposed into several sub-problems that the ADMM algorithm can handle, the sub-problems are solved in parallel, and their solutions are finally coordinated to obtain a global solution to the original optimization problem. The model that the ADMM algorithm can solve is usually of a specified form. For example, an original optimization model constructed from a business problem generally carries constraints, so it can first be converted into an equivalent unconstrained target optimization model, for example one represented by an augmented Lagrangian function. The target optimization model is then solved with the ADMM algorithm to obtain the optimization result of each decision variable in the original optimization model.
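As a hedged illustration only (the symbols f, g, x, λ and ρ are generic and are not taken from the patent's own formulas), an equality-constrained original optimization model can be folded into an unconstrained target optimization model of augmented-Lagrangian form as follows:

```latex
% Generic augmented-Lagrangian sketch; f, g, \lambda and \rho are illustrative.
\min_{x}\; f(x) \quad \text{s.t.}\quad g(x)=0
\qquad\Longrightarrow\qquad
\min_{x}\; L_{\rho}(x,\lambda) \;=\; f(x) \;+\; \lambda\, g(x) \;+\; \tfrac{\rho}{2}\, g(x)^{2}
```

Here λ is the additional variable introduced when the constraint is incorporated (the target variable discussed below), and ρ is a penalty coefficient.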
In general, when a target optimization model is constructed from an original optimization model with constraints, the constraints can be incorporated into the original optimization model through additional variables, yielding a target optimization model without explicit constraints; that is, in the process of folding the constraints into the target optimization model, variables other than the decision variables of the original optimization problem are introduced, hereinafter referred to as target variables. The variables to be optimized in the target optimization model therefore comprise the decision variables of the original optimization problem and the newly introduced target variables, and both can then be solved through multiple rounds of iteration.
In general, if the original optimization problem is a linear programming problem, the target optimization model constructed from it is also linear, and a linear model can be solved directly with the ADMM algorithm or an algorithm with similar functions. In many scenarios, however, for example in some resource allocation scenarios, the original optimization problem is a nonlinear programming problem, the constructed target optimization model is also nonlinear, and such a nonlinear model cannot be solved directly with the ADMM algorithm or an algorithm with similar functions.
Therefore, when a target optimization model is constructed for such a nonlinear programming problem and solved with, for example, the ADMM algorithm or an algorithm with similar functions, a user is usually required to convert the nonlinear target optimization model into an approximate linear model from a business perspective and then solve it. Since the nonlinear target optimization model is approximated from a business perspective, the optimization target of the approximated linear model differs from the original optimization target, resulting in a loss of accuracy after solution.
For example, in many resource allocation scenarios (e.g., allocating limited investment amount to each financing product to maximize investment profit, or allocating limited assets to each project by an enterprise to maximize accumulated profit obtained by each project, etc.), the target optimization model is usually constructed as a non-linear model, and then the target optimization model is usually converted into a linear model from a business perspective, and then the linear model is solved by using an ADMM algorithm or an algorithm with similar functions, which may result in a decrease in accuracy of a final solution, so that the finally determined resource allocation scheme is not accurate enough.
In view of the above problems in the resource allocation scenario, an embodiment of the present specification provides a resource allocation method, where the method is used to allocate a target number of resources to be allocated to multiple resource recipients when a constraint condition is satisfied, so that a benefit obtained by the multiple resource recipients using the allocated resources is maximized. Fig. 1 is a schematic diagram of the resource allocation method.
In some embodiments, the resource to be allocated may be the amount to be invested by the user, the resource recipients may be financial products (e.g., funds, stocks, etc.), and the decision variable may be the amount allocated to each financial product. The optimization goal may be to maximize the accumulated profit of the financial products, and the constraint conditions may be that the sum of the amounts allocated to the financial products equals the total amount to be invested, that the risk incurred by the user in investing in the financial products does not exceed the risk level the user can bear, and so on.
In some embodiments, the resource to be allocated may be total assets of the enterprise, the resource receiver may be each project of the enterprise, the decision variable may be an investment amount allocated to each project, the optimization goal is to maximize a total profit amount of each project, the constraint condition may be that a sum of the investment amounts allocated to each project is equal to the total assets of the enterprise, and an association relationship between each project, and the like.
Or, in some embodiments, the resource to be allocated may be the total coupon amount for a certain marketing campaign, the resource recipients may be the user accounts, and the decision variable may be the coupon amount allocated to each user account. The optimization goal is to maximize the users' coupon conversion rate (i.e., the proportion of the allocated coupon amount that users actually use), and the constraint conditions include that the sum of the coupon amounts allocated to the user accounts equals the total amount, as well as other constraints of the marketing campaign.
The optimization problem involved in each resource allocation scenario is a nonlinear programming problem.
When determining a specific resource allocation scheme, a target optimization model may be constructed based on the resource allocation optimization problem. The target optimization model is a nonlinear model whose optimization goal is to maximize the gains obtained by the resource recipients from the allocated resources. In order to decompose the target optimization model into several submodels that can be solved in parallel, the constraint conditions of the original resource allocation optimization problem are incorporated into the target optimization model, i.e., the target optimization model carries no explicit constraint conditions. The variables to be solved in the target optimization model therefore fall into two types: decision variables of the original resource allocation optimization problem, which represent the quantity of resources allocated to each resource recipient, and variables introduced when the constraint conditions are incorporated into the target optimization model, hereinafter collectively referred to as target variables.
After the objective optimization model is obtained, the objective optimization model can be solved to obtain the optimization results of the decision variables, namely the quantity of resources allocated to each resource receiver, and then the optimization results are displayed to the user, so that the user can allocate the resources based on the optimization results of the decision variables.
When determining the optimization result of each decision variable in each target model, the following steps can be iteratively executed:
firstly, the gradient corresponding to each decision variable can be determined based on the optimization result of each decision variable determined in the previous iteration, the constraint error corresponding to each constraint condition can be determined based on the optimization result of each decision variable determined in the previous iteration, then the optimization result of the target variable is determined according to the constraint error, and the original numerical value of the target variable in the target optimization model is replaced by the optimization result of the target variable, so that the target optimization model is updated. The updated target optimization model may then be converted into a linear model based on the gradient, and the optimization result of each decision variable in the linear model is determined as the optimization result of each decision variable in the current iteration. And repeatedly executing the steps until a preset condition is reached, stopping iteration, taking the optimization result of each currently determined decision variable as a final solution, and returning the final solution to the user. The gradient of each decision variable is determined to convert the nonlinear target optimization model into a linear model by using the gradient, after the gradient of each decision variable is obtained, the gradient can be used as a coefficient of each decision variable to obtain a decision variable term corresponding to each decision variable again, and then a constraint term corresponding to a constraint condition is added to obtain the converted linear model.
In the process of solving the nonlinear target optimization model constructed based on the resource allocation optimization problem, each iteration determines the gradient of each decision variable based on the optimization result of each decision variable determined in the previous iteration, and then the nonlinear target optimization model is converted into a linear model by using the gradient to be solved. Therefore, when the ADMM algorithm or the similar algorithm is used for solving the nonlinear optimization model, the nonlinear optimization model is directly solved without being converted into the linear model from a business angle before being solved, and the precision of the solved result can be improved.
Various manners may be adopted to determine the gradient corresponding to each decision variable. For example, in some embodiments, when determining the gradient of each decision variable based on the optimization results of the decision variables determined in the previous iteration, the gradient may be obtained by taking the partial derivative with respect to that decision variable: for each decision variable, the other decision variables in the target optimization model are treated as constants and the partial derivative is taken with respect to the decision variable to obtain an objective function; the optimization results of the decision variables determined in the previous iteration are then substituted into this objective function to obtain the gradient corresponding to the decision variable. Of course, this is only one way to calculate the gradients; for more complex cases other methods may be adopted, and existing methods for calculating the gradients of decision variables may be consulted, which are not described here again.
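The partial-derivative computation can be sketched as follows with a made-up nonlinear objective (the objective, the previous-iteration values and the use of SymPy are assumptions for illustration only):

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3")
f = sp.log(1 + x1) * x2 + sp.sqrt(x3)        # assumed nonlinear objective
prev = {x1: 2.0, x2: 1.0, x3: 4.0}           # optimization results of the previous iteration

# For each decision variable, treat the others as constants, differentiate,
# then substitute the previous-iteration values to obtain the gradient.
p1, p2, p3 = (float(sp.diff(f, v).subs(prev)) for v in (x1, x2, x3))
# p1, p2, p3 then serve as the coefficients of x1, x2, x3 in the linear model.
```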
As a concrete illustration, assume that the target optimization model (whose concrete expressions appear only as equation images in the original publication and are therefore shown here symbolically) has the form

min over x1, x2, x3 of L(x1, x2, x3, λ) = f(x1, x2, x3) + λ·c(x1, x2, x3) + (ρ/2)·c(x1, x2, x3)²,

where x1, x2 and x3 are the decision variables, c(x1, x2, x3) = 0 is the constraint condition, λ is the target variable, and ρ is a penalty coefficient.

For the decision variable x1, the corresponding gradient can be determined by treating x2 and x3 as constants and taking the partial derivative with respect to x1, which yields an objective function g1(x1, x2, x3); substituting the values of x1, x2 and x3 determined in the previous iteration into g1 gives the gradient p1. The gradients p2 and p3 corresponding to x2 and x3 can be obtained with a similar method.

Similarly, the values of x1, x2 and x3 determined in the previous iteration can be substituted into the constraint c(x1, x2, x3), and the resulting value is taken as the constraint error. The value of the target variable λ can then be determined based on this constraint error.

The determined value of λ can then replace its original value (i.e., the value determined in the previous iteration), thereby updating the target optimization model, and the gradients determined from x1, x2 and x3 can be used to convert the nonlinear target optimization model into a linear optimization model: the gradients serve as the coefficients of the decision variables, the term corresponding to each decision variable is reconstructed, and the original constraint term is then added, so that the constructed linear model can be expressed as

min over x1, x2, x3 of p1·x1 + p2·x2 + p3·x3 + λ·c(x1, x2, x3).
then, the linear model can be solved to obtain the optimization result of each decision variable in the current round.
This iteration process is repeated continuously until the preset condition is reached, at which point the iteration stops, the final solution of each decision variable is obtained, and the final solution is returned to the user.
In some embodiments, the iteration may be stopped when a preset condition is reached, where the preset condition may be that the constraint error is smaller than a preset threshold, that is, when the constraint error is smaller than the preset threshold, the iteration flow may be stopped. In some embodiments, the iteration process may also be stopped when the number of iterations reaches a preset number. Or, in some embodiments, the two conditions may be simultaneously satisfied, and may be specifically set according to actual requirements.
In some embodiments, the constraint is a linear constraint. The constraint condition may be an equality constraint condition or an inequality constraint condition.
In some scenarios, the original resource allocation problem may involve a large amount of data and many decision variables; for example, in the investment scenario, the number of financial products involved is large, so the number of decision variables is also large. In this case, solving with a single solver is slow. To improve processing efficiency, the business data related to the original resource allocation problem (such as the revenue data and risk data of the financial products) can be divided into multiple data shards, each containing a portion of the decision variables; a target optimization model can then be constructed for each data shard, and the optimization results of all decision variables in the original resource allocation problem are determined through these target optimization models. To further increase processing speed, the solution of the target optimization models may be carried out by a distributed computing system: for example, each node of the distributed computing system solves one target optimization model, and the solution results of the target optimization models are then integrated to obtain the final result.
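A minimal sketch of this sharding step (the record layout, the round-robin split and the shard count are assumptions made for illustration):

```python
# Split the business data (one record per decision variable, e.g. per financial
# product) into shards; each shard is used to build one target optimization
# model and is assigned to one working node.
def make_shards(records, num_workers):
    shards = [[] for _ in range(num_workers)]
    for i, record in enumerate(records):
        shards[i % num_workers].append(record)   # round-robin assignment
    return shards

# Example: 100 decision variables spread over 5 working nodes, 20 per node.
records = [{"var_id": i, "revenue_data": None, "risk_data": None} for i in range(100)]
shards = make_shards(records, num_workers=5)
assert [len(s) for s in shards] == [20] * 5
```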
Therefore, in some embodiments, the objective optimization model may be a plurality of models, each objective optimization model including a portion of the decision variables to be solved in the original resource allocation problem, and the resource allocation method may be performed by a distributed computing system including a master node and worker nodes, each worker node for solving one objective optimization model. The master node and the worker node may run on a physical machine or may run on a virtual machine. The nodes may run on different physical machines or on the same physical machine.
In the process of model solution, determining the gradient of each decision variable requires the optimization results of all decision variables from the previous iteration. The master node may therefore be configured to collect the optimization results of the decision variables determined by each working node in the previous iteration and, for each working node, forward to it the optimization results of the decision variables determined by the other working nodes, so that the working node can determine the gradient of each decision variable in its target optimization model based on all the received decision variables. Meanwhile, the master node is also used to determine the constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by each working node in the previous iteration, determine the optimization result of the target variable from the constraint error, and send it to each working node;
each working node is used for receiving the optimization results of the decision variables determined by other working nodes sent by the main node, determining the gradient of the decision variables in the target optimization model corresponding to the working node based on the received optimization results of the decision variables, updating the target optimization model by using the received optimization results of the target variables, converting the updated target optimization model into a linear model by using the determined gradient, and determining the optimization results of the decision variables in the linear model as the optimization results of the decision variables in the current iteration.
In the process of solving the target optimization models, the solution of the decision variables of the original optimization problem is distributed across multiple working nodes: each working node solves one target optimization model and computes its gradients, while the master node collects the optimization results of the decision variables from the working nodes, forwards them, and determines the optimization result of the target variable. The gradients of the decision variables and the optimization result of the target variable can thus be determined on different nodes in parallel, which can greatly improve processing efficiency.
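The per-iteration message flow between the master node and the working nodes can be sketched as follows; the in-process worker objects, method names and the sum-type constraint are assumptions standing in for a real RPC or message-queue transport.

```python
def master_step(workers, total_budget, lam_prev, rho=1.0):
    # Gather the previous-iteration optimization results from every worker.
    results = {w.node_id: w.last_decision_values() for w in workers}
    # Constraint error over all decision variables, then the target variable.
    all_values = [v for values in results.values() for v in values]
    err = sum(all_values) - total_budget       # assumes a sum-type constraint
    lam = lam_prev + rho * err
    # Scatter: each worker receives the other workers' results plus lam,
    # so it can compute its gradients and update its target optimization model.
    for w in workers:
        others = {nid: vals for nid, vals in results.items() if nid != w.node_id}
        w.receive(others, lam)
    return lam, err
```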
Of course, in some scenarios, after obtaining the optimization results of the decision variables determined by each working node in the previous iteration, the main node may itself determine the gradient of each decision variable and then send to each working node the gradients of the decision variables in that node's target optimization model. Compared with this approach, having the working nodes calculate the gradients of their decision variables in parallel is more efficient.
In some embodiments, when the master node determines the constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by each working node in the previous iteration, there are two ways. In the first way, after receiving the optimization results of the decision variables determined by each working node in the previous iteration, the master node substitutes these optimization results into the constraint condition to determine the constraint error. For example, assume the constraint condition is x1 + x2 + … + x100 = C. If there are 5 working nodes and the target optimization model on each working node contains 20 decision variables, working node 1 can send the values of x1 to x20 determined in the previous iteration to the master node, and the remaining working nodes do likewise. After receiving the decision variables x1 to x100 sent by the 5 working nodes, the master node can substitute them into the above constraint condition and calculate x1 + x2 + … + x100 − C as the constraint error.
In the other way, after obtaining the optimization results of the decision variables in its target optimization model from the previous iteration, each working node determines a constraint value based on those optimization results and the constraint condition and then sends the constraint value to the master node; the master node accumulates the constraint values sent by the working nodes to obtain an accumulated result and then determines the constraint error based on the accumulated result and the constraint condition. For example, working node 1 can determine the values of its decision variables x1 to x20 from the previous iteration, compute the corresponding constraint value x1 + x2 + … + x20 according to the constraint condition, and send the calculated constraint value to the master node; the other working nodes do the same. After receiving the constraint values sent by the working nodes, the master node can accumulate them to obtain x1 + x2 + … + x100, and then calculate x1 + x2 + … + x100 − C as the constraint error.
In the second mode, each working node determines a constraint value based on the optimization results of its decision variables and the constraint condition and then sends only that constraint value to the main node, so the amount of data transmitted between the main node and the working nodes can be reduced.
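A small sketch of the second mode, assuming a sum-type constraint with budget C; the function names are illustrative:

```python
def worker_constraint_value(local_decision_values):
    # e.g. x1 + ... + x20 on working node 1; only this single number is sent.
    return sum(local_decision_values)

def master_constraint_error(partial_sums, total_budget):
    # Accumulate the per-node constraint values and compare with the constraint.
    return sum(partial_sums) - total_budget

# Example: 5 working nodes with 20 decision variables each, budget C = 100.
partial = [worker_constraint_value([1.0] * 20) for _ in range(5)]   # five values of 20.0
err = master_constraint_error(partial, total_budget=100.0)          # 0.0 here
```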
In some resource allocation scenarios, the amount of related business data is large; for example, the number of resource recipients involved is large, so there are many decision variables, and processing on a single computing node is slow. To improve the processing speed, the embodiments of the present specification further provide a distributed computing system for solving a nonlinear programming problem.
As shown in fig. 2, the distributed computing system includes a master node and a plurality of working nodes, and is configured to determine an optimization result of decision variables in a plurality of nonlinear target optimization models constructed based on an original optimization problem, where each target optimization model includes part of the decision variables of the original optimization problem and target variables introduced when constraint conditions of the original optimization problem are coupled to the target optimization model, and each working node corresponds to one target optimization model;
the main node and the working node are used for iteratively executing the following steps to obtain an optimization result of each decision variable in the target optimization model:
the main node is used for acquiring the optimization result of each decision variable determined by each working node in the previous iteration, and, for each working node, forwarding to that working node the optimization results of the decision variables determined by the other working nodes; the main node is also used for determining the constraint error corresponding to each constraint condition based on the optimization results of the decision variables determined by each working node in the previous iteration, determining the optimization result of the target variable based on the constraint error, and sending the optimization result to each working node.
The working nodes are used for receiving the optimization results of the decision variables determined by other working nodes sent by the main node, determining the gradient of the decision variables in the target optimization model corresponding to the working nodes based on the received optimization results of the decision variables, updating the target optimization model by using the received optimization results of the target variables, converting the updated target optimization model into a linear model by using the gradient, and determining the optimization results of the decision variables in the linear model as the optimization results of the decision variables in the current iteration.
The specific implementation details of the distributed computing system when solving the target optimization model refer to the description in the above method, and are not described herein again.
In some embodiments, the resource to be allocated may be the amount to be invested by the user, the resource recipients may be financial products (e.g., funds, stocks, etc.), and the decision variable may be the amount allocated to each financial product. The optimization goal may be to maximize the accumulated profit of the financial products, and the constraints may be that the sum of the amounts allocated to the financial products equals the total amount to be invested, that the risk incurred by the user in investing in the financial products does not exceed the risk level the user can bear, and so on.
In some embodiments, when the master node is configured to determine a constraint error corresponding to each constraint condition of the nonlinear programming problem based on an optimization result of each decision variable determined by each working node in a previous iteration, the master node may determine the constraint error according to the optimization result of each decision variable in the previous iteration obtained from each working node and the constraint condition.
Or obtaining a constraint value from each working node, accumulating the constraint values obtained from the working nodes, and determining the constraint error based on the accumulated result and the constraint condition, wherein the constraint value is determined based on the optimization result of each decision variable determined in the previous iteration and the constraint condition. Specifically, reference may be made to the description in the above embodiments, which are not repeated herein.
In some embodiments, when constructing the target optimization model, the constraint conditions of the original optimization problem may be incorporated into the original optimization model corresponding to the original optimization problem by using dual variables, to obtain the target optimization model; thus, the target variable may be a dual variable. If the constraints in the original optimization problem include both equality constraints and inequality constraints, each can be incorporated into the original optimization model with its own dual variable, i.e., the target variables may include two or more dual variables.
In some embodiments, in addition to incorporating the constraints into the original optimization model, a quadratic penalty term containing a specified variable may be added to the original optimization model when constructing the target optimization model. Thus, the target variable may also be the specified variable in the quadratic penalty term.
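Putting the two preceding paragraphs together, one possible (purely illustrative) form of such a target optimization model, with an equality constraint g(x) = 0, an inequality constraint h(x) ≤ 0, dual variables λ and μ, and a quadratic penalty coefficient ρ, is:

```latex
% Illustrative form only; not the patent's exact model.
\min_{x}\; f(x) \;+\; \lambda\, g(x) \;+\; \mu\, h(x)
        \;+\; \frac{\rho}{2}\Big( g(x)^{2} + \max\!\big(0,\, h(x)\big)^{2} \Big),
\qquad \mu \ge 0
```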
In some embodiments, the constraint conditions of the original optimization problem may include an equality constraint condition and an inequality constraint condition. When determining the constraint error based on the optimization results of the decision variables determined by each working node in the previous iteration and the constraint conditions of the original optimization problem, the master node may determine an equality constraint error based on those optimization results and the equality constraint condition. For example, assuming the equality constraint condition is g(x1, …, xn) = 0, the determined values of the decision variables can be substituted into the constraint condition and the resulting value g(x1, …, xn) is taken as the constraint error.
In addition, an inequality constraint error can also be determined based on the optimization results of the decision variables determined by each working node in the previous iteration and the inequality constraint condition. For example, assuming the inequality constraint condition is h(x1, …, xn) ≤ 0, the determined values of the decision variables can be substituted into the constraint condition and the resulting value h(x1, …, xn) is taken as the constraint error.
Because the target optimization model requires multiple rounds of iteration to solve, the amount of computation involved in the iterative solution process is large, and the solution is slow under a conventional master-slave distributed computing framework.
Therefore, in some embodiments a new distributed computing framework is provided: a node with higher computing power is added to the original master-slave distributed computing framework, and during the iterative solution of the target optimization model this node handles the tasks that involve more computation and take longer to process, thereby improving processing efficiency.
For example, as shown in FIG. 3, a master node in the distributed computing system may include a first node and a second node. The first node is used for determining whether iteration continues and executing some scheduling work based on the constraint error, and the second node is used for collecting the optimization results of all decision variables from all the working nodes in the iteration process, forwarding the optimization results to other nodes and calculating the constraint error.
For example, after receiving indication information indicating that the iteration task is not terminated, the second node may be configured to forward to each working node the optimization results of the decision variables determined in the previous iteration by the other working nodes, determine the constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by each working node in the previous iteration, determine the optimization result of the target variable based on the constraint error, send the optimization result of the target variable to the working nodes, and send the constraint error to the first node.
The first node is used for determining whether to terminate the iterative task based on the constraint error and informing the second node.
In some embodiments, in addition to calculating the constraint error and the optimization result of the target variable, the second node is further configured to record whether the first node and the working nodes have completed the current round of the iterative task. For example, after receiving the current round's optimization results of the decision variables, or the corresponding constraint values, from a working node, the second node may mark that working node's state as completed. Likewise, after receiving the indication information from the first node on whether to stop the iterative process, the second node can mark the working state of the first node as completed. In this way, the working nodes can determine from the state information recorded on the second node whether the first node has finished the current iteration task, and the first node can likewise determine whether the working nodes have finished theirs.
In some embodiments, after the first node notifies the second node that the iterative task is not terminated, the second node and the working node may continue to perform a next iteration, and at this time, the first node may perform some tasks unrelated to the iterative task while waiting for a constraint error of the next iteration. For example, the first node may record the constraint error determined by each iteration obtained from the second node in a report, and display the report to a user, or perform some other scheduling task.
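A compact sketch of this division of work between the first node and the second node; the class and state names, the error threshold and the maximum round count are assumptions for illustration.

```python
class SecondNode:
    """Aggregates results, computes the constraint error and records states."""
    def __init__(self, num_workers):
        self.worker_done = [False] * num_workers
        self.first_node_done = False

    def on_worker_result(self, worker_id):
        self.worker_done[worker_id] = True      # this worker finished the round

    def on_first_node_decision(self):
        self.first_node_done = True             # first node sent stop/continue info

    def round_finished(self):
        return all(self.worker_done)


class FirstNode:
    """Only decides whether the iterative task should terminate."""
    def __init__(self, threshold=1e-6, max_rounds=200):
        self.threshold, self.max_rounds = threshold, max_rounds

    def should_stop(self, constraint_error, round_index):
        return abs(constraint_error) < self.threshold or round_index >= self.max_rounds
```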
The various technical features in the above embodiments can be combined arbitrarily as long as there is no conflict or contradiction between them; such combinations are not described one by one due to space limitations, but any combination of these technical features also falls within the scope disclosed in the present specification.
Accordingly, the present specification further provides a computer device, as shown in fig. 4, the computer device includes a processor 41, a memory 42, and a computer program stored in the memory 42 and executable by the processor 41, and when the computer program is executed, the computer program implements the resource allocation method in any of the above method embodiments.
Accordingly, the embodiments of the present specification further provide a computer storage medium, in which a program is stored, and when the program is executed by a processor, the method for allocating resources in any of the above embodiments is implemented.
Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. The embodiments of the present specification are intended to cover any variations, uses, or adaptations of the embodiments of the specification following, in general, the principles of the embodiments of the specification and including such departures from the present disclosure as come within known or customary practice in the art to which the embodiments of the specification pertain. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the embodiments being indicated by the following claims.
It is to be understood that the embodiments of the present specification are not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the embodiments of the present specification is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A distributed computing system, comprising a master node and a plurality of working nodes, wherein the distributed computing system is configured to determine optimization results of decision variables in a plurality of nonlinear target optimization models constructed based on an original optimization problem, each target optimization model comprises a part of the decision variables of the original optimization problem and target variables introduced when the constraint conditions of the original optimization problem are incorporated into the target optimization model, and each working node corresponds to one target optimization model;
the master node and the working nodes are configured to iteratively execute the following steps until a preset condition is reached, so as to obtain an optimization result of each decision variable in the target optimization models:
the main node is used for acquiring the optimization result of each decision variable determined by each working node in the previous iteration, and forwarding the optimization results of the decision variables determined by other working nodes except the working node to the working node aiming at each working node; the system comprises a plurality of working nodes, a constraint condition determining unit, a target variable determining unit, a decision variable determining unit and a target variable determining unit, wherein the decision variable determining unit is used for determining a constraint error corresponding to the constraint condition based on an optimization result of each decision variable determined by each working node in a previous iteration, determining an optimization result of the target variable based on the constraint error, and sending the optimization result to each working node;
the working node is used for receiving the optimization results of the decision variables determined by the other working nodes sent by the main node, determining the gradient of the decision variables in the target optimization model corresponding to the working node based on the received optimization results of the decision variables, updating the target optimization model by using the received optimization results of the target variables, converting the updated target optimization model into a linear model by using the gradient, and determining the optimization results of the decision variables in the linear model as the optimization results of the decision variables in the current iteration.
2. The distributed computing system according to claim 1, wherein the working node, when determining the gradient of a decision variable in the target optimization model corresponding to the working node based on the received optimization results of the decision variables, is specifically configured to:
for each decision variable, take the partial derivative of the objective of the target optimization model with respect to that decision variable to obtain a derivative function; and
substitute the optimization results of the decision variables determined in the previous iteration into the derivative function to obtain the gradient corresponding to that decision variable.
3. The distributed computing system of claim 1, wherein the preset condition comprises: the constraint error being smaller than a preset threshold; and/or the number of iterations reaching a preset number.
4. The distributed computing system of claim 1, wherein the constraint condition is a linear constraint.
5. The distributed computing system of claim 1, wherein the original optimization problem comprises allocating a target amount of resources to be allocated among a plurality of resource recipients so that the revenue obtained by the plurality of resource recipients using the allocated resources is maximized while a constraint condition is satisfied, wherein the resources to be allocated are an amount to be invested by a user, the resource recipients comprise financial products, and the constraint condition comprises: the sum of the amounts allocated to the respective financial products is equal to the total amount to be invested, and the risk incurred by investing in the respective financial products does not exceed the risk level of the user.
6. The distributed computing system of claim 1, wherein the master node, when determining the constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by the working nodes in the previous iteration, is configured to:
determine the constraint error based on the constraint condition and the optimization results of the decision variables determined in the previous iteration, obtained from the working nodes; or
obtain a constraint value from each working node, the constraint value being determined based on the optimization results of the decision variables determined in the previous iteration and the constraint condition, accumulate the constraint values obtained from the working nodes, and determine the constraint error based on the accumulated result and the constraint condition.
7. The distributed computing system of claim 1, wherein the master node comprises a first node and a second node, and the first node and the second node are configured to iteratively execute the following steps:
the second node is configured to, after receiving indication information sent by the first node indicating that the iteration task has not terminated, obtain the optimization result of each decision variable determined by each working node in the previous iteration, forward to each working node the optimization results of the decision variables determined by the other working nodes, determine the constraint error corresponding to the constraint condition based on the optimization results of the decision variables determined by the working nodes in the previous iteration, and determine the optimization result of the target variable based on the constraint error; send the gradient and the optimization result of the target variable to the working nodes, and send the constraint error to the first node;
the first node is configured to determine, based on the constraint error, whether to terminate the iteration task, and to notify the second node.
8. The distributed computing system of claim 7, wherein the first node is further configured to perform other tasks unrelated to the iteration task after notifying the second node that the iteration task has not terminated.
9. The distributed computing system of claim 1, wherein the target optimization model is obtained by:
coupling the constraint condition of the original optimization problem into an original optimization model corresponding to the original optimization problem by using a dual variable, so as to construct the target optimization model;
wherein the target variable comprises the dual variable.
10. The distributed computing system of claim 1, wherein the target optimization model further comprises a quadratic penalty term, the quadratic penalty term comprises a specified variable, and the target variable further comprises the specified variable.
11. The distributed computing system of claim 1, wherein the constraint condition comprises an equality constraint condition and an inequality constraint condition, and the master node is configured to: determine an equality constraint error based on the equality constraint condition and the optimization results of the decision variables determined by the working nodes in the previous iteration;
and determine an inequality constraint error based on the inequality constraint condition and the optimization results of the decision variables determined by the working nodes in the previous iteration.
12. The distributed computing system of claim 1, wherein the working node is further configured to perform the following steps:
acquiring a processing request submitted by a user, wherein the processing request comprises an original optimization model constructed based on the original optimization problem, a constraint condition corresponding to the original optimization model, and data fragments, the data fragments being data related to the part of the decision variables of the original optimization problem; and
constructing the target optimization model based on the original optimization model, the data fragments, and the constraint condition, wherein the optimization objective of the target optimization model is equivalent to the optimization objective of the original optimization model, and the target optimization model can be decomposed into a plurality of submodels that can be solved in parallel.
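Claim 1 describes, in effect, a dual-decomposition loop: the master node measures how far the gathered decision-variable blocks are from satisfying the shared constraint, updates the coupling (dual/target) variable from that constraint error, and broadcasts it, while each working node linearizes its nonlinear sub-objective at the previous iterate using the gradient and re-optimizes only its own block. The following minimal, single-process sketch illustrates that loop on a toy allocation problem in the spirit of claim 5; the class names, problem data, step sizes, and stopping threshold are all assumptions made for illustration, the risk constraint of claim 5 is omitted, and the working-node step takes a damped, projected move along the linearized model rather than solving the resulting linear model exactly, so it should not be read as the patented implementation.

# Minimal single-process sketch of the claimed master/worker iteration.
# All names, problem data, and parameters below are illustrative assumptions.
import numpy as np

# Toy original problem (in the spirit of claim 5, risk constraint omitted):
#   maximize  sum_i c_i * log(1 + x_i)          (nonlinear revenue per product)
#   subject to  sum_i x_i = B,  0 <= x_i <= u_i
rng = np.random.default_rng(0)
n_workers = 4
c = rng.uniform(1.0, 3.0, n_workers)   # revenue coefficients, one block per worker
u = np.full(n_workers, 5.0)            # per-block upper bounds
B = 10.0                               # total amount to allocate


class Worker:
    """Holds one block of decision variables and its nonlinear sub-objective."""

    def __init__(self, idx):
        self.idx = idx
        self.x = 0.0                   # optimization result of this block

    def step(self, all_x, lam, alpha=0.2):
        # Local target optimization model (constraint coupled in through the
        # dual "target" variable lam):  c_i * log(1 + x_i) - lam * x_i.
        # Gradient evaluated at the previous iterate; in this separable toy
        # only the local block enters it, whereas the blocks forwarded by the
        # master would matter if the sub-objectives were coupled.
        g = c[self.idx] / (1.0 + all_x[self.idx]) - lam
        # Damped step along the linearized model, projected onto the bounds
        # (a stabilization choice for this sketch; the claim solves the
        # resulting linear model directly).
        self.x = float(np.clip(all_x[self.idx] + alpha * g, 0.0, u[self.idx]))
        return self.x


class Master:
    """Gathers block results, measures the constraint error, updates the dual."""

    def __init__(self):
        self.lam = 0.0                 # dual variable for the budget constraint

    def update(self, all_x, eta=0.02):
        err = float(np.sum(all_x)) - B  # constraint error of sum(x) = B
        self.lam += eta * err           # raise lam when over budget, lower when under
        return err, self.lam


workers = [Worker(i) for i in range(n_workers)]
master = Master()
x = np.array([w.x for w in workers])
err = float("inf")

for t in range(3000):
    # Master: constraint error and dual update, then broadcast the dual
    # variable together with every block's previous result (the forwarding step).
    err, lam = master.update(x)
    if abs(err) < 1e-2:                # preset stopping condition (cf. claim 3)
        break
    # Working nodes: each refreshes its own block from the broadcast data.
    x = np.array([w.step(x, lam) for w in workers])

print("allocation:", np.round(x, 3), "sum:", round(float(np.sum(x)), 3))
print("dual variable:", round(master.lam, 4), "constraint error:", round(err, 4))

Running this sketch, the constraint error shrinks as the dual variable settles near the marginal revenue shared by all blocks; that single coordinating quantity is what allows otherwise independent block solves to satisfy the shared constraint, which is the division of labor between the master node and the working nodes that the claims recite.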
CN202210481627.8A 2022-05-05 2022-05-05 Resource allocation method, distributed computing system and equipment Active CN114581160B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210481627.8A CN114581160B (en) 2022-05-05 2022-05-05 Resource allocation method, distributed computing system and equipment

Publications (2)

Publication Number Publication Date
CN114581160A (en) 2022-06-03
CN114581160B (en) 2022-09-02

Family

ID=81779249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210481627.8A Active CN114581160B (en) 2022-05-05 2022-05-05 Resource allocation method, distributed computing system and equipment

Country Status (1)

Country Link
CN (1) CN114581160B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11004097B2 (en) * 2016-06-30 2021-05-11 International Business Machines Corporation Revenue prediction for a sales pipeline using optimized weights
CN109905888B (en) * 2019-03-21 2021-09-07 东南大学 Joint optimization migration decision and resource allocation method in mobile edge calculation
CN111163519B (en) * 2019-12-27 2023-04-28 东北大学秦皇岛分校 Wireless body area network resource allocation and task offloading method with maximized system benefit
CN113556764B (en) * 2021-07-30 2022-05-31 云南大学 Method and system for determining calculation rate based on mobile edge calculation network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230004A (en) * 2017-06-28 2017-10-03 国网江苏省电力公司经济技术研究院 A kind of regenerative resource portfolio optimization optimization method, device and computing device
CN108665089A (en) * 2018-04-04 2018-10-16 清华大学 A kind of Robust Optimization Model method for solving for location problem
CN112136111A (en) * 2019-04-24 2020-12-25 阿里巴巴集团控股有限公司 Distributed resource allocation
CN110298138A (en) * 2019-07-09 2019-10-01 南方电网科学研究院有限责任公司 A kind of integrated energy system optimization method, device, equipment and readable storage medium storing program for executing
CN110929964A (en) * 2019-12-18 2020-03-27 国网福建省电力有限公司 Energy-storage-containing power distribution network optimal scheduling method based on approximate dynamic programming algorithm
US11062219B1 (en) * 2020-03-30 2021-07-13 Sas Institute Inc. Nonlinear optimization system
CN113705866A (en) * 2021-08-16 2021-11-26 成都飞机工业(集团)有限责任公司 Scheduling optimization method and system based on resource-constrained project scheduling problem model
CN113890023A (en) * 2021-09-29 2022-01-04 西安交通大学 Distributed economic dispatching optimization method and system for comprehensive energy microgrid

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
L. Majzoobi, F. Lahouti, V. Shah-Mansouri. Analysis of Distributed ADMM Algorithm for Consensus Optimization in Presence of Node Error. IEEE Transactions on Signal Processing, 2019-04-01, pp. 1774-1784 *
Cao Huan et al. A network planning resource allocation optimization processing method based on NSGA-II. 电信工程技术与标准化, 2020-04-15, no. 4, pp. 35-42 *
Kang Liangyi et al. A survey of parallel and distributed optimization algorithms for scalable machine learning. 软件学报, 2017-10-09, no. 1, pp. 113-134 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant