CN114595641A - Method and system for solving combined optimization problem - Google Patents


Info

Publication number
CN114595641A
Authority
CN
China
Prior art keywords
branch
solving
current
sample
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210495655.5A
Other languages
Chinese (zh)
Inventor
王贵阳
刘子奇
沈文博
周俊
华致刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210495655.5A
Publication of CN114595641A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G06F 30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2111/00 Details relating to CAD techniques
    • G06F 2111/04 Constraint-based CAD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2111/00 Details relating to CAD techniques
    • G06F 2111/06 Multi-objective optimisation, e.g. Pareto optimisation using simulated annealing [SA], ant colony algorithms or genetic algorithms [GA]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method and system for solving a combinatorial optimization problem solve instances of the combinatorial optimization problem with a branch-and-bound algorithm, and train a decision model using, as sample data, the constraints and relaxation solution at each branch node together with the strong branch chosen at that node. When the method and system then solve a target combinatorial optimization problem by branch and bound, the constraints and relaxation solution at each branch node are input to the trained decision model, which outputs the strong branch for the current node. The decision model thus imitates the branching step of the branch-and-bound process and finds the strong branch of each branch node quickly, without solving every branch, which greatly shortens computation time and accelerates the solution of the combinatorial optimization problem.

Description

Method and system for solving combined optimization problem
Technical Field
The present disclosure relates to the field of integer programming technologies, and in particular, to a method and a system for solving a combinatorial optimization problem.
Background
Many combinatorial optimization problems can be formally modeled and solved as (mixed) integer programming problems. A combinatorial optimization problem is characterized by a decision space that is a finite point set, so in principle its optimal solution can be found by exhaustive search. However, the number of feasible solutions grows exponentially with problem scale: for binary variables, each additional decision variable doubles the search space. When the number of decision variables is large, finding the optimal solution therefore takes a long time.
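The exponential blow-up of exhaustive search can be made concrete with a toy 0/1 problem (an illustrative sketch; the instance and names are made up, not from the patent): enumerating all assignments of n binary decision variables checks 2^n candidates, so each added variable doubles the work.

```python
from itertools import product

def exhaustive_best(n, objective, feasible):
    """Enumerate all 2**n binary assignments; return (best value, assignments checked)."""
    best, checked = None, 0
    for x in product((0, 1), repeat=n):
        checked += 1
        if feasible(x) and (best is None or objective(x) > best):
            best = objective(x)
    return best, checked

# Toy instance: maximise 3a + 4b + 2c subject to a + 2b + c <= 2.
value = lambda x: 3 * x[0] + 4 * x[1] + 2 * x[2]
within = lambda x: x[0] + 2 * x[1] + x[2] <= 2

best, checked = exhaustive_best(3, value, within)   # best == 5, checked == 8
```

With three variables only 8 assignments are checked; with thirty it is already over a billion, which is why the specification turns to branch and bound.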
Therefore, it is necessary to provide a new method and system for solving the combinatorial optimization problem, so as to improve the solution speed and the solution quality of the combinatorial optimization problem under large-scale variables.
Disclosure of Invention
This specification provides a new method and system for solving combinatorial optimization problems, improving both solution speed and solution quality for problems with large-scale variables.
In a first aspect, the present specification provides a method for solving a combinatorial optimization problem, comprising: obtaining a target optimization model of a target combinatorial optimization problem, the target optimization model comprising an optimization objective function, target constraints and decision variables, at least some of the decision variables being integer programming variables; solving the target optimization model based on a branch-and-bound method to determine a target solution, wherein the solving comprises: determining the target strong branch of the current branch node based on a pre-trained decision model, the decision model being trained on the sample data of each sample branch node, together with the sample decision at that node, collected while solving a historical optimization model by the branch-and-bound algorithm, the sample data comprising the sample constraints of the current sample branch node and the relaxation solution of the sample variables, and the sample decision comprising the sample strong branch of the current sample branch node; and outputting the target solution.
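The first-aspect flow can be sketched as a depth-first branch-and-bound loop in which the branching decision at each node is delegated to a pre-trained model (all names, the toy single-variable instance, and the stand-in scorer below are illustrative assumptions, not the patent's API):

```python
def solve_with_model(root, relax_solve, model, branch):
    """Depth-first branch and bound (minimisation); `model` orders the children
    so that the predicted strong branch is explored first."""
    best_sol, best_val, stack = None, float("inf"), [root]
    while stack:
        node = stack.pop()
        result = relax_solve(node)              # relaxation of the node's subproblem
        if result is None:
            continue                            # infeasible node
        sol, val = result
        if val >= best_val:
            continue                            # bounding: cannot beat the incumbent
        if sol == int(sol):
            best_sol, best_val = int(sol), val  # integral solution: new incumbent
            continue
        stack.extend(branch(node, sol, model))  # model-guided branching
    return best_sol, best_val

# Toy instance: minimise x subject to x >= 1.3 with x integer; a node is a (lo, hi) box.
def relax_solve(node):
    lo, hi = node
    return (lo, lo) if lo <= hi else None       # relaxed optimum sits at the lower bound

def branch(node, frac, model):
    lo, hi = node
    children = [(lo, int(frac)), (int(frac) + 1, hi)]
    return sorted(children, key=model)          # predicted strong branch ends up on top

score = lambda child: child[0]                  # stand-in for the trained decision model
solution, value = solve_with_model((1.3, 10), relax_solve, score, branch)   # -> (2, 2)
```

The design choice mirrors the claim: the loop itself is ordinary branch and bound; only the selection of which branch to pursue is replaced by the model.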
In some embodiments, the historical optimization model and the target optimization model are homogeneous models.
In some embodiments, the sample data comprises a bipartite graph structure.
In some embodiments, the bipartite graph structure comprises a plurality of sample constraints corresponding to the current sample branch node, relaxation solutions of a plurality of sample variables, and edges connecting the sample constraints with the relaxation solutions of the sample variables.
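A minimal sketch of such a bipartite encoding for a node with linear constraints A x <= b (the feature choices here are assumptions for illustration, not the patent's exact features): constraint rows and variables form the two node sets, and an edge carries the nonzero coefficient linking them.

```python
import numpy as np

def node_bipartite_graph(A, b, x_relax):
    """Encode a branch node as a bipartite graph: one node per constraint row of
    A x <= b, one node per variable, and an edge (i, j, A[i, j]) wherever
    A[i, j] != 0.  The features below are illustrative stand-ins."""
    m, n = A.shape
    cons_feat = np.stack([b, np.abs(A).sum(axis=1)], axis=1)            # per-constraint features
    frac = np.minimum(x_relax - np.floor(x_relax), np.ceil(x_relax) - x_relax)
    var_feat = np.stack([x_relax, frac], axis=1)                        # relaxed value + fractionality
    edges = [(i, j, A[i, j]) for i in range(m) for j in range(n) if A[i, j] != 0]
    return cons_feat, var_feat, edges

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
cons, var, edges = node_bipartite_graph(A, np.array([4.0, 3.0]), np.array([0.5, 1.0, 2.5]))
```

Because edges exist only for nonzero coefficients, the graph stays sparse even for large models, which is what makes it a practical node representation.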
In some embodiments, the decision model is a graph convolution neural network model.
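On a bipartite graph, a graph convolution alternates message passes between the constraint side and the variable side. The sketch below shows one constraint-to-variable half-pass with mean aggregation (an assumption-laden simplification: a real model adds the reverse pass, nonlinearities, and learned embeddings).

```python
import numpy as np

def half_convolution(var_feat, cons_feat, edges, W):
    """One constraint-to-variable message pass of a bipartite graph convolution.
    edges is a list of (constraint index, variable index, coefficient)."""
    n = var_feat.shape[0]
    msg = np.zeros((n, cons_feat.shape[1]))
    deg = np.zeros(n)
    for i, j, coeff in edges:                     # aggregate adjacent constraint features
        msg[j] += coeff * cons_feat[i]
        deg[j] += 1
    msg /= np.maximum(deg, 1)[:, None]            # mean aggregation
    return np.concatenate([var_feat, msg], axis=1) @ W

var_feat = np.array([[0.5, 0.5], [1.0, 0.0]])
cons_feat = np.array([[1.0, 2.0]])
out = half_convolution(var_feat, cons_feat, [(0, 0, 1.0), (0, 1, 2.0)], np.eye(4))
```

Each variable's updated embedding thus mixes its own features with those of the constraints it appears in, weighted by its coefficients, before a final per-variable scoring layer.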
In some embodiments, during training of the decision model, the sample data are initialized (normalized) by an affine transformation.
In some embodiments, the decision model is trained by minimizing a cross-entropy loss function.
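The loss in this embodiment can be sketched as follows (a minimal numpy illustration; the two-logit setup is an assumption matching the two branches per node): the model's branch scores are turned into probabilities by a softmax, and the cross-entropy against the strong-branch label recorded during the sample branch-and-bound run is minimized.

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy between the model's branch scores and the recorded
    strong-branch label (the training loss that is minimised)."""
    z = logits - logits.max()               # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label]), p

# The labelled strong branch is branch 0; a model favouring it gets a lower loss.
loss_good, p = cross_entropy(np.array([2.0, 0.5]), label=0)
loss_bad, _ = cross_entropy(np.array([0.5, 2.0]), label=0)
```

Training pushes probability mass toward the labelled strong branch, so at inference time the higher-probability branch imitates the strong-branching choice.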
In some embodiments, determining the target strong branch corresponding to the current branch node based on a pre-trained decision model includes: determining the current optimization model corresponding to the current branch node, the current optimization model comprising the optimization objective function, current constraints and the decision variables; determining, based on a relaxation algorithm, the current relaxation solution of the decision variables for the current optimization model; and inputting the current constraints and the current relaxation solution into the decision model to determine the target strong branch corresponding to the current branch node.
In some embodiments, inputting the current constraints and the current relaxation solution into the decision model to determine the target strong branch corresponding to the current branch node comprises: determining the two branches corresponding to the current branch node; inputting the current constraints and the current relaxation solution into the decision model to determine the probabilities of the two branches; and taking the branch with the higher probability as the target strong branch.
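The inference step above reduces to an argmax over the two branch probabilities, as in this sketch (the model here is a stand-in lambda, not the trained network):

```python
def pick_strong_branch(model, constraints, relaxed):
    """Inference at a branch node: feed the node's constraints and relaxation
    solution to the trained model, get a probability for each of the two
    branches, and keep the branch with the higher probability."""
    p_left, p_right = model(constraints, relaxed)
    return ("left", p_left) if p_left >= p_right else ("right", p_right)

stub_model = lambda constraints, relaxed: (0.3, 0.7)   # stand-in for the trained model
choice, prob = pick_strong_branch(stub_model, constraints=None, relaxed=None)
```

A single forward pass thus replaces the per-branch relaxation solves that classic strong branching would perform at the node.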
In a second aspect, the present specification also provides a system for solving a combinatorial optimization problem, comprising at least one storage medium storing at least one instruction set for solving the combinatorial optimization problem, and at least one processor communicatively connected to the at least one storage medium, wherein, when the system is running, the at least one processor reads the at least one instruction set and performs the method for solving the combinatorial optimization problem according to the instructions of the at least one instruction set.
According to the technical scheme, the method and system for solving a combinatorial optimization problem solve instances of the combinatorial optimization problem with the branch-and-bound algorithm, and train the decision model using, as sample data, the constraints and relaxation solution at each branch node together with the strong branch chosen at that node. When the target combinatorial optimization problem is then solved by branch and bound, the constraints and relaxation solution at each branch node are input to the trained decision model, which outputs the strong branch for the current node. The decision model thus imitates the branching step of the branch-and-bound process and finds the strong branch quickly, greatly shortening computation time and accelerating the solution of the combinatorial optimization problem. At the same time, because the method and system solve based on the branch-and-bound algorithm, high-quality solutions can be obtained.
Additional functions of the method and system for solving combinatorial optimization problems provided herein will be set forth in part in the description that follows. The remainder will be readily apparent to those of ordinary skill in the art upon review of the following description and examples, or may be learned by practice. The inventive aspects of the method and system provided herein can be fully explained by practicing or using the methods, apparatus and combinations described in the detailed examples below.
Drawings
In order to illustrate the technical solutions in the embodiments of the present disclosure more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a schematic diagram of a branch-and-bound method;
FIG. 2 illustrates a hardware block diagram of a system for solving a combinatorial optimization problem provided in accordance with an embodiment of the present description;
FIG. 3 illustrates a flow chart of a method for solving a combinatorial optimization problem provided in accordance with an embodiment of the present description;
FIG. 4 illustrates a schematic diagram of a bipartite graph structure provided in accordance with an embodiment of the present description;
FIG. 5 is a schematic diagram illustrating a training process of a decision model provided in accordance with an embodiment of the present disclosure; and
FIG. 6 is a flow chart illustrating a method for solving an objective optimization model provided in accordance with an embodiment of the present disclosure.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present description, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present description. Thus, the present description is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" may include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," and/or "including," when used in this specification, are intended to specify the presence of stated integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features of the present specification, as well as the operation and function of the related structural elements and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description. Reference is made to the accompanying drawings, all of which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the specification. It should also be understood that the drawings are not drawn to scale.
The flow diagrams used in this specification illustrate the operation of system implementations according to some embodiments of the specification. It should be clearly understood that the operations of the flow diagrams may be performed out of order. Rather, the operations may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
For convenience of description, the present specification will explain terms that will appear from the following description as follows:
Integer Programming: a programming problem in which some decision variables are restricted to integer values.
Mixed Integer Programming (MIP): a programming problem whose decision variables include both integer and continuous variables.
Combinatorial Optimization Problem (COP): an optimization problem that seeks an extremum over a discrete decision space.
Machine learning: a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to improve performance. Machine learning is the core of artificial intelligence, is the fundamental way to make computers intelligent, and is applied in all fields of artificial intelligence. Machine learning generally includes techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
In the prior art, methods for solving combinatorial optimization problems fall into two major classes: exact approaches and approximate approaches. An exact approach is an algorithm that can find the global optimal solution of the problem; the main examples are the branch-and-bound method and dynamic programming, both of which follow the divide-and-conquer idea of decomposing the original problem into subproblems and solving iteratively until the global optimum is obtained. Exact approaches can find the global optimal solution of a combinatorial optimization problem, but as the problem grows the computation they consume becomes enormous, and they are difficult to scale to large problems. An approximate approach is a method that finds a locally optimal solution; it mainly includes approximation algorithms and heuristic algorithms. Approximation algorithms obtain solutions with quality guarantees and include greedy algorithms, local search, linear programming and relaxation algorithms, sequential algorithms, and the like. Heuristic algorithms search the solution space with hand-designed heuristic rules; they can find good solutions within feasible time but do not guarantee solution quality. Heuristic algorithms for combinatorial optimization mainly include simulated annealing, tabu search, evolutionary algorithms (such as genetic algorithms and differential evolution), ant colony optimization, particle swarm optimization, iterated local search, and variable neighborhood search.
When the problem is large, the many iterative searches still cost substantial computation time, and approximate approaches remain difficult to extend to online, real-time optimization problems. Once the problem changes, the search generally has to be run again, or the heuristic rules have to be tuned by repeated trial and error to obtain good results, so the computational cost is high.
Branch and bound: one of the most commonly used algorithms for solving integer programming problems. It can solve pure integer programs as well as mixed integer programs. Branch and bound is a search-and-iteration method in which different branching variables and subproblems are selected for branching. In general, the full feasible solution space is repeatedly partitioned into smaller and smaller subsets, which is called branching; for the solution set within each subset, a lower bound on the objective (for a minimization problem) is computed, which is called bounding. After each branching step, any subset whose bound is worse than the objective value of a known feasible solution is not branched further, so many subsets can be disregarded; this is called pruning. This is the main idea of the branch-and-bound method.
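The branching / bounding / pruning loop just described can be sketched on a toy 0/1 knapsack (an illustration only; the bound used here is deliberately simple, and tighter bounds such as the linear relaxation prune far more):

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch and bound for a 0/1 knapsack (maximisation).
    Branching: take or skip the next item.  Bounding: the sum of all remaining
    values is a valid (loose) upper bound on what the subtree can still add."""
    n = len(values)
    best = 0

    def explore(i, value, room):
        nonlocal best
        best = max(best, value)              # every partial solution is feasible
        if i == n:
            return
        if value + sum(values[i:]) <= best:
            return                           # pruning: subtree cannot beat the incumbent
        if weights[i] <= room:
            explore(i + 1, value + values[i], room - weights[i])  # branch: take item i
        explore(i + 1, value, room)                                # branch: skip item i

    explore(0, 0, capacity)
    return best

best = knapsack_bb([3, 4, 2], [1, 2, 1], capacity=2)   # -> 5 (items 0 and 2)
```

Subtrees whose bound cannot beat the incumbent are never entered, which is exactly the pruning of the shaded region described in the next paragraphs.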
Fig. 1 shows a schematic diagram of the branch-and-bound approach. As shown in fig. 1, the topmost point is called the root node 010. The branch-and-bound method performs Linear Relaxation on the integer variables, converting the integer programming problem of the combinatorial optimization problem into a corresponding relaxed linear programming problem. The solution of this linear programming problem is the relaxation solution of the root node, and its objective function value is the first lower bound (when the optimization target is minimization) or upper bound (when the optimization target is maximization) of the combinatorial optimization problem. As the decision variables are branched on in turn, the relaxation solution of the relaxed linear programming problem at some node may turn out to be a feasible solution of the original problem. At that point, a feasible solution of the original problem has been found and becomes the incumbent best feasible solution. In a subsequent branch, if the solution of the linear programming problem at a node is greater than the upper bound (minimization) or less than the lower bound (maximization), then even though the branches below that node have not yet been explored, continuing to branch can only add constraints, shrink the set of feasible solutions, and yield solutions no better than that node's solution. From the optimization perspective it is therefore unnecessary to search the branches below that node for a better solution; the node can be deleted directly, shown as the shaded area 020 in the figure.
That is, even if the shaded area 020 were explored further, any feasible solution found would be no better than the one already found. This is the importance of bounding in branch and bound: it makes it unnecessary to solve all 2^n subproblems (where n is the number of decision variables), because many nodes and the branches below them are deleted. The shaded area 020 indicates branches that have been discarded. For convenience of description, we call the branch deleted at each branching the weak branch 020, and the branch retained the strong branch 030.
Therefore, the branch-and-bound method can effectively solve combinatorial optimization problems, but when the problem scale grows, for example when the number of decision variables is large, the solution space becomes very large: every branch requires substantial computation to solve its linear programming problem, and deciding the strong branch of the current node also consumes huge computing resources, without which an effective solution cannot be obtained.
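Classic "full" strong branching illustrates that cost (a sketch with a stand-in LP solver; the names and canned child bounds are made up): every fractional candidate variable costs two extra relaxation solves per node, which is exactly the work the trained decision model is meant to replace.

```python
def strong_branching(node, fractional_vars, lp_solve, make_child):
    """Classic full strong branching: tentatively solve the relaxation of both
    children of every fractional variable and pick the variable whose weaker
    child still improves the bound the most."""
    best_var, best_score = None, float("-inf")
    for var in fractional_vars:
        down = lp_solve(make_child(node, var, "down"))
        up = lp_solve(make_child(node, var, "up"))
        score = min(down, up)                 # score by the weaker child's bound
        if score > best_score:
            best_var, best_score = var, score
    return best_var

# Stand-in LP solver for illustration: returns canned child bounds, counts calls.
calls = []
scores = {("x1", "down"): 1.0, ("x1", "up"): 3.0, ("x2", "down"): 2.0, ("x2", "up"): 2.5}
lp_solve = lambda child: (calls.append(child), scores[child[1:]])[1]
chosen = strong_branching("root", ["x1", "x2"], lp_solve, lambda n, v, d: (n, v, d))
# Every candidate costs two LP solves: len(calls) == 4 even for this toy node.
```

With thousands of fractional candidates per node, these tentative solves dominate the runtime, motivating a learned model that predicts the strong branch in one forward pass.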
The method and system for solving a combinatorial optimization problem provided in this specification solve instances of the combinatorial optimization problem with the branch-and-bound algorithm, and train a decision model using, as sample data, the constraints and relaxation solution at each branch node together with the strong branch chosen at that node. When the target combinatorial optimization problem is then solved by branch and bound, the constraints and relaxation solution at each branch node are input to the trained decision model, which outputs the strong branch for the current node. The decision model thus imitates the branching step of the branch-and-bound process and finds the strong branch quickly, without solving every branch, greatly shortening computation time and accelerating the solution of the combinatorial optimization problem. At the same time, because the method and system solve based on the branch-and-bound algorithm, high-quality solutions can be obtained. The aim is thus achieved by analyzing the branch-and-bound process and learning its branching step.
The method and system for solving combinatorial optimization problems provided in this specification can be applied to any scenario in which a combinatorial optimization problem is solved, such as information recommendation, target population selection, and loan scenarios. Any form of combinatorial optimization problem may be solved using the method and system provided herein.
Fig. 2 shows a hardware structure diagram of a system 001 for solving a combinatorial optimization problem (hereinafter referred to as system 001) provided in accordance with an embodiment of the present specification. System 001 may store data or instructions that implement the method for solving combinatorial optimization problems described herein, and may execute or be used to execute those data or instructions. The method is described elsewhere in this specification. In some embodiments, system 001 may include a hardware device having data processing functions and the programs necessary to drive the hardware device. In some embodiments, system 001 may be a remote computing device, such as a server. In some embodiments, system 001 may be a mobile terminal, such as a mobile device, a tablet computer, a laptop computer, a built-in device of a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, a navigation device, or the like, or any combination thereof. In some embodiments, the virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof; for example, a virtual reality or augmented reality device may include Google Glass, head-mounted displays, and the like.
In some embodiments, the built-in devices in the automobile may include an on-board computer, an on-board television, and the like. In some embodiments, the system 001 may be a device with positioning technology for locating the position of the system 001.
As shown in fig. 2, system 001 may include at least one storage medium 130 and at least one processor 120. In some embodiments, system 001 may also include a communication port 150 and an internal communication bus 110. Meanwhile, system 001 may also include I/O component 160.
Internal communication bus 110 may connect various system components including storage medium 130, processor 120, and communication port 150.
The I/O component 160 supports input/output between the system 001 and other components.
The communication port 150 is used for data communication between the system 001 and the outside, for example, the communication port 150 may be used for data communication between the system 001 and other devices. The communication port 150 may be a wired communication port or a wireless communication port.
Storage medium 130 may include a data storage device. The data storage device may be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may include one or more of a magnetic disk 132, a read-only memory (ROM) 134, or a random-access memory (RAM) 136. The storage medium 130 also includes at least one instruction set stored in the data storage device. The instructions are computer program code, which may include programs, routines, objects, components, data structures, procedures, modules, and the like that perform the method for solving combinatorial optimization problems provided herein.
The at least one processor 120 may be communicatively coupled to at least one storage medium 130 and a communication port 150 via an internal communication bus 110. The at least one processor 120 is configured to execute the at least one instruction set. When the system 001 is running, the at least one processor 120 reads the at least one instruction set and, as directed by the at least one instruction set, performs the method of solving the combinatorial optimization problem provided herein. The processor 120 may perform all the steps involved in the method of solving the combinatorial optimization problem. Processor 120 may be in the form of one or more processors, and in some embodiments, processor 120 may include one or more hardware processors, such as microcontrollers, microprocessors, Reduced Instruction Set Computers (RISC), Application Specific Integrated Circuits (ASICs), application specific instruction set processors (ASIPs), Central Processing Units (CPUs), Graphics Processing Units (GPUs), Physical Processing Units (PPUs), microcontroller units, Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), Advanced RISC Machines (ARM), Programmable Logic Devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustrative purposes only, only one processor 120 is depicted in the system 001 in this description. It should be noted, however, that the system 001 may also include multiple processors, and thus, the operations and/or method steps disclosed in this specification may be performed by one processor as described herein, or may be performed by a combination of multiple processors. 
For example, if in this specification the processor 120 of system 001 performs steps A and B, it should be understood that steps A and B may also be performed jointly or separately by two different processors 120 (e.g., a first processor performs step A and a second processor performs step B, or the first and second processors jointly perform steps A and B).
Fig. 3 shows a flowchart of a method P100 for solving a combinatorial optimization problem provided according to an embodiment of the present specification. As described above, system 001 may perform method P100. Specifically, the processor 120 may read an instruction set stored in its local storage medium and then, as specified by the instruction set, execute method P100. As shown in fig. 3, method P100 may include:
s120: and obtaining a target optimization model of the target combination optimization problem.
As previously mentioned, many combinatorial optimization problems can be solved by being formally modeled as (mixed) integer programming problems. The method P100 and the system 001 can be applied to any scenario of solving a combinatorial optimization problem. The target optimization model includes an optimization objective function, target constraints, and decision variables, and at least some of the decision variables are integer programming variables. For convenience of description, we express the target optimization model of the target combinatorial optimization problem as the following formula:
$$\min_{X}\ f(X) \qquad \text{s.t.}\quad g_j(X) \le 0,\ \ j = 1, \dots, m, \qquad X \in D$$

wherein $f(X)$ is the optimization objective function, $g_j(X)$ are the target constraints, and $X$ denotes the decision variables. For convenience of description, we define the number of target constraints $g_j$ as m, wherein m is an integer greater than 1, and the number of decision variables as n, wherein n is an integer greater than 1. The decision variables can be represented as an n-dimensional vector, $X = (x_1, x_2, \dots, x_n)$, with $X \in D$. Here D represents the discrete decision space, a collection of finitely many points. D can also be expressed in n-dimensional form, wherein each component corresponds one-to-one to a decision variable $x_i$ and represents the space of values of that decision variable. At least some of the decision variables in $X$ are integer programming variables; that is, at least some of the decision variables may take only integer values. For example, for 0-1 integer programming variables, a decision variable $x_i$ can only take the value 0 or 1. In some embodiments, an integer programming variable may also take any other integer value.
In some embodiments, the objective optimization model of the objective combinatorial optimization problem is further expressed as the following formula:
$$\min_{X}\ G^{\mathsf{T}} X \qquad \text{s.t.}\quad C X \le B, \qquad X \in D$$

wherein G is an n-dimensional coefficient matrix of the optimization objective function, $G = (g_1, g_2, \dots, g_n)$. Each matrix component $g_i$ corresponds one-to-one to a decision variable $x_i$ in $X$ and represents the coefficient of that decision variable. C is an m × n dimensional coefficient matrix of the target constraints, $C = (c_{j,i})$, $j = 1, \dots, m$, $i = 1, \dots, n$. Each matrix component $c_{j,i}$ corresponds to the decision variable $x_i$ in the j-th target constraint and represents the coefficient of that decision variable in the j-th target constraint. B is an m-dimensional coefficient matrix corresponding to the deviations of the target constraints, $B = (b_1, b_2, \dots, b_m)$. Each matrix component $b_j$ corresponds to the j-th target constraint and represents the deviation in the j-th target constraint.
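The linear form above can be sketched for a toy instance as follows. The coefficients G, C, and B below are illustrative values chosen for this sketch, not data from the specification; the brute-force search over $D = \{0,1\}^n$ is only a baseline for such a tiny instance.

```python
from itertools import product

# Hypothetical 0-1 instance of min G.X subject to C X <= B, X in D = {0,1}^3.
G = [-5.0, -4.0, -3.0]       # n = 3 objective coefficients g_i
C = [[2.0, 3.0, 1.0],        # m x n constraint coefficients c_{j,i}
     [4.0, 1.0, 2.0]]
B = [4.0, 5.0]               # m = 2 constraint deviations b_j

def objective(x):
    return sum(g * xi for g, xi in zip(G, x))

def feasible(x):
    return all(sum(c * xi for c, xi in zip(row, x)) <= b
               for row, b in zip(C, B))

# Enumerate the discrete decision space and keep the feasible minimiser.
best = min((x for x in product([0, 1], repeat=3) if feasible(x)),
           key=objective)
```

For this instance the feasible minimiser is $x_2 = x_3 = 1$ with objective value $-7$; branch-and-bound exists precisely to avoid this exhaustive enumeration when n grows.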
Herein the optimization objective function is a minimization function. It will be appreciated by those skilled in the art that an optimization objective function that is a maximization function also falls within the scope of the present description; moreover, a maximization function can be converted into a minimization function. This description takes the optimization objective function being a minimization function only as an example.
S140: and solving the target optimization model based on a branch-and-bound method to determine a target solution.
The method P100 solves the target optimization model based on a branch-and-bound algorithm. As described above, the key point of the branch-and-bound algorithm is that, at each branch node, the two branches are solved based on a relaxation algorithm and the lower bounds of the optimization objective function corresponding to the two branches are calculated, so as to determine the strong branch of the two; branching and bounding then continue on the strong branch, and the weak branch is deleted. When the number n of decision variables $X$ is large, each such branch-and-bound computation consumes substantial computing resources and computing time. In order to accelerate the solving process, step S140 may simulate the branch-and-bound process at each branch node based on a pre-trained decision model and quickly determine the strong branch, thereby saving computation time and computing resources. The decision model is pre-trained and pre-stored in the system 001, for example in the storage medium 130. Specifically, step S140 may include, for each branch node: determining a target strong branch corresponding to the current branch node based on the pre-trained decision model, the target strong branch being the retained strong branch of the two branches.
The decision model is obtained by training on the sample data of each sample branch node, and the corresponding sample decisions, collected while a historical optimization model is solved by the branch-and-bound algorithm. It should be noted that, in some embodiments, the historical optimization model and the target optimization model are homogeneous models, that is, data models established for the same class of combinatorial optimization problem. For example, the historical optimization model and the target optimization model may both be mathematical models established for the target combinatorial optimization problem. For the same target optimization problem, when the known parameter data differ (for example, when any one or more of the parameters G, C, and B change), the target optimization model corresponding to the problem also changes, and so does its solution. Different parameter data thus constitute different instances of the target combinatorial optimization problem. The historical optimization model may be any instance of the target combinatorial optimization problem, and the historical optimization models used for training the decision model may comprise one instance or a plurality of instances of the target combinatorial optimization problem.
The method P100 may solve the historical optimization model based on the branch-and-bound algorithm and collect the data of each branch-and-bound step in the solving process as sample data for training the decision model; that is, the data corresponding to each branch node of the historical optimization model during solving may serve as one sample. The plurality of sample data form a training sample set for the decision model. The sample data in the training sample set may include data of a plurality of branch nodes corresponding to one historical optimization model, or data of a plurality of branch nodes corresponding to a plurality of historical optimization models. For convenience of description, we refer to the branch nodes of the historical optimization model as sample branch nodes, and to the decision variables of the historical optimization model as sample variables.
When the historical optimization model is solved based on the branch-and-bound algorithm, each solved sample branch node has a new random seed (a sample variable to be branched and bounded). The method P100 may record the new node state and the strong-branch decision during branching and bounding. Each sample data may include the node state of the current sample branch node. For example, we may denote the node state corresponding to the t-th sample branch node of the historical optimization model as $s_t$. The node state $s_t$ may include the strong branches corresponding to all sample branch nodes before the t-th sample branch node and the relaxation solution corresponding to each sample branch node. That is, each sample data may include all strong branches before the current sample branch node (the t-th sample branch node), namely all sample constraints corresponding to the current sample branch node, together with the relaxation solutions of the sample variables at that node.
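The recording of node states and strong-branch decisions can be sketched as follows. The data layout here (a list of accumulated branch constraints plus a relaxation vector) is an assumption made for illustration, not the encoding actually used by the specification.

```python
# Hypothetical recorder for training samples (s_t, a_t): s_t bundles all
# strong branches chosen before node t (the accumulated sample constraints)
# with the current relaxation solution; a_t is the strong branch chosen at
# node t. Tuple layouts like ("x1", "<=", 0) are illustrative only.
training_set = []

def record_sample(prior_strong_branches, relaxation_solution, strong_branch):
    s_t = {"constraints": list(prior_strong_branches),
           "relaxation": list(relaxation_solution)}
    a_t = strong_branch
    training_set.append((s_t, a_t))

# Node t = 1: variable x1 was fixed to 0 earlier; x2 is now branched up to 1.
record_sample([("x1", "<=", 0)], [0.0, 0.8, 1.0], ("x2", ">=", 1))
```

Accumulating such pairs over one or more historical instances yields the training sample set described above.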
In some embodiments, the sample data may comprise a bipartite graph structure. The method P100 may encode the node state $s_t$ of the t-th sample branch node in the branch-and-bound process into a bipartite graph structure with node/edge features, $s_t = (G, C, X, E)$, wherein G is the n-dimensional coefficient matrix of the optimization objective function of the historical optimization model, C is the m × n dimensional coefficient matrix of the sample constraints of the historical optimization model, X is the n-dimensional matrix of the sample variables of the historical optimization model, and E is the set of edges connecting the sample constraints with the sample variables. FIG. 4 illustrates a schematic diagram of a bipartite graph structure provided in accordance with an embodiment of the present description. As shown in FIG. 4, the bipartite graph structure may include the plurality of (m) sample constraints C corresponding to the current sample branch node (the t-th sample branch node), the relaxation solutions of the plurality of (n) sample variables X, and the edges E connecting the sample constraints with the relaxation solutions of the sample variables. On the left side of the bipartite graph structure are the m sample constraints of the historical optimization model; each row $c_j$ is the feature matrix of the j-th sample constraint, $j = 1, \dots, m$. On the right side of the bipartite graph structure are the relaxation solutions of the n sample variables of the historical optimization model, wherein each sample variable may be encoded as a d-dimensional vector; each row $x_i$ is the relaxation solution corresponding to the i-th sample variable. Each edge $e_{j,i}$ connecting the sample constraints and the relaxation solutions of the sample variables connects the j-th sample constraint and the i-th sample variable.
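The bipartite encoding can be sketched with plain lists; the numbers are illustrative, and using the constraint coefficient itself as the edge feature is an assumption of this sketch.

```python
# Sketch of the bipartite node state s_t = (G, C, X, E) for a toy node.
C = [[2.0, 3.0, 0.0],        # m = 2 rows: feature matrix c_j per constraint
     [0.0, 1.0, 2.0]]
X = [[0.5], [1.0], [0.0]]    # n = 3 rows: relaxation solution x_i per
                             # variable (d = 1 feature for brevity)

# Edge e_{j,i} links constraint j and variable i exactly where the
# variable appears in the constraint (nonzero coefficient).
E = [(j, i, C[j][i])
     for j in range(len(C))
     for i in range(len(C[0]))
     if C[j][i] != 0.0]
```

Here `E` contains four edges, one per nonzero coefficient, so the graph density tracks the sparsity of the constraint matrix, which is the property the GCNN exploits.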
In the branch-and-bound solving process, the branch-and-bound algorithm may branch on one of the sample variables at the current sample branch node (the t-th sample branch node) and select a strong branch from the two resulting branches. The method P100 may take the selected strong branch as a decision action and denote it as $a_t$. The sample decision corresponding to the sample data may comprise the sample strong branch $a_t$ corresponding to the current sample branch node (the t-th sample branch node). For convenience of description, we may denote the sample data and its corresponding sample decision as the pair $(s_t, a_t)$.
In some embodiments, the decision model may be a Graph Convolutional Neural Network (GCNN) model. The GCNN model, also known as a message passing neural network, extends convolutional neural networks from grid-structured data (such as images or sounds) to arbitrary graphs. This model has the following characteristics:
1) they are well-defined regardless of the size of the input graph;
2) their computational complexity is directly related to the density of the graph, making them an ideal choice for the typically sparse combinatorial optimization problems;
3) they are permutation invariant, that is, they always produce the same output regardless of the order in which the nodes are presented.
In training the decision model, sample data in the bipartite graph structure $s_t = (G, C, X, E)$ is taken as input, and a single graph convolution is performed in the form of two interleaved half-convolutions. In other words, due to the bipartite structure of the input graph, the graph convolution can be decomposed into two successive passes, one from the sample variables to the sample constraints and the other from the sample constraints to the sample variables. These passes take the following form:

$$c_j \leftarrow f_C\Big(c_j, \sum_{i:(i,j)\in E} g_C\big(c_j, x_i, e_{j,i}\big)\Big), \qquad x_i \leftarrow f_X\Big(x_i, \sum_{j:(i,j)\in E} g_X\big(c_j, x_i, e_{j,i}\big)\Big)$$

wherein $f_C$, $f_X$, $g_C$ and $g_X$ are two-layer perceptrons with ReLU as the activation function. After the graph convolution, a bipartite graph with the same topology as the input graph is obtained, but possibly with different node features, so that each sample branch node now contains information from its neighbors. The policy is obtained by discarding the sample constraint nodes and applying a final two-layer perceptron on the sample variable nodes, combined with a masked softmax activation function, to generate a probability distribution over the candidate branch variables (two branches for each sample node). FIG. 5 is a schematic diagram illustrating a training process of a decision model provided according to an embodiment of the present disclosure. The embodiment shown in FIG. 5 takes n = 2 and m = 2 as an example, wherein the output is the decision result over the sample variables.
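The two interleaved half-convolutions can be sketched in a much-simplified form. As an assumption of this sketch, each two-layer perceptron is replaced by a single fixed linear combination followed by ReLU, and every node carries a single feature; the message-passing pattern (variables to constraints, then constraints back to variables) is the point being illustrated.

```python
def relu(v):
    return [max(0.0, t) for t in v]

# Toy bipartite node state: m = 2 constraint nodes, n = 3 variable nodes,
# one feature each; E lists (constraint j, variable i) adjacency pairs.
C_feat = [[2.0], [1.0]]
X_feat = [[0.5], [1.0], [0.0]]
E = [(0, 0), (0, 1), (1, 1), (1, 2)]

def half_conv(targets, sources, edges, w_self=1.0, w_msg=0.5):
    # Each target node combines its own feature with the summed features
    # of its neighbours; a crude stand-in for the perceptrons f and g.
    out = []
    for t, feat in enumerate(targets):
        msg = sum(sources[s][0] for tt, s in edges if tt == t)
        out.append(relu([w_self * feat[0] + w_msg * msg]))
    return out

# Pass 1: sample variables -> sample constraints.
C_new = half_conv(C_feat, X_feat, E)
# Pass 2: updated constraints -> sample variables (edge endpoints swapped).
X_new = half_conv(X_feat, C_new, [(i, j) for j, i in E])
```

After the two passes each variable node's feature reflects its constraint neighbourhood, which is what the final perceptron and masked softmax consume.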
In order to overcome the problem that the weight initialization in the GCNN depends on the training sample set, and to stabilize the machine learning process, the method P100 initializes on the sample data based on an affine transformation during the training of the decision model, thereby addressing the fluctuation of the data set. The affine transformation can be represented as:

$$x \leftarrow (x - \beta) / \sigma$$

This formula is called the prenorm layer and is applied immediately after the summation in the preceding formula. The parameters $\beta$ and $\sigma$ are initialized separately on the training sample set and fixed once before the actual training. At the same time, the method P100 employs such prenorm layers and this pre-training procedure, improving generalization performance on larger problems.
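A minimal sketch of the prenorm initialisation follows, assuming (as is conventional for such layers) that $\beta$ and $\sigma$ are estimated as the mean and standard deviation of the training features and then frozen; the feature values are illustrative.

```python
# Estimate beta and sigma once from the training sample set, then freeze.
train_features = [2.0, 4.0, 6.0, 8.0]

beta = sum(train_features) / len(train_features)                 # mean
var = sum((t - beta) ** 2 for t in train_features) / len(train_features)
sigma = var ** 0.5                                               # std dev

def prenorm(x):
    # The affine transformation x <- (x - beta) / sigma, with the
    # parameters fixed after initialisation.
    return (x - beta) / sigma

normalised = [prenorm(t) for t in train_features]
```

Because the parameters are frozen before training, later gradient updates never shift this normalisation, which is what stabilises learning across differently scaled instances.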
During the training of the decision model, the method P100 may train the decision model based on minimizing a cross-entropy loss function. The minimized cross-entropy loss function can be expressed as the following equation:

$$L(\theta) = -\frac{1}{N} \sum_{(s_t, a_t) \in S} \log \pi_{\theta}\big(a_t \mid s_t\big)$$

wherein N is the number of sample data, S is the training sample set, and $\pi_{\theta}$ is the probability distribution output by the decision model with parameters $\theta$.
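The loss can be evaluated by hand on a toy training set; the probability values below are illustrative stand-ins for the model's masked-softmax outputs over the two branches.

```python
import math

# Toy cross-entropy evaluation over N = 3 samples: each pairs the model's
# probabilities for the two branches with the index of the recorded
# strong branch a_t.
S = [([0.9, 0.1], 0),      # model strongly agrees with the recorded branch
     ([0.3, 0.7], 1),      # model agrees
     ([0.6, 0.4], 1)]      # model disagrees: large -log(0.4) penalty

N = len(S)
loss = -sum(math.log(probs[a_t]) for probs, a_t in S) / N
```

Minimising this quantity pushes the model to assign high probability to the branch the strong-branching expert actually chose at each sample node.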
In some embodiments, the sample data may also take other forms, such as a matrix form. In some embodiments, the decision model may also be of other structures, such as a convolutional neural network or a recurrent neural network. In some embodiments, the loss function of the decision model may also be of other types, such as a logarithmic loss function, an exponential loss function, or a quadratic loss function.
FIG. 6 is a flow chart illustrating a method for solving an objective optimization model provided in accordance with an embodiment of the present disclosure. Fig. 6 shows step S140. As shown in fig. 6, step S140 may include:
s142: and determining a current optimization model corresponding to the current branch node.
As mentioned above, in the branch-and-bound process, the target constraints need to be re-determined after each strong branch. In the solving process of the target optimization model, at each branch node, the optimization model corresponding to the current branch node, that is, the current optimization model, needs to be re-determined according to the strong branches preceding the current branch node. The current optimization model may include the optimization objective function, current constraints, and the decision variables. The optimization objective function is that of the target optimization model. The current constraints are the constraints corresponding to the current branch node given the strong branches determined so far; they include not only the target constraints of the target optimization model but also the constraints corresponding to all strong branches before the current branch node.
S144: and determining the current relaxation solution of the decision variable corresponding to the current optimization model based on a relaxation algorithm.
At each branch node, the method P100 may solve the current optimization model corresponding to the current branch node based on a relaxation algorithm, obtaining the current relaxation solution of the current optimization model.
S146: and inputting the current constraint and the current relaxation solution into the decision model, and determining the target strong branch corresponding to the current branch node.
The method P100 may treat the current constraints and the current relaxation solution as the node state of the current branch node (the node state $s_t$ described above) and input this node state into the decision model as its input data; the decision model may then perform computation on the input data to obtain a predicted value for the current branch node, that is, a predicted value of the decision action corresponding to the current branch node (the decision action $a_t$ described above).
Specifically, step S146 may include: determining the two branches corresponding to the current branch node; inputting the current constraints and the current relaxation solution into the decision model to determine the probabilities of the two branches corresponding to the current branch node; and taking the branch with the higher probability of the two as the target strong branch. The method P100 may select one of the remaining decision variables to branch on based on the current branch node and the current relaxation solution, thereby determining the two branches corresponding to the current branch node; the method of determining the two branches is as described for the branch-and-bound algorithm above and is not repeated here. After determining the two branches, the method P100 may input the node state of the current branch node into the decision model as input data; the decision model may perform computation on the input data and output probability values corresponding to the two branches. The method P100 may take the branch with the higher probability value as the target strong branch and the branch with the lower probability value as the weak branch. The method P100 may then continue to perform the above process on the new branch node under the strong branch until an integer solution of the decision variables, that is, the target solution, is obtained.
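The inference step of S146 can be sketched as follows. `toy_model` is a hypothetical stand-in for the trained GCNN (a hand-written heuristic, not the learned policy), and the branch encoding `(variable index, fixed value)` is an assumption of this sketch.

```python
# Stand-in decision model: scores the two candidate branches from the
# current relaxation solution and returns normalised probabilities.
def toy_model(current_constraints, relaxation, branches):
    # Toy heuristic: prefer fixing the variable toward the bound its
    # relaxed value is already closest to (NOT the real GCNN policy).
    scores = [1.0 - abs(relaxation[var] - value) for var, value in branches]
    total = sum(scores)
    return [s / total for s in scores]

def select_strong_branch(current_constraints, relaxation, branches):
    probs = toy_model(current_constraints, relaxation, branches)
    # The branch with the higher probability is kept as the strong branch.
    return branches[max(range(len(branches)), key=probs.__getitem__)]

relaxation = [0.5, 0.8, 0.0]           # current relaxation solution
branches = [(1, 0), (1, 1)]            # fix variable 1 down to 0 or up to 1
strong = select_strong_branch([], relaxation, branches)
```

Since variable 1 relaxes to 0.8, the up-branch `(1, 1)` receives the higher probability and is selected; the weak branch would be pruned and the process repeated on the child node.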
S160: and outputting the target solution.
The system 001 may output the target solution to other devices communicatively coupled to the system 001, such as a client, or to other components of the system 001, such as the storage medium 130.
In summary, the present description provides a method P100 and a system 001 for solving a combinatorial optimization problem. An instance of the combinatorial optimization problem is solved by the branch-and-bound algorithm, and during that solving process the constraints and relaxation solution of each branch node, together with the strong branch corresponding to that node, are used as sample data to train a decision model. In the solving process of the target combinatorial optimization problem based on the branch-and-bound algorithm, at each branch node, the constraints and relaxation solution corresponding to the branch node are input into the trained decision model, which outputs the strong branch corresponding to the current node. The branching step of the branch-and-bound process is thus simulated by the decision model, and the strong branch at each branch node is found quickly without solving each branch, which greatly shortens computation time and accelerates the solving of the combinatorial optimization problem. Meanwhile, because the method and the system still solve based on the branch-and-bound algorithm, a high-quality solving result can be obtained. In this way, the branching process is decomposed and learned to achieve this aim.
Another aspect of the present description provides a non-transitory storage medium having stored thereon at least one set of executable instructions for solving a combinatorial optimization problem. When executed by a processor, the executable instructions direct the processor to perform the steps of the method for solving a combinatorial optimization problem P100 described herein. In some possible implementations, various aspects of the description may also be implemented in the form of a program product including program code. The program code is configured to cause the system 001 to perform the steps of the method for solving a combinatorial optimization problem P100 described herein when the program product is run on the system 001. A program product for implementing the above-described methods may employ a portable compact disc read only memory (CD-ROM) including program code and may be run on system 001. However, the program product of this description is not limited in this respect, as the readable storage medium can be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. 
The computer readable storage medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable storage medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for this specification may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on system 001, partly on system 001, as a stand-alone software package, partly on system 001 and partly on a remote computing device, or entirely on a remote computing device.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In conclusion, upon reading the present detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure can be presented by way of example only, and not limitation. Those skilled in the art will appreciate that the present specification contemplates various reasonable variations, enhancements and modifications to the embodiments, even though not explicitly described herein. Such alterations, improvements, and modifications are intended to be suggested by this specification, and are within the spirit and scope of the exemplary embodiments of this specification.
Furthermore, certain terminology has been used in this specification to describe embodiments of the specification. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
It should be appreciated that in the foregoing description of embodiments of the specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of the features. This is not to be taken as implying that all of these features must be used in combination; upon reading this description, a person skilled in the art may well extract some of these features as separate embodiments. That is, embodiments in this specification may also be understood as an integration of a plurality of sub-embodiments, and each sub-embodiment is equally valid with fewer than all features of a single foregoing disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, and documents, cited herein is hereby incorporated by reference, except for any prosecution file history associated with the same, any of the same that is inconsistent with or in conflict with the present document, or any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with the present document. For example, should there be any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or use of the term in the present document shall prevail.
Finally, it should be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the present specification. Other modified embodiments are also within the scope of this description. Accordingly, the disclosed embodiments are to be considered in all respects as illustrative and not restrictive. Those skilled in the art may implement the applications in this specification in alternative configurations according to the embodiments in this specification. Therefore, the embodiments of the present description are not limited to the embodiments described precisely in the application.

Claims (10)

1. A method of solving a combinatorial optimization problem, comprising:
obtaining a target optimization model of a target combination optimization problem, wherein the target optimization model comprises an optimization target function, target constraints and decision variables, and at least part of the decision variables are integer programming variables;
solving the target optimization model based on a branch-and-bound method and determining a target solution, wherein the solving of the target optimization model comprises, for each branch node:
determining a target strong branch corresponding to a current branch node based on a pre-trained decision model, wherein the decision model is obtained by training sample data of each sample branch node and a sample decision corresponding to the sample branch node in the solving process of a historical optimization model through a branch-and-bound algorithm, the sample data comprises sample constraints corresponding to the current sample branch node and relaxation solutions of sample variables, and the sample decision comprises the sample strong branch corresponding to the current sample branch node; and
outputting the target solution.
2. The method for solving a combinatorial optimization problem according to claim 1, wherein the objective combinatorial optimization problem is an objective population delineation problem, and the historical optimization model and the objective optimization model are homogeneous models.
3. The method of solving a combinatorial optimization problem of claim 1, wherein the sample data comprises a bipartite graph structure.
4. The method of solving a combinatorial optimization problem of claim 3, wherein the bipartite graph structure includes a plurality of sample constraints corresponding to the current sample branch nodes, a relaxed solution for a plurality of sample variables, and edges connecting the plurality of sample constraints and the relaxed solution for the plurality of sample variables.
5. The method of solving a combinatorial optimization problem of claim 1, wherein the decision model is a convolutional neural network model.
6. The method for solving a combinatorial optimization problem of claim 1, wherein the sample data is initialized based on affine transformation during the training of the decision model.
7. The method of solving a combinatorial optimization problem of claim 1, wherein the decision model is trained based on a minimized cross-entropy loss function during training of the decision model.
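For the two-branch setting of claim 9, the cross-entropy training of claim 7 reduces to logistic regression on recorded (node features, strong branch) pairs. A minimal single gradient step, with hypothetical names and a hand-rolled feature vector in place of the patent's network, might look like:

```python
import math

def cross_entropy_step(w, features, label, lr=0.1):
    """One gradient step minimizing the two-class cross-entropy.
    `label` is the strong branch (0 or 1) recorded from the solver,
    `features` is a toy stand-in for the node's learned embedding."""
    z = sum(wi * fi for wi, fi in zip(w, features))
    p = 1.0 / (1.0 + math.exp(-z))  # P(strong branch = 1)
    loss = -(label * math.log(p) + (1 - label) * math.log(1 - p))
    grad = [(p - label) * fi for fi in features]
    return [wi - lr * gi for wi, gi in zip(w, grad)], loss
```

Repeating the step drives the predicted probability toward the recorded strong branch, i.e. the loss decreases monotonically on a single sample.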
8. The method for solving the combinatorial optimization problem according to claim 1, wherein the determining the target strong branch corresponding to the current branch node based on the pre-trained decision model comprises:
determining a current optimization model corresponding to the current branch node, wherein the current optimization model comprises the optimization objective function, current constraints and the decision variables;
determining a current relaxation solution of the decision variable corresponding to the current optimization model based on a relaxation algorithm; and
inputting the current constraint and the current relaxation solution into the decision model, and determining the target strong branch corresponding to the current branch node.
9. The method for solving the combinatorial optimization problem of claim 8, wherein the inputting the current constraint and the current relaxation solution into the decision model to determine the target strong branch corresponding to the current branch node comprises:
determining two branches corresponding to the current branch node;
inputting the current constraint and the current relaxation solution into the decision model, and determining the probability of the two branches corresponding to the current branch node; and
taking the branch with the higher probability of the two branches as the target strong branch.
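Claims 8-9 end by turning the model's two branch scores into probabilities and keeping the larger one. A softmax over two logits is one natural reading; the patent does not name softmax, so treat this as an illustrative assumption:

```python
import math

def target_strong_branch(logits):
    """Claim 9's selection rule: normalize the model's two branch
    scores into probabilities and return the index of the higher one
    (the target strong branch) along with both probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs.index(max(probs)), probs
```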
10. A system for solving a combinatorial optimization problem, comprising:
at least one storage medium storing at least one instruction set for solving a combinatorial optimization problem; and
at least one processor communicatively coupled to the at least one storage medium,
wherein when the system for solving the combinatorial optimization problem is running, the at least one processor reads the at least one instruction set and performs the method for solving the combinatorial optimization problem of any of claims 1-9 according to the instructions of the at least one instruction set.
CN202210495655.5A 2022-05-09 2022-05-09 Method and system for solving combined optimization problem Pending CN114595641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210495655.5A CN114595641A (en) 2022-05-09 2022-05-09 Method and system for solving combined optimization problem


Publications (1)

Publication Number Publication Date
CN114595641A true CN114595641A (en) 2022-06-07

Family

ID=81820844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210495655.5A Pending CN114595641A (en) 2022-05-09 2022-05-09 Method and system for solving combined optimization problem

Country Status (1)

Country Link
CN (1) CN114595641A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001610A1 (en) * 2022-07-01 2024-01-04 华为云计算技术有限公司 Method for solving goal programming problem, node selection method, and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060112049A1 (en) * 2004-09-29 2006-05-25 Sanjay Mehrotra Generalized branching methods for mixed integer programming
CN104701867A (en) * 2015-03-27 2015-06-10 河海大学 Day-ahead reactive power optimization method based on branch-bound method and primal-dual interior point method
CN111915060A (en) * 2020-06-30 2020-11-10 华为技术有限公司 Processing method and processing device for combined optimization task
CN113641417A (en) * 2021-06-29 2021-11-12 南京邮电大学 Vehicle safety task unloading method based on branch-and-bound method
CN113657589A (en) * 2021-07-08 2021-11-16 南方科技大学 Method, system, device and storage medium for solving optimization problem
CN114398430A (en) * 2022-03-25 2022-04-26 清华大学深圳国际研究生院 Complex network link prediction method based on multi-target mixed integer programming model


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WENLONG FU et al.: "A new branch and bound algorithm for nonconvex quadratic programming with box constraints", 2013 10TH INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (FSKD) *
ZHANG LIHUA et al.: "Application of the interior point-branch and bound method in optimal unit commitment", Relay *
ZHAO HONGSHAN et al.: "Branch and bound algorithm for transmission line expansion planning", Power System Protection and Control *


Similar Documents

Publication Publication Date Title
Gu et al. Mamba: Linear-time sequence modeling with selective state spaces
US9990558B2 (en) Generating image features based on robust feature-learning
Kurach et al. Neural random-access machines
US20180260709A1 (en) Calculating device and method for a sparsely connected artificial neural network
US11544542B2 (en) Computing device and method
CN115104105A (en) Antagonistic autocoder architecture for graph-to-sequence model approach
US11775832B2 (en) Device and method for artificial neural network operation
Chen et al. Binarized neural architecture search for efficient object recognition
CN115311506B (en) Image classification method and device based on quantization factor optimization of resistive random access memory
WO2023124342A1 (en) Low-cost automatic neural architecture search method for image classification
CN115017178A (en) Training method and device for data-to-text generation model
CN114444668A (en) Network quantization method, network quantization system, network quantization apparatus, network quantization medium, and image processing method
Li et al. Efficient bitwidth search for practical mixed precision neural network
CN114595641A (en) Method and system for solving combined optimization problem
Ma et al. Accelerating deep neural network filter pruning with mask-aware convolutional computations on modern CPUs
CN116797850A (en) Class increment image classification method based on knowledge distillation and consistency regularization
Zhou et al. Effective vision transformer training: A data-centric perspective
Xia et al. Regularly truncated m-estimators for learning with noisy labels
Guo et al. Efficient convolutional networks learning through irregular convolutional kernels
Ressi et al. Neural networks reduction via lumping
CN115905546A (en) Graph convolution network document identification device and method based on resistive random access memory
CN115599918A (en) Mutual learning text classification method and system based on graph enhancement
Fan et al. A repetitive feature selection method based on improved ReliefF for missing data
Xia et al. Efficient synthesis of compact deep neural networks
Tang et al. Training Compact DNNs with ℓ1/2 Regularization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220607