WO2020196866A1 - Information processing device, information processing system, information processing method, storage medium, and program - Google Patents
- Publication number: WO2020196866A1 (application PCT/JP2020/014164)
- Authority: WIPO (PCT)
Classifications
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06F17/13—Differential equations
- G06F30/20—Design optimisation, verification or simulation
- G06N20/00—Machine learning
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- G06F2111/20—Configuration CAD, e.g. designing by assembling or positioning modules selected from libraries of predesigned modules
- G06N10/40—Physical realisations or architectures of quantum processors or components for manipulating qubits, e.g. qubit coupling or qubit control
- G06N10/60—Quantum algorithms, e.g. based on quantum optimisation, quantum Fourier or Hadamard transforms
Description
- An embodiment of the present invention relates to an information processing device, an information processing system, an information processing method, a storage medium, and a program.
- A combinatorial optimization problem is the problem of selecting the most suitable combination from among a plurality of candidate combinations.
- Mathematically, a combinatorial optimization problem reduces to the problem of maximizing or minimizing a function of a plurality of discrete variables, called the "objective function".
- Combinatorial optimization problems are universal across fields such as finance, logistics, transportation, design, manufacturing, and life science. However, the number of combinations grows on the order of an exponential function of the problem size (the so-called "combinatorial explosion"), so it is not always possible to find the optimal solution. Moreover, it is often difficult even to obtain an approximate solution close to the optimal solution.
- An embodiment of the present invention provides an information processing device, an information processing system, an information processing method, a storage medium, and a program for calculating a solution of a combinatorial optimization problem within a practical time.
- The information processing apparatus is configured to repeatedly update a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables.
- The information processing device includes a storage unit and a processing circuit.
- The processing circuit updates the first vector by weighting each second variable corresponding to a first variable and adding it to that first variable, and stores the updated first vector in the storage unit as a searched vector.
- The processing circuit weights each first variable by a first coefficient that monotonically increases or decreases according to the number of updates and adds it to the corresponding second variable, calculates a problem term from the plurality of first variables, and adds the problem term to the second variable.
- The processing circuit is further configured to read the searched vectors from the storage unit, calculate a correction term that includes the inverse of the distance between the first vector being updated and each searched vector, and add the correction term to the second variable, thereby updating the second vector.
- Brief description of the drawings: a diagram showing a configuration example of an information processing system; a block diagram showing a configuration example of the management server; a diagram showing an example of the data stored in the storage of the calculation server; and a flowchart showing an example of processing when the solution of the simulated bifurcation algorithm is calculated by time evolution.
- FIG. 1 is a block diagram showing a configuration example of the information processing system 100.
- the information processing system 100 of FIG. 1 includes a management server 1, a network 2, calculation servers (information processing devices) 3a to 3c, cables 4a to 4c, a switch 5, and a storage device 7. Further, FIG. 1 shows a client terminal 6 capable of communicating with the information processing system 100.
- the management server 1, the calculation servers 3a to 3c, the client terminal 6, and the storage device 7 can communicate data with each other via the network 2.
- the calculation servers 3a to 3c can store data in the storage device 7 and read data from the storage device 7.
- the network 2 is, for example, the Internet in which a plurality of computer networks are connected to each other.
- the network 2 can use wired, wireless, or a combination thereof as a communication medium.
- an example of the communication protocol used in the network 2 is TCP / IP, but the type of the communication protocol is not particularly limited.
- the calculation servers 3a to 3c are connected to the switch 5 via cables 4a to 4c, respectively. Cables 4a-4c and switch 5 form an interconnect between compute servers. The calculation servers 3a to 3c can also perform data communication with each other via the interconnect.
- The switch 5 is, for example, an InfiniBand switch.
- The cables 4a to 4c are, for example, InfiniBand cables. However, a wired LAN switch and cables may be used instead of the InfiniBand switch and cables.
- the communication standard and communication protocol used in the cables 4a to 4c and the switch 5 are not particularly limited. Examples of the client terminal 6 include a notebook PC, a desktop PC, a smartphone, a tablet, an in-vehicle terminal, and the like.
- the management server 1 controls the calculation server by converting, for example, the combinatorial optimization problem input by the user into a format that can be processed by each calculation server. Then, the management server 1 acquires the calculation result from each calculation server and converts the aggregated calculation result into a solution of the combinatorial optimization problem. In this way, the user can obtain a solution to the combinatorial optimization problem.
- the solution of the combinatorial optimization problem shall include an optimal solution and an approximate solution close to the optimal solution.
- FIG. 1 shows three calculation servers.
- the number of calculation servers included in the information processing system is not limited.
- the number of calculation servers used to solve the combinatorial optimization problem is not particularly limited.
- the information processing system may include one calculation server.
- the combinatorial optimization problem may be solved by using one of a plurality of calculation servers included in the information processing system.
- the information processing system may include hundreds or more calculation servers.
- the calculation server may be a server installed in a data center or a desktop PC installed in an office. Further, the calculation server may be a plurality of types of computers installed at different locations.
- the type of information processing device used as the calculation server is not particularly limited.
- the calculation server may be a general-purpose computer, a dedicated electronic circuit, or a combination thereof.
- FIG. 2 is a block diagram showing a configuration example of the management server 1.
- the management server 1 in FIG. 2 is, for example, a computer including a central processing unit (CPU) and a memory.
- the management server 1 includes a processor 10, a storage unit 14, a communication circuit 15, an input circuit 16, and an output circuit 17. It is assumed that the processor 10, the storage unit 14, the communication circuit 15, the input circuit 16 and the output circuit 17 are connected to each other via the bus 20.
- the processor 10 includes a management unit 11, a conversion unit 12, and a control unit 13 as internal components.
- the processor 10 is an electronic circuit that executes calculations and controls the management server 1.
- the processor 10 is an example of a processing circuit.
- As the processor 10, for example, a CPU, a microprocessor, an ASIC, an FPGA, a PLD, or a combination thereof can be used.
- the management unit 11 provides an interface for operating the management server 1 via the user's client terminal 6. Examples of the interface provided by the management unit 11 include API, CLI, and a web page.
- the user can input information on the combinatorial optimization problem via the management unit 11, and can view and / or download the calculated solution of the combinatorial optimization problem.
- the conversion unit 12 converts the combinatorial optimization problem into a format that can be processed by each calculation server.
- the control unit 13 transmits a control command to each calculation server.
- After the control unit 13 acquires the calculation result from each calculation server, the conversion unit 12 aggregates the plurality of calculation results and converts them into a solution of the combinatorial optimization problem. Further, the control unit 13 may specify the processing content to be executed by each calculation server, or by each processor in each server.
- the storage unit 14 stores various types of data including the program of the management server 1, data necessary for executing the program, and data generated by the program.
- the storage unit 14 may be a volatile memory, a non-volatile memory, or a combination thereof. Examples of volatile memory include DRAM, SRAM and the like. Examples of non-volatile memory include NAND flash memory, NOR flash memory, ReRAM, or MRAM. Further, as the storage unit 14, a hard disk, an optical disk, a magnetic tape, or an external storage device may be used.
- the communication circuit 15 transmits / receives data to / from each device connected to the network 2.
- the communication circuit 15 is, for example, a wired LAN NIC (Network Interface Card). However, the communication circuit 15 may be another type of communication circuit such as a wireless LAN.
- the input circuit 16 realizes data input to the management server 1. It is assumed that the input circuit 16 includes, for example, USB, PCI-Express, or the like as an external port.
- the operating device 18 is connected to the input circuit 16.
- the operation device 18 is a device for inputting information to the management server 1.
- the operating device 18 is, for example, a keyboard, a mouse, a touch panel, a voice recognition device, and the like, but is not limited thereto.
- the output circuit 17 realizes data output from the management server 1. It is assumed that the output circuit 17 is provided with HDMI, DisplayPort, or the like as an external port.
- the display device 19 is connected to the output circuit 17. Examples of the display device 19 include, but are not limited to, an LCD (liquid crystal display), an organic EL (organic electroluminescence) display, or a projector.
- the administrator of the management server 1 can perform maintenance on the management server 1 by using the operation device 18 and the display device 19.
- the operation device 18 and the display device 19 may be incorporated in the management server 1. Further, the operation device 18 and the display device 19 do not necessarily have to be connected to the management server 1. For example, the administrator may perform maintenance on the management server 1 using an information terminal capable of communicating with the network 2.
- FIG. 3 shows an example of data stored in the storage unit 14 of the management server 1.
- the problem data 14A, the calculation data 14B, the management program 14C, the conversion program 14D, and the control program 14E are stored in the storage unit 14 of FIG.
- the problem data 14A includes data of a combinatorial optimization problem.
- the calculation data 14B includes the calculation results collected from each calculation server.
- the management program 14C is a program that realizes the functions of the management unit 11 described above.
- the conversion program 14D is a program that realizes the functions of the conversion unit 12 described above.
- the control program 14E is a program that realizes the functions of the control unit 13 described above.
- FIG. 4 is a block diagram showing a configuration example of the calculation server.
- The calculation server of FIG. 4 is, for example, an information processing apparatus that executes the calculation of the first vector and the second vector either independently or with the work divided among the other calculation servers.
- FIG. 4 illustrates the configuration of the calculation server 3a as an example.
- the other calculation server may have the same configuration as the calculation server 3a, or may have a configuration different from that of the calculation server 3a.
- the calculation server 3a includes, for example, a communication circuit 31, a shared memory 32, processors 33A to 33D, a storage 34, and a host bus adapter 35. It is assumed that the communication circuit 31, the shared memory 32, the processors 33A to 33D, the storage 34, and the host bus adapter 35 are connected to each other via the bus 36.
- the communication circuit 31 transmits / receives data to / from each device connected to the network 2.
- the communication circuit 31 is, for example, a wired LAN NIC (Network Interface Card). However, the communication circuit 31 may be another type of communication circuit such as a wireless LAN.
- the shared memory 32 is a memory that can be accessed from the processors 33A to 33D. Examples of the shared memory 32 include volatile memories such as DRAM and SRAM. However, as the shared memory 32, another type of memory such as a non-volatile memory may be used.
- the shared memory 32 may be configured to store, for example, the first vector and the second vector.
- the processors 33A to 33D can share data via the shared memory 32.
- Not all of the memory of the calculation server 3a needs to be configured as shared memory.
- A part of the memory of the calculation server 3a may be configured as local memory that can be accessed by only one of the processors.
- the shared memory 32 and the storage 34 described later are examples of storage units of the information processing device.
- Processors 33A to 33D are electronic circuits that execute calculation processing.
- The processor may be, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an FPGA (Field-Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or a combination thereof.
- the processor may be a CPU core or a CPU thread.
- the processor may be connected to other components of the calculation server 3a via a bus such as PCI express.
- In the example of FIG. 4, the calculation server is equipped with four processors.
- However, the number and/or types of processors provided in one calculation server may differ from this example.
- the processor is an example of a processing circuit of an information processing device.
- the information processing device may include a plurality of processing circuits.
- The processing circuit of the information processing device updates the first vector by weighting each second variable and adding it to the corresponding first variable, and stores the updated first vector in the storage unit as a searched vector.
- The processing circuit calculates the problem term using the plurality of first variables and adds the problem term to the second variable.
- The processing circuit may further be configured to read the searched vectors from the storage unit, calculate a correction term that includes the inverse of the distance between the first vector being updated and each searched vector, and add the correction term to the second variable to update the second vector.
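The correction term described above can be sketched as follows. This is a minimal illustration, not the exact functional form of the embodiment: the coefficient `c_a`, the per-vector summation, and the small constant `eps` added for numerical safety are all assumptions; only the property that the term grows with the inverse of the distance to each searched vector comes from the text.

```python
import numpy as np

def correction_term(x, searched_vectors, c_a=1.0, eps=1e-9):
    """Hypothetical correction term: one repulsive contribution per searched
    vector, with magnitude proportional to the inverse of the distance
    between the first vector x and that searched vector."""
    g = np.zeros_like(x)
    for xs in searched_vectors:
        d = x - xs                    # direction away from the searched vector
        dist = np.linalg.norm(d)
        # unit direction (d / dist) scaled by 1 / dist -> d / dist**2
        g += c_a * d / (dist**2 + eps)
    return g
```

Adding such a term to the second variables pushes the search away from regions of the solution space that have already been explored; the closer the current first vector is to a searched vector, the stronger the repulsion.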
- The problem term may be calculated based on the Ising model.
- The first coefficient does not necessarily have to increase or decrease monotonically.
- The problem term may include many-body interactions. Details of the first coefficient, the problem term, the searched vectors, the correction term, the Ising model, and many-body interactions will be described later.
- processing contents can be assigned for each processor.
- the unit of computational resources to which the processing contents are assigned is not limited.
- the processing contents may be assigned for each computer, or the processing contents may be assigned for each process running on the processor or each CPU thread.
- the storage 34 stores various data including the program of the calculation server 3a, the data necessary for executing the program, and the data generated by the program.
- the storage 34 may be configured to store, for example, the first vector and the second vector.
- the storage 34 may be a volatile memory, a non-volatile memory, or a combination thereof. Examples of volatile memory include DRAM, SRAM and the like. Examples of non-volatile memory include NAND flash memory, NOR flash memory, ReRAM, or MRAM. Further, as the storage 34, a hard disk, an optical disk, a magnetic tape, or an external storage device may be used.
- the host bus adapter 35 realizes data communication between calculation servers.
- the host bus adapter 35 is connected to the switch 5 via a cable 4a.
- the host bus adapter 35 is, for example, an HCA (Host Channel Adapter).
- FIG. 5 shows an example of data stored in the storage of the calculation server.
- the calculation data 34A, the calculation program 34B, and the control program 34C are stored in the storage 34 of FIG.
- the calculation data 34A includes data in the middle of calculation or a calculation result of the calculation server 3a. At least a part of the calculated data 34A may be stored in a different storage hierarchy such as the shared memory 32, the cache of the processor, or the register of the processor.
- the calculation program 34B is a program that realizes a calculation process in each processor and a data storage process in the shared memory 32 and the storage 34 based on a predetermined algorithm.
- the control program 34C is a program that controls the calculation server 3a based on the command transmitted from the control unit 13 of the management server 1 and transmits the calculation result of the calculation server 3a to the management server 1.
- An Ising machine is an example of an information processing device used to solve a combinatorial optimization problem.
- the Ising machine is an information processing device that calculates the energy of the ground state of the Ising model.
- the Ising model has often been used mainly as a model for ferromagnets and phase transition phenomena.
- the Ising model has been increasingly used as a model for solving combinatorial optimization problems.
- the following equation (1) shows the energy of the Ising model.
- s i and s j are spins
- spin is a binary variable having a value of either +1 or -1.
- N is the number of spins.
- h i is the local magnetic field acting on each spin.
- J is a matrix of coupling coefficients between spins.
- The matrix J is a real symmetric matrix whose diagonal components are 0; J_ij denotes the element of the matrix J in row i and column j.
- The Ising model of equation (1) is quadratic in the spins; however, as described later, an extended Ising model including terms of third or higher order in the spins (an Ising model having many-body interactions) may also be used.
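The energy expression referred to as equation (1) does not survive in this text. Given the surrounding definitions (binary spins s_i, local fields h_i, and a real symmetric coupling matrix J with zero diagonal), and choosing signs consistent with the update amounts quoted later for step S106 (where -Δt·c·h_i·a and -Δt·c·ΣJ_ij·x_j are added to y_i), a plausible reconstruction is:

```latex
E_{\mathrm{Ising}} = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} J_{ij}\, s_i s_j + \sum_{i=1}^{N} h_i s_i
```

Many references use the opposite overall sign; only the quadratic-plus-linear structure is fixed by the surrounding text.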
- The solution of the Ising model is expressed in the form of a spin vector (s_1, s_2, ..., s_N). Let this vector be called the solution vector.
- The vector (s_1, s_2, ..., s_N) at which the energy E_Ising takes its minimum value is called the optimal solution.
- the calculated Ising model solution does not necessarily have to be an exact optimal solution.
- The problem of finding, using the Ising model, an approximate solution in which the energy E_Ising is as small as possible (that is, an approximate solution in which the value of the objective function is as close as possible to the optimal value) is referred to as an Ising problem.
- Since the spin s_i in equation (1) is a binary variable, it can easily be interconverted with the discrete variable (bit) used in combinatorial optimization problems via the transformation (1 + s_i)/2. Therefore, the solution of a combinatorial optimization problem can be found by converting the problem into an Ising problem and letting an Ising machine perform the calculation.
- The problem of finding a solution that minimizes a quadratic objective function whose variables are discrete variables (bits) taking either 0 or 1 is called a QUBO (Quadratic Unconstrained Binary Optimization) problem. The Ising problem expressed by equation (1) can be said to be equivalent to the QUBO problem.
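The bit-spin conversion mentioned above can be sketched in a few lines, using the mapping b = (1 + s)/2 quoted in the text and its inverse s = 2b - 1:

```python
def spin_to_bit(s: int) -> int:
    """Map an Ising spin s in {-1, +1} to a QUBO bit b in {0, 1} via b = (1 + s) / 2."""
    return (1 + s) // 2

def bit_to_spin(b: int) -> int:
    """Inverse mapping: s = 2b - 1."""
    return 2 * b - 1
```

Because the mapping is a bijection, any QUBO objective over bits can be rewritten as an Ising energy over spins and vice versa, which is what allows an Ising machine to solve QUBO problems.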
- A quantum annealer realizes quantum annealing using superconducting circuits.
- A coherent Ising machine utilizes the oscillation phenomenon of a network formed by optical parametric oscillators.
- A quantum bifurcation machine utilizes the quantum-mechanical bifurcation phenomenon in a network of parametric oscillators with the Kerr effect. While these hardware implementations have the potential to significantly reduce calculation time, they are also difficult to scale up and to operate stably.
- Technology is also being developed to perform simulated annealing (SA) faster.
- However, because general simulated annealing is a sequential update algorithm in which each variable is updated one at a time, it is difficult to speed up the calculation process by parallelization.
- Against this background, a simulated bifurcation algorithm has been proposed that can solve large-scale combinatorial optimization problems at high speed by parallel calculation on a digital computer.
- Below, information processing devices, information processing systems, information processing methods, storage media, and programs that solve combinatorial optimization problems using the simulated bifurcation algorithm will be described.
- H is the Hamiltonian of equation (3) below.
- In place of H, an extended Hamiltonian H' including a term G(x_1, x_2, ..., x_N), as shown in equation (4) below, may be used.
- Below, processing will be described taking as an example the case where the term G(x_1, x_2, ..., x_N) is a correction term.
- The term G(x_1, x_2, ..., x_N) may be derived from the constraints of the combinatorial optimization problem.
- However, the derivation method and the type of the term G(x_1, x_2, ..., x_N) are not limited. Further, in equation (4) the term G(x_1, x_2, ..., x_N) is added to the original Hamiltonian H; the term G(x_1, x_2, ..., x_N) may, however, be incorporated into the extended Hamiltonian in a different way.
- Each term of the extended Hamiltonian depends on either the elements x_i of the first vector or the elements y_i of the second vector.
- That is, an extended Hamiltonian that can be separated into a term U of the first-vector elements x_i and a term V of the second-vector elements y_i may be used.
- Below, processing will be described assuming that the time evolution is calculated. However, the calculation of the simulated bifurcation algorithm may be performed by a method other than time evolution.
- the coefficient D corresponds to detuning.
- the coefficient p (t) corresponds to the above-mentioned first coefficient, and is also called pumping amplitude.
- the value of the coefficient p (t) can be monotonically increased according to the number of updates.
- the initial value of the coefficient p (t) may be set to 0.
- Below, the case where the first coefficient p(t) is a positive value and the value of the first coefficient p(t) increases according to the number of updates will be described as an example.
- However, the sign of the algorithm presented below may be inverted and a negative first coefficient p(t) may be used.
- In that case, the value of the first coefficient p(t) decreases monotonically according to the number of updates, while the absolute value of the first coefficient p(t) increases monotonically according to the number of updates.
- the coefficient K corresponds to a positive Kerr coefficient.
- a constant coefficient can be used as the coefficient c.
- The value of the coefficient c may be determined before performing the calculation with the simulated bifurcation algorithm.
- The coefficient c can be set to a value close to the reciprocal of the maximum eigenvalue of the J^(2) matrix.
- For example, the value 0.5D√(N/2n) can be used as c.
- Here, n is the number of edges of the graph related to the combinatorial optimization problem.
- a(t) is a coefficient that increases together with p(t) when the time evolution is calculated.
- For example, √(p(t)/K) can be used as a(t).
- the information processing apparatus may execute the above-mentioned conversion process based on the number of updates of the first vector and the second vector, and determine whether or not to obtain the solution vector.
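The time-evolution equations referred to as (2) do not survive in this text. A reconstruction consistent with the coefficients defined above (detuning D, pumping amplitude p(t), Kerr coefficient K, coefficient c, and a(t)) and with the per-step update amounts quoted for steps S104 to S106 would be (the exact sign conventions are an assumption):

```latex
\frac{dx_i}{dt} = \frac{\partial H}{\partial y_i} = D\, y_i,
\qquad
\frac{dy_i}{dt} = -\frac{\partial H}{\partial x_i}
  = \left[\, p(t) - D - K x_i^{2} \,\right] x_i
    - c\, h_i\, a(t) - c \sum_{j} J_{ij}\, x_j .
```

Discretizing these equations with the symplectic Euler method yields the recurrence-formula updates described next.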
- The symplectic Euler method can be used to convert the above (2) into discrete recurrence formulas and perform the calculation.
- The following (6) shows an example of the simulated bifurcation algorithm after conversion into recurrence formulas.
- t is a time
- ⁇ t is a time step (time step width).
- the time t and the time step ⁇ t are used to show the correspondence with the differential equation.
- the time t and the time step ⁇ t do not necessarily have to be included as explicit parameters. For example, if the time step ⁇ t is set to 1, it is possible to remove the time step ⁇ t from the algorithm at the time of mounting.
- x i (t + ⁇ t) may be interpreted as the updated value of x i (t). That is, "t” in (4) above indicates the value of the variable before the update, and "t + ⁇ t” indicates the value of the variable after the update.
- The value of the spin s_i is calculated based on the sign of the variable x_i after the value of p(t) has been increased from its initial value (e.g., 0) to a predetermined value.
- For example, the sign function sgn(x_i) = +1 for x_i > 0 and sgn(x_i) = -1 for x_i < 0 can be used.
- That is, the value of the spin s_i can be obtained by converting the variable x_i with the sign function.
- The timing of obtaining the solution of the combinatorial optimization problem is not particularly limited.
- For example, the solution (solution vector) of the combinatorial optimization problem may be obtained when the number of updates of the first vector and the second vector, the value of the first coefficient p, or the value of the objective function exceeds a threshold value.
- FIG. 6 shows an example of processing in the case of calculating the solution of the simulated bifurcation algorithm by time evolution. Hereinafter, the process will be described with reference to FIG. 6.
- First, the calculation server obtains the matrix J_ij and the vector h_i corresponding to the problem from the management server 1 (step S101). Then, the calculation server initializes the coefficients p(t) and a(t) (step S102). For example, the values of the coefficients p and a can be set to 0 in step S102, but the initial values of the coefficients p and a are not limited.
- the calculation server initializes the first variable x i and the second variable y i (step S103).
- the first variable x i is an element of the first vector.
- the second variable y i is an element of the second vector.
- the compute server may initialize x i and y i with pseudo-random numbers, for example.
- the method of initializing x i and y i is not limited. Further, the variables may be initialized at different timings, or at least one of the variables may be initialized a plurality of times.
- Next, the calculation server updates the first vector by weighting the element y_i of the second vector corresponding to the element x_i of the first vector and adding it (step S104). For example, in step S104, Δt × D × y_i can be added to the variable x_i. Then, the calculation server updates the element y_i of the second vector (steps S105 and S106). For example, in step S105, Δt × [(p - D - K × x_i × x_i) × x_i] can be added to the variable y_i. In step S106, -Δt × c × h_i × a - Δt × c × Σ J_ij × x_j can be added to the variable y_i.
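Putting steps S103 to S109 together, the update loop can be sketched as follows. This is a minimal NumPy sketch, not the embodiment itself: the parameter values (D = K = 1, Δt = 0.1, a linear ramp of p from 0 toward 1) and the fallback choice of c are illustrative assumptions; the per-step update amounts follow the quantities quoted above.

```python
import numpy as np

def simulated_bifurcation(J, h, steps=2000, dt=0.1, D=1.0, K=1.0, c=None, seed=0):
    """Sketch of the simulated bifurcation loop using the symplectic Euler method.
    x is the first vector, y the second vector; J and h define the problem."""
    N = len(h)
    if c is None:
        n_edges = max(1, np.count_nonzero(J) // 2)   # n: number of graph edges
        c = 0.5 * D * np.sqrt(N / (2 * n_edges))     # heuristic quoted in the text
    rng = np.random.default_rng(seed)
    x = 0.1 * rng.standard_normal(N)   # step S103: initialize the first vector
    y = np.zeros(N)                    # step S103: initialize the second vector
    for t in range(steps):
        p = t / steps                  # step S107: increase p monotonically
        a = np.sqrt(p / K)             # a(t) = sqrt(p(t) / K)
        x += dt * D * y                                # step S104
        y += dt * (p - D - K * x * x) * x              # step S105
        y += -dt * c * h * a - dt * c * (J @ x)        # step S106: problem term
    return np.where(x > 0, 1, -1)      # step S109: spins from the sign of x

# Two-spin examples under the sign convention of step S106:
# with J_12 = +1 the minimum has opposite spins, with J_12 = -1 aligned spins.
J_anti = np.array([[0.0, 1.0], [1.0, 0.0]])
J_ferro = -J_anti
h0 = np.zeros(2)
s_anti = simulated_bifurcation(J_anti, h0)
s_ferro = simulated_bifurcation(J_ferro, h0)
```

Under this convention the loop drives x toward a minimum of (1/2)·xᵀJx + a·hᵀx, so the two examples should yield spin products of -1 and +1 respectively; the threshold test of step S108 is represented here simply by the fixed number of iterations.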
- the calculation server updates the values of the coefficients p and a (step S107). For example, a constant value ( ⁇ p) can be added to the coefficient p, and the coefficient a can be set to the positive square root of the updated coefficient p. However, as will be described later, this is only an example of how to update the values of the coefficients p and a.
- the calculation server determines whether or not the number of updates of the first vector and the second vector is less than the threshold value (step S108). If the number of updates is less than the threshold value (YES in step S108), the calculation server re-executes the processes of steps S104 to S107.
- When the number of updates is equal to or greater than the threshold value (NO in step S108), the spins s_i, which are the elements of the solution vector, are obtained based on the elements x_i of the first vector (step S109).
- In step S109, for example, each variable x_i of the first vector having a positive value is converted to +1 and each variable x_i having a negative value to -1, to obtain the solution vector.
- When the number of updates is less than the threshold value (YES in step S108), the value of the Hamiltonian may be calculated based on the first vector, and the first vector and the value of the Hamiltonian may be stored. This allows the user to select the approximate solution closest to the optimal solution from among the plurality of first vectors.
- processing may be parallelized using a plurality of calculation servers.
- Processing may be parallelized by a plurality of processors.
- the implementation and the mode of parallelization of processing are not limited.
- the execution order of the update processing of the variables x i and y i shown in steps S105 to S106 described above is only an example. Therefore, the update processing of the variables x i and y i may be executed in a different order. For example, the order in which the update process of the variable x i and the update process of the variable y i are executed may be interchanged. Further, the order of sub-processing included in the update processing of each variable is not limited. For example, the execution order of the addition processing included in the update processing of the variable y i may be different from the example of FIG. The execution order and timing of the processes that are the prerequisites for executing the update process of each variable are not particularly limited.
- the calculation of the problem term may be performed in parallel with other processing, including the processing of updating the variable x i .
- the order and timing at which the update processing of the variables x i and y i , the sub-processing included in the update processing of each variable, and the calculation processing of the problem term are executed are not limited; the same applies to the processing of each flowchart shown below.
- the calculation node is, for example, a calculation server (information processing unit), a processor (CPU), a GPU, a semiconductor circuit, a virtual machine (VM), a virtual processor, a CPU thread, or a process.
- the calculation node may be any computational resource that can be the execution subject of a calculation process; neither its granularity nor the distinction between hardware and software is limited.
- Since each calculation node executes the calculation process independently, there is a possibility that multiple calculation nodes will search overlapping areas in the solution space. Further, when the calculation process is repeated, a calculation node may search the same region of the solution space in multiple trials. Therefore, the same local solution may be calculated by a plurality of calculation nodes, or the same local solution may be calculated repeatedly. Ideally, the optimal solution would be found by searching all the local solutions in the solution space and evaluating each local solution in the calculation process. On the other hand, considering that there can be many local solutions in the solution space, the information processing device / information processing system is desired to execute efficient solution processing and to obtain a practical solution within a realistic calculation time and amount of calculation.
- the calculation node can store the calculated first vector in the storage unit during the calculation process.
- the calculation node reads out the previously calculated first vector x (m) from the storage unit.
- m is a number indicating the timing at which the element of the first vector is obtained.
- the calculation node executes the correction process based on the previously calculated first vector x (m) .
- the previously calculated first vector will be referred to as the searched vector and will be distinguished from the first vector to be updated.
- the correction process can be performed by using the above-mentioned correction term G (x 1 , x 2 , ... x N ).
- Equation (7) is an example of the distance between the first vector and the searched vector. Equation (7) is called the Q-th power norm. In equation (7), Q can take any positive value.
- the following equation (8) corresponds to the limit of equation (7) as Q approaches infinity and is called the infinity norm.
- In the following, the case where the square norm is used as the distance will be described as an example. However, this does not limit the type of distance used in the calculation.
- the correction term G (x 1 , x 2 , ... x N ) may include the reciprocal of the distance between the first vector and the searched vector.
- As the first vector approaches the searched vector, the distance between them decreases, and the value of the correction term G (x 1 , x 2 , ... x N ) becomes large.
- (9) is only an example of a correction term that can be used in the calculation. Therefore, in the calculation, a correction term having a form different from that in (9) may be used.
- the following equation (10) is an example of the extended Hamiltonian H' including the correction term.
- any positive value can be used as the coefficient c A in equation (10).
- any positive value can be used for k A.
- the correction term of (10) includes the sum of the reciprocals of the distances calculated using each of the searched vectors obtained so far. That is, the processing circuit of the information processing apparatus may be configured to calculate the reciprocal of the distance using each of the plurality of searched vectors and calculate the correction term by adding the plurality of reciprocals. As a result, the update process of the first vector can be executed so as to avoid the regions in the vicinity of the plurality of searched vectors obtained so far.
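For illustration, the sum-of-reciprocals correction term described above might be computed as follows, assuming the squared (Q = 2) norm as the distance. The names, the exponent k_A, and the small eps guard against division by zero are assumptions for illustration, not the exact form of equation (10).

```python
import numpy as np

def correction_term(x, searched, c_A=1.0, k_A=1, eps=1e-9):
    """Sum, over every searched vector x(m), of the reciprocal of the
    distance between the first vector x and x(m)."""
    g = 0.0
    for xm in searched:
        d = np.sum((np.asarray(x) - np.asarray(xm)) ** 2)  # squared-norm distance
        g += 1.0 / (d ** k_A + eps)  # reciprocal grows as x approaches x(m)
    return c_A * g
```

The reciprocal of each distance grows as the first vector approaches a searched vector, which is what drives the update process away from regions already explored.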
- the above (11) can be converted into a discrete recurrence formula to calculate the simulated branching algorithm.
- (13) shows an example of a simulated branching algorithm after conversion to a recurrence formula.
- the following term (14) is derived from the Ising energy. Since the form of this term is determined according to the problem to be solved, it is called a problem term. As will be described later, the problem term may have a form different from (14).
- FIG. 7 shows an example of processing in the case of performing a solution using an algorithm including a correction term. Hereinafter, the process will be described with reference to FIG. 7.
- the calculation server initializes the coefficients p (t), a (t) and the variable m (step S111).
- the values of the coefficients p and a can be set to 0 in step S111, but the initial values of the coefficients p and a are not limited.
- the variable m can be set to 1 in step S111.
- the calculation server is assumed to have acquired matrix J ij and vector h i corresponding to the problem from the management server 1 before the processing of the flowchart of FIG. 7 is started.
- the calculation server initializes the first variable x i and the second variable y i (step S112).
- the first variable x i is an element of the first vector.
- the second variable y i is an element of the second vector.
- the compute server may initialize x i and y i with pseudo-random numbers, for example.
- the method of initializing x i and y i is not limited.
- the calculation server updates the first vector by weighting and adding the second variable y i corresponding to the first variable x i (step S113). For example, in step S113, Δt × D × y i can be added to the variable x i .
- the calculation server updates the second variable y i (steps S114 to S116). For example, in step S114, Δt × [(p − D − K × x i × x i ) × x i ] can be added to y i .
- In step S115, −Δt × c × h i × a − Δt × c × ΣJ ij × x j can be further added to y i .
- Step S115 corresponds to the process of adding the problem term to the second variable y i .
- step S116 the correction term of (12) can be added to y i .
- the correction term can be calculated based on, for example, the searched vector and the first vector stored in the storage unit.
- the calculation server updates the values of the coefficients p (first coefficient) and a (step S117). For example, a constant value (Δp) can be added to the coefficient p, and the coefficient a can be set to the positive square root of the updated coefficient p. However, as will be described later, this is only an example of how to update the values of the coefficients p and a. Further, when the variable t is used to determine whether or not to continue the loop, Δt may be added to the variable t. Then, the calculation server determines whether or not the number of updates of the first vector and the second vector is less than the threshold value (step S118). For example, the determination in step S118 can be performed by comparing the value of the variable t with T. However, the determination may be made by other methods.
- If the number of updates is less than the threshold value (YES in step S118), the calculation server executes the processes of steps S113 to S117 again.
- When the number of updates is equal to or greater than the threshold value (NO in step S118), the first vector is stored in the storage unit as the searched vector, and m is incremented (step S119). Then, when the number of searched vectors stored in the storage unit is equal to or greater than the threshold value Mth, one of the searched vectors in the storage unit is deleted for an arbitrary m (step S120).
- the process of storing the first vector as the searched vector in the storage unit may be executed at an arbitrary timing between the execution of step S113 and the execution of step S117.
- the calculation server substitutes the first vector and the second vector into the Hamiltonian of the above equation (6), and calculates the Hamiltonian value E. Then, the calculation server determines whether or not the Hamiltonian value E is less than the threshold value E 0 (step S121). When the Hamiltonian value E is less than the threshold value E 0 (YES in step S121), the calculation server can obtain the spin s i , which is an element of the solution vector, based on the first variable x i (step not shown). For example, in the first vector, each first variable x i having a positive value is converted to +1 and each first variable x i having a negative value is converted to −1 to obtain the solution vector.
- If the Hamiltonian value E is not less than the threshold value E 0 (NO in step S121), the calculation server re-executes the processes after step S111. That is, the determination in step S121 confirms whether or not the optimum solution or an approximate solution close to the optimum solution has been obtained. As described above, the processing circuit of the information processing apparatus may be configured to determine whether or not to stop updating the first vector and the second vector based on the value of the Hamiltonian (objective function).
- the user can determine the value of the threshold value E 0 according to the sign used in the formulation of the problem and the accuracy required in the solution.
- Depending on the sign convention used in the formulation of the problem, the first vector with the minimum Hamiltonian value may be the optimum solution, or the first vector with the maximum Hamiltonian value may be the optimum solution. In the following description, it is assumed that the first vector having the minimum value is the optimum solution.
- the calculation server may calculate the Hamiltonian value at any time.
- the calculation server can store the Hamiltonian values and the first and second vectors used in the calculation in the storage unit.
- the processing circuit of the information processing apparatus may be configured to store the updated second vector as a third vector in the storage unit. Further, the processing circuit may be configured to read, from the storage unit, the third vector updated in the same iteration as the searched vector and to calculate the Hamiltonian (objective function) value based on the searched vector and the third vector.
- the user can determine how often to calculate the Hamiltonian value, depending on the amount of storage space and computational resources available. Further, at the timing of step S118, the determination of whether or not to continue the loop processing may be made based on whether or not the number of combinations of the first vector, the second vector, and the Hamiltonian value stored in the storage unit exceeds a threshold value. In this way, the user can select the searched vector closest to the optimum solution from the plurality of searched vectors stored in the storage unit and calculate the solution vector.
- the processing circuit of the information processing device may select one of the searched vectors from the plurality of searched vectors stored in the storage unit based on the value of the Hamiltonian (objective function), and calculate the solution vector by converting each positive first variable of the selected searched vector to a first value and each negative first variable to a second value smaller than the first value.
- the first value is, for example, +1.
- the second value is, for example, -1.
- the first value and the second value may be other values.
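The conversion described above can be sketched as a short Python helper; the function name is illustrative, and mapping zero-valued variables to the second value is an arbitrary choice made here for the sketch.

```python
def to_solution_vector(x, first_value=1, second_value=-1):
    # Positive first variables map to the first value (+1 by default),
    # non-positive ones to the smaller second value (-1 by default).
    return [first_value if xi > 0 else second_value for xi in x]
```

For example, to_solution_vector([0.3, -1.2, 2.0]) yields [1, -1, 1].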
- processing may be parallelized using a plurality of calculation servers.
- Processing may be parallelized by a plurality of processors.
- the implementation and the mode of parallelization of processing are not limited.
- step S120 of FIG. 7 a process of deleting one of the searched vectors stored in the storage unit was executed.
- the searched vector to be deleted can be randomly selected. For example, if there is a limit on the storage area that can be used, the above-mentioned threshold value Mth can be determined based on the limit. Further, regardless of the limitation of the usable storage area, the amount of calculation in step S116 (calculation of the correction term) can be suppressed by setting an upper limit on the number of searched vectors held in the storage unit. Specifically, the calculation process of the correction term can be executed with a calculation amount of a constant multiple of N ⁇ Mth or less.
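A minimal sketch of the bounded searched-vector store described above (steps S119 and S120) follows; the random choice of the vector to delete and the list-based store are illustrative assumptions.

```python
import random

def store_searched_vector(store, x, m_th):
    # Step S119: save the current first vector as a searched vector.
    store.append(list(x))
    # Step S120: once the store holds Mth or more vectors, delete a randomly
    # selected one so the correction-term cost stays within a constant
    # multiple of N x Mth.
    if len(store) >= m_th:
        store.pop(random.randrange(len(store)))
```

Because the store never grows beyond Mth entries, the per-iteration cost of the correction term remains bounded regardless of how many update loops have been executed.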
- the calculation server may always skip the process of step S120, or may execute other processes at the timing of step S120.
- the searched vector may be moved to another storage. Further, when the computational resources are sufficient, it is not necessary to perform the deletion processing of the searched vector.
- In the information processing method, for example, a storage unit and a plurality of processing circuits are used to repeatedly update a first vector having the first variable as an element and a second vector having the second variable corresponding to the first variable as an element.
- the information processing method may include a step in which the plurality of processing circuits update the first vector by weighting and adding the second variable corresponding to the first variable, and a step in which the plurality of processing circuits store the updated first vector in the storage unit as a searched vector.
- the information processing method includes a step in which a plurality of information processing devices update the first vector by weighting and adding the second variable corresponding to the first variable, a step in which the plurality of information processing devices store the updated first vector in the storage device as a searched vector, and a step in which the plurality of information processing devices weight the first variable with a first coefficient that monotonically increases or decreases according to the number of updates and add it to the corresponding second variable.
- The method may further include a step in which the plurality of information processing devices calculate a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector, and a step in which the plurality of information processing devices add the correction term to the second variable.
- the program repeatedly updates, for example, the first vector having the first variable as an element and the second vector having the second variable corresponding to the first variable as an element.
- For example, the program causes the computer to execute a step of updating the first vector by weighting and adding the second variable corresponding to the first variable, a step of storing the updated first vector as a searched vector in the storage unit, a step of weighting the first variable with a first coefficient that monotonically increases or decreases according to the number of updates and adding it to the corresponding second variable, a step of reading the searched vector from the storage unit, a step of calculating a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector, and a step of adding the correction term to the second variable.
- the storage medium may be a non-transitory computer-readable storage medium in which the above-mentioned program is stored.
- the calculation node may be any computational resource that can be the execution subject of a calculation process; as described above, neither its granularity nor the distinction between hardware and software is limited.
- a plurality of calculation nodes may share and execute the update processing of the same pair of the first vector and the second vector. In this case, it can be said that the plurality of calculation nodes form one group for calculating the same solution vector.
- a plurality of calculation nodes may be divided into groups that execute update processing of different pairs of the first vector and the second vector. In this case, it can be said that the plurality of calculation nodes are divided into a plurality of groups, each of which calculates a different solution vector.
- the information processing device may include a plurality of processing circuits.
- each processing circuit may be divided into a plurality of groups that execute update processing of different pairs of the first vector and the second vector.
- Each processing circuit may be configured to read out the searched vector stored in the storage unit by another processing circuit.
- the information processing system including the storage device 7 and the plurality of information processing devices may repeatedly update the first vector having the first variable as an element and the second vector having the second variable corresponding to the first variable as an element.
- each information processing device updates the first vector by weighting and adding the second variable corresponding to the first variable, and stores the updated first vector as the searched vector in the storage device 7.
- Each information processing device may be configured to update the second vector by weighting the first variable with a first coefficient that monotonically increases or decreases according to the number of updates and adding it to the corresponding second variable, calculating the problem term using the plurality of first variables and adding the problem term to the second variable, reading the searched vector from the storage device 7, calculating a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector, and adding the correction term to the second variable.
- each information processing device may be divided into a plurality of groups that execute update processing of different pairs of the first vector and the second vector.
- Each information processing device may be configured to read out the searched vector stored in the storage unit by another information processing device.
- Equation (15) below is an example of a Hamiltonian that does not include a correction term.
- a plurality of calculation nodes may search overlapping regions in the solution space, and the same local solution may be obtained by the plurality of calculation nodes.
- a correction term such as (16) below can be used.
- m1 indicates a variable or value used in the calculation of each calculation node.
- m2 indicates the variables used in the calculation by the other calculation nodes as seen from each calculation node.
- the vector x (m1) of (16) is the first vector calculated by the calculation node itself.
- the vector x (m2) is the first vector calculated by other calculation nodes. That is, when the correction term (16) is used, the first vector calculated by another calculation node is used as the searched vector.
- arbitrary positive values can be set for c G and k G in (16). The values of c G and k G may be different.
- the extended Hamiltonian of the following equation (17) is obtained.
- As the first vector x (m1) approaches the vector x (m2) , the value of the denominator becomes smaller in each of the correction terms shown in (16) and (17). Therefore, the value of (16) becomes large, and the update process of the first vector x (m1) is executed at each calculation node so as to avoid the region near the vector x (m2) .
- the above (18) can be converted into a discrete recurrence formula to calculate the simulated branching algorithm.
- (20) shows an example of a simulated branching algorithm after conversion to a recurrence formula.
- the algorithm of (20) also includes the problem term (14) described above. As will be described later, a problem term having a form different from (14) may be used.
- the information processing device may include a plurality of processing circuits.
- Each processing circuit may be configured to store the updated first vector in the storage unit.
- each processing circuit can calculate the correction term using the searched vector calculated by the other processing circuits.
- each processing circuit may be configured to transfer the updated first vector to another processing circuit and to calculate the correction term using the first vector received from the other processing circuit instead of the searched vector.
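As a sketch of the sharing described above, each processing circuit could read the first vectors written by the other circuits from a shared store and treat them as searched vectors; the dictionary keyed by node number is an assumption for illustration.

```python
def collect_searched_vectors(shared_store, own_id):
    # shared_store maps a calculation-node number to that node's latest
    # first vector; every vector written by another node (m2 != own m1)
    # serves as a searched vector for the correction term.
    return [v for m2, v in shared_store.items() if m2 != own_id]
```

For example, with shared_store = {1: [0.1], 2: [0.2], 3: [0.3]}, node 2 obtains [[0.1], [0.3]] as its searched vectors.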
- FIG. 8 shows an example of processing in the case of efficiently performing a solution using the first vector calculated by another calculation node.
- the process will be described with reference to FIG.
- the calculation server acquires the matrix J ij and the vector h i corresponding to the problem from the management server 1, and initializes the coefficients p (t), a (t), and the variable t (step S131).
- the values of p, a, and t can be set to 0 in step S131.
- the initial values of p, a and t are not limited.
- the first variable x i (m1) is an element of the first vector.
- the second variable y i (m1) is an element of the second vector.
- x i (m1) and y i (m1) may be initialized by pseudo-random numbers.
- the method of initializing x i (m1) and y i (m1) is not limited.
- the calculation server substitutes 1 for the counter variable m1 (step S133).
- the counter variable m1 is a variable that specifies a calculation node.
- For example, when m1 = 1, the calculation node # 1 that performs the calculation process is specified.
- the processing of steps S131 to S133 may be executed by a computer other than the calculation server such as the management server 1.
- the calculation node # (m1) updates the first vector by weighting and adding the second variable y i (m1) corresponding to the first variable x i (m1) , and saves the updated first vector in the storage area shared with other calculation nodes (step S134). For example, in step S134, Δt × D × y i (m1) can be added to x i (m1) . For example, if the other calculation node is another processor or a thread on another processor, the updated first vector can be stored in the shared memory 32 or the storage 34. Further, when the other calculation node is a calculation server, the first vector may be stored in a shared external storage. Other calculation nodes can use the first vector stored in the shared storage area as the searched vector. In step S134, the updated first vector may be transferred to another calculation node.
- In step S135, Δt × [(p − D − K × x i (m1) × x i (m1) ) × x i (m1) ] can be added to y i (m1) .
- In step S136, −Δt × c × h i × a − Δt × c × ΣJ ij × x j (m1) can be further added to y i (m1) .
- Step S136 corresponds to the process of adding the problem term to the second variable y i .
- step S137 the correction term of (19) can be added to the variable y i .
- the correction term is calculated based on, for example, the first vector and the searched vector stored in the shared storage area. Then, the calculation server increments the counter variable m1 (step S138).
- In step S139, the calculation server determines whether or not the counter variable m1 is M or less.
- If m1 is M or less (YES in step S139), the processes of steps S134 to S138 are executed again.
- If m1 is greater than M (NO in step S139), the calculation server updates the values of p, a, and t (step S140). For example, a constant value (Δp) can be added to p, a can be set to the positive square root of the updated coefficient p, and Δt can be added to t.
- the calculation server determines whether or not the number of updates of the first vector and the second vector is less than the threshold value (step S141). For example, the determination in step S141 can be performed by comparing the value of the variable t with T. However, the determination may be made by other methods.
- If the number of updates is less than the threshold value (YES in step S141), the calculation server executes the process of step S133, and the designated calculation node further executes the processes of step S134 and subsequent steps.
- When the number of updates is equal to or greater than the threshold value (NO in step S141), the calculation server or the management server 1 can obtain the spin s i , which is an element of the solution vector, based on the first variable x i (step not shown).
- For example, in the first vector, each first variable x i having a positive value is converted to +1 and each first variable x i having a negative value is converted to −1 to obtain the solution vector.
- calculation nodes # 1 to calculation nodes # M sequentially execute the update processing of the elements of the first vector and the second vector by a loop.
- the processes of steps S133, S138, and S139 in the flowchart of FIG. 8 may be skipped, and instead, a plurality of calculation nodes may execute the processes of steps S134 to S137 in parallel.
- a component that manages a plurality of calculation nodes for example, the control unit 13 of the management server 1 or any calculation server
- the overall calculation process can be speeded up.
- the number M of a plurality of calculation nodes that execute the processes of steps S134 to S137 in parallel is not limited.
- the number M of the calculation nodes may be equal to the number of elements (number of variables) N of the first vector and the second vector, respectively.
- one solution vector can be obtained by using M calculation nodes.
- the number M of the calculation nodes may be different from the number N of the elements possessed by the first vector and the second vector, respectively.
- the number M of the calculation nodes may be a positive integer multiple of the number of elements N of each of the first vector and the second vector.
- M / N solution vectors can be obtained by using a plurality of calculation nodes. Then, the plurality of calculation nodes are grouped according to the solution vector to be calculated. In this way, the calculation nodes grouped so as to perform the calculation of different solution vectors may share the searched vector and realize more efficient calculation processing. That is, the vector x (m2) may be the first vector calculated by the calculation nodes belonging to the same group. Further, the vector x (m2) may be the first vector calculated by the calculation nodes belonging to different groups. It is not necessary to synchronize the processing between the calculation nodes belonging to different groups.
- steps S134 to S137 may be executed in parallel so that at least a part of the N elements of the first vector and the second vector are updated in parallel.
- the implementation and mode of parallelization of processing are not limited.
- the calculation node may calculate the Hamiltonian value based on the first vector and the second vector at any timing.
- the Hamiltonian may be the Hamiltonian of (15) or the extended Hamiltonian including the correction term of (17). Moreover, both the former and the latter may be calculated.
- the compute node can store the values of the first vector, the second vector, and the Hamiltonian in the storage unit. These processes may be executed every time the determination in step S141 is affirmative, only at some of the timings when the determination in step S141 becomes affirmative, or at other timings. The user can determine how often the Hamiltonian values are calculated, depending on the amount of storage and computational resources available.
- In step S141, the determination of whether or not to continue the loop processing may be made based on whether or not the number of combinations of the first vector, the second vector, and the Hamiltonian value stored in the storage unit exceeds a threshold value. In this way, the user can select the first vector closest to the optimum solution from the plurality of first vectors (local solutions) stored in the storage unit and calculate the solution vector.
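A sketch of this selection step: compute an objective value for each stored first vector and keep the one with the smallest value, assuming the minimization convention described earlier. The Ising-type energy below is an illustrative stand-in, since the exact form of the Hamiltonian of (15) is not reproduced here.

```python
import numpy as np

def ising_energy(x, J, h):
    # Illustrative Ising-type objective: E = -(1/2) s^T J s + h . s,
    # evaluated on the spins obtained from the signs of the first vector.
    s = np.sign(np.asarray(x))
    return float(-0.5 * s @ J @ s + h @ s)

def best_local_solution(candidates, J, h):
    # Select, from the stored first vectors (local solutions), the one whose
    # objective value is smallest under a minimization convention.
    return min(candidates, key=lambda x: ising_energy(x, J, h))
```

For a ferromagnetic two-spin coupling, the aligned configuration has the lower energy and is therefore selected.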
- FIGS. 9 and 10 show an example of processing in the case where the solution is efficiently performed by the simulated branch algorithm in a plurality of calculation nodes. Hereinafter, the process will be described with reference to FIGS. 9 and 10.
- the calculation server retrieves matrix J ij and vector h i corresponding to the problem from the management server 1, and transfers these data to each compute node (step S150).
- Alternatively, the management server 1 may transfer the matrix J ij and the vector h i corresponding to the problem directly to each compute node.
- the variable m1 indicates the number of each calculation node in the information processing system. Further, m2 is assumed to indicate the numbers of other calculation nodes as seen from each calculation node.
- the number M of the calculation nodes may be equal to the number of elements N of each of the first vector and the second vector. Alternatively, the number M of the calculation nodes may be different from the number of elements N of each of the first vector and the second vector. Further, the number M of the calculation nodes may be a positive integer multiple of the number of elements N of each of the first vector and the second vector.
- each calculation node initializes the variables t (m1) and the coefficients p (m1) and a (m1) (step S152).
- For example, the values of p (m1) , a (m1) , and t (m1) can be set to 0 in step S152.
- the initial values of p (m1) , a (m1) and t (m1) are not limited.
- each compute node initializes the first variable x i (m1) and the second variable y i (m1) (step S153).
- the first variable x i (m1) is an element of the first vector.
- the second variable y i (m1) is an element of the second vector.
- each compute node may initialize x i (m1) and y i (m1) with pseudo-random numbers, for example.
- the method of initializing x i (m1) and y i (m1) is not limited.
- each calculation node updates the first vector by weighting and adding the second variable y i (m1) corresponding to the first variable x i (m1) (step S154). For example, in step S154, Δt × D × y i (m1) can be added to x i (m1) .
- each calculation node updates the second variable y i (m1) (steps S155 to S157). For example, in step S155, Δt × [(p − D − K × x i (m1) × x i (m1) ) × x i (m1) ] can be added to y i (m1) .
- In step S156, −Δt × c × h i × a − Δt × c × ΣJ ij × x j (m1) can be further added to y i (m1) .
- Step S156 corresponds to the process of adding the problem term to the second variable y i .
- In step S157, the correction term of (19) can be added to the second variable y i (m1) .
- Each compute node calculates the correction term based on, for example, the first vector and the searched vector stored in the shared storage area 300.
- the searched vector may be saved by a calculation node performing calculation of a different solution vector. Further, the searched vector may be saved by a calculation node performing calculation of the same solution vector.
- each compute node updates the values of t (m1) , p (m1) , and a (m1) (step S158). For example, Δt can be added to t (m1) , a constant value (Δp) can be added to p (m1) , and a (m1) can be set to the positive square root of the updated coefficient p (m1) . However, this is only an example of how to update the values of p (m1) , a (m1) , and t (m1) .
- each calculation node stores a snapshot of the first vector in the storage area 300 (step S159).
- the snapshot refers to data including the values of each element x i (m1) of the first vector at the timing when step S159 is executed.
- As the storage area 300, a storage area accessible from a plurality of calculation nodes can be used. For example, a shared memory 32, a storage 34, or a storage area in an external storage can be used as the storage area 300. However, the type of memory or storage that provides the storage area 300 is not limited. The storage area 300 may be a combination of a plurality of types of memory or storage. The second vector updated in the same iteration as the first vector in step S159 may be stored in the storage area 300.
- each calculation node determines whether or not the number of updates of the first vector and the second vector is less than the threshold value (step S160).
- the determination in step S160 can be performed by comparing the value of the variable t (m1) with T.
- the determination may be made by other methods.
- if the number of updates is less than the threshold value (YES in step S160), the calculation node executes the processes from step S154 onward again.
- the calculation server increments the counter variable m1 (step S161). Note that step S161 may be skipped.
- the calculation server or the management server 1 can select at least one of the searched vectors stored in the storage area 300 based on the Hamiltonian value and calculate the solution vector (step S162).
- the Hamiltonian may be the Hamiltonian of (15) or the objective function including the correction term of (17). Moreover, both the former and the latter may be calculated.
- the Hamiltonian value may be calculated at a timing different from that in step S162. In that case, the compute node can store the Hamiltonian values together with the first and second vectors in the storage area 300.
- in step S159, it is not always necessary to save the snapshot of the variables in the storage area 300 every time.
- a snapshot of the variable may be stored in the storage area 300 in a part of the loop processing of steps S154 to S159. As a result, the consumption of the storage area can be suppressed.
- the data can be recovered using the snapshots of the first vector and the second vector stored in the storage area 300, and the calculation process can be restarted. Storing the data of the first vector and the second vector in the storage area 300 thus contributes to improving the fault tolerance and availability of the information processing system.
- by preparing, in the information processing system, the storage area 300 in which a plurality of calculation nodes can store the elements of the first vector (and the elements of the second vector) at arbitrary timings, each calculation node can calculate the correction term in step S157 and add the correction term to the variable y i regardless of timing.
- the first vectors used in the calculation may therefore be mixed across iterations of different loop processes. Accordingly, while one calculation node is updating the first vector, the other calculation nodes can calculate the correction term using the first vector before the update. This makes it possible to efficiently solve the combinatorial optimization problem in a relatively short time while reducing the frequency of synchronization processing between the calculation nodes.
- FIG. 11 conceptually shows an example of an information processing system including a plurality of calculation nodes.
- FIG. 11 shows compute node # 1, compute node # 2, and compute node # 3.
- Information about the first vector that has been searched for is exchanged between the calculation node # 1 and the calculation node # 2.
- information about the first vector that has been searched for is exchanged between the calculation node # 2 and the calculation node # 3.
- information about the first vector that has been searched for may be exchanged between the calculation node # 1 and the calculation node # 3.
- the data transfer between the compute node # 1 and the compute node # 3 may be performed directly or indirectly via the compute node # 2. As a result, it is possible to avoid searching in overlapping solution spaces in a plurality of calculation nodes.
- FIG. 11 shows three calculation nodes.
- the number of computing nodes provided in the information processing apparatus or information processing system may be different from this.
- it does not limit the connection topology between computing nodes and the route through which data is transferred between computing nodes.
- when the computing node is a processor, data transfer may be performed via interprocess communication or the shared memory 32.
- when the calculation node is a calculation server, data transfer may be performed via the interconnect between the calculation servers including the switch 5.
- each calculation node of FIG. 11 may execute the process of saving the snapshot of the first vector in the storage area 300 described in the flowcharts of FIGS. 9 and 10 in parallel.
- FIG. 12 shows the first vector x (m1) calculated by the calculation node # 1, the first vector x (m2) calculated by the calculation node # 2, and the value of the extended Hamiltonian H'.
- the calculation node # 1 acquires the data of the first vector x (m2) from the calculation node # 2.
- the calculation node # 1 can calculate the correction term of (19) using the acquired first vector x (m2) and update the first vector and the second vector.
- the value of the extended Hamiltonian increases in the vicinity of the first vector x (m2) of the calculation node # 2 in the calculation node # 1.
- the probability that the first vector x (m1) updated in the calculation node # 1 goes to a region distant from the first vector x (m2) of the calculation node # 2 in the solution space increases.
- the calculation node # 2 acquires the data of the first vector x (m1) from the calculation node # 1.
- the calculation node # 2 can calculate the correction term of (19) using the acquired first vector x (m1) and update the first vector and the second vector.
- the value of the extended Hamiltonian increases in the vicinity of the first vector x (m1) of the calculation node # 1 in the calculation node # 2.
- the probability that the first vector x (m2) updated in the calculation node # 2 goes to a region distant from the first vector x (m1) of the calculation node # 1 in the solution space increases.
- the histogram in FIG. 15 shows the number of calculations required to obtain the optimum solution in a plurality of calculation methods.
- the data obtained when the Hamiltonian cycle problem of 48 nodes and 96 edges is solved is used.
- the vertical axis of FIG. 15 shows the frequency with which the optimum solution was obtained.
- the horizontal axis of FIG. 15 indicates the number of trials.
- “DEFAULT” corresponds to the result when the processing of the flowchart of FIG. 6 is executed using the Hamiltonian of the equation (3).
- ADAPTIVE corresponds to the result when the processing of the flowchart of FIG. 8 is executed using the extended Hamiltonian of the equation (10).
- GROUP corresponds to the result when the processing of the flowcharts of FIGS. 9 and 10 is executed using the extended Hamiltonian of the equation (10).
- the vertical axis of FIG. 15 shows the frequency with which the optimum solution was obtained within a predetermined number of calculations when 1000 combinations of different matrices J ij and vectors h i were prepared.
- the number of calculations corresponds to the number of times the processing of the flowchart of FIG. 6 is executed.
- the number of calculations corresponds to the number M of the searched vectors in the equation (10).
- under the "DEFAULT" condition, the frequency with which the optimum solution is obtained with 10 or fewer calculations is about 260.
- under the "ADAPTIVE" condition, the frequency with which the optimum solution is obtained with 10 or fewer calculations is about 280.
- under the "GROUP" condition, the frequency with which the optimum solution is obtained with 10 or fewer calculations is about 430. Therefore, under the "GROUP" condition, the probability that the optimum solution is obtained with a smaller number of calculations is higher than under the other conditions.
- with the information processing device and the information processing system according to the present embodiment, it is possible to avoid searching overlapping regions of the solution space based on the data related to the searched vectors. Therefore, it is possible to search for a solution in a wider area of the solution space and increase the probability of obtaining an optimum solution or an approximate solution close to it. Further, in the information processing apparatus and the information processing system according to the present embodiment, it is easy to parallelize the processing, whereby the calculation processing can be executed more efficiently. This makes it possible to provide the user with an information processing device or an information processing system that calculates the solution of a combinatorial optimization problem within a practical time.
- Equation (21) corresponds to the energy of the Ising model including many-body interactions.
- both QUBO and HOBO are kinds of polynomial unconstrained binary optimization (PUBO: Polynomial Unconstrained Binary Optimization). That is, among PUBO problems, a combinatorial optimization problem having a quadratic objective function is QUBO, and a combinatorial optimization problem having an objective function of third or higher order is HOBO.
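As an illustration of the QUBO/HOBO distinction, the energy of an Ising model extended with third-order many-body interactions can be evaluated as below. The tensor names and the exact form of equation (21) are assumptions for illustration:

```python
import numpy as np

def hobo_energy(s, h, J2, J3):
    """Energy of a spin vector s in an Ising model extended with
    third-order (many-body) interactions -- an assumed form of (21).
    h is the local field, J2 the pairwise couplings, J3 the
    third-order coupling tensor."""
    e2 = -np.einsum('i,i->', h, s) - np.einsum('ij,i,j->', J2, s, s)
    e3 = -np.einsum('ijk,i,j,k->', J3, s, s, s)
    return e2 + e3
```

With J3 set to zero, this reduces to the quadratic (QUBO/Ising) energy; a nonzero J3 makes the objective third-order, i.e. a HOBO problem.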
- the Hamiltonian H in the above equation (3) may be replaced with the Hamiltonian H in the following equation (22).
- the problem term is derived from equation (22) using a plurality of first variables, as shown in the following equation (23).
- the problem term z i in the second equation of (23) is obtained by partially differentiating equation (22) with respect to one of the variables x i (an element of the first vector).
- the variable x i with respect to which the partial differentiation is performed depends on the index i.
- the index i of the variable x i corresponds to the index that specifies the element of the first vector and the element of the second vector.
- the recurrence formula of (20) above is replaced with the recurrence formula of (24) below.
- (24) corresponds to a further generalization of the recurrence formula of (20).
- as the problem term, a term including many-body interactions may be used.
- the problem terms shown above are only examples of problem terms that can be used by the information processing apparatus according to the present embodiment. Therefore, the form of the problem term used in the calculation may be different from these.
- additional processing may be performed when the first variable is updated.
- for example, when |x i | > 1 due to the update, the value of the first variable x i is replaced with sgn(x i ). That is, when x i > 1 due to the update, the value of the variable x i is set to 1. Also, when x i < -1 due to the update, the value of the variable x i is set to -1. This makes it possible to approximate the spin s i with higher accuracy using the variable x i .
- further, when |x i | > 1 due to the update, the variable y i corresponding to the variable x i may be multiplied by a coefficient rf.
- as the coefficient rf, a coefficient satisfying -1 ≤ rf ≤ 0 can be used.
- that is, the arithmetic circuit may be configured to update the second variable corresponding to a first variable whose value is smaller than the second value, or the second variable corresponding to a first variable whose value is larger than the first value, to a value obtained by multiplying the original second variable by a second coefficient. Here, a first variable having a value smaller than -1 or a value greater than 1 corresponds to these cases.
- the second coefficient corresponds to the above-mentioned coefficient rf.
- alternatively, when |x i | > 1 due to the update, the arithmetic circuit may set the value of the variable y i corresponding to the variable x i to a pseudo-random number. For example, random numbers in the range [-0.1, 0.1] can be used. That is, the arithmetic circuit may be configured to set the value of the second variable corresponding to a first variable whose value is smaller than the second value, or the value of the second variable corresponding to a first variable whose value is larger than the first value, to a pseudo-random number.
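The additional update-time processing above (clamping x i to [-1, 1] and adjusting the matching y i) might be sketched as follows. The function name, the default value of rf, and the random-number range are illustrative; the patent allows either multiplying y i by a coefficient rf or replacing it with a pseudo-random number:

```python
import numpy as np

def enforce_walls(x, y, rf=-0.5, use_random=False, rng=None):
    """Clamp first variables to [-1, 1] and adjust the matching
    second variables, so that x_i better approximates the spin
    s_i = sgn(x_i)."""
    out = np.abs(x) > 1.0                 # indices where |x_i| > 1
    x = np.where(out, np.sign(x), x)      # x_i -> sgn(x_i) when |x_i| > 1
    if use_random:
        rng = rng or np.random.default_rng()
        # replace the matching y_i with a pseudo-random number in [-0.1, 0.1]
        y = np.where(out, rng.uniform(-0.1, 0.1, size=y.shape), y)
    else:
        y = np.where(out, rf * y, y)      # multiply y_i by rf, -1 <= rf <= 0
    return x, y
```

Called after each update of the first vector, this keeps the trajectory inside the hypercube [-1, 1]^N.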
- in the simulated bifurcation algorithm, a continuous variable x is used in the problem term instead of a discrete variable. Therefore, an error may occur relative to the discrete variables used in the original combinatorial optimization problem.
- the value sgn (x) obtained by converting the continuous variable x with a sign function can be used instead of the continuous variable x in the calculation of the problem term as shown in (26) below.
- sgn (x) corresponds to spin s.
- the product of the spins appearing in the problem term always takes a value of either -1 or 1, so when dealing with a HOBO problem having a higher-order objective function, errors caused by the product operation can be prevented.
- a spin vector can be obtained by converting each element of the first vector with a sign function.
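Evaluating the problem term with the sign-converted spins, as in (26), might look like the following sketch. A quadratic form of the problem term is assumed here for illustration, and sgn(0) is given the conventional value +1:

```python
import numpy as np

def problem_term_with_sign(x, J, h, a=1.0):
    """Problem term computed from the spins s = sgn(x) instead of the
    continuous variables x, so that every product of spins is exactly
    -1 or +1 (assumed quadratic form)."""
    s = np.sign(x)
    s[s == 0] = 1.0                 # convention: treat sgn(0) as +1
    return -(h * a + J @ s)         # same shape as the S156 problem term
```

Because s takes only the values -1 and +1, products of spins in higher-order terms would likewise be exactly ±1, which is the error-prevention property described above.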
- a PC cluster is a system that connects a plurality of computers and realizes computing performance that cannot be obtained by one computer.
- the information processing system 100 shown in FIG. 1 includes a plurality of calculation servers and processors, and can be used as a PC cluster.
- MPI Message Passing Interface
- MPI can be used to implement the control program 14E of the management server 1, and the calculation program 34B and the control program 34C of each calculation server.
- for example, each processor can be made to calculate L variables among the N variables x i included in the first vector (x 1 , x 2 , ..., x N ).
- that is, processor #j updates the variables {x m | m = (j-1)L+1, (j-1)L+2, ..., jL} and {y m | m = (j-1)L+1, (j-1)L+2, ..., jL}. The tensor J (n) shown in the following (27), which is required for the update of {y m | m = (j-1)L+1, (j-1)L+2, ..., jL}, shall be stored in a storage area accessible to processor #j (e.g., a register, a cache, or a memory).
- here, the case where each processor calculates a constant number of variables of the first vector and the second vector has been described.
- the number of elements (variables) of the first vector and the second vector to be calculated may differ depending on the processor. For example, when there is a performance difference depending on the processor implemented in the calculation server, the number of variables to be calculated can be determined according to the performance of the processor.
- to update the variable y i , the values of all the components of the first vector are required.
- the conversion to a binary variable can be performed, for example, by using the sign function sgn(). Using the Allgather function, the values of all the components of the first vector (x 1 , x 2 , ..., x N ) can be shared among the Q processors.
- as for the second vector (y 1 , y 2 , ..., y N ) and the tensor J (n) , it is not essential to share values between the processors.
- Data sharing between processors can be realized, for example, by using interprocess communication or storing data in shared memory.
- processor #j calculates the values of the problem terms {z m | m = (j-1)L+1, (j-1)L+2, ..., jL}.
- then, processor #j updates the variables {y m | m = (j-1)L+1, (j-1)L+2, ..., jL} based on the calculated values of the problem terms {z m | m = (j-1)L+1, (j-1)L+2, ..., jL}.
- the calculation of the problem terms requires a product-sum operation including the product of the tensor J (n) and the vector (x 1 , x 2 , ..., x N ).
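The block partitioning above can be sketched without actual MPI, emulating the Allgather step by concatenation (in a real implementation this would be an MPI collective; variable names are illustrative and a quadratic problem term is assumed):

```python
import numpy as np

def parallel_y_update(x_blocks, y_blocks, J, h, dt=0.01, c=0.5, a=1.0):
    """Emulate Q processors, each updating its block of L second variables.

    Every processor needs the *full* first vector to form J @ x;
    sharing it is what the MPI Allgather provides in practice."""
    x_full = np.concatenate(x_blocks)          # stands in for Allgather
    new_y = []
    offset = 0
    for yb in y_blocks:                        # each block = one processor #j
        L = len(yb)
        rows = slice(offset, offset + L)
        z = h[rows] * a + J[rows, :] @ x_full  # problem terms for this block
        new_y.append(yb - dt * c * z)          # y_m update for this block
        offset += L
    return new_y
```

Only the rows of J needed by a block are touched by that block, matching the statement that sharing the tensor J (n) between processors is not essential.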
- FIG. 16 schematically shows an example of a multiprocessor configuration.
- the plurality of calculation nodes in FIG. 16 correspond to, for example, a plurality of calculation servers of the information processing system 100.
- the high-speed link in FIG. 16 corresponds to, for example, an interconnect between cables 4a to 4c of the information processing system 100 and a calculation server formed by the switch 5.
- the shared memory of FIG. 16 corresponds to, for example, the shared memory 32.
- the processor of FIG. 16 corresponds to, for example, the processors 33A to 33D of each calculation server.
- FIG. 16 shows the data arranged in each component and the data transferred between the components.
- in each processor, the values of the variables x i and y i are calculated. Further, between each processor and the shared memory, the variable x i is transferred.
- in the shared memory of each calculation node, L variables of the first vector (x 1 , x 2 , ..., x N ), L variables of the second vector (y 1 , y 2 , ..., y N ), and a part of the tensor J (n) are saved.
- over the high-speed link, the first vector (x 1 , x 2 , ..., x N ) is transferred. This is because all the elements of the first vector (x 1 , x 2 , ..., x N ) are required to update the variable y i in each processor.
- the simulated branch algorithm may be calculated using the GPU (Graphics Processing Unit).
- FIG. 17 schematically shows an example of a configuration using a GPU.
- FIG. 17 shows a plurality of GPUs connected to each other by a high-speed link.
- Each GPU has a plurality of cores that can access the shared memory.
- a plurality of GPUs are connected via a high-speed link to form a GPU cluster.
- the high speed link corresponds to an interconnect between the compute servers formed by the cables 4a-4c and the switch 5.
- although a plurality of GPUs are used in the configuration example of FIG. 17, parallel calculations can be executed even when one GPU is used. That is, each GPU in FIG. 17 can execute the calculation corresponding to each calculation node in FIG. 16. In other words, the processor (processing circuit) of the information processing device (calculation server) may be a core of a Graphics Processing Unit (GPU).
- the variables x i and y i , and the tensor J (n) are defined as device variables.
- the GPU can calculate, in parallel by means of its matrix-vector product function, the product of the tensor J (n) and the first vector (x 1 , x 2 , ..., x N ) required to update the variable y i .
- the product of the tensor and the vector can be obtained by repeatedly executing the product operation of a matrix and a vector.
- for the part of the calculation of the first vector (x 1 , x 2 , ..., x N ) and the second vector (y 1 , y 2 , ..., y N ) other than the product-sum operation, parallel processing can be realized by having each thread execute the update processing of the i-th elements (x i , y i ).
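The division of work described above can be written with array operations, using NumPy as a stand-in for the GPU: the `J @ x` line corresponds to the device's matrix-vector product routine, and the elementwise lines correspond to one thread handling each index i. The update ordering and coefficient values are illustrative assumptions:

```python
import numpy as np

def gpu_style_step(x, y, J, dt=0.01, D=1.0, p=0.5, K=1.0, c=0.5):
    """One update step split the way a GPU implementation would split it:
    a matrix-vector product for the problem term, then purely
    elementwise updates that each thread i could perform independently."""
    z = J @ x                          # matrix-vector product (device matvec)
    x_new = x + dt * D * y             # elementwise: thread i handles index i
    y_new = y + dt * ((p - D - K * x_new**2) * x_new - c * z)
    return x_new, y_new
```

On an actual GPU, x, y, and J would be device variables, and the two elementwise lines would be fused into a single kernel launched over N threads.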
- FIG. 18 shows an example of the overall processing executed to solve the combinatorial optimization problem. Hereinafter, the process will be described with reference to FIG.
- the combinatorial optimization problem is formulated (step S201). Then, the formulated combinatorial optimization problem is converted into an Ising problem (Ising model format) (step S202). Next, the solution of the Ising problem is calculated by the Ising machine (information processing device) (step S203). Then, the calculated solution is verified (step S204). For example, in step S204, it is confirmed whether or not the constraint conditions are satisfied. Further, in step S204, the value of the objective function may be referred to in order to confirm whether or not the obtained solution is the optimum solution or an approximate solution close to it.
- step S205 it is determined whether or not to recalculate according to at least one of the verification result in step S204 and the number of calculations. If it is determined that the recalculation is to be performed (YES in step S205), the processes of steps S203 and S204 are executed again. On the other hand, when it is determined not to recalculate (NO in step S205), a solution is selected (step S206). For example, in step S206, selection can be made based on at least either the satisfaction of the constraints or the value of the objective function. If a plurality of solutions have not been calculated, the process of step S206 may be skipped. Finally, the selected solution is converted into the solution of the combinatorial optimization problem, and the solution of the combinatorial optimization problem is output (step S207).
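The overall flow of steps S201 to S207 might be sketched as follows. The solver, verifier, and converter callables are placeholders, and the retry policy is an illustrative assumption:

```python
def solve_combinatorial(problem, formulate, to_ising, ising_solve,
                        verify, from_ising, max_retries=3):
    """Overall flow of FIG. 18: formulate the problem, convert it to an
    Ising problem, solve, verify, optionally recalculate, select a
    solution, and convert it back to the original problem."""
    model = formulate(problem)                     # step S201
    ising = to_ising(model)                        # step S202
    solutions = []
    for _ in range(max_retries):
        sol = ising_solve(ising)                   # step S203
        ok = verify(sol)                           # step S204
        solutions.append((ok, sol))
        if ok:                                     # step S205: stop retrying
            break
    best = max(solutions, key=lambda t: t[0])[1]   # step S206: select
    return from_ising(best)                        # step S207
```

In practice `verify` would check the constraint conditions and the objective-function value, and `ising_solve` would be the simulated bifurcation computation described above.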
- the present invention is not limited to the above embodiment as it is, and at the implementation stage, the components can be modified and embodied within a range that does not deviate from the gist thereof.
- various inventions can be formed by an appropriate combination of the plurality of components disclosed in the above-described embodiment. For example, some components may be removed from all the components shown in the embodiments. In addition, components across different embodiments may be combined as appropriate.
- 1 Management server 2 Network 3a, 3b, 3c Calculation server 4a, 4b, 4c Cable 5 Switch 6 Client terminal 10 Processor 11 Management unit 12 Conversion unit 13 Control unit 14 Storage unit 14A Problem data 14B Calculation data 14C Management program 14D Conversion program 14E, 34C Control program 15, 31 Communication circuit 16 Input circuit 17 Output circuit 18 Operation device 19 Display device 20 Bus 32 Shared memory 33A, 33B, 33C, 33D Processor 34 Storage 34A Calculation data 34B Calculation program 35 Host bus adapter
Description
In calculations of optimization problems including the simulated bifurcation algorithm, it is desirable to obtain the optimum solution or an approximate solution close to it (referred to as a practical solution). However, a practical solution is not always obtained in every trial of the calculation process (for example, the process of FIG. 6). For example, the solution obtained after a trial of the calculation process may be a local solution rather than a practical solution. There may also be a plurality of local solutions in the problem. To increase the probability of finding a practical solution, it is conceivable to have each of a plurality of calculation nodes execute the calculation process. It is also possible for a calculation node to execute the calculation process repeatedly and search for a solution a plurality of times. Furthermore, the former and the latter methods may be combined.
The adaptive search described above can also be applied when a plurality of calculation nodes execute the simulated bifurcation algorithm in parallel. As above, a calculation node may be any computing resource that can be an execution subject of the calculation process; the granularity of a calculation node and the hardware/software distinction are not limited. The update processing of the same pair of the first vector and the second vector may be divided among a plurality of calculation nodes. In this case, the plurality of calculation nodes form one group that calculates the same solution vector. Alternatively, the plurality of calculation nodes may be divided into groups that execute update processing of different pairs of the first vector and the second vector. In this case, the plurality of calculation nodes are divided into a plurality of groups, each of which calculates a different solution vector.
The following describes another example of processing that is also applicable when searched vectors are shared across groups of calculation nodes computing different pairs of the first vector and the second vector. A calculation node may be any computing resource that can be an execution subject of the calculation process. Therefore, the granularity of a calculation node and the hardware/software distinction are not limited.
By using the simulated bifurcation algorithm, it is also possible to solve combinatorial optimization problems having objective functions of third or higher order. The problem of finding a combination of binary variables that minimizes an objective function of third or higher order is called a HOBO (Higher Order Binary Optimization) problem. When handling a HOBO problem, the following equation (21) can be used as the energy formula in an Ising model extended to higher orders.
Here, modifications of the simulated bifurcation algorithm are described. For example, various modifications may be made to the simulated bifurcation algorithm described above for the purpose of reducing errors or shortening the calculation time.
The following describes an example of parallelizing the variable update processing when computing the simulated bifurcation algorithm.
The following describes the overall processing executed to solve a combinatorial optimization problem using the simulated bifurcation algorithm.
Claims (17)
- An information processing device configured to repeatedly update a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the information processing device comprising:
a storage unit; and
a processing circuit configured to update the first vector by performing weighted addition of the second variables corresponding to the first variables, save the updated first vector in the storage unit as a searched vector, weight the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and add the weighted first variables to the corresponding second variables, calculate a problem term using a plurality of the first variables, add the problem term to the second variables, read the searched vector from the storage unit, calculate a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector, and update the second vector by adding the correction term to the second variables.
- The information processing device according to claim 1, wherein the processing circuit is configured to calculate the reciprocal of the distance using each of a plurality of the searched vectors and calculate the correction term by adding the plurality of reciprocals.
- The information processing device according to claim 1 or 2, comprising a plurality of the processing circuits, wherein each of the processing circuits is configured to read the searched vectors saved in the storage unit by the other processing circuits.
- The information processing device according to claim 3, wherein the plurality of processing circuits are divided into a plurality of groups, each of which executes update processing of a different pair of the first vector and the second vector.
- The information processing device according to claim 1 or 2, comprising a plurality of the processing circuits, wherein each of the processing circuits is configured to transfer the updated first vector to the other processing circuits and calculate the correction term using the first vectors received from the other processing circuits instead of the searched vectors.
- The information processing device according to any one of claims 1 to 5, wherein the processing circuit is configured to save the updated second vector in the storage unit as a third vector.
- The information processing device according to claim 6, wherein the processing circuit is configured to read from the storage unit the third vector updated in the same iteration as the searched vector, and calculate a value of an objective function based on the searched vector and the third vector.
- The information processing device according to claim 7, wherein the processing circuit is configured to determine, based on the value of the objective function, whether or not to stop updating the first vector and the second vector.
- The information processing device according to claim 8, wherein the processing circuit is configured to select one of the plurality of searched vectors saved in the storage unit based on the value of the objective function, and calculate a solution vector by converting the first variables of the selected searched vector having positive values into a first value and converting the first variables having negative values into a second value smaller than the first value.
- The information processing device according to any one of claims 1 to 9, wherein the problem term calculated by the processing circuit is based on an Ising model.
- The information processing device according to claim 10, wherein the problem term calculated by the processing circuit includes many-body interactions.
- An information processing system configured to repeatedly update a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the information processing system comprising:
a storage device; and
a plurality of information processing devices, wherein each of the information processing devices is configured to update the first vector by performing weighted addition of the second variables corresponding to the first variables, save the updated first vector in the storage device as a searched vector, weight the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and add the weighted first variables to the corresponding second variables, calculate a problem term using a plurality of the first variables, add the problem term to the second variables, read the searched vector from the storage device, calculate a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector, and update the second vector by adding the correction term to the second variables.
- The information processing system according to claim 12, wherein the plurality of information processing devices are divided into a plurality of groups, each of which executes update processing of a different pair of the first vector and the second vector.
- An information processing method of repeatedly updating, using a storage unit and a plurality of processing circuits, a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the method comprising:
a step in which the plurality of processing circuits update the first vector by performing weighted addition of the second variables corresponding to the first variables;
a step in which the plurality of processing circuits save the updated first vector in the storage unit as a searched vector;
a step in which the plurality of processing circuits weight the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and add the weighted first variables to the corresponding second variables;
a step in which the plurality of processing circuits calculate a problem term using a plurality of the first variables and add the problem term to the second variables;
a step in which the plurality of processing circuits read the searched vector from the storage unit;
a step in which the plurality of processing circuits calculate a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector; and
a step in which the plurality of processing circuits add the correction term to the second variables.
- An information processing method of repeatedly updating, using a storage device and a plurality of information processing devices, a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the method comprising:
a step in which the plurality of information processing devices update the first vector by performing weighted addition of the second variables corresponding to the first variables;
a step in which the plurality of information processing devices save the updated first vector in the storage device as a searched vector;
a step in which the plurality of information processing devices weight the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and add the weighted first variables to the corresponding second variables;
a step in which the plurality of information processing devices calculate a problem term using a plurality of the first variables and add the problem term to the second variables;
a step in which the plurality of information processing devices read the searched vector from the storage device;
a step in which the plurality of information processing devices calculate a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector; and
a step in which the plurality of information processing devices add the correction term to the second variables.
- A non-transitory computer-readable storage medium storing a program for repeatedly updating a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the program causing a computer to execute:
a step of updating the first vector by performing weighted addition of the second variables corresponding to the first variables;
a step of saving the updated first vector in a storage unit as a searched vector;
a step of weighting the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and adding the weighted first variables to the corresponding second variables;
a step of calculating a problem term using a plurality of the first variables and adding the problem term to the second variables;
a step of reading the searched vector from the storage unit;
a step of calculating a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector; and
a step of adding the correction term to the second variables.
- A program for repeatedly updating a first vector having first variables as elements and a second vector having, as elements, second variables corresponding to the first variables, the program causing a computer to execute:
a step of updating the first vector by performing weighted addition of the second variables corresponding to the first variables;
a step of saving the updated first vector in a storage unit as a searched vector;
a step of weighting the first variables by a first coefficient that monotonically increases or monotonically decreases according to the number of updates and adding the weighted first variables to the corresponding second variables;
a step of calculating a problem term using a plurality of the first variables and adding the problem term to the second variables;
a step of reading the searched vector from the storage unit;
a step of calculating a correction term including the reciprocal of the distance between the first vector to be updated and the searched vector; and
a step of adding the correction term to the second variables.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202080025393.5A CN113646782A (zh) | 2019-03-28 | 2020-03-27 | 信息处理设备、信息处理系统、信息处理方法、存储介质及程序 |
CA3135137A CA3135137C (en) | 2019-03-28 | 2020-03-27 | Information processing device, information processing system, information processing method, storage medium and program |
JP2021509661A JP7502269B2 (ja) | 2019-03-28 | 2020-03-27 | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム |
US17/487,144 US20220012307A1 (en) | 2019-03-28 | 2021-09-28 | Information processing device, information processing system, information processing method, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-064588 | 2019-03-28 | ||
JP2019064588 | 2019-03-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/487,144 Continuation US20220012307A1 (en) | 2019-03-28 | 2021-09-28 | Information processing device, information processing system, information processing method, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020196866A1 true WO2020196866A1 (ja) | 2020-10-01 |
Family
ID=72608458
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/014164 WO2020196866A1 (ja) | 2019-03-28 | 2020-03-27 | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220012307A1 (ja) |
JP (1) | JP7502269B2 (ja) |
CN (1) | CN113646782A (ja) |
CA (1) | CA3135137C (ja) |
WO (1) | WO2020196866A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2023000462A (ja) * | 2021-06-18 | 2023-01-04 | 富士通株式会社 | データ処理装置、プログラム及びデータ処理方法 |
JP2023024085A (ja) * | 2021-08-06 | 2023-02-16 | 富士通株式会社 | プログラム、データ処理方法及びデータ処理装置 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006350673A (ja) * | 2005-06-15 | 2006-12-28 | Fuji Electric Systems Co Ltd | 最適化計算システム |
WO2016194051A1 (ja) * | 2015-05-29 | 2016-12-08 | 株式会社日立製作所 | 確率的システムの注目指標の統計量を最小化するパラメータセットを探索するシステム |
JP2017219979A (ja) * | 2016-06-06 | 2017-12-14 | 日本電信電話株式会社 | 最適化問題解決装置、方法、及びプログラム |
-
2020
- 2020-03-27 JP JP2021509661A patent/JP7502269B2/ja active Active
- 2020-03-27 CN CN202080025393.5A patent/CN113646782A/zh active Pending
- 2020-03-27 WO PCT/JP2020/014164 patent/WO2020196866A1/ja active Application Filing
- 2020-03-27 CA CA3135137A patent/CA3135137C/en active Active
-
2021
- 2021-09-28 US US17/487,144 patent/US20220012307A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CA3135137A1 (en) | 2020-10-01 |
CN113646782A (zh) | 2021-11-12 |
JPWO2020196866A1 (ja) | 2020-10-01 |
JP7502269B2 (ja) | 2024-06-18 |
CA3135137C (en) | 2024-01-09 |
US20220012307A1 (en) | 2022-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11755941B2 (en) | Geometry-based compression for quantum computing devices | |
WO2020196862A1 (ja) | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム | |
JP7421291B2 (ja) | 情報処理装置、プログラム、情報処理方法、および電子回路 | |
WO2020246073A1 (en) | Information processing device, pubo solver, information processing method and non-transitory storage medium | |
US20220012307A1 (en) | Information processing device, information processing system, information processing method, and storage medium | |
JP2022533809A (ja) | 量子シミュレーションアルゴリズムに基づくデータサーチ方法、装置及び機器並びにコンピュータプログラム | |
CN114037082A (zh) | 量子计算任务处理方法、系统及计算机设备 | |
WO2020196883A1 (ja) | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム | |
US11966450B2 (en) | Calculation device, calculation method, and computer program product | |
WO2020196872A1 (ja) | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム | |
WO2020196915A1 (ja) | 情報処理装置、情報処理システム、情報処理方法、記憶媒体およびプログラム | |
JP7472062B2 (ja) | 計算装置、計算方法およびプログラム | |
WO2022249785A1 (ja) | 求解装置、求解方法およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20777361 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021509661 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3135137 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20777361 Country of ref document: EP Kind code of ref document: A1 |