CN114386593A - Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network - Google Patents
Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network
- Publication number
- CN114386593A (application CN202111552384.4A)
- Authority
- CN
- China
- Prior art keywords
- network
- particle swarm
- swarm algorithm
- improved particle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
The invention relates to a method for processing a TSP problem based on an improved particle swarm algorithm and a dynamic step size neural network, which comprises the following steps: for the TSP problem, acquiring the city location parameters; constructing the network energy function of a Hopfield network according to the constraints of the TSP problem, and initializing the Hopfield network; constructing and solving the network dynamic equation; judging whether the constructed Hopfield network is stable, and if so, performing parameter optimization updating based on the improved particle swarm algorithm, otherwise reconstructing the network dynamic equation; and judging whether the improved particle swarm algorithm reaches the termination condition, and if so, taking the optimal solution obtained by the improved particle swarm algorithm as the optimal solution of the TSP problem to be solved. Compared with the prior art, the method addresses the NP problem of a solution space that grows exponentially with the problem scale, and effectively improves the convergence speed and convergence precision.
Description
Technical Field
The invention relates to the technical field of communication and computers, in particular to a method for processing a TSP problem based on an improved particle swarm algorithm and a dynamic step size neural network.
Background
The Hopfield neural network, proposed by John Hopfield in 1982, is a recurrent neural network whose distinguishing feature is that all neurons work simultaneously and are processed in parallel. The continuous Hopfield neural network is analogous to an electric circuit: the network introduces the concept of an energy function, the energy function reaches its minimum when the network is stable, and the change of the network state can be represented by a difference equation derived from Kirchhoff's law. With properly chosen parameters, the Hopfield network is suitable for solving various combinatorial optimization problems. The network solves quickly, but has the disadvantage that it easily yields a suboptimal solution rather than the globally optimal one.
As an application of the Hopfield neural network, the Traveling Salesman Problem (TSP) is stated as follows: there are n cities, numbered 1, …, n, and the distance between city i and city j is d(i, j), i, j = 1, …, n. The goal of the TSP is to visit each city exactly once and finally return to the starting city, forming a loop with the shortest total path length. The solution space S consists of all loops that pass through each city exactly once.
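To make the objective concrete, a short sketch (an illustration, not part of the patent) of the loop-length computation on planar city coordinates:

```python
import math

def tour_length(cities, tour):
    """Total length of a closed tour.

    cities: list of (x, y) coordinates; tour: a permutation of city indices.
    The last leg returns to the starting city, matching the TSP definition
    above (a loop through every city exactly once).
    """
    n = len(tour)
    total = 0.0
    for k in range(n):
        x1, y1 = cities[tour[k]]
        x2, y2 = cities[tour[(k + 1) % n]]  # wrap around to close the loop
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Four cities on a unit square: the perimeter tour has length 4.
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(tour_length(square, [0, 1, 2, 3]))  # 4.0
```

This path length is exactly the quantity the method later uses as the PSO fitness function.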
At present, many algorithms only support serial operation, so for algorithms with a larger amount of computation the efficiency is often low. In addition, the TSP is an NP problem whose solution space grows exponentially with the problem scale. When the problem scale is small, it can be solved reasonably well by certain algorithms, such as heuristic algorithms and exact algorithms; but as the problem scale keeps increasing, and because the process manufacturing of hardware cores has reached a bottleneck (for example, a CPU is limited by the number of on-chip computing units), performance is difficult to improve through a single core alone.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method for processing a TSP problem based on an improved particle swarm algorithm and a dynamic step size neural network.
The purpose of the invention can be realized by the following technical scheme:
the method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network comprises the following steps:
aiming at the TSP problem, acquiring the city location parameters;
constructing a network energy function of the Hopfield network according to the constraint of the TSP problem, and initializing the Hopfield network;
constructing a network dynamic equation and solving;
judging whether the constructed Hopfield network is stable, if so, performing parameter optimization updating based on an improved particle swarm algorithm, otherwise, reconstructing a network dynamic equation;
and judging whether the improved particle swarm algorithm reaches a termination condition, and if so, taking the optimal solution obtained based on the improved particle swarm algorithm as the optimal solution of the solved TSP problem.
Further, the city location parameters include city coordinates and distances between cities.
Further, the expression of the network energy function of the constructed Hopfield network is as follows:
wherein E is the network energy, A and D are parameters weighting the network constraints and the target optimal solution, respectively, V_xi is the state of the neuron in row x, column i of the permutation matrix, V_y,i+1 is the state of the neuron in row y, column i+1, x and y are city indices, and n is the number of cities.
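The formula itself appears only as an image in the original and does not survive extraction; the standard Hopfield-TSP energy consistent with these variable definitions (a reconstruction, not necessarily term-for-term identical to the patent's figure) is:

```latex
E = \frac{A}{2}\sum_{x=1}^{n}\left(\sum_{i=1}^{n} V_{xi} - 1\right)^{2}
  + \frac{A}{2}\sum_{i=1}^{n}\left(\sum_{x=1}^{n} V_{xi} - 1\right)^{2}
  + \frac{D}{2}\sum_{x=1}^{n}\sum_{y \neq x}\sum_{i=1}^{n} d_{xy}\, V_{xi}\, V_{y,i+1}
```

The two A-terms penalize rows and columns of the permutation matrix that do not sum to one (invalid tours), while the D-term is the tour length being minimized.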
Further, the expression of the constructed network dynamic equation is as follows:
where U_xi is the state of the initialized Hopfield network, d_xy is the distance from city x to city y, and V_yi is the state of the neuron in row y, column i of the permutation matrix.
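The dynamic-equation image likewise does not survive extraction; the standard network dynamics matching the variables above (a reconstruction under the usual Hopfield-TSP formulation, obtained as the negative gradient of the energy, not necessarily the patent's exact figure) are:

```latex
\frac{\mathrm{d}U_{xi}}{\mathrm{d}t}
  = -A\left(\sum_{j=1}^{n} V_{xj} - 1\right)
    - A\left(\sum_{y=1}^{n} V_{yi} - 1\right)
    - D \sum_{y \neq x} d_{xy}\, V_{y,i+1}
```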
Further, the specific content of performing parameter optimization updating based on the improved particle swarm optimization algorithm is as follows:
A solution obtained by running the neural network is taken as the initial position of the particle swarm algorithm, the path length is taken as the fitness function, the chaotic random inertia weight is calculated, and the particle velocities and positions are updated.
The expression of the chaotic random inertial weight w is as follows:
z=z*μ*(1-z)
w=0.5*rand+0.5*z
where μ = 0.4, z is a number in the interval (0, 1) not equal to 0.25 or 0.5, and rand is a random number in (0, 1).
Further, the update expressions for the particle velocity v_i and position x_i are:
v_i = w_i × v_i + c_1 × r_1 × (pbest_i - x_i) + c_2 × r_2 × (gbest_i - x_i)
x_i = x_i + v_i
where w_i is the inertia weight, c_1 and c_2 are learning factors, pbest_i is the best position found by the current particle, gbest_i is the best position found by the current population, and r_1 and r_2 are random numbers in the (0, 1) interval.
Further, judging whether the improved particle swarm algorithm reaches the termination condition comprises evaluating the fitness function value of each particle: if the fitness value of a particle's current position is smaller than its individual extremum pbest_i, pbest_i is updated to the current particle position; if the updated individual extremum pbest_i is smaller than the global extremum gbest_i, the particle's position is assigned as the new global extremum gbest_i.
Compared with the prior art, the method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network at least has the following beneficial effects:
1) The method acquires the city location parameters, constructs the network energy function according to the TSP constraints, initializes the Hopfield network, constructs the network dynamic equation, takes the solution obtained by running the neural network as the initial position of the particle swarm algorithm, and takes the path length as the fitness function; by combining the improved particle swarm algorithm with the dynamic step size neural network to solve the TSP, it addresses the NP problem of a solution space that grows exponentially with the problem scale.
2) Compared with solving the TSP with the Hopfield neural network alone, the method helps the search jump out of local optima and obtain the globally optimal solution with higher probability.
3) The invention improves the Hopfield neural network by replacing the fixed step size with a dynamic step size, achieving large-step optimization in the early stage and accurate convergence in the later stage.
4) The invention improves the particle swarm algorithm: compared with the ordinary particle swarm algorithm, the chaotic random inertia weight effectively improves the convergence speed and convergence precision, thereby enhancing the algorithm performance.
Drawings
Fig. 1 is a schematic flow chart of a method for processing a TSP problem based on an improved particle swarm algorithm and a dynamic step size neural network in the embodiment.
Detailed Description
Particle Swarm Optimization (PSO) is a stochastic optimization algorithm. The PSO algorithm treats each candidate solution as a particle; through iteration, each particle updates its search velocity according to its individual extremum and the currently found global extremum, and thereby adjusts its position. The algorithm has a simple structure, fast convergence and high efficiency, but also suffers from falling into local optima.
The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network addresses the NP problem that the solution space grows exponentially with the problem scale: when the problem scale is small, some algorithms can solve the problem reasonably well, but as the scale keeps increasing, the process manufacturing of hardware cores has reached a bottleneck at the present stage, so performance is difficult to improve through a single core alone.
The invention is described in detail below with reference to the figures and specific embodiments. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
In the method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network, the specific improvement process of the improved particle swarm algorithm comprises the following steps:
1) Take the solution obtained by running the Hopfield network to a stable state as the initial position of the particle swarm algorithm, calculate the fitness function, and obtain each particle's individual extremum pbest_i and the current global extremum gbest_i.
2) To improve algorithm performance, a suitable inertia weight is sought. Owing to the strong randomness and high ergodicity of chaotic maps, the inertia weight of the particle swarm algorithm is changed from linearly decreasing to the chaotic random inertia weight w:
z=z*μ*(1-z)
w=0.5*rand+0.5*z
where the value of z lies in (0, 1) and is not equal to 0.25 or 0.5. Compared with the linearly decreasing strategy, this avoids the lack of local search capability in the early iterations and of global search capability in the late iterations, because the improvement combines a random strategy with a decreasing strategy and draws on the strengths of both. Compared with the ordinary particle swarm algorithm, the chaotic random inertia weight noticeably improves the convergence speed and global convergence.
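The two formulas above can be sketched directly (using μ = 0.4 as stated in the text; the function name is illustrative):

```python
import random

def chaotic_inertia_weight(z, mu=0.4):
    """One step of the chaotic random inertia weight described above.

    z is the chaotic state, updated by the logistic-style map z = mu*z*(1-z);
    the weight w mixes a uniform random term with the chaotic term.
    Returns (w, new_z) so the caller can carry the chaotic state forward.
    """
    z = mu * z * (1.0 - z)
    w = 0.5 * random.random() + 0.5 * z
    return w, z

random.seed(0)
w, z = chaotic_inertia_weight(0.3)  # z becomes 0.4 * 0.3 * 0.7 = 0.084
```

Because rand lies in (0, 1) and z in (0, 1), the resulting weight w always stays in (0, 1), combining randomness with the chaotic trajectory.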
3) Update the particle velocity v_i and position x_i according to:
v_i = w_i × v_i + c_1 × r_1 × (pbest_i - x_i) + c_2 × r_2 × (gbest_i - x_i)
x_i = x_i + v_i
Evaluate the fitness function value of each particle: if the fitness value of a particle's current position is smaller than its individual extremum pbest_i, update pbest_i to the current particle position; if the updated individual extremum pbest_i is smaller than the global extremum gbest_i, assign the particle's position as the new global extremum gbest_i.
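A minimal continuous-space sketch of one particle's update and extremum bookkeeping per the equations above. In the patent the particles encode candidate tours and the fitness is the path length; the toy quadratic fitness and the function name here are stand-ins for illustration:

```python
import random

def pso_step(x, v, pbest, gbest, w, c1, c2, fitness, rng):
    """One improved-PSO update for a single particle.

    x, v, pbest, gbest are lists of floats; fitness maps a position to a
    cost to be minimized. Returns updated (x, v, pbest, gbest).
    """
    n = len(x)
    r1, r2 = rng.random(), rng.random()  # random numbers in (0, 1)
    v = [w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest[i] - x[i])
         for i in range(n)]
    x = [x[i] + v[i] for i in range(n)]
    if fitness(x) < fitness(pbest):          # better personal position found
        pbest = list(x)
        if fitness(pbest) < fitness(gbest):  # better global position found
            gbest = list(pbest)
    return x, v, pbest, gbest

rng = random.Random(1)
f = lambda p: sum(q * q for q in p)  # toy fitness: distance-squared to origin
x, v, pb, gb = [2.0], [0.0], [2.0], [1.0]
for _ in range(50):
    x, v, pb, gb = pso_step(x, v, pb, gb, 0.5, 1.5, 1.5, f, rng)
```

In the full algorithm, w would be recomputed each iteration from the chaotic random inertia weight rather than held fixed as in this sketch.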
As shown in fig. 1, described below with a specific application scenario, the method for processing a TSP problem based on an improved particle swarm algorithm and a dynamic step size neural network according to an embodiment of the present invention includes:
the method comprises the following steps of aiming at the TSP problem, obtaining urban position parameters, and comprising the following steps: city coordinates, distance between cities.
Step two, constructing a network energy function according to TSP problem constraints:
in the formula, E is network energy, A and D are important parameters for measuring network constraint and a target optimal solution, the larger the value of A is, the more the network pays attention to the effectiveness of the solution, and invalid solutions are avoided as much as possible; the larger the value of D is, the more the network can focus on solving the objective function, namely the shortest path, and if the value of D is larger than A, the network often generates invalid solutions. The sequence of traversing cities is expressed by a transposition matrix, so VxiFor the state of the neurons in the ith column of the x-th row in the transpose matrix, Vy,i+1Is the neuron state of the (i +1) th row of the y, x is the city number, y is the city number, and n is the city number.
Step three, initializing the Hopfield network U_xi:
where the initial neuron output U_0 is 0.1 and δu_xi is a random number in the interval (-1, +1).
Step four, constructing a network dynamic equation:
where A and D are the important parameters weighting the network constraints and the target optimal solution, d_xy is the distance from city x to city y, and V_yi is the state of the neuron in row y, column i of the permutation matrix.
Step five, calculating U(t+1) according to the first-order Euler formula and converting U into V(t): since the Hopfield network is recurrent, the net input U(t+1) of the network depends on U(t) from the previous epoch; V(t) is the output state of the neurons, obtained from the net input U(t) through the transfer function.
In the formula, step0 is a constant, r and a are parameters, L is the total number of iterations, and t is the current iteration number.
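Steps three to six can be sketched as follows. This is a hedged sketch: the patent's initialization formula, parameter values, and exact dynamic-step formula are in unreproduced images, so the decaying schedule step(t) = step0 * (1 - t/L)**a * r, the tanh transfer function with parameter u0, and all numeric values here are illustrative assumptions with the stated intent of large steps early and fine convergence late:

```python
import math
import random

def run_hopfield(d, A=500.0, D=500.0, step0=0.01, r=2.0, a=1.0,
                 L=1000, u0=0.02, seed=0):
    """Euler iteration of the Hopfield-TSP dynamics with a decaying step.

    d is the n x n distance matrix. Returns the final neuron outputs V,
    an n x n matrix whose entries lie in (0, 1).
    """
    rnd = random.Random(seed)
    n = len(d)
    # initialize U: small base value plus noise, as in step three above
    u = [[0.1 + rnd.uniform(-1, 1) * 0.001 for _ in range(n)] for _ in range(n)]
    for t in range(L):
        step = step0 * (1.0 - t / L) ** a * r  # hypothetical dynamic step
        # transfer function: V = 0.5 * (1 + tanh(U / u0))
        v = [[0.5 * (1.0 + math.tanh(ux / u0)) for ux in row] for row in u]
        for x in range(n):
            for i in range(n):
                row_pen = sum(v[x]) - 1.0                       # one stop per city
                col_pen = sum(v[y][i] for y in range(n)) - 1.0  # one city per stop
                path = sum(d[x][y] * v[y][(i + 1) % n] for y in range(n) if y != x)
                du = -A * row_pen - A * col_pen - D * path
                u[x][i] += step * du  # U(t+1) = U(t) + step * dU/dt (Euler)
    return v

v = run_hopfield([[0.0, 1.0], [1.0, 0.0]], L=200)
```

In the method proper, iteration would stop once the network is stable (energy unchanged) rather than after a fixed L, and the resulting V would seed the particle swarm.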
Step six, judging whether the network is stable (the specified number of iterations has been reached, or the energy function remains unchanged); if so, proceed to the next step, otherwise return to step four.
And step seven, taking a solution obtained by running the neural network as an initial position of the particle swarm algorithm, and taking the path length as a fitness function.
And step eight, calculating the chaotic random inertia weight w, and updating the particle velocity v and the position x.
In this step, the chaotic random inertial weight is:
z=z*μ*(1-z)
w=0.5*rand+0.5*z
where μ is set to 0.4, z is a number in (0, 1) not equal to 0.25 or 0.5, and rand is a random number in (0, 1).
The particle velocity v and position x are updated as follows:
vi=viwi+c1×r1(pbesti-xi)+c2×r2(gbesti-xi)
xi=xi+vi
In the formula, w_i is the inertia weight and c_1, c_2 are learning factors: a larger c_1 makes particles linger too long in the local search range, while a larger c_2 may cause premature convergence. pbest_i is the best position found by the current particle, gbest_i is the best position found by the current population, and r_1 and r_2 are random numbers in the (0, 1) interval.
Step nine, updating the individual extremum pbest_i and the global extremum gbest_i, and calculating the fitness function. Judge whether the termination condition is reached; if so, output the optimal solution, the current optimal solution being the optimal solution of the TSP problem to be solved; otherwise, return to step eight.
The termination condition is as follows: evaluate the fitness function value of each particle; if the fitness value of a particle's current position is smaller than its individual extremum pbest_i, update pbest_i to the current particle position; if the updated individual extremum pbest_i is smaller than the global extremum gbest_i, assign the particle's position as the new global extremum gbest_i.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and those skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (8)
1. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network is characterized by comprising the following steps of:
aiming at the TSP problem, acquiring the city location parameters;
constructing a network energy function of the Hopfield network according to the constraint of the TSP problem, and initializing the Hopfield network;
constructing a network dynamic equation and solving;
judging whether the constructed Hopfield network is stable, if so, performing parameter optimization updating based on an improved particle swarm algorithm, otherwise, reconstructing a network dynamic equation;
and judging whether the improved particle swarm algorithm reaches a termination condition, and if so, taking the optimal solution obtained based on the improved particle swarm algorithm as the optimal solution of the solved TSP problem.
2. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 1, wherein the city location parameters comprise city coordinates, distance between cities.
3. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 1, wherein the expression of the network energy function of the constructed Hopfield network is as follows:
wherein E is the network energy, A and D are parameters weighting the network constraints and the target optimal solution, respectively, V_xi is the state of the neuron in row x, column i of the permutation matrix, V_y,i+1 is the state of the neuron in row y, column i+1, x and y are city indices, and n is the number of cities.
4. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 3, wherein the constructed network dynamic equation has the expression:
where U_xi is the state of the initialized Hopfield network, d_xy is the distance from city x to city y, and V_yi is the state of the neuron in row y, column i of the permutation matrix.
5. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 1, wherein the specific content of the parameter optimization updating based on the improved particle swarm algorithm is as follows:
a solution obtained by running the neural network is taken as the initial position of the particle swarm algorithm, the path length is taken as the fitness function, the chaotic random inertia weight is calculated, and the particle velocities and positions are updated.
6. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 5, wherein the expression of the chaotic random inertia weight w is as follows:
z=z*μ*(1-z)
w=0.5*rand+0.5*z
where μ = 0.4, z is a number in the interval (0, 1) not equal to 0.25 or 0.5, and rand is a random number in (0, 1).
7. The method for processing the TSP problem based on the improved particle swarm algorithm and the dynamic step size neural network as claimed in claim 5, wherein the update expressions for the particle velocity v_i and position x_i are:
v_i = w_i × v_i + c_1 × r_1 × (pbest_i - x_i) + c_2 × r_2 × (gbest_i - x_i)
x_i = x_i + v_i
where w_i is the inertia weight, c_1 and c_2 are learning factors, pbest_i is the best position found by the current particle, gbest_i is the best position found by the current population, and r_1 and r_2 are random numbers in the (0, 1) interval.
8. The method of claim 1, wherein judging whether the improved particle swarm algorithm reaches the termination condition comprises evaluating the fitness function value of each particle: if the fitness value of a particle's current position is smaller than its individual extremum pbest_i, pbest_i is updated to the current particle position; if the updated individual extremum pbest_i is smaller than the global extremum gbest_i, the particle's position is assigned as the new global extremum gbest_i.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552384.4A CN114386593A (en) | 2021-12-17 | 2021-12-17 | Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111552384.4A CN114386593A (en) | 2021-12-17 | 2021-12-17 | Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114386593A true CN114386593A (en) | 2022-04-22 |
Family
ID=81197767
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111552384.4A Pending CN114386593A (en) | 2021-12-17 | 2021-12-17 | Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114386593A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115062583A (en) * | 2022-06-15 | 2022-09-16 | 华中科技大学 | Hopfield network hardware circuit for solving optimization problem and operation method |
CN115062583B (en) * | 2022-06-15 | 2024-05-31 | 华中科技大学 | Hopfield network hardware circuit for solving optimization problem and operation method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8429107B2 (en) | System for address-event-representation network simulation | |
Liu et al. | Social learning discrete Particle Swarm Optimization based two-stage X-routing for IC design under Intelligent Edge Computing architecture | |
CN109978283B (en) | Photovoltaic power generation power prediction method based on branch evolution neural network | |
CN108280545A (en) | A kind of photovoltaic power prediction technique based on K mean cluster neural network | |
CN108090565A (en) | Accelerated method is trained in a kind of convolutional neural networks parallelization | |
CN110837891B (en) | Self-organizing mapping method and system based on SIMD (Single instruction multiple data) architecture | |
CN114386593A (en) | Method for processing TSP problem based on improved particle swarm optimization and dynamic step size neural network | |
CN114707881A (en) | Job shop adaptive scheduling method based on deep reinforcement learning | |
CN115358178B (en) | Circuit yield analysis method based on fusion neural network | |
He | Chaotic simulated annealing with decaying chaotic noise | |
CN115470889A (en) | Network-on-chip autonomous optimal mapping exploration system and method based on reinforcement learning | |
Tang et al. | A columnar competitive model for solving combinatorial optimization problems | |
Patiño-Saucedo et al. | Empirical study on the efficiency of spiking neural networks with axonal delays, and algorithm-hardware benchmarking | |
CN102915407A (en) | Prediction method for three-dimensional structure of protein based on chaos bee colony algorithm | |
Zhang et al. | A network traffic prediction model based on quantum inspired PSO and neural network | |
US20030046278A1 (en) | Method of robust technology design using rational robust optimization | |
CN114547954A (en) | Logistics distribution center site selection method and device and computer equipment | |
Gholizadeh et al. | Shape optimization of structures by modified harmony search | |
US20190095783A1 (en) | Deep learning apparatus for ann having pipeline architecture | |
Wu et al. | An algorithm for solving travelling salesman problem based on improved particle swarm optimisation and dynamic step Hopfield network | |
CN111552844B (en) | Distributed method for solving shortest path of large-scale multi-section graph | |
CN116822759A (en) | Method, device, equipment and storage medium for solving traveling business problems | |
Miranda et al. | A new grammatical evolution method for generating deep convolutional neural networks with novel topologies | |
Niu et al. | The new large-scale RNNLM system based on distributed neuron | |
CN117236187B (en) | Parameterized design method and system for deep learning accelerator chip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||