CN113627606A - RBF neural network optimization method based on improved particle swarm optimization - Google Patents
Publication number: CN113627606A (application CN202010371490.1A)
Authority: CN (China)
Legal status: Pending (the legal status is an assumption, not a legal conclusion)
Classifications
- G06N3/08 — Learning methods (G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks)
- G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
The invention belongs to the technical field of neural network optimization, and in particular relates to an RBF neural network optimization method based on an improved particle swarm algorithm. The particle swarm optimization process is divided into three stages: the first stage mainly searches for the approximate position of the global optimum, the second stage evolves from global search to local exploration, and the third stage mainly performs fine local exploration. With a clear division of labor among the three stages, the particle swarm retains strong global search and local exploration capabilities, improving optimization precision and convergence speed and, in turn, the precision and stability of the RBF neural network.
Description
Technical Field
The invention belongs to the technical field of neural network optimization, and particularly relates to an RBF neural network optimization method based on an improved particle swarm optimization algorithm.
Background
With the continuous upgrading of radar technology, radars are ever more widely applied in both civil and military fields. In marine detection, radar echoes are contaminated by high-intensity sea clutter, which severely affects the monitoring of the marine environment and the detection of marine targets. Against this background, low-cost, high-precision sea clutter prediction and suppression would greatly improve a radar's maritime monitoring capability and play a key role in national marine environment observation and in strengthening national defense.
Researchers have studied the statistical characteristics of sea clutter in depth and established a number of classical statistical sea clutter models, but these models have low precision and weak generalization ability and cannot meet the goal of predicting sea clutter. Based on chaos theory, researchers have therefore sought to build models that achieve accurate prediction by learning the inherent chaotic characteristics of sea clutter. In recent years, neural networks have developed rapidly and offer unique advantages in handling nonlinear problems, and the Radial Basis Function (RBF) neural network has naturally become the first choice for building sea clutter prediction models. When an RBF neural network is trained, its initial parameters are either generated randomly according to the data characteristics or computed by a clustering algorithm, but neither method reliably finds the optimal initial parameters. At the same time, the choice of initial parameters strongly affects training precision: a poor choice reduces the accuracy and stability of the trained network model. To address these drawbacks, a Particle Swarm Optimization (PSO) algorithm is introduced to search for the optimal initial parameters of the network, improving the reliability and stability of network prediction.
The particle swarm algorithm simulates the foraging behavior of a flock of birds, in which each bird searches the neighborhood of the bird currently closest to the food. Each particle in the swarm represents a potential solution and is characterized by three quantities: velocity, position, and fitness value. The velocity expression controls the particle's moving distance and direction, and the position is updated by tracking the individual best and the global best in the search space. As the iteration count increases, the swarm's positions are continuously updated, approaching the global optimum, until the optimal solution of the problem is found. The particle swarm algorithm has been successfully applied in many engineering fields, including optimization problems, computing, power systems, and control; here it is used to optimize the RBF neural network by searching for the network's optimal initial parameters, compensating for the RBF network's insufficient accuracy and poor stability.
Although introducing the PSO algorithm compensates for the inherent shortcomings of the RBF neural network, the PSO algorithm itself still presents several problems. First, it contains multiple parameters whose settings are not guided by any systematic theory; they differ from problem to problem and must be chosen empirically through repeated trials, which increases the workload of engineering applications. Second, the algorithm is prone to premature convergence during iteration: in high-dimensional problems the particles often converge to a local optimum and then wander around it without further progress, even though the current position is not the globally optimal solution; trapped at the local optimum, they cannot escape. Finally, in the late stage of iteration the search step length cannot be adjusted appropriately: near the optimal solution the step length is too long, so the particles do not converge steadily toward the optimum but oscillate around it, and the convergence speed is slow.
To address these problems, researchers have proposed linearly and nonlinearly decreasing inertia-weight strategies and a series of mechanisms for escaping local optima. Although decreasing inertia-weight strategies account to some extent for global search capability in the early stage and local exploration capability in the late stage, the global search capability decreases continuously as the iteration count grows, and the local exploration capability approaches its maximum only just before the maximum iteration count is reached, missing the key phase of convergence near the global optimum. As for the escape mechanisms, the idea is first to judge whether the particles show premature convergence; if so, the inertia weight, and hence the search step length, is increased so that the swarm jumps out of the local optimum and searches again. On the one hand, there is no rigorous theoretical basis for detecting premature convergence, and only a threshold can be set as a reference; on the other hand, after escaping a local optimum the particles must search again, which inevitably increases the number of iteration steps needed to obtain the optimal solution and adds engineering workload.
Therefore, the invention designs an improved particle swarm algorithm that optimizes in stages. It fully accounts for global search capability in the early stage and local exploration capability in the late stage of the particles, avoids falling into local optima, and accelerates convergence in the late stage of iteration, which is of great significance for RBF neural network optimization.
Disclosure of Invention
The invention aims to solve the technical problems that the existing PSO algorithm is easy to fall into local optimization and the later convergence speed is low, and provides an RBF neural network optimization method based on an improved particle swarm algorithm on the basis.
In order to achieve the purpose, the invention adopts the following technical scheme:
an RBF neural network optimization method based on an improved particle swarm optimization algorithm comprises the following steps:
Step 1: determine the topology of the RBF neural network, determining the numbers of input and output nodes according to the problem to be solved, as well as the number of hidden nodes;
Step 2: calculate the number of network parameters to be optimized and map the optimization target onto the particle position;
Step 3: initialize the positions and velocities of the population;
Step 4: normalize the data;
Step 5: feed part of the training data into the network model and evaluate the fitness value of each current particle with a fitness function, based on the error between the network output value and the predicted value;
Step 6: update the current global best and local bests of the population according to the fitness value of each particle;
Step 7: update the velocities and positions of the particles, where the inertia weight and the learning factors change dynamically in three stages as the iteration step count increases;
Step 8: judge whether the set maximum iteration count has been reached; if so, end the process and restore the current globally optimal particle position into the corresponding RBF neural network parameters as the network's optimal initial parameters; otherwise, return to step 5.
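As a minimal, self-contained sketch of steps 3–8, the loop below runs plain PSO on a toy sphere function standing in for the RBF training error; constant ω, c1, c2 are used here, whereas the invention varies them in three stages (the function name and defaults are illustrative, not from the patent):

```python
import numpy as np

def pso_minimize(fit, dim, pop=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO loop covering steps 3-8. `fit` maps a position vector
    to a scalar to be minimized; in the patent's setting it would decode
    the particle into RBF parameters and return the training error."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (pop, dim))            # step 3: positions
    v = rng.uniform(-0.1, 0.1, (pop, dim))        # step 3: velocities
    pf = np.array([fit(p) for p in x])            # step 5: fitness values
    pbest = x.copy()                              # step 6: personal bests
    g = pbest[np.argmin(pf)].copy()               # step 6: global best
    for _ in range(iters):                        # step 8: iteration cap
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # step 7
        x = x + v
        f = np.array([fit(p) for p in x])         # step 5: re-evaluate
        better = f < pf                           # step 6: update bests
        pbest[better], pf[better] = x[better], f[better]
        g = pbest[np.argmin(pf)].copy()
    return g, float(pf.min())

best, val = pso_minimize(lambda z: float(np.sum(z ** 2)), dim=3)
```

On the 3-dimensional sphere function this converges close to zero; swapping in the three-stage schedules of step 7 is what distinguishes the invention from this baseline.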
Specifically, the data normalization in step 4 uses a logarithmic-function method, with the normalization formula:

x̄_i = lg(x_i) / lg(max(x)),  i = 1, 2, 3, …, n

where x̄_i denotes the normalization result of the i-th sample, n denotes the number of data samples, x_i denotes the i-th data value, and max(x) denotes the maximum value of the data sample.
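A short sketch of this logarithmic normalization (the function name is illustrative; positive-valued samples are assumed, as required by the logarithm):

```python
import numpy as np

def log_normalize(x):
    """Logarithmic normalization: x_bar_i = lg(x_i) / lg(max(x)).

    Assumes all samples are positive; maps the data into (0, 1] with
    the maximum sample normalized to exactly 1."""
    x = np.asarray(x, dtype=float)
    return np.log10(x) / np.log10(x.max())
```

For example, `log_normalize([10, 100, 1000])` maps the samples to roughly 1/3, 2/3, and 1.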
Specifically, the fitness value of a particle in step 5 is calculated with the following function:

fitness = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{outnum} (Y_ij − ŷ_ij)²

where N denotes the number of training samples, Y denotes the true label value, ŷ denotes the actual output value of the network, and outnum denotes the number of output-layer nodes of the RBF neural network.
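A sketch of this fitness evaluation as mean squared error over all samples and output nodes (the exact formula image is not reproduced in the text; MSE is assumed from the listed variables N, Y, ŷ, and outnum):

```python
import numpy as np

def fitness(y_true, y_pred):
    """Mean squared error between true labels Y and network outputs y_hat.

    y_true, y_pred: arrays of shape (N, outnum). Sums the squared error
    over the outnum output nodes, then averages over the N samples."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.sum((y_true - y_pred) ** 2, axis=1)))
```

Smaller fitness values therefore indicate better particles, consistent with the minimization convention used in step 6.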
Specifically, step 7 updates the velocities and positions of the particles. The velocity update formula contains the inertia weight ω and the learning factors c1 and c2, all of which change dynamically as the iteration step count increases. The dynamic change of the inertia weight is described by a piecewise function in which ω_start and ω_end denote the set maximum and minimum inertia weights, set to 0.95 and 0.4 respectively; a is a constant set to 0.005; t1 and t2 denote the iteration cutoff steps of the early and middle stages of the particle swarm, with T = t2 − t1; and Gen denotes the maximum iteration count of the particle.
The dynamic change of the learning factors is described by an improved sigmoid function in which m is a constant set to 0.1 and c_start denotes the minimum value of the learning factor, set to 1.5.
Specifically, as described in step 7, the particle velocity update strategy is divided into three stages as the iteration step count increases: in the early stage a large inertia weight is maintained, with c1 kept small and c2 kept large, steering the particles toward the approximate position of the global optimum with a large step length; in the middle stage the inertia weight evolves toward its minimum value and the two learning factors evolve in opposite directions, shifting the particles from global search to local exploration; in the late stage a small inertia weight is maintained, with c1 large and c2 small, so the particles converge quickly around the global optimum and finally locate the globally optimal solution.
Specifically, the particle velocity update strategy in step 7 is divided into three stages, and the number of early-stage iteration steps is selected adaptively: a threshold L (L < Gen) on the early-stage iteration steps is set, and during these L steps the inertia weight in the velocity update remains ω_start while the learning factors keep c1 small and c2 large. If the optimal fitness value does not change over L consecutive iterations, the algorithm shifts to the middle-stage strategy, i.e., the inertia weight evolves toward a smaller value and the learning factors evolve in opposite directions; if the optimal fitness value does change within the L iterations, the early-stage strategy is continued for another L iterations starting from the generation of the change, until the optimal fitness value remains unchanged over L consecutive iterations, at which point the middle-stage strategy takes over.
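The three-stage schedules can be sketched as below. This is a hypothetical stand-in: the patent states only the endpoint constants (ω_start = 0.95, ω_end = 0.4, c_start = 1.5), while the exact middle-stage formulas (the a = 0.005 decay and the improved sigmoid with m = 0.1) are not reproduced in the text, so a linear transition is used here, and `C_MAX` is an assumed upper value for the learning factors:

```python
# Endpoint constants from the text; C_MAX is an assumption.
W_START, W_END = 0.95, 0.4
C_START, C_MAX = 1.5, 2.5

def inertia_weight(t, t1, t2):
    """Early stage (t <= t1): hold W_START. Middle stage: decay toward
    W_END (linear stand-in for the elided formula). Late (t >= t2): W_END."""
    if t <= t1:
        return W_START
    if t >= t2:
        return W_END
    frac = (t - t1) / (t2 - t1)
    return W_START - (W_START - W_END) * frac

def learning_factors(t, t1, t2):
    """c1 starts small and grows while c2 starts large and shrinks, so
    the swarm shifts from chasing the global best to refining locally."""
    if t <= t1:
        return C_START, C_MAX
    if t >= t2:
        return C_MAX, C_START
    frac = (t - t1) / (t2 - t1)
    return (C_START + (C_MAX - C_START) * frac,
            C_MAX - (C_MAX - C_START) * frac)
```

In the adaptive scheme above, t1 is not fixed in advance but set at the iteration where the L-step stagnation test first succeeds.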
Compared with the prior art, the technical scheme adopted by the invention has the following beneficial effects:
1. The initial parameters of the RBF neural network are optimized without the traditional approach of selecting data centers by a clustering algorithm; the method adapts well to high-dimensional data, overcomes the technical shortcomings of the RBF neural network in engineering applications, and enhances the precision and stability of network model training.
2. In the iterative process of the particle swarm algorithm, the velocity update strategy is divided into three stages: the early stage is devoted mainly to global search, the middle stage evolves from global search to local exploration, and the late stage is devoted mainly to local exploration, improving both optimization precision and convergence speed.
Drawings
FIG. 1 is a flow chart of an RBF neural network optimization method based on an improved particle swarm optimization algorithm provided by the invention;
fig. 2 is a topology structure diagram of the RBF neural network.
Detailed Description
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings. It is to be understood that the specific embodiments shown and described in the drawings are merely exemplary and are intended to illustrate the principles of the invention and not to limit the scope of the invention.
The invention discloses an RBF neural network optimization method based on an improved particle swarm algorithm. The topology of the RBF neural network is given in figure 2; the method is illustrated with an RBF neural network model for sea clutter prediction, and the specific steps of the embodiment are shown in figure 1:
Step 1: determine the topology of the RBF neural network. The RBF neural network has a simple three-layer structure. The numbers of input-layer and output-layer nodes are determined by the specific problem; the number of hidden nodes is generally determined by clustering the data and counting the resulting clusters. When the data dimension is high and a clustering algorithm struggles to classify the data, the number of hidden nodes can be specified manually from experience.
Step 2: determine the particle dimension from the network structure parameters. Introducing the particle swarm algorithm into the RBF neural network essentially maps the RBF parameters to be optimized onto the particle position, so the particle dimension equals the total number of parameters to be optimized. The mapping from the input layer to the hidden layer of the RBF neural network is given by a Gaussian kernel function:

φ_j(x) = exp(−‖x − c_j‖² / (2σ_j²))

The parameters to be optimized by the particle swarm are the data centers c, the data widths σ, and the network weights ω_r. In fig. 2, for the sea clutter prediction model, the network output layer has one node, so the numbers of network weights, data widths, and hidden nodes are the same. With innum input-layer nodes and hidnum hidden nodes, each input is excited once by every hidden node, so the number of data center parameters is innum × hidnum, and the particle dimension is:

n = 2 × hidnum + hidnum × innum
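The parameter count can be sketched as a small helper (the function name is illustrative); keeping the general outnum term shows how the formula reduces to the one above when the output layer has a single node:

```python
def particle_dimension(innum, hidnum, outnum=1):
    """Particle dimension = data centers (innum*hidnum) + data widths
    (hidnum) + output weights (hidnum*outnum); with outnum == 1 this
    reduces to n = 2*hidnum + hidnum*innum."""
    return innum * hidnum + hidnum + hidnum * outnum
```

For example, with 5 inputs and 8 hidden nodes the particle has 2·8 + 8·5 = 56 dimensions.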
Step 3: when initializing the positions and velocities of the population, the parameters are handled by class: since the data have been normalized, the position parameters corresponding to the data centers and data widths are initialized in (0, 1), the position parameters corresponding to the network weights are initialized in (−1, 1), and the velocities are initialized in (−0.1, 0.1).
Step 4: the data are normalized with a logarithmic-function method, in the form:

x̄_i = lg(x_i) / lg(max(x)),  i = 1, 2, 3, …, n

where x̄_i denotes the normalization result of the i-th sample, n denotes the number of data samples, x_i denotes the i-th data value, and max(x) denotes the maximum value of the data sample.
Step 5: take part of the training data to train the RBF neural network and compute the fitness value of each current particle with a fitness function, based on the error between the network output value and the true value. When the output layer has one node, the fitness function is:

fitness = (1/N) Σ_{i=1}^{N} (Y_i − ŷ_i)²

where N denotes the number of training samples, Y denotes the true label value, and ŷ denotes the actual output value of the network.
Step 6: update the global best and the local bests from the computed fitness values of all particles. If the best fitness of the current population is smaller than the global best of the previous generation, it replaces the previous global best; for each particle, if its current fitness value is smaller than that of its previous generation, its current position becomes its local (personal) best position.
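This minimization bookkeeping can be sketched as a single function (the name and calling convention are illustrative):

```python
import numpy as np

def update_bests(fit, pos, pbest_fit, pbest_pos, gbest_fit, gbest_pos):
    """Replace each particle's personal best when its current fitness is
    smaller, then replace the global best if the best personal best
    improves on it (smaller fitness = better)."""
    improved = fit < pbest_fit
    pbest_fit = np.where(improved, fit, pbest_fit)
    pbest_pos = np.where(improved[:, None], pos, pbest_pos)
    i = int(np.argmin(pbest_fit))
    if pbest_fit[i] < gbest_fit:
        gbest_fit, gbest_pos = float(pbest_fit[i]), pbest_pos[i].copy()
    return pbest_fit, pbest_pos, gbest_fit, gbest_pos
```

Because the global best is drawn from the personal bests, its fitness history is non-increasing, a property the adaptive stage-switching test in step 7 relies on.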
Step 7: update the velocities and positions of the particles with the formulas:

v_id^{k+1} = ω·v_id^k + c1·r1·(p_id^k − x_id^k) + c2·r2·(g_d^k − x_id^k)
x_id^{k+1} = x_id^k + v_id^{k+1}

where ω denotes the inertia weight; c1 and c2 denote the learning factors; r1 and r2 denote random numbers in (0, 1); p_id^k and g_d^k denote the individual best position and the global best position respectively; and v_id^k and x_id^k denote the velocity and position of the i-th particle in dimension d at the k-th iteration.
During the iterative update of the particles, the velocity update is divided into three stages, each with its own strategy. The inertia weight is updated by a piecewise function in which ω_start and ω_end denote the set maximum and minimum inertia weights, set to 0.95 and 0.4 respectively; a is a constant set to 0.005; t1 and t2 denote the iteration cutoff steps of the early and middle stages of the particle swarm, with T = t2 − t1; and Gen denotes the maximum iteration count of the particle.
The learning factors are updated by an improved sigmoid function in which m is a constant set to 0.1 and c_start denotes the minimum value of the learning factor, set to 1.5.
According to this update strategy, a large inertia weight is maintained throughout the early stage of iteration, with c1 kept small and c2 kept large, steering the particles toward the approximate position of the global optimum with a large step length. In the middle stage, the inertia weight evolves toward its minimum value, c1 evolves toward its set maximum and c2 toward its set minimum, shifting the particles from global search to local exploration. In the late stage, the inertia weight is held at the minimum of its set range, with c1 large and c2 small, so the particles converge quickly around the global optimum. With a clear division of labor among the three search stages, the globally optimal solution is finally found.
Moreover, within these three optimization stages, whether the particles can gather around the global optimum in the early stage is the key to the success or failure of the optimization. Because a large inertia weight is maintained throughout the early stage, the search step length is long, the swarm does not easily fall into a local optimum, and the particles have strong global search capability. If the fitness value does not change for a long time, the approximate position of the global optimum has been found, so the local exploration of the middle and late stages can proceed and quickly converge within the range of the global optimum; if the fitness value does change, the population has not yet stabilized within the globally optimal range, and the global exploration of the early stage must be maintained. To this end, a threshold L (L < Gen) on the number of early-stage iteration steps is set, making the early-stage length adaptive: during these L steps, the inertia weight in the velocity update remains ω_start, with c1 kept small and c2 kept large. If the optimal fitness value does not change over L consecutive iterations, the algorithm shifts to the middle-stage strategy, i.e., the inertia weight evolves toward a smaller value and the learning factors evolve in opposite directions; if the optimal fitness value does change within the L iterations, the early-stage strategy is continued for another L iterations starting from the generation of the change, until the optimal fitness value remains unchanged over L consecutive iterations, at which point the middle-stage strategy takes over. This guarantees that the swarm does not fall into a local optimum while increasing late-stage convergence speed and precision.
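The L-step stagnation test that ends the early stage can be sketched as below (the function name is illustrative; it assumes the global-best fitness history is recorded each iteration and, being non-increasing, only the window endpoints need comparing):

```python
def early_stage_over(gbest_history, L):
    """True once the global-best fitness is unchanged over the last L
    consecutive iterations. An improvement anywhere inside the window
    makes the endpoints differ, which delays the switch by restarting
    the L-step count, matching the rule described above."""
    return len(gbest_history) > L and gbest_history[-1] == gbest_history[-1 - L]
```

For example, with L = 2 the history [5, 4, 3, 3, 3] triggers the switch, while [5, 4, 3] (still improving) does not.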
Step 8: judge whether the current iteration step count has reached the set maximum Gen. If not, continue with the next iteration; otherwise, restore the current globally optimal solution, i.e., the position of the particle with the smallest current fitness value, into the corresponding RBF neural network parameters, completing the optimization of the RBF neural network.
The above embodiments are intended to further explain the objects, technical solutions, and advantages of the invention, and the examples serve only to illustrate its principles and help the reader understand its design ideas. It should be understood that the scope of the invention is not limited to the specific descriptions and examples; any modification or equivalent replacement made within the principles of the invention falls within its scope of protection.
Claims (9)
1. An RBF neural network optimization method based on an improved particle swarm optimization algorithm is characterized by comprising the following steps:
Step 1: determine the topology of the RBF neural network, determining the numbers of input and output nodes according to the problem to be solved, as well as the number of hidden nodes;
Step 2: calculate the number of network parameters to be optimized and map the optimization target onto the particle position;
Step 3: initialize the positions and velocities of the population;
Step 4: normalize the data;
Step 5: feed part of the training data into the network model and evaluate the fitness value of each current particle with a fitness function, based on the error between the network output value and the predicted value;
Step 6: update the current global best and local bests of the population according to the fitness value of each particle;
Step 7: update the velocities and positions of the particles, where the inertia weight and the learning factors change dynamically in three stages as the iteration step count increases;
Step 8: judge whether the set maximum iteration count has been reached; if so, end the process and restore the current globally optimal particle position into the corresponding RBF neural network parameters as the network's optimal initial parameters; otherwise, return to step 5.
2. The improved particle swarm optimization-based RBF neural network optimization method according to claim 1, wherein the position of the particle in step 2 is an n-dimensional vector, where n is determined by the following formula:
n=hidnum+hidnum*innum+hidnum*outnum (1)
in the formula (1), innum represents the number of nodes of an input layer of the RBF neural network, hidnum represents the number of hidden layers, and outnum represents the number of nodes of an output layer.
3. The improved particle swarm optimization-based RBF neural network optimization method according to claim 1, wherein, when initializing the population positions in step 3, the parameters are handled by class: the position parameters corresponding to the data centers and data widths are initialized in (0, 1), and the position parameters corresponding to the network weights are initialized in (−1, 1).
4. The improved particle swarm optimization-based RBF neural network optimization method according to claim 1, wherein the data normalization in step 4 uses a logarithmic-function method, in the form:

x̄_i = lg(x_i) / lg(max(x)) (2)
5. The improved particle swarm optimization-based RBF neural network optimization method according to claim 1, wherein the following formula is adopted as the calculation function of the fitness value in step 5:

fitness = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{outnum} (Y_ij − ŷ_ij)² (3)
6. The improved particle swarm optimization-based RBF neural network optimization method according to claim 1, wherein the particle swarm velocity and position update in step 7 satisfies the following formulas:

v_id^{k+1} = ω·v_id^k + c1·r1·(p_id^k − x_id^k) + c2·r2·(g_d^k − x_id^k)
x_id^{k+1} = x_id^k + v_id^{k+1} (4)

where in formula (4), ω denotes the inertia weight; c1 and c2 denote the learning factors; r1 and r2 denote random numbers in (0, 1); p_id^k and g_d^k denote the individual best position and the global best position respectively; and v_id^k and x_id^k denote the velocity and position of the i-th particle in dimension d at the k-th iteration.
7. The improved particle swarm optimization-based RBF neural network optimization method of claim 6, wherein the inertia weight in the velocity update formula of step 7 adopts a piecewise strategy, and the piecewise function is shown as follows:
(5) In the formula, ω_start and ω_end respectively represent the set maximum and minimum inertia weights, set to 0.95 and 0.4 respectively; a is a constant, set to 0.005; t1 and t2 respectively represent the iteration cutoff steps of the early and middle stages of the particle swarm, where T = t2 − t1 and t1 is determined adaptively rather than specified manually; Gen represents the maximum number of iterations of the particles.
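Formula (5) itself is not reproduced in this text, so the middle-stage curve below is an assumed linear decrease between ω_start and ω_end (the patent's constant a, whose exact role in formula (5) is not recoverable here, is not modeled); only the piecewise structure of claim 7 is preserved:

```python
def inertia_weight(t, t1, t2, w_start=0.95, w_end=0.4):
    """Piecewise inertia weight per claim 7: hold w_start in the early
    stage, decrease over the middle stage (assumed linear), hold w_end
    in the late stage."""
    if t <= t1:
        return w_start
    if t <= t2:
        T = t2 - t1
        return w_start - (w_start - w_end) * (t - t1) / T
    return w_end
```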
8. The improved particle swarm optimization-based RBF neural network optimization method of claim 6, wherein the learning factor in the speed update formula of step 7 is a dynamic learning factor, and the function expression is shown as follows:
(6) In formulas (6) and (7), m is a constant, set to 0.1; c_start represents the minimum value of the learning factor, set to 1.5.
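Formulas (6) and (7) are not reproduced in this text; the sketch below only illustrates the qualitative behavior of claim 9 (c1 decaying toward c_start while c2 grows symmetrically). The linear form, the upper bound c_max, and the omission of the patent's constant m are all assumptions:

```python
def learning_factors(t, gen, c_start=1.5, c_max=2.5):
    """Hypothetical dynamic learning factors: c1 decays from c_max to
    c_start over the run while c2 mirrors it, matching the direction of
    evolution described in claim 9 (not the patent's exact formulas)."""
    frac = t / gen
    c1 = c_max - (c_max - c_start) * frac
    c2 = c_start + (c_max - c_start) * frac
    return c1, c2
```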
9. The improved particle swarm optimization-based RBF neural network optimization method according to claim 7 or 8, wherein the strategy for updating the particle swarm velocity in step 7 is divided into three stages, and the number of early-stage iteration steps is selected adaptively. A threshold L (L < Gen) on the early-stage iteration steps is set; during these L steps, the inertia weight in the velocity update remains ω_start, and the learning factors remain at a larger c1 and a smaller c2, so that the particle swarm performs a coarse global search. If the optimal fitness value does not change over L consecutive iterations, the iteration shifts to the middle-stage strategy, in which the inertia weight evolves toward a smaller value and the learning factors evolve in the opposite directions. If the optimal fitness value does change within the L iterations, the early-stage strategy continues for another L iterations counted from the generation of the change, until the optimal fitness value remains unchanged over L consecutive iterations, whereupon the middle-stage strategy begins. In the late stage of iteration, the inertia weight remains ω_end, and the learning factors remain at a smaller c1 and a larger c2, so that the particle swarm performs a fine local search, increasing convergence speed and accuracy.
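The stage-switching logic of claim 9 can be sketched as a small state machine; the stagnation counter (iterations since the best fitness last improved) drives the early-to-middle switch, and the middle-to-late switch at the cutoff t2 is an assumed trigger, since the claim does not state it explicitly:

```python
def next_stage(stage, stagnation, L, t, t2):
    """Advance the three-stage strategy of claim 9.
    stage: 'early' | 'middle' | 'late'
    stagnation: consecutive iterations with no change in best fitness
    L: early-stage stagnation threshold (L < Gen)
    t, t2: current iteration and middle-stage cutoff (assumed trigger)."""
    if stage == "early" and stagnation >= L:
        return "middle"   # best fitness stalled for L iterations
    if stage == "middle" and t >= t2:
        return "late"     # move to fine local search
    return stage
```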
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010371490.1A CN113627606A (en) | 2020-05-06 | 2020-05-06 | RBF neural network optimization method based on improved particle swarm optimization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010371490.1A CN113627606A (en) | 2020-05-06 | 2020-05-06 | RBF neural network optimization method based on improved particle swarm optimization |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113627606A true CN113627606A (en) | 2021-11-09 |
Family
ID=78376484
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010371490.1A Pending CN113627606A (en) | 2020-05-06 | 2020-05-06 | RBF neural network optimization method based on improved particle swarm optimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113627606A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114092768A (en) * | 2021-11-30 | 2022-02-25 | 苏州浪潮智能科技有限公司 | Screening method and device of training models in training model group and electronic equipment |
CN114330122B (en) * | 2021-12-28 | 2024-05-24 | 东北电力大学 | Machine learning-based adjustment method for large shaft axis of hydroelectric generating set |
CN114330122A (en) * | 2021-12-28 | 2022-04-12 | 东北电力大学 | Hydroelectric generating set large shaft axis adjusting method based on machine learning |
CN114565039A (en) * | 2022-02-28 | 2022-05-31 | 华中科技大学 | Method for predicting drop point error of jet printing ink drop |
CN114565039B (en) * | 2022-02-28 | 2024-09-06 | 华中科技大学 | Method for predicting drop point error of jet printing ink |
CN114756568A (en) * | 2022-03-21 | 2022-07-15 | 中国电子科技集团公司第五十四研究所 | Similarity retrieval method based on improved particle swarm optimization |
CN114882270A (en) * | 2022-04-15 | 2022-08-09 | 华南理工大学 | Aortic dissection CT image classification method based on particle swarm optimization algorithm |
CN115222007A (en) * | 2022-05-31 | 2022-10-21 | 复旦大学 | Improved particle swarm parameter optimization method for glioma multitask integrated network |
CN115357862A (en) * | 2022-10-20 | 2022-11-18 | 山东建筑大学 | Positioning method in long and narrow space |
CN115357862B (en) * | 2022-10-20 | 2023-04-07 | 山东建筑大学 | Positioning method in long and narrow space |
CN116796611A (en) * | 2023-08-22 | 2023-09-22 | 成都理工大学 | Method for adjusting bridge buckling cable force based on flagelliforme algorithm and artificial neural network |
CN116796611B (en) * | 2023-08-22 | 2023-10-31 | 成都理工大学 | Method for adjusting bridge buckling cable force based on flagelliforme algorithm and artificial neural network |
CN117114144B (en) * | 2023-10-24 | 2024-01-26 | 青岛农业大学 | Rice salt and alkali resistance prediction method and system based on artificial intelligence |
CN117114144A (en) * | 2023-10-24 | 2023-11-24 | 青岛农业大学 | Rice salt and alkali resistance prediction method and system based on artificial intelligence |
CN117454256A (en) * | 2023-12-26 | 2024-01-26 | 长春工程学院 | Geological survey method and system based on artificial intelligence |
CN117574213A (en) * | 2024-01-15 | 2024-02-20 | 南京邮电大学 | APSO-CNN-based network traffic classification method |
CN117574213B (en) * | 2024-01-15 | 2024-03-29 | 南京邮电大学 | APSO-CNN-based network traffic classification method |
CN118279289A (en) * | 2024-05-10 | 2024-07-02 | 国网安徽省电力有限公司电力科学研究院 | Power equipment video image defect identification method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113627606A (en) | RBF neural network optimization method based on improved particle swarm optimization | |
CN111709524A (en) | RBF neural network optimization method based on improved GWO algorithm | |
CN114969995B (en) | Rolling bearing early fault intelligent diagnosis method based on improved sparrow search and acoustic emission | |
CN112329934A (en) | RBF neural network optimization algorithm based on improved sparrow search algorithm | |
CN113240069A (en) | RBF neural network optimization method based on improved Harris eagle algorithm | |
CN112232493A (en) | RBF neural network optimization method based on improved whale algorithm | |
CN113240068A (en) | RBF neural network optimization method based on improved ant lion algorithm | |
CN111310885A (en) | Chaotic space cattle herd search algorithm introducing variation strategy | |
Abshouri et al. | New firefly algorithm based on multi swarm & learning automata in dynamic environments | |
CN111222286A (en) | Parameter optimization method based on power transmission line state estimation | |
CN110442129A (en) | A kind of control method and system that multiple agent is formed into columns | |
CN113722980B (en) | Ocean wave height prediction method, ocean wave height prediction system, computer equipment, storage medium and terminal | |
Saffari et al. | Fuzzy Grasshopper Optimization Algorithm: A Hybrid Technique for Tuning the Control Parameters of GOA Using Fuzzy System for Big Data Sonar Classification. | |
CN112149883A (en) | Photovoltaic power prediction method based on FWA-BP neural network | |
Talal | Comparative study between the (ba) algorithm and (pso) algorithm to train (rbf) network at data classification | |
CN117784615B (en) | Fire control system fault prediction method based on IMPA-RF | |
CN116933948A (en) | Prediction method and system based on improved seagull algorithm and back propagation neural network | |
Xu et al. | Elman neural network for predicting aero optical imaging deviation based on improved slime mould algorithm | |
CN113627075A (en) | Projectile aerodynamic coefficient identification method based on adaptive particle swarm optimization extreme learning | |
CN117523359A (en) | Image comparison and identification method and device based on reinforcement learning | |
CN114722710B (en) | Range gate dragging interference method based on random simulation optimization | |
CN116432539A (en) | Time consistency collaborative guidance method, system, equipment and medium | |
CN114004326B (en) | ELM neural network optimization method based on improved suburban wolf algorithm | |
CN116165886A (en) | Multi-sensor intelligent cooperative control method, device, equipment and medium | |
Liu et al. | Research on wireless sensor network localization based on an improved whale optimization algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20211109 |
RJ01 | Rejection of invention patent application after publication |