CN111709524A - RBF neural network optimization method based on improved GWO algorithm - Google Patents
RBF neural network optimization method based on improved GWO algorithm
- Publication number
- CN111709524A CN202010631143.8A
- Authority
- CN
- China
- Prior art keywords
- wolf
- neural network
- population
- rbf neural
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0418—Architecture, e.g. interconnection topology using chaos or fractal principles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention belongs to the technical field of neural network optimization, and particularly relates to an RBF neural network optimization method based on an improved GWO algorithm. The method calculates the mean fitness value of each generation of the population and dynamically sets a fitness threshold: wolves whose fitness is higher than the threshold execute a wide-range search strategy, while the remaining wolves execute a narrow-range search strategy, so that every generation of the population has both global and local search capability, which improves the convergence speed and the later-stage optimization precision of the GWO algorithm. The improved GWO algorithm is used to optimize the initial parameters of the RBF neural network, further improving the stability and precision of the network.
Description
Technical Field
The invention belongs to the technical field of neural network optimization, and particularly relates to an improved GWO algorithm-based RBF neural network optimization method.
Background
In both the military and civilian fields, radar has become an essential technology. When radar is used to detect targets at sea, a large amount of sea clutter is often superimposed on the echo signals. The presence of sea clutter poses a great challenge to the effective detection of maritime targets, so accurate prediction and suppression of sea clutter has become an essential step in maritime target detection.
Early research on sea clutter started from its statistical characteristics, but neither the classical statistical models of sea clutter nor their improved variants can describe sea clutter accurately, and such statistical models lack universality across different sea conditions. Further research found that sea clutter has chaotic characteristics, so accurate prediction and suppression can be achieved by learning its intrinsic dynamics and building a prediction model on that basis. The Radial Basis Function (RBF) neural network has strong nonlinear mapping capability and therefore inherent advantages for learning the dynamic characteristics of sea clutter. The selection of the initial parameters of the RBF neural network directly influences the robustness and accuracy of the prediction model, so the Grey Wolf Optimization (GWO) algorithm is introduced to optimize these initial parameters and further improve the prediction capability of the RBF prediction model.
The basic idea of the GWO algorithm is to simulate the predation behavior of grey wolves in nature. During hunting, the whole population is divided into four levels: the alpha wolf, the beta wolf, the delta wolf and the omega wolves. The alpha wolf is the supreme leader of the whole pack and leads its hunting behavior; the beta and delta wolves assist the alpha wolf, and if the alpha wolf is lost they immediately take its place; the remaining omega wolves are mainly responsible for encircling the prey. In the algorithm, the alpha, beta and delta wolves are the three grey wolves with the smallest fitness values in the whole population, i.e. the three wolves closest to the global optimum; together they guide the other grey wolf individuals to update iteratively toward the global optimum until the global optimal value is found. Since its proposal in 2014, the GWO algorithm has been applied effectively in communication systems, power systems, control engineering, computing and other fields. Using the GWO algorithm to optimize the initial parameters of the RBF neural network eliminates the negative influence of the initial values on the final model and enhances the robustness and accuracy of the RBF neural network.
In the iterative updating process of GWO, the convergence factor a controls the search range of each grey wolf individual: a larger a is more beneficial to global search, while a smaller a is more beneficial to local exploitation. The classical GWO algorithm applies a linearly decreasing strategy to the convergence factor, decreasing a from an initial maximum of 2 to a final minimum of 0. As a result, in the early stage all wolf individuals search over a wide range and the local exploitation capability is poor, so the convergence speed is low; in the later stage all wolf individuals explore only a small range, ignore the information of surrounding solutions and easily fall into local optima. The main cause of both problems is that the classical GWO algorithm ignores the cooperative division of labor within the grey wolf population, so all wolves execute the same search strategy and the search lacks flexibility.
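A minimal Python sketch of this classical strategy, using the symbols t, t_max, a, A and C as in this description (the helper names are illustrative):

```python
import numpy as np

def classical_convergence_factor(t, t_max):
    """Classical GWO: a decreases linearly from 2 to 0 over the iterations."""
    return 2.0 * (1.0 - t / t_max)

def search_coefficients(a, dim, rng):
    """Random coefficients A = 2a(r1 - 0.5) and C = 2*r2 that control the search range."""
    r1, r2 = rng.random(dim), rng.random(dim)
    A = 2.0 * a * (r1 - 0.5)   # |A| >= 1 pushes a wolf away (global search), |A| < 1 pulls it in
    C = 2.0 * r2
    return A, C

# early iterations give a near 2 (wide search), late iterations give a near 0 (narrow search)
rng = np.random.default_rng(0)
for t in (0, 50, 99):
    a = classical_convergence_factor(t, t_max=100)
    A, C = search_coefficients(a, dim=3, rng=rng)
    print(t, round(a, 2), A.round(2), C.round(2))
```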
At present, improvements to the convergence-factor strategy mainly replace the linear decrement with a nonlinear one; such improved algorithms still do not consider dividing the population and cannot fundamentally solve the two main problems of slow convergence and easily falling into local optima. Therefore, the invention designs an improved GWO algorithm with cooperative optimization through division of the wolf population: the population is dynamically divided into two sub-populations according to the mean fitness value of each generation, and the sub-populations search over different ranges, so that in every generation some wolf individuals are responsible for global search and others for local exploitation throughout the iterative process. This increases the flexibility of the search and is of great significance for improving the precision of the algorithm and enhancing its ability to optimize the RBF neural network.
Disclosure of Invention
The invention aims to solve the technical problems that the existing GWO algorithm converges slowly, searches over a small range in the later stage, easily ignores information about surrounding optimal solutions and easily falls into local optima, and provides an RBF neural network optimization method based on an improved GWO algorithm.
In order to achieve the purpose, the invention adopts the following technical scheme:
an RBF neural network optimization method based on an improved GWO algorithm comprises the following steps:
step 1: determining a network structure of the RBF neural network according to specific problems, calculating the number of network parameters to be optimized, and encoding the parameters to generate position vectors of the wolf individuals;
step 2: setting the scale of the grey wolf population and initializing the position of the grey wolf population;
step 3: carrying out normalization processing on the training data of the RBF neural network;
step 4: taking a part of the training data as the network input, and calculating the fitness value of each current grey wolf individual by adopting a fitness function according to the error between the network output and the training sample labels;
step 5: setting a threshold according to the mean of the population fitness values, dividing the population into an elite wolf sub-population and a non-elite wolf sub-population, and applying different decreasing strategies to the convergence factors of the two sub-populations when positions are updated;
step 6: taking the three grey wolf individuals with the smallest fitness values as the alpha, beta and delta wolves of the whole population, guiding the other wolf individuals to evolve toward the optimal value together, and updating the positions of the wolves;
step 7: judging whether the termination condition is reached; if so, finishing the whole optimization process, storing the wolf position corresponding to the minimum fitness value and mapping it to the corresponding network parameters, which are taken as the optimal initial parameters of the network; otherwise, returning to step 5 (a sketch of this loop is given below).
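A minimal Python sketch of the whole loop of steps 1-7, with several assumptions kept deliberately simple: the initialization is plain uniform rather than chaotic, the threshold is taken as the screening weight μ multiplied by the mean population fitness, and the final position of each wolf is the conventional GWO average of the three leader-guided candidates; the function names are illustrative.

```python
import numpy as np

def improved_gwo(fitness_fn, dim, pop_size, t_max,
                 mu_max=0.8, mu_min=0.2, seed=0):
    """Sketch of the improved GWO loop of steps 1-7 (smaller fitness is better)."""
    rng = np.random.default_rng(seed)
    X = rng.random((pop_size, dim))                 # step 2: here uniform; the method uses Logistic chaotic init
    best_pos, best_fit = None, np.inf

    for t in range(t_max):
        fit = np.array([fitness_fn(x) for x in X])  # step 4: fitness of every wolf
        order = np.argsort(fit)
        alpha, beta, delta = X[order[:3]]           # step 6: three best wolves lead
        if fit[order[0]] < best_fit:
            best_fit, best_pos = fit[order[0]], X[order[0]].copy()

        # step 5: dynamic threshold from the mean fitness and a decreasing screening weight
        mu = mu_max - (mu_max - mu_min) * t / t_max
        threshold = mu * fit.mean()                 # assumed form of the threshold
        a1 = 2.0 - t / t_max                        # wide-range search (non-elite wolves)
        a2 = 1.0 - t / t_max                        # narrow-range search (elite wolves)

        for k in range(pop_size):                   # step 6: position update
            a = a1 if fit[k] > threshold else a2
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * (r1 - 0.5), 2.0 * r2
                D = np.abs(C * leader - X[k])
                new_pos += (leader - A * D) / 3.0   # conventional GWO averaging
            X[k] = new_pos

    return best_pos, best_fit                       # step 7: decode best_pos into RBF parameters
```

For a quick check, `improved_gwo(lambda x: float(np.sum(x**2)), dim=5, pop_size=20, t_max=100)` should drive the best fitness toward zero.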
Specifically, Logistic mapping is adopted for the chaotic initialization of the population in step 2, and the expression is as follows:
x_i(k+1) = μ·x_i(k)·(1 - x_i(k))
where μ represents a disturbance parameter, x_i(k) represents the value of the k-th grey wolf individual in the i-th dimension, and x_i(k+1) represents the value of the (k+1)-th grey wolf individual in the i-th dimension.
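A minimal sketch of this chaotic initialization, assuming the disturbance parameter value μ = 3.98 given in the detailed description and assuming the map is seeded with one random vector and then iterated to generate the remaining wolves:

```python
import numpy as np

def logistic_chaotic_init(pop_size, dim, mu=3.98, seed=0):
    """Generate an initial population in (0, 1) with the Logistic map x <- mu*x*(1-x)."""
    rng = np.random.default_rng(seed)
    X = np.empty((pop_size, dim))
    X[0] = rng.uniform(0.1, 0.9, size=dim)       # random seed for the chaotic sequence
    for k in range(1, pop_size):
        X[k] = mu * X[k - 1] * (1.0 - X[k - 1])  # value of the (k+1)-th wolf from the k-th
    return X

# positions for parameters initialized in (-1, 1), e.g. network weights, can be rescaled:
# W = 2.0 * logistic_chaotic_init(pop_size, n_weights) - 1.0
```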
Specifically, the fitness function in step 4 is defined by the error between the network output and the training sample labels, where Y represents the output value of the network, Ŷ represents the real values of the training samples, n represents the number of training samples, and L represents the number of network outputs.
Specifically, the threshold for dividing the sub-populations in step 5 is set from the mean fitness value of the population, where m represents the size of the population, μ represents the screening weight, which controls the proportion of the elite wolf sub-population in the whole population, and f represents the fitness value of a grey wolf individual.
Specifically, when the population is divided in step 5, the screening weight μ adopts a linearly decreasing strategy, with the expression
μ = μ_max - (μ_max - μ_min)·t/t_max
where μ_max represents the maximum value of the screening weight, μ_min represents the minimum value of the screening weight, taken as 0.2, t represents the current iteration number, and t_max represents the maximum number of iterations.
Specifically, the convergence factors of the two sub-populations divided in step 5 adopt different strategies: when the fitness value of a grey wolf individual is greater than the threshold, the convergence factor a1 is adopted; otherwise a2 is adopted. Their expressions are as follows:
a1=2-t·(1/tmax)
a2=1-t·(1/tmax)
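A minimal sketch combining the dynamic threshold with the two convergence-factor strategies; the threshold form used here (the screening weight μ multiplied by the mean population fitness) is an assumption consistent with the description, while the linear decrement of μ and the expressions for a1 and a2 follow the formulas above:

```python
import numpy as np

def per_wolf_convergence_factor(fitness, t, t_max, mu_max=0.8, mu_min=0.2):
    """Return one convergence factor per wolf.

    Wolves whose fitness exceeds the dynamic threshold (non-elite) get the
    wide-range factor a1; the elite wolves get the narrow-range factor a2.
    """
    mu = mu_max - (mu_max - mu_min) * t / t_max   # linearly decreasing screening weight
    threshold = mu * np.mean(fitness)             # assumed threshold form
    a1 = 2.0 - t / t_max
    a2 = 1.0 - t / t_max
    return np.where(fitness > threshold, a1, a2)

# example: a mixed population early in the run
print(per_wolf_convergence_factor(np.array([0.2, 0.9, 1.5, 3.0]), t=10, t_max=100))
```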
Specifically, the updating of the position of the grey wolf individual in step 6 is given by,
Dα=|C1·Xα(t)-X(t)|,
Dβ=|C2·Xβ(t)-X(t)|,
Dδ=|C3·Xδ(t)-X(t)|,
X1(t+1)=Xα(t)-A1·Dα,
X2(t+1)=Xβ(t)-A2·Dβ,
X3(t+1)=Xδ(t)-A3·Dδ
in the formula, A and C are random coefficients that control the search range of the grey wolf, given by the following formula,
A=2a(r1-0.5), C=2r2
where r1 and r2 are random numbers in [0,1], and a is the convergence factor.
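A minimal sketch of this position update for a single wolf guided by the α, β and δ wolves; the final averaging of the three candidate positions is the conventional GWO step and is assumed here:

```python
import numpy as np

def update_wolf(x, x_alpha, x_beta, x_delta, a, rng):
    """One GWO position update: D = |C*X_leader - X|, X_i = X_leader - A*D, then average."""
    candidates = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2.0 * a * (r1 - 0.5)        # A = 2a(r1 - 0.5)
        C = 2.0 * r2                    # C = 2*r2
        D = np.abs(C * leader - x)      # distance to the leader
        candidates.append(leader - A * D)
    return np.mean(candidates, axis=0)  # conventional GWO: mean of X1, X2, X3

rng = np.random.default_rng(1)
x_new = update_wolf(np.array([0.5, 0.5]), np.array([0.1, 0.2]),
                    np.array([0.15, 0.25]), np.array([0.2, 0.3]), a=1.0, rng=rng)
print(x_new)
```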
By adopting the technical scheme, compared with the prior art, the invention has the beneficial effects that:
1. In the optimization process of the existing GWO algorithm, the convergence factors of all grey wolf individuals adopt the same decreasing strategy, the global search and the local search are not reasonably divided, and the optimization lacks flexibility. The invention sets a dynamic threshold and dynamically divides the population into two sub-populations throughout the optimization process, which respectively perform global search and local search; this accelerates the convergence, avoids falling into local optima in the later stage, and shortens the computation time.
2. The invention optimizes the initial parameters of the network with the improved GWO optimization algorithm, which avoids the shortcomings of clustering algorithms, makes up for the deficiency of the RBF neural network in engineering implementation, reduces the adverse effect of the initial parameters on network stability, and increases the precision of the RBF neural network.
Drawings
FIG. 1 is a flow chart of an RBF neural network optimization method based on an improved GWO algorithm according to the present invention;
FIG. 2 is a schematic diagram of the GWO algorithm optimization;
FIG. 3 is a diagram of a multi-input single-output RBF neural network topology;
FIG. 4 is a result diagram of the impact of the value strategies of different screening weights on the algorithm optimization accuracy;
FIG. 5 is a diagram of the attributes of the algorithm test functions;
FIG. 6 is a graph of the optimization results of the improved GWO algorithm on a unimodal function;
FIG. 7 is a graph of the optimization results of the improved GWO algorithm on a multi-peak function;
FIG. 8 is a graph comparing the convergence curves of the improved GWO algorithm and the comparison algorithms applied in the embodiment.
Detailed Description
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings. It is to be understood that the drawings and the described embodiments are merely exemplary in nature and are intended to illustrate the principles of the invention, not to limit the scope of the invention.
The invention discloses an RBF neural network optimization method based on an improved GWO algorithm. Fig. 2 gives a schematic diagram of GWO algorithm optimization, Fig. 3 gives the topological structure of the RBF neural network, a sea clutter prediction model based on the RBF neural network is taken as an example for explanation, and Fig. 1 gives the concrete implementation steps of this example:
step 1: determining the network topology structure, and generating the position vectors of the grey wolf individuals by encoding the network parameters to be optimized, including the data center parameters, the data width parameters and the network weight parameters; the vector dimension equals the total number of parameters. The input layer of the RBF neural network is connected to the hidden layer through the following Gaussian kernel function:
φ(X) = exp(-‖X - c‖²/(2σ²))
where X is the input data, c is the data center and σ is the data width. Fig. 3 shows the example of the sea clutter prediction model: the network has a single output, the number of input-layer nodes is X_num and the number of hidden-layer nodes is c_num, so the dimension of the grey wolf position vector is:
N = 2·c_num + X_num·c_num
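A minimal sketch of how a wolf position vector of this dimension can be decoded and the single-output RBF network evaluated with the Gaussian kernel; the ordering of the parameters inside the vector is an assumption made for illustration:

```python
import numpy as np

def rbf_forward(position, x_in, x_num, c_num):
    """Evaluate a single-output RBF network whose parameters are packed in `position`.

    Assumed layout: [centers (c_num*x_num), widths (c_num), output weights (c_num)].
    """
    centers = position[:c_num * x_num].reshape(c_num, x_num)
    widths = position[c_num * x_num: c_num * x_num + c_num]
    weights = position[c_num * x_num + c_num:]
    # Gaussian kernel between the input vector and every hidden-layer center
    phi = np.exp(-np.sum((x_in - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return float(phi @ weights)

x_num, c_num = 4, 6
position = np.random.default_rng(0).random(2 * c_num + x_num * c_num)   # N = 2*c_num + x_num*c_num
print(rbf_forward(position, np.ones(x_num), x_num, c_num))
```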
step 2: when the population is initialized, the initialization ranges of the positions corresponding to the different encoded network parameters differ. Because the training data are normalized, the initialization range of the data center and data width parameters is set to (0, 1), and the initialization range of the network weight parameters is set to (-1, 1). The chaotic initialization of the population adopts Logistic mapping, whose expression is:
x_i(k+1) = μ·x_i(k)·(1 - x_i(k))
where μ represents a disturbance parameter with value 3.98, x_i(k) represents the value of the k-th grey wolf individual in the i-th dimension, and x_i(k+1) represents the value of the (k+1)-th grey wolf individual in the i-th dimension.
step 3: the training data of the RBF neural network are normalized, with the normalization given by the following formula:
X_norm = (X - X_min)/(X_max - X_min)
where X represents the original data, X_min represents the minimum value of the original data, X_max represents the maximum value of the original data, and X_norm represents the normalized data.
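A minimal sketch of this min-max normalization, applied column-wise to a data matrix (the column-wise choice is an assumption for multi-feature data):

```python
import numpy as np

def min_max_normalize(X):
    """X_norm = (X - X_min) / (X_max - X_min), computed per column."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

print(min_max_normalize([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]))
```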
step 4: part of the training data is taken as the input of the network, and the fitness value of each current grey wolf individual is calculated with a fitness function based on the error between the network output and the training sample labels, where Y represents the output value of the network, Ŷ represents the real values of the training samples, n represents the number of training samples, and L represents the number of network outputs.
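Since the fitness formula is not reproduced here, the following sketch assumes a mean-squared-error form over the n training samples and L network outputs; any monotone error measure would play the same role:

```python
import numpy as np

def fitness(y_pred, y_true):
    """Assumed fitness: mean squared error between network outputs and sample labels."""
    y_pred = np.atleast_2d(np.asarray(y_pred, dtype=float))
    y_true = np.atleast_2d(np.asarray(y_true, dtype=float))
    return float(np.mean((y_pred - y_true) ** 2))   # averaged over n samples and L outputs

print(fitness([0.9, 1.8, 3.2], [1.0, 2.0, 3.0]))
```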
step 5: in each generation a threshold is set according to the mean fitness value of the grey wolf population, and the population is divided into an elite wolf sub-population and a non-elite wolf sub-population. In the threshold formula, m represents the population size, μ represents the screening weight, which controls the proportion of the elite wolf sub-population in the whole population, and f represents the fitness value of a grey wolf individual. Fig. 4 shows the influence of different screening-weight strategies on the optimization result: each strategy was run 30 times and the mean fitness value recorded, and the smallest mean was obtained with the linearly decreasing strategy; therefore the screening weight μ adopts a linearly decreasing strategy:
μ = μ_max - (μ_max - μ_min)·t/t_max
where μ_max represents the maximum value of the screening weight, taken as 0.8, μ_min represents the minimum value of the screening weight, taken as 0.2, t represents the current iteration number and t_max represents the maximum number of iterations. The convergence factors of the two divided sub-populations adopt different strategies: when the fitness value of a grey wolf individual is greater than the threshold, the convergence factor a1 is adopted, otherwise a2 is adopted, with the following expressions:
a1=2-t·(1/tmax)
a2=1-t·(1/tmax)
step 6: the position of the gray wolf is updated according to the following formula,
Dα=|C1·Xα(t)-X(t)|,
Dβ=|C2·Xβ(t)-X(t)|,
Dδ=|C3·Xδ(t)-X(t)|
X1(t+1)=Xα(t)-A1·Dα,
X2(t+1)=Xβ(t)-A2·Dβ,
X3(t+1)=Xδ(t)-A3·Dδ
in the formula, A and C are random coefficients that control the search range of the grey wolf, given by the following formula,
A=2a(r1-0.5), C=2r2
where r1 and r2 are random numbers in [0,1], and a is the convergence factor.
step 7: judging whether the current iteration has reached the set maximum number of iterations; once it has, the position of the grey wolf individual corresponding to the current minimum fitness value is output, and the position vector is mapped to the corresponding network parameters, which are taken as the optimal initial parameters of the RBF neural network.
To examine the practical effect of the improved GWO algorithm (IGWO), 6 standard test functions were chosen for testing; their attributes are given in Fig. 5, where f1 and f2 are unimodal test functions, f3 and f4 are multimodal test functions, and f5 and f6 are fixed-dimension multimodal test functions. A PSO algorithm with a linearly decreasing inertia-weight strategy and the standard GWO algorithm are used as comparison algorithms; each algorithm is run 30 times, and the mean and the standard deviation of the optimal fitness values are taken. The experimental results are given in Table 1 below:
TABLE 1 optimization results of different algorithms on test functions
As can be seen from the table, the improved algorithm achieves the highest precision and stability when searching for the optimal value of each type of test function. To show the convergence behavior of the improved algorithm more intuitively, Fig. 6 and Fig. 7 show the convergence curves on a unimodal and a multimodal test function during optimization; the improved GWO algorithm converges faster and reaches higher optimization precision than the other two comparison algorithms.
To test the effect of the improved GWO algorithm on a specific problem, simulation experiments were carried out with the above embodiment as the background. Different optimization algorithms are used to optimize the RBF neural network and build a sea clutter prediction model, and the network prediction models optimized by the different algorithms are applied to the prediction of the same group of sea clutter with background noise. To eliminate the influence of randomness on the experiments, each algorithm is simulated 30 times and the results are averaged; the experimental results are given in Table 2 below:
TABLE 2 different optimization algorithms to optimize the prediction effect of RBF neural network on sea clutter
To show the optimization capability of the various algorithms on the specific problem more intuitively, the data centers, data widths and network weights of the RBF neural network are optimized with the embodiment as the background; the number of iterations of the three optimization algorithms is set to 300 and the population size to 25, and the convergence curves of the fitness values are given in Fig. 8.
According to the experimental results, the improved GWO algorithm has stronger optimization capability: when optimizing the initial parameters of the RBF neural network it converges faster, consumes less time and reaches higher optimization precision than the comparison algorithms, and the network model optimized by it is more stable and gives higher prediction precision.
The above embodiments are intended to further illustrate the objects, technical solutions and advantages of the present invention; the examples serve only to explain its principles and help the reader understand the design idea of the invention. It should be understood that the scope of the invention is not limited to the specific description and examples, and any modification or equivalent replacement made within the principles of the invention shall be included within the scope of the invention.
Claims (10)
1. An RBF neural network optimization method based on an improved GWO algorithm, which is characterized by comprising the following steps:
step 1: determining a network structure of the RBF neural network according to specific problems, calculating the number of network parameters to be optimized, and encoding the parameters to generate position vectors of the wolf individuals;
step 2: setting the scale of the grey wolf population, and initializing the position of the grey wolf population by adopting a chaotic initialization method;
step 3: carrying out normalization processing on the training data of the RBF neural network;
step 4: taking a part of the training data as the network input, and calculating the fitness value of each current grey wolf individual by adopting a fitness function according to the error between the network output and the training sample labels;
step 5: dividing the population into an elite wolf sub-population and a non-elite wolf sub-population according to a dynamic threshold based on the mean fitness value of each generation of the population, the convergence factors of the two sub-populations executing different decreasing strategies when positions are updated;
step 6: taking the three grey wolf individuals with the smallest fitness values as the alpha, beta and delta wolves of the whole population, guiding the other wolf individuals to evolve toward the optimal value together, and updating the positions of the wolves;
step 7: judging whether the termination condition is reached; if so, finishing the whole optimization process, storing the wolf position corresponding to the minimum fitness value and mapping it to the corresponding network parameters, which are taken as the optimal initial parameters of the network; otherwise, returning to step 4.
2. An RBF neural network optimization method based on the improved GWO algorithm according to claim 1, wherein the number of the network parameters in step 1 is N, and N satisfies the following formula:
N = c_num + σ_num + ω_num (1)
In formula (1), c_num represents the number of data center parameters, σ_num represents the number of data width parameters, and ω_num represents the number of network weight parameters.
3. An RBF neural network optimization method based on the improved GWO algorithm according to claim 1, wherein, when the grey wolf positions are initialized in step 2, the initialization range of the parameters corresponding to the data center and the data width is (0, 1), and the initialization range of the parameters corresponding to the network weights is (-1, 1).
4. The improved GWO algorithm-based RBF neural network optimization method of claim 1, wherein the initialization of the grey wolf positions in step 2 is performed by a chaotic initialization method, the mapping adopted is Logistic mapping, and the expression is as follows:
x_i(k+1) = μ·x_i(k)·(1 - x_i(k)) (2)
where μ represents a disturbance parameter and x_i(k) represents the value of the k-th grey wolf individual in the i-th dimension.
5. An RBF neural network optimization method based on the improved GWO algorithm according to claim 1, wherein the fitness function in step 4 is given by:
6. An RBF neural network optimization method based on the improved GWO algorithm according to claim 1, wherein the threshold setting for dividing the sub-population in step 5 is as follows:
In formula (4), the threshold for dividing the population is defined; m represents the size of the population, μ represents the screening weight, which controls the proportion of the elite wolf sub-population in the whole population, and f represents the fitness value of a grey wolf individual.
7. The improved GWO algorithm-based RBF neural network optimization method of claim 6, wherein, when the population is divided in step 5, the screening weight μ adopts a linearly decreasing strategy, and the expression is as follows:
μ = μ_max - (μ_max - μ_min)·t/t_max (5)
In formula (5), μ_max represents the maximum value of the screening weight, μ_min represents the minimum value of the screening weight, taken as 0.2, t represents the current iteration number, and t_max represents the maximum number of iterations.
8. The improved GWO algorithm-based RBF neural network optimization method of claim 1, wherein the convergence factors of the two sub-populations divided in step 5 adopt different strategies: when the fitness value of a grey wolf individual is greater than the threshold, the convergence factor a1 is adopted, otherwise a2 is adopted, with the following expressions:
a1 = 2 - t·(1/t_max)
a2 = 1 - t·(1/t_max)
9. The improved GWO algorithm-based RBF neural network optimization method of claim 1, wherein the updating of the position of the grey wolf individual in step 6 is given by the following equations (7)-(9),
where A and C are random coefficients that control the search range of the grey wolf.
10. An RBF neural network optimization method based on the improved GWO algorithm according to claim 1, wherein, in step 6, A and C are the random coefficients in the position update formula of the grey wolf individual, given by the following formula,
A = 2a(r1 - 0.5), C = 2r2 (10)
In formula (10), r1 and r2 are random numbers in [0, 1], and a is the convergence factor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010631143.8A CN111709524A (en) | 2020-07-03 | 2020-07-03 | RBF neural network optimization method based on improved GWO algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010631143.8A CN111709524A (en) | 2020-07-03 | 2020-07-03 | RBF neural network optimization method based on improved GWO algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111709524A true CN111709524A (en) | 2020-09-25 |
Family
ID=72546489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010631143.8A Pending CN111709524A (en) | 2020-07-03 | 2020-07-03 | RBF neural network optimization method based on improved GWO algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111709524A (en) |
-
2020
- 2020-07-03 CN CN202010631143.8A patent/CN111709524A/en active Pending
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112416913A (en) * | 2020-10-15 | 2021-02-26 | 中国人民解放军空军工程大学 | Aircraft fuel system state missing value supplementing method based on GWO-BP algorithm |
CN112348080A (en) * | 2020-11-06 | 2021-02-09 | 北京石油化工学院 | RBF improvement method, device and equipment based on industrial control abnormity detection |
CN112461919A (en) * | 2020-11-10 | 2021-03-09 | 云南电网有限责任公司保山供电局 | System and method for detecting physical and chemical properties of transformer oil by applying multi-frequency ultrasonic technology |
CN113159264A (en) * | 2020-11-12 | 2021-07-23 | 江西理工大学 | Intrusion detection method, system, equipment and readable storage medium |
CN113159264B (en) * | 2020-11-12 | 2022-06-21 | 江西理工大学 | Intrusion detection method, system, equipment and readable storage medium |
CN112507613B (en) * | 2020-12-01 | 2022-05-10 | 湖南工程学院 | Second-level ultra-short-term photovoltaic power prediction method |
CN112507613A (en) * | 2020-12-01 | 2021-03-16 | 湖南工程学院 | Second-level ultra-short-term photovoltaic power prediction method |
CN112947056A (en) * | 2021-03-04 | 2021-06-11 | 北京交通大学 | Magnetic-levitation train displacement speed tracking control method based on IGWO-BP-PID |
CN112947056B (en) * | 2021-03-04 | 2022-10-14 | 北京交通大学 | Magnetic-levitation train displacement speed tracking control method based on IGWO-BP-PID |
CN113239503A (en) * | 2021-05-10 | 2021-08-10 | 上海电气工程设计有限公司 | New energy output scene analysis method and system based on improved k-means clustering algorithm |
CN113313306A (en) * | 2021-05-28 | 2021-08-27 | 南京航空航天大学 | Elastic neural network load prediction method based on improved wolf optimization algorithm |
CN113190931A (en) * | 2021-05-28 | 2021-07-30 | 辽宁大学 | Sub-health state identification method for improving optimized DBN-ELM of wolf |
CN113313306B (en) * | 2021-05-28 | 2024-07-26 | 南京航空航天大学 | Elastic neural network load prediction method based on improved wolf optimization algorithm |
CN113609761B (en) * | 2021-07-21 | 2024-02-20 | 三明学院 | Calculation method, device, equipment and storage medium of model parameters |
CN113609761A (en) * | 2021-07-21 | 2021-11-05 | 三明学院 | Method, device, equipment and storage medium for calculating model parameters |
CN113923104A (en) * | 2021-12-07 | 2022-01-11 | 南京信息工程大学 | Network fault diagnosis method, equipment and storage medium based on wavelet neural network |
CN114626435A (en) * | 2022-02-10 | 2022-06-14 | 南京航空航天大学 | High-accuracy rolling bearing intelligent fault feature selection method |
CN114545280A (en) * | 2022-02-24 | 2022-05-27 | 苏州市职业大学 | New energy automobile lithium battery life prediction method based on optimization algorithm |
CN114545280B (en) * | 2022-02-24 | 2022-11-15 | 苏州市职业大学 | New energy automobile lithium battery life prediction method based on optimization algorithm |
CN114895206A (en) * | 2022-04-26 | 2022-08-12 | 合肥工业大学 | Lithium ion battery SOH estimation method based on RBF neural network of improved wolf optimization algorithm |
CN114895206B (en) * | 2022-04-26 | 2023-04-28 | 合肥工业大学 | Lithium ion battery SOH estimation method based on RBF neural network of improved gray wolf optimization algorithm |
CN116432687A (en) * | 2022-12-14 | 2023-07-14 | 江苏海洋大学 | Group intelligent algorithm optimization method |
CN115809427A (en) * | 2023-02-06 | 2023-03-17 | 山东科技大学 | Mixed gas identification method based on mixed strategy optimization BP neural network |
CN115809427B (en) * | 2023-02-06 | 2023-05-12 | 山东科技大学 | Mixed gas identification method based on mixed strategy optimization BP neural network |
CN116506307B (en) * | 2023-06-21 | 2023-09-12 | 大有期货有限公司 | Network delay condition analysis system of full link |
CN116506307A (en) * | 2023-06-21 | 2023-07-28 | 大有期货有限公司 | Network delay condition analysis system of full link |
CN117452828A (en) * | 2023-12-22 | 2024-01-26 | 中电行唐生物质能热电有限公司 | Method and system for controlling emission of harmful gas in boiler based on neural network |
CN117452828B (en) * | 2023-12-22 | 2024-03-01 | 中电行唐生物质能热电有限公司 | Method and system for controlling emission of harmful gas in boiler based on neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111709524A (en) | RBF neural network optimization method based on improved GWO algorithm | |
Zhang et al. | A fault diagnosis method for wind turbines gearbox based on adaptive loss weighted meta-ResNet under noisy labels | |
CN113627606A (en) | RBF neural network optimization method based on improved particle swarm optimization | |
CN112232493A (en) | RBF neural network optimization method based on improved whale algorithm | |
CN115096590B (en) | Rolling bearing fault diagnosis method based on IWOA-ELM | |
CN113240069A (en) | RBF neural network optimization method based on improved Harris eagle algorithm | |
CN107944648B (en) | Large ship speed and oil consumption rate prediction method | |
CN111027663A (en) | Method for improving algorithm of goblet sea squirt group | |
CN110826195A (en) | Wave compensation control algorithm based on ant colony optimization BP neural network | |
CN114969995A (en) | Rolling bearing early fault intelligent diagnosis method based on improved sparrow search and acoustic emission | |
CN113240067A (en) | RBF neural network optimization method based on improved manta ray foraging optimization algorithm | |
CN115099133B (en) | TLMPA-BP-based cluster system reliability assessment method | |
CN113722980B (en) | Ocean wave height prediction method, ocean wave height prediction system, computer equipment, storage medium and terminal | |
Saffari et al. | Fuzzy Grasshopper Optimization Algorithm: A Hybrid Technique for Tuning the Control Parameters of GOA Using Fuzzy System for Big Data Sonar Classification. | |
CN114201987B (en) | Active interference identification method based on self-adaptive identification network | |
CN114897144A (en) | Complex value time sequence signal prediction method based on complex value neural network | |
CN112149883A (en) | Photovoltaic power prediction method based on FWA-BP neural network | |
CN114330119B (en) | Deep learning-based extraction and storage unit adjusting system identification method | |
Yang et al. | Enhanced sparrow search algorithm based on improved game predatory mechanism and its application | |
CN112257648A (en) | Signal classification and identification method based on improved recurrent neural network | |
CN116467941A (en) | 4-degree-of-freedom ship motion online forecasting method considering different wind levels | |
WO2022242471A1 (en) | Neural network configuration parameter training and deployment method and apparatus for coping with device mismatch | |
CN116680969A (en) | Filler evaluation parameter prediction method and device for PSO-BP algorithm | |
CN114004326B (en) | ELM neural network optimization method based on improved suburban wolf algorithm | |
CN115994481A (en) | Ship motion attitude prediction method based on multiple combinations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 20200925 |