CN114004326A - ELM neural network optimization method based on an improved coyote algorithm - Google Patents
ELM neural network optimization method based on an improved coyote algorithm
- Publication number
- CN114004326A (application CN202111365807.1A)
- Authority
- CN
- China
- Prior art keywords
- coyote
- wolf
- pack
- neural network
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06N—Computing arrangements based on specific computational models
- G06N3/006—Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
Abstract
The invention discloses an ELM neural network optimization method based on an improved coyote optimization algorithm. During the iterative process the method randomly regroups the coyote population with a given probability, which retains the advantage of parallel search by multiple packs while strengthening communication and information sharing across the whole population. A novel growth formula guides the growth of each coyote, balancing the exploitation and exploration capabilities of the algorithm. A dynamic Lévy mutation is applied to the best coyote in each pack: in early iterations more dimensions are mutated, so more new regions are explored; in later iterations the number of mutated dimensions decreases, allowing a fine search of the region around the optimal solution. This enhances the diversity of the population and effectively avoids becoming trapped in local optima.
Description
Technical Field
The invention relates to the technical field of neural network optimization, and in particular to an ELM neural network optimization method based on an improved coyote algorithm.
Background
In radar echoes, sea clutter seriously interferes with the detection, positioning, and tracking of sea-surface targets by radar, and greatly affects monitoring of the marine environment and detection of maritime targets. Against a strong sea-clutter background and at a low false-alarm probability, the target-finding capability of a radar system is severely degraded. If the influence of sea clutter can be reduced or eliminated to some extent, the radar's maritime monitoring capability can be greatly improved, which is of great significance for protecting territorial maritime rights and consolidating national defense.
Researchers have studied sea clutter extensively and established a number of classical sea-clutter models based on traditional statistics, but most of these models rest on experience and data fitting and do not reflect the physical mechanism of sea-clutter generation, so describing sea clutter with any single model distribution generalizes poorly. A line of research into the intrinsic dynamic characteristics of sea clutter, built on chaotic dynamics, has therefore developed. Nonlinear theory mainly studies the regularity of a complex object hidden behind an irregular appearance, so the intrinsic dynamic characteristics of sea clutter can be obtained by training an Extreme Learning Machine (ELM) neural network. Before training an ELM neural network, its initialization parameters must first be determined, and these have an important influence on the training result. The common initialization method is random generation, but randomly chosen initialization parameters easily lead to poor training precision and weak generalization, and also affect the stability of the network. To address these problems, a coyote algorithm is introduced to optimize the initial parameters of the ELM neural network: the Coyote Optimization Algorithm (COA) searches for the optimal initial parameters of the network, thereby improving its reliability and stability.
The coyote algorithm is a population-based stochastic algorithm inspired by the social life and behavior of coyotes; it solves practical optimization problems by simulating phenomena such as coyote social life, growth, birth and death, pack eviction, and pack admission. A coyote's position corresponds to a solution of the optimization problem; as the number of iterations increases, the position of the alpha coyote gradually approaches the globally optimal position, and its final position is the optimal solution when optimization ends. The coyote algorithm not only has a distinctive search model and structure and excellent optimization capability, but also shows clear advantages on global optimization problems, and is widely applied in fields such as computing and power systems. Here it is applied to search for the optimal initialization parameters of the ELM neural network, improving the network's reliability and stability.
Although COA is simple to operate and easy to implement, improves the performance of the ELM neural network, and raises training precision, several problems in its optimization process still need improvement. The growth of each coyote is guided by the best coyote in its pack and the pack's cultural tendency; each coyote searches its own region in parallel and is influenced only by its own pack, not by other packs, so growth develops around the pack's best coyote. The quality of that best coyote therefore directly determines the growth of the other coyotes in the pack, and in turn the overall optimization performance of the algorithm. Although this guidance gives COA strong exploitation capability, the degree of information sharing among coyotes in the population is low and information exchange between packs is seriously insufficient, leading to insufficient population diversity in the early stage and insufficient development in the later stage. In addition, in COA the number of pup births is much lower than the number of coyote growth operations, which also leaves the diversity of the coyote population insufficient.
To solve these problems, researchers have proposed a series of improvements to the coyote algorithm. One best-and-worst coyote optimization algorithm strengthens the growth of the worst coyote in each pack: guidance from the globally best coyote is introduced on top of guidance from the pack's best coyote to improve the worst coyote's social adaptability, and a random-perturbation operation is then embedded in the growth of the pack's best coyote, so that growth is driven by random perturbations among coyotes and the activity of every coyote in the pack is exercised. This algorithm enhances population diversity and improves the population's coverage of the solution domain, but does not improve convergence speed. Therefore, to address the coyote algorithm's tendency to fall into local optima and its low convergence speed and precision, a coyote algorithm with inter-pack information sharing and dynamic Lévy mutation is proposed here. A random grouping strategy applied during iteration strengthens information exchange among the sub-populations. During individual growth, a coyote whose fitness is better than the pack's cultural tendency is additionally guided by the globally best coyote, while a coyote whose fitness is worse than the cultural tendency is jointly guided by the globally best coyote and the best coyote of another pack, so that the coyotes of each sub-population approach the feasible region from different directions at a faster convergence speed, greatly accelerating the convergence of COA.
In addition, a Lévy-flight mutation is applied to the best coyote in each pack, enhancing the diversity of the coyote population. The number of mutated dimensions decreases as the evolutionary iterations increase: in the early stage of evolution a coyote mutates in more dimensions and can explore more new regions, while in the later stage fewer dimensions mutate and the located optimal-solution region can be searched finely. The method effectively avoids becoming trapped in local optima, enhances the overall optimization performance and convergence rate, and improves optimization precision, which is of great significance for the optimization of the ELM neural network.
Disclosure of Invention
The invention provides an ELM neural network optimization method based on an improved coyote algorithm, which can enhance the precision and stability of ELM neural network model training.
To achieve this purpose, the invention is realized by the following technical scheme.
The invention relates to an ELM neural network optimization method based on an improved coyote algorithm, which comprises the following steps:
step 1: determining the structure of the ELM neural network: the numbers of input and output nodes according to the problem to be solved, and the number of hidden-layer neuron nodes;
step 2: calculating the number of network parameters to be optimized from the numbers of input-layer and hidden-layer nodes, mapping the optimization target onto the coyote position, initializing the number of packs N_P and the number of coyotes per pack N_C, and initializing the positions of the coyote population;
step 3: normalizing the data, inputting the training data into the network model, and evaluating the fitness value of each current coyote with a fitness function, based on the error between the network output value and the predicted value;
step 4: randomly regrouping the coyote population with probability P_g;
step 5: sorting the coyotes within each pack by fitness value, determining the globally best coyote alpha_all and the pack-best coyote alpha, and computing the cultural tendency cult;
step 6: growing each coyote according to the novel growth formula, evaluating the social adaptability of the grown coyote, and greedily keeping the fitter of the coyote before and after growth;
step 7: birth and death of pups; a pup that survives is given age 0;
step 8: applying the dynamic Lévy mutation to the best coyote in each pack, and greedily keeping the fitter of the coyote before and after mutation;
step 9: evicting coyotes from and admitting coyotes to packs, updating the age of each coyote, and updating the position of the globally best coyote;
step 10: judging whether the given maximum number of iterations has been reached; if so, returning the position parameters of the globally best coyote and mapping them to the initial values of the corresponding ELM network parameters; otherwise returning to step 4.
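The ten steps above can be sketched as a minimal, simplified loop. The Python sketch below replaces the patent's novel growth formula with a plain greedy move toward the pack-best coyote and omits pup birth, Lévy mutation, and pack eviction; all names (`coa_optimize`, `fitness`, `p_group`) are illustrative, not from the patent.

```python
import random

def coa_optimize(fitness, n_dim, n_packs=3, n_per_pack=5, max_iter=50,
                 p_group=0.05, bounds=(-1.0, 1.0), seed=0):
    """Simplified sketch of the improved-COA loop (steps 1-10)."""
    rng = random.Random(seed)
    lo, hi = bounds
    # step 2: initialise coyote positions inside the bounds
    coyotes = [[rng.uniform(lo, hi) for _ in range(n_dim)]
               for _ in range(n_packs * n_per_pack)]
    fits = [fitness(c) for c in coyotes]                 # step 3
    packs = list(range(len(coyotes)))                    # slot -> coyote index
    for t in range(max_iter):
        if rng.random() < p_group:                       # step 4: random regrouping
            rng.shuffle(packs)
        for p in range(n_packs):
            members = packs[p * n_per_pack:(p + 1) * n_per_pack]
            best = min(members, key=lambda i: fits[i])   # step 5: pack alpha
            for i in members:                            # step 6: growth (simplified)
                cand = [ci + rng.random() * (ai - ci)
                        for ci, ai in zip(coyotes[i], coyotes[best])]
                cand = [min(hi, max(lo, x)) for x in cand]
                cf = fitness(cand)
                if cf < fits[i]:                         # greedy selection
                    coyotes[i], fits[i] = cand, cf
    g = min(range(len(coyotes)), key=lambda i: fits[i])
    return coyotes[g], fits[g]                           # step 10: global best
```

On a toy sphere function the loop steadily reduces the best fitness, which is all this sketch is meant to demonstrate.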
The invention is further improved in that: in step 2, the coyote position is n-dimensional data, where n is calculated by the following formula:
n = hidnum*innum + hidnum (1)
where hidnum denotes the number of hidden-layer nodes of the ELM neural network and innum denotes the number of input-layer nodes.
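Equation (1) and the mapping of a coyote position vector onto the network parameters (ω, b) can be illustrated as follows; `coyote_dimension` and `split_position` are hypothetical helper names, and the row-major layout of ω is an assumption.

```python
def coyote_dimension(innum, hidnum):
    # equation (1): hidnum*innum input-to-hidden weights plus hidnum thresholds
    return hidnum * innum + hidnum

def split_position(pos, innum, hidnum):
    """Map a coyote position vector onto (omega, b): omega is the
    input-to-hidden weight matrix, b the hidden-neuron thresholds."""
    assert len(pos) == coyote_dimension(innum, hidnum)
    omega = [pos[r * innum:(r + 1) * innum] for r in range(hidnum)]
    b = pos[hidnum * innum:]
    return omega, b
```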
The invention is further improved in that: when the suburb position is initialized in the step 2, the initialization range is set to be (-1,1),
the invention is further improved in that: in step 3, the data is normalized, and the following formula is adopted as a calculation function:
xi%=(xi-min(x))/(max(x)-min(x)) (2)
in the formula xi% represents the normalized value of the i-th (i ═ 1,2, L, n) data, n represents the number of input data samples, x represents the number of input data samplesiFor the ith sample value, max (x) represents the maximum value of the input sample, and min (x) represents the minimum value of the input sample;
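Formula (2) is ordinary min-max normalization; a small sketch follows (the constant-input guard against division by zero is an addition not discussed in the patent).

```python
def min_max_normalize(x):
    # equation (2): x_i' = (x_i - min(x)) / (max(x) - min(x))
    lo, hi = min(x), max(x)
    if hi == lo:          # guard: constant input would divide by zero
        return [0.0 for _ in x]
    return [(v - lo) / (hi - lo) for v in x]
```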
the fitness value is calculated as:
wherein Y represents the value of the genuine label, YpAnd representing the predicted value of the neural network, wherein N is the number of training data. The invention is further improved in that: in the step 4, the suburb population is randomly grouped to satisfy the following formula:
where t is the current iteration number, Packst-1Represents the grouping situation when the iteration number is t-1, r1Is [0,1 ]]Uniformly distributed random number, PgIs a self-defined random grouping probability.
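The regrouping rule can be sketched as follows, assuming coyote indices are reshuffled wholesale into packs of the original sizes when r1 < P_g; `maybe_regroup` is an illustrative name.

```python
import random

def maybe_regroup(packs, p_g, rng):
    """Keep the iteration-(t-1) grouping unless r1 < Pg, in which case
    the coyote indices are reshuffled into packs of the same sizes."""
    if rng.random() >= p_g:            # r1 >= Pg: grouping unchanged
        return packs
    flat = [i for pack in packs for i in pack]
    rng.shuffle(flat)
    out, k = [], 0
    for pack in packs:
        out.append(flat[k:k + len(pack)])
        k += len(pack)
    return out
```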
The invention is further improved in that: the cultural tendency cult in step 5 is computed dimension by dimension from the pack after ranking by fitness value: for each dimension j (j = 1, 2, …, n), cult_j is the median of the j-th social-condition components of the N_C coyotes in the pack, i.e. the j-th component of the middle-ranked coyote when N_C is odd, and the mean of the j-th components of the two middle-ranked coyotes when N_C is even.
The invention is further improved in that: the symbols of the novel growth formula in step 6 are defined as follows: SOC denotes a coyote's position before growth, cult the pack's cultural tendency, SOC_fit the fitness value of the coyote, cult_fit the fitness value of the cultural tendency, alpha the best coyote in the pack, alpha_all the globally best coyote, alpha_pr the best coyote of another randomly selected pack, and SOC_cr2, SOC_cr3, SOC_cr4 three randomly selected distinct coyotes in the pack; t is the current iteration number, T the maximum iteration number, and r2, r3, r4 are uniformly distributed random numbers in [0, 1];
after a coyote grows, its fitness value is evaluated with the fitness function, and the fitter of the coyote before and after growth is kept greedily: the grown coyote new_SOC replaces SOC when its fitness value new_SOC_fit is better than SOC_fit,
where new_SOC_fit is the fitness value of the grown coyote new_SOC and SOC_fit that of the pre-growth coyote SOC.
The invention is further improved in that: in step 7 the pup born in a pack inherits, in each dimension j (j = 1, 2, …, n), either a gene of one of two distinct randomly chosen parent coyotes or a random value:
where pup_j denotes the j-th social-condition component of the newborn pup, j1 and j2 are two random dimensions of the problem that guarantee the pup inherits genes from both parents, f1 and f2 are the indices of the two distinct random parent coyotes in the pack, SOC_{f1,j} and SOC_{f2,j} denote the j-th social-condition components of parents f1 and f2, and R_j is a random number within the decision-variable range for dimension j; P_s is the scatter probability and P_a the association probability, determined from the dimension n of the problem to be optimized,
following the standard COA as P_s = 1/n and P_a = (1 - P_s)/2.
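The birth rule can be sketched as follows, using the standard COA choices P_s = 1/n and P_a = (1 - P_s)/2; the exact inheritance conditions in the patent's formula image are not reproduced, so this is an assumption modeled on the original COA, and `breed_pup` is an illustrative name.

```python
import random

def breed_pup(parent1, parent2, lo, hi, rng):
    """Pup birth sketch: dimension j1 is guaranteed from parent1 and j2
    from parent2; other dimensions come from parent1 with probability
    Ps, from parent2 with probability Pa's complement band, or are a
    random scatter gene R_j within the decision-variable range."""
    n = len(parent1)
    ps = 1.0 / n                  # scatter probability (standard COA)
    pa = (1.0 - ps) / 2.0         # association probability (standard COA)
    j1, j2 = rng.sample(range(n), 2)
    pup = []
    for j in range(n):
        r = rng.random()
        if j == j1 or (r < ps and j != j2):
            pup.append(parent1[j])
        elif j == j2 or r >= ps + pa:
            pup.append(parent2[j])
        else:
            pup.append(rng.uniform(lo, hi))   # R_j: random scatter gene
    return pup
```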
The invention is further improved in that: step 8 applies the dynamic Lévy mutation to the best coyote in each pack. The number of mutated dimensions N is defined from the current iteration number t, the maximum iteration number T, and the dimension n of the problem to be optimized, where int denotes the integer-truncation function. After the mutated dimensions are randomly selected, the coyote performs the Lévy mutation:
where j1, j2 denote mutation dimensions randomly selected among the n dimensions (j1 ≠ j2 = 1, 2, …, n), SOC'_{j1} denotes the mutated coyote's social-condition component in dimension j1, levy(β) is the Lévy-flight step length with Lévy index β ∈ (0, 2], and the parameters u, v are normally distributed random numbers.
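The Lévy-flight step length levy(β) is commonly generated with Mantegna's algorithm, u ~ N(0, σ_u²), v ~ N(0, 1), levy(β) = u/|v|^(1/β); the sketch below uses that standard construction, which is an assumption since the patent's defining formula image is not reproduced here.

```python
import math
import random

def levy_step(beta, rng):
    """Mantegna's algorithm for a Levy-stable step of index beta in (0, 2]."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # numerator: N(0, sigma_u^2)
    v = rng.gauss(0.0, 1.0)       # denominator: N(0, 1)
    return u / abs(v) ** (1 / beta)
```

The heavy-tailed steps give occasional long jumps, which is what lets the mutated coyote escape local optima.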
the invention is further improved in that: the probability formula of the suburb wolf being driven and accommodated in the step 9 is as follows:
in the formula, PeProbability of the suburb being randomly driven and admitted, NCThe number of the suburbs in each group. Then the age of each suburb is updated, and the position of the global optimal suburb is updated.
The beneficial effects of the invention are: 1. the improved coyote algorithm finds the optimal initialization parameters for the ELM neural network, enhancing the precision and stability of ELM model training;
2. during iteration the coyotes are randomly regrouped with a given probability, which retains the advantage of parallel search by multiple packs while strengthening communication and information sharing across the whole population;
3. the novel growth formula lets the globally best coyote assist the growth of a coyote whose fitness value is better than the pack's cultural tendency, and lets the globally best coyote and the best coyote of another pack jointly guide the growth of a coyote whose fitness value is worse than the cultural tendency, enlarging the search range of the population and balancing the exploitation and exploration capabilities of the algorithm;
4. the dynamic Lévy mutation of the pack-best coyote mutates more dimensions in early iterations, exploring more new regions, and fewer dimensions in later iterations, allowing fine search of the optimal-solution region and enhancing the diversity of the population;
5. the invention effectively avoids becoming trapped in local optima, achieves a better balance between global and local search, enhances the overall optimization performance and convergence rate, and improves optimization precision.
Drawings
FIG. 1 is a flow chart of the ELM neural network optimization method based on the improved coyote algorithm according to the present invention;
FIG. 2 is a topology diagram of the ELM neural network.
Detailed Description
Exemplary embodiments of the present invention will be described below with reference to the accompanying drawings. It is to be understood that the drawings and the described embodiments are merely exemplary in nature and are intended to illustrate the principles of the invention and not to limit the scope of the invention.
The ELM neural network optimization method based on an improved coyote algorithm according to the invention comprises the following specific steps:
step 1: determine the topology of the ELM neural network. The ELM neural network has a simple three-layer structure: an input layer, a hidden layer, and an output layer. The numbers of input and output nodes are determined by the actual problem, and the number of hidden-layer nodes can be chosen through extensive testing according to the requirements of the actual problem. FIG. 2 shows the topology of the ELM neural network, illustrated by an ELM neural network model for sea-clutter prediction;
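For reference, a basic ELM of the kind described here can be trained in closed form: random input weights ω and thresholds b (the parameters COA later optimizes), a sigmoid hidden layer, and output weights solved by the Moore-Penrose pseudo-inverse. The sketch below is a minimal generic ELM, not the patent's exact model; `elm_train`/`elm_predict` are illustrative names.

```python
import numpy as np

def elm_train(X, Y, hidnum, rng):
    """Train a basic ELM: random input weights/thresholds, sigmoid hidden
    layer, output weights solved by the Moore-Penrose pseudo-inverse."""
    innum = X.shape[1]
    omega = rng.uniform(-1, 1, size=(hidnum, innum))  # parameters COA would optimise
    b = rng.uniform(-1, 1, size=hidnum)
    H = 1.0 / (1.0 + np.exp(-(X @ omega.T + b)))      # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                      # least-squares output weights
    return omega, b, beta

def elm_predict(X, omega, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ omega.T + b)))
    return H @ beta
```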
step 2: determine the dimension of the coyote position from the network structure parameters. The parameters to be optimized by the coyote algorithm comprise the connection weights ω between the input layer and the hidden layer and the thresholds b of the hidden-layer neurons. With innum input-layer nodes and hidnum hidden-layer nodes in the ELM neural network, the coyote position is n-dimensional data, where n is calculated by the following formula:
n = hidnum*innum + hidnum
The coyote algorithm optimizes the initialization parameters of the ELM neural network: the position parameters of the globally best coyote obtained by the algorithm are mapped onto the initialization parameters of the network, so the dimension of the coyote position equals the number of parameters to be optimized.
Initialize the number of packs N_P and the number of coyotes per pack N_C, and initialize the coyote positions. The initialization range of the position parameters corresponding to the connection weights ω between the input layer and the hidden layer and the thresholds b of the hidden-layer neurons is set to (-1, 1).
step 3: normalize the data, using the following formula as the normalization function:
x_i' = (x_i - min(x)) / (max(x) - min(x))
where x_i' denotes the normalized value of the i-th datum (i = 1, 2, …, n), n denotes the number of input data samples, x_i is the i-th sample value, max(x) denotes the maximum of the input samples, and min(x) the minimum.
Input the training data into the network model and estimate the fitness value of each current coyote with the fitness function, based on the error between the network output value and the predicted value,
where Y denotes the true label value, Y_p the neural-network prediction, and N the number of training data.
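The patent text does not reproduce the fitness formula itself; root-mean-square error between the true labels Y and the network predictions Y_p over the N training samples is a common choice for this role, and is used in the sketch below as an assumption.

```python
import math

def fitness(y_true, y_pred):
    """RMSE over the N training samples -- an assumed stand-in for the
    patent's fitness function, which is not reproduced in the text."""
    n = len(y_true)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)
```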
step 4: randomly regroup the coyote population with probability P_g, the user-defined random-grouping probability, here set to P_g = 0.05. Random regrouping retains the advantage of parallel search by multiple packs while strengthening communication and information sharing across the whole population. The regrouping obeys: when r1 < P_g the population is randomly regrouped, and otherwise the grouping from iteration t-1 is kept,
where t is the current iteration number, Packs_{t-1} denotes the grouping at iteration t-1, and r1 is a uniformly distributed random number in [0, 1].
step 5: sort the coyotes within each pack by fitness value, determine the globally best coyote alpha_all and the pack-best coyote alpha, and compute the cultural tendency cult:
for each dimension j (j = 1, 2, …, n), cult_j is the median of the j-th social-condition components of the N_C coyotes in the pack after ranking by fitness value.
Step 6: adopt novel growth formula, to the indifferent suburb individual that fitness value is superior to group's serialization trend, introduce global optimum suburb helping hand suburb and grow up, to the indifferent suburb individual of group's serialization trend of fitness value, introduce global optimum suburb and the indifferent suburb of other groups and lead the suburb to grow up jointly, increased the search range of population, balanced the mining ability and the search capacity of algorithm, adopt the following formula as the growth formula of suburb:
δ2=cult-SOCcr2
δ4=alphapr-SOCcr4
in the formula, SOC represents the position before the growth of the suburb wolf, cult represents the group culture trend, SOC _ fit represents the adaptability value of the suburb wolf individual, cult _ fit represents the adaptability value of the group culture trend, alpha represents the optimal suburb wolf in the group, and alpha represents the optimal suburb wolf in the groupallRepresenting the globally optimal suburb, alphaprRepresenting optimal suburb, SOC of other randomly selected groupscr2,SOCcr3,SOCcr4Three randomly selected different wolfs in the group are represented, T is the current iteration number, T is the maximum iteration number, r2,r3,r4Is [0,1 ]]Uniformly distributed random numbers within the range.
After a coyote grows, its fitness value is evaluated with the fitness function, and the fitter of the coyote before and after growth is kept greedily: the grown coyote new_SOC replaces SOC when its fitness value new_SOC_fit is better than SOC_fit,
where new_SOC_fit is the fitness value of the grown coyote new_SOC and SOC_fit that of the pre-growth coyote SOC.
step 7: birth and death of pups. A newborn coyote derives from randomly selected parent coyotes and the influence of environmental factors.
In the birth rule, pup_j denotes the j-th social-condition component (j = 1, 2, …, n) of the pup born in the pack; j1, j2 are two random dimensions of the problem, and f1, f2 are the indices of two distinct random parent coyotes in the pack, guaranteeing that the pup inherits genes from both parents. SOC_{f1,j} and SOC_{f2,j} denote the j-th social-condition components of parents f1 and f2, and R_j is a random number within the decision-variable range for dimension j. P_s is the scatter probability and P_a the association probability, determined from the dimension n of the problem to be optimized (in the standard COA, P_s = 1/n and P_a = (1 - P_s)/2).
After birth, the pup's social adaptability is evaluated; if it survives, its age is set to 0.
step 8: apply the dynamic Lévy mutation to the best coyote in each pack. The coyote mutates in more dimensions in the early iterations, exploring more new regions; in later iterations the number of mutated dimensions decreases, allowing a fine search of the optimal-solution region and enhancing the diversity of the population. The number of mutated dimensions N is defined from the current iteration number t, the maximum iteration number T, and the dimension n of the problem to be optimized, where int denotes the integer-truncation function.
After the mutated dimensions are randomly selected, the coyote performs the Lévy mutation:
where j1, j2 denote the mutation dimensions randomly selected among the n dimensions (j1 ≠ j2 = 1, 2, …, n), SOC'_{j1} denotes the mutated coyote's social-condition component in dimension j1, levy(β) is the Lévy-flight step length with Lévy index β ∈ (0, 2], and the parameters u, v are normally distributed random numbers.
The suburb wolves before and after mutation are retained greedily. The dynamic Levy mutation strategy gives the suburb wolf more mutation dimensions in the early stage of iteration, helping it explore more new regions, and fewer mutation dimensions in the later stage, allowing a fine search of the region of the optimal solution and enhancing the diversity of the population.
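The Levy flight step length with the u, v definition above is conventionally generated with Mantegna's algorithm; a sketch under that assumption (names are illustrative):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Levy flight step via Mantegna's algorithm: Levy(beta) = u / |v|**(1/beta),
    with u ~ N(0, sigma_u**2) and v ~ N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)
```

Most steps are small, but the heavy tail of the Levy distribution occasionally produces a large jump, which is what lets the mutated suburb wolf escape local optima.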
Step 9: a suburb wolf is expelled from its group and admitted into another group with probability P_e; this random expulsion and admission ensures the diversity of the suburb wolf groups. The following formula is adopted for the probability of a suburb wolf being expelled and admitted:

P_e = 0.005·N_C²

In the formula, P_e is the probability of the suburb wolf being randomly expelled and admitted, and N_C is the number of suburb wolves in each group. The age of each suburb wolf is then updated, and the position of the globally optimal suburb wolf is updated.
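Assuming the standard Coyote Optimization Algorithm constant (P_e = 0.005·N_C², capped at 1), the expulsion probability can be sketched as:

```python
def eviction_probability(n_c):
    """Probability that a suburb wolf (coyote) is expelled from its group and
    admitted into another; 0.005 * Nc**2 is the standard COA value, capped at 1
    (the patent's exact constant is an assumption here)."""
    return min(1.0, 0.005 * n_c ** 2)
```

For the typical COA pack size of 14 or fewer coyotes this stays below 1, so expulsion remains a rare, diversity-preserving event rather than the dominant dynamic.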
Step 10: judging whether the given maximum iteration number is reached. If the current iteration number is smaller than the maximum iteration number, return to step 4 to continue the optimization update; otherwise, return the position of the globally optimal suburb wolf, namely the position of the suburb wolf with the minimum fitness value, as the global optimal solution, then map it back to the parameter form of the neural network and use it as the initialization parameters of the ELM neural network, thereby realizing the optimization of the ELM neural network.
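Mapping the best position back to ELM parameters and completing the ELM training (output weights solved with the Moore-Penrose pseudo-inverse, the standard ELM step) can be sketched as follows; the sigmoid activation and the layout "first hidnum*innum entries = input weights, last hidnum entries = biases" are assumptions consistent with the parameter count n = hidnum*innum + hidnum of claim 2:

```python
import numpy as np

def decode_and_train_elm(best_soc, innum, hidnum, X, Y):
    """Decode the optimal suburb-wolf position into ELM input weights W and
    hidden biases b, then solve the output weights beta via pseudo-inverse."""
    W = best_soc[:hidnum * innum].reshape(hidnum, innum)       # input weights
    b = best_soc[hidnum * innum:hidnum * innum + hidnum]       # hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))                   # hidden output
    beta = np.linalg.pinv(H) @ Y                               # output weights
    return W, b, beta
```

Because only W and b are evolved while beta is computed in closed form, the suburb-wolf search space stays at n = hidnum*innum + hidnum dimensions.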
The above embodiments further illustrate the objects, technical solutions and advantages of the present invention; they are intended only to explain its principles and help the reader understand its design idea. It should be understood that the protection scope of the present invention is not limited to these specific embodiments, and any modifications and equivalents made within the principles of the present invention fall within the protection scope of the present invention.
Claims (10)
1. An ELM neural network optimization algorithm based on an improved suburb algorithm, characterized in that it comprises the following steps:
step 1: determining the structure of the ELM neural network, determining the number of input and output nodes of the neural network according to the solved problem, and determining the number of hidden layer neuron nodes;
step 2: calculating the number of network parameters to be optimized according to the numbers of nodes of the input layer and the hidden layer, mapping the target to be optimized to the position of the suburb wolf, initializing the number of groups N_P and the number of suburb wolves in each group N_C, and initializing the positions of the suburb wolf population;
step 3: normalizing the data, inputting the training data into the network model, and evaluating the fitness value of each current suburb wolf individual with the fitness function according to the error between the network output value and the true value;
step 4: randomly grouping the suburb wolf population with probability P_g;
step 5: sorting the suburb wolf individuals in each group according to fitness value, determining the globally optimal suburb wolf alpha_all and the optimal suburb wolf alpha in each group, and calculating the cultural trend cult;
step 6: growing the suburb wolves according to the novel growth formula, evaluating the social adaptability of the grown suburb wolves, and greedily selecting the individuals with better adaptability;
step 7: birth and death of young wolves, where the age of a surviving young wolf is set to 0;
step 8: performing the dynamic Levy mutation operation on the optimal suburb wolf in each group, and greedily retaining the better of the suburb wolves before and after mutation;
step 9: expelling suburb wolves from their groups and admitting them into other groups, updating the age of each suburb wolf, and updating the position of the globally optimal suburb wolf;
step 10: judging whether the given maximum iteration number is reached; if so, returning the position parameters of the globally optimal suburb wolf and mapping them to the initial values of the corresponding parameters of the ELM neural network; otherwise, returning to step 4.
2. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: in step 2, the position of the suburb wolf is n-dimensional data, where n is calculated by the following formula:
n=hidnum*innum+hidnum (1)
in the formula, hidnum represents the number of hidden-layer nodes of the ELM neural network, and innum represents the number of input-layer nodes.
3. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: when the suburb position is initialized in the step 2, the initialization range is set to be (-1, 1).
4. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: the data is normalized using the following formula as the calculation function:

x_i% = (x_i - min(x))/(max(x) - min(x)) (2)

in the formula, x_i% represents the normalized value of the i-th (i = 1, 2, …, n) data sample, n represents the number of input data samples, x_i is the i-th sample value, max(x) represents the maximum value of the input samples, and min(x) represents the minimum value of the input samples;
the fitness value is calculated as:

fit = (1/N)·Σ_{i=1}^{N} (Y_i - Y_p,i)² (3)

wherein Y represents the true label value, Y_p represents the predicted value of the neural network, and N is the number of training data.
5. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: in step 4, the random grouping of the suburb wolf population satisfies the following formula:

Packs_t = randomly regrouped packs, if r1 < P_g; Packs_t = Packs_{t-1}, otherwise

where t is the current iteration number, Packs_{t-1} represents the grouping when the iteration number is t-1, r1 is a uniformly distributed random number in [0, 1], and P_g is a user-defined random grouping probability.
6. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: in step 5, the cultural trend cult is calculated as the median, in each dimension, of the social statuses of all suburb wolves in the group:

cult_j = O_((N_C+1)/2),j, if N_C is odd; cult_j = (O_(N_C/2),j + O_((N_C/2)+1),j)/2, otherwise

where O is the matrix of the social statuses of the group ranked within each dimension j, and N_C is the number of suburb wolves in the group.
7. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: the new growth formula in step 6 is defined as follows:
in the formula, SOC represents the position of the suburb wolf before growth, cult represents the group cultural trend, SOC_fit represents the fitness value of the suburb wolf individual, cult_fit represents the fitness value of the group cultural trend, alpha represents the optimal suburb wolf in the group, alpha_all represents the globally optimal suburb wolf, alpha_pr represents the optimal suburb wolf of another randomly selected group, SOC_cr2, SOC_cr3 and SOC_cr4 represent three different randomly selected suburb wolves in the group, t is the current iteration number, T is the maximum iteration number, and r2, r3, r4 are uniformly distributed random numbers in the range [0, 1];
after the suburb wolf grows, the fitness function is used to evaluate its fitness value, and the individual with better adaptability before and after growth is greedily selected, as defined by the following formula:

SOC = new_SOC, if new_SOC_fit < SOC_fit; SOC = SOC, otherwise

in the formula, new_SOC_fit is the fitness value of the grown suburb wolf individual new_SOC, and SOC_fit is the fitness value of the pre-growth suburb wolf individual SOC.
8. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: step 7 uses the following formula as the birth formula of young wolves in the group:

pup_j = soc_j^f1, if rnd_j < P_s or j = j1
pup_j = soc_j^f2, if rnd_j >= P_s + P_a or j = j2
pup_j = R_j, otherwise

in the formula, pup_j represents the social status factor in the j-th (j = 1, 2, …, n) dimension of the young wolf born in the group, j1 and j2 are two random dimensions of the problem, f1 and f2 are the indices of two different randomly selected parent suburb wolves in the group, ensuring that the young wolf inherits the genes of both parents, soc_j^f1 denotes the social status factor of suburb wolf f1 in dimension j, soc_j^f2 denotes the social status factor of suburb wolf f2 in dimension j, rnd_j is a uniformly distributed random number in [0, 1], R_j is a random number for the j-th social status factor within the decision variable range, and P_s and P_a are the dispersion probability and the association probability, determined by the following formula:

P_s = 1/n, P_a = (1 - P_s)/2

in the formula, n represents the dimension of the problem to be optimized.
9. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: in the dynamic Levy mutation operation performed on the optimal suburb wolf in each group in step 8, the following formula is used to define the mutation dimension N:

in the formula, t is the current iteration number, T is the maximum iteration number, n represents the dimension of the problem to be optimized, and int denotes rounding to an integer; after the mutation dimensions are randomly selected, the following formula is adopted for the Levy mutation performed by the suburb wolf:

in the formula, j1 and j2 represent the mutation dimensions randomly selected by the suburb wolf individual among the n dimensions, j1 ≠ j2, j1, j2 = 1, 2, …, n, soc_j1 denotes the social status factor of the mutated suburb wolf in dimension j1, Levy(β) is the Levy flight step length with Levy index β ∈ (0, 2], computed as Levy(β) = u/|v|^(1/β), where the parameters u and v are normally distributed random numbers subject to the following definition:

u ~ N(0, σ_u²), v ~ N(0, 1), σ_u = {Γ(1+β)·sin(πβ/2)/[Γ((1+β)/2)·β·2^((β-1)/2)]}^(1/β)
10. The ELM neural network optimization algorithm based on the improved suburb algorithm as claimed in claim 1, wherein: the probability that a suburb wolf is expelled from its group and admitted into another in step 9 is given by the following formula:

P_e = 0.005·N_C²

in the formula, P_e is the probability of the suburb wolf being randomly expelled and admitted, and N_C is the number of suburb wolves in each group; the age of each suburb wolf and the position of the globally optimal suburb wolf are then updated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111365807.1A CN114004326B (en) | 2021-11-17 | 2021-11-17 | ELM neural network optimization method based on improved suburban wolf algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111365807.1A CN114004326B (en) | 2021-11-17 | 2021-11-17 | ELM neural network optimization method based on improved suburban wolf algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114004326A true CN114004326A (en) | 2022-02-01 |
CN114004326B CN114004326B (en) | 2024-05-28 |
Family
ID=79929359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111365807.1A Active CN114004326B (en) | 2021-11-17 | 2021-11-17 | ELM neural network optimization method based on improved suburban wolf algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114004326B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115270079A (en) * | 2022-09-26 | 2022-11-01 | 中国人民解放军海军工程大学 | Group intelligent high-efficiency energy-saving cooperative positioning method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019140725A1 (en) * | 2018-01-18 | 2019-07-25 | 东莞理工学院 | Method for smart optimisation of wolfpack behaviour simulation calculation |
WO2020136258A1 (en) * | 2018-12-27 | 2020-07-02 | Thales | Device for generating a simulated sea-clutter data set, and associated method and computer program |
CN112085147A (en) * | 2020-09-16 | 2020-12-15 | 北京邮电大学 | Feature selection method based on improved suburb optimization algorithm |
CN113240069A (en) * | 2021-05-14 | 2021-08-10 | 江苏科技大学 | RBF neural network optimization method based on improved Harris eagle algorithm |
CN113702843A (en) * | 2021-07-26 | 2021-11-26 | 南通大学 | Lithium battery parameter identification and SOC estimation method based on suburb optimization algorithm |
- 2021-11-17 CN CN202111365807.1A patent/CN114004326B/en active Active
Non-Patent Citations (2)
Title |
---|
ZHANG Xianzhi: "Research on sea-clutter suppression methods for high-frequency ground-wave radar based on neural network models", China Master's Theses Full-text Database, Engineering Science and Technology II, 1 March 2024 (2024-03-01) *
ZHANG Xinming; LI Shuangqian; LIU Yan; MAO Wentao; LIU Shangwang; LIU Guoqi: "Coyote optimization algorithm with information-sharing model and outer-group greedy strategy", Computer Science, vol. 47, no. 05, 31 May 2020 (2020-05-31) *
Also Published As
Publication number | Publication date |
---|---|
CN114004326B (en) | 2024-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gharehchopogh | An improved Harris Hawks optimization algorithm with multi-strategy for community detection in social network | |
Ergezer et al. | Oppositional biogeography-based optimization | |
Hasanzadeh-Mofrad et al. | Learning automata clustering | |
Liu et al. | A multiobjective framework for many-objective optimization | |
CN113240068A (en) | RBF neural network optimization method based on improved ant lion algorithm | |
CN108629400A (en) | A kind of chaos artificial bee colony algorithm based on Levy search | |
Yeh et al. | A novel hybrid clustering approach based on K-harmonic means using robust design | |
Fei et al. | Research on data mining algorithm based on neural network and particle swarm optimization | |
Wang et al. | A feature selection method by using chaotic cuckoo search optimization algorithm with elitist preservation and uniform mutation for data classification | |
Zhao et al. | A multipopulation cooperative coevolutionary whale optimization algorithm with a two-stage orthogonal learning mechanism | |
Chen et al. | Evolutionary clustering with differential evolution | |
CN114004326A (en) | ELM neural network optimization method based on improved suburb algorithm | |
Wang et al. | Cooperative velocity updating model based particle swarm optimization | |
Tian et al. | A multi-granularity clustering based evolutionary algorithm for large-scale sparse multi-objective optimization | |
Zheng et al. | Cluster head selection strategy of WSN based on binary multi-objective adaptive fish migration optimization algorithm | |
Pan et al. | Semisupervised SVM by hybrid whale optimization algorithm and its application in oil layer recognition | |
Cui et al. | Quantum-inspired moth-flame optimizer with enhanced local search strategy for cluster analysis | |
Wang et al. | Dynamic multiobjective squirrel search algorithm based on decomposition with evolutionary direction prediction and bidirectional memory populations | |
Zhang et al. | Dynamic Multi‐Swarm Differential Learning Quantum Bird Swarm Algorithm and Its Application in Random Forest Classification Model | |
Wu et al. | Hybrid intelligent deep kernel incremental extreme learning machine based on differential evolution and multiple population grey wolf optimization methods | |
Wen et al. | Adaptive tree-like neural network: Overcoming catastrophic forgetting to classify streaming data with concept drifts | |
Wu et al. | Historical information-based differential evolution for dynamic optimization problem | |
Zheng et al. | Adaptive particle Swarm optimization algorithm ensemble model applied to classification of unbalanced data | |
Xiao et al. | Locally informed gravitational search algorithm with hierarchical topological structure | |
Xue et al. | Optimizing neural network classification by using the Cuckoo algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||