CN107563518A - Learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm - Google Patents

A learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm

Info

Publication number
CN107563518A
Authority
CN
China
Prior art keywords
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710818504.8A
Other languages
Chinese (zh)
Inventor
续欣莹
徐晨晨
陈琪
谢珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN201710818504.8A priority Critical patent/CN107563518A/en
Publication of CN107563518A publication Critical patent/CN107563518A/en
Pending legal-status Critical Current

Landscapes

  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

A learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm. It addresses the problem that, because the existing extreme learning machine (ELM) assigns its input weights and biases at random, it usually needs a large number of hidden-layer nodes to reach satisfactory accuracy. The method comprises: 1. initializing the individuals of the population; 2. computing the output weights and fitness value of each individual; 3. individual target selection and population grouping; 4. classifying the individuals in population Gk and executing the corresponding search mechanism; 5. updating the velocities and positions of the non-free individuals; 6. updating the individual historical traces; 7. cooperation between individuals; 8. checking the termination condition. The invention better overcomes the drawback of the original ELM of assigning weights and biases at random; compared with algorithms such as PSO-ELM, IPSO-ELM, DE-ELM and SaE-ELM on classification and regression problems, the proposed algorithm obtains more accurate results and improves the stability and generalization ability of ELM.

Description

A learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm
Technical field
The invention belongs to the field of artificial intelligence, relates to an improved extreme learning machine method, and in particular to a learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm.
Background technology
Traditional gradient-descent-based neural networks (such as the BP neural network) have been widely used to train multilayer feedforward neural networks, but such networks suffer from slow convergence, easy entrapment in local minima, and complex parameter tuning in different application scenarios. To overcome these shortcomings, in 2004 Huang et al. ("HUANG G B, ZHU Q Y, SIEW C K. Extreme learning machine: theory and applications. Neurocomputing, 2006, 70(1-3): 489-501") proposed a new artificial neural network model, the extreme learning machine (ELM). It is a new learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) based on the theory of the Moore-Penrose (MP) generalized inverse matrix. When ELM is applied, the number of hidden-layer nodes must be preset and the parameters of the hidden nodes (the connection weights between the input nodes and the hidden nodes, and the thresholds of the hidden nodes) are initialized at random; during execution the input weights of the network and the thresholds of the hidden nodes need not be adjusted, so the learning speed of the network is greatly increased. "Cao J, Lin Z, Huang G B. Self-adaptive evolutionary extreme learning machine. Neural Processing Letters, 2012, 36(3): 285-305" pointed out that randomly generated hidden-node parameters may leave some hidden nodes of the network ineffective or nearly useless, which makes the stability and generalization ability of ELM poor. Huang et al. also noted that, to obtain satisfactory error precision, ELM usually needs a relatively large number of hidden nodes.
In the past several years, swarm intelligence optimization algorithms, as global optimization methods, have been widely used to optimize the parameters of neural networks, and many papers use swarm intelligence optimization of ELM parameters to improve the performance of SLFNs. "Miche Y, Sorjamaa A, Bas P, et al. OP-ELM: optimally pruned extreme learning machine. IEEE Trans on Neural Networks, 2010, 21(1): 158-162" verified that extreme learning machines based on swarm intelligence perform well on regression and classification problems. For example, "You Xu, Yang Shu. Evolutionary extreme learning machine based on particle swarm optimization. International Symposium on Neural Networks 2006 (ISNN 2006), LNCS, vol. 3971, 2006, pp. 644-652" proposed the particle swarm extreme learning machine (PSO-ELM), which combines particle swarm optimization with ELM to improve the performance of SLFNs and achieves good results, but the PSO-based method has poor global search ability and easily falls into local optima. "Han F, Yao H F, Ling Q H. An improved evolutionary extreme learning machine based on particle swarm optimization. Neurocomputing, 2013, 116: 87-93" described an improved particle swarm extreme learning machine (IPSO-ELM), which uses an improved particle swarm algorithm to optimize the hidden-node parameters of the extreme learning machine; its performance improves further on PSO-ELM, but the weight of IPSO-ELM decreases linearly, which reduces the diversity of the population in the later stage. The ICS-ELM proposed in "P. Mohapatra, S. Chakravarty, P. K. Dash. An improved cuckoo search based extreme learning machine for medical data classification. Swarm and Evolutionary Computation 24 (2015) 25-49" optimizes the hidden-node parameters of SLFNs with an improved cuckoo search algorithm and obtains the output weights of SLFNs with the ELM algorithm, thereby improving the performance of SLFNs. "Qin A K, Huang V L, Suganthan P N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans on Evolutionary Computation, 2009, 13(2): 398-417" proposed the differential evolution extreme learning machine (DE-ELM), which optimizes the input parameters of the network with the differential evolution (DE) algorithm and computes the output weights of the network with the ELM algorithm, but the DE-based method cannot adaptively adjust its parameters and selection strategy and also easily falls into local minima. "Cao J, Lin Z, Huang G B. Self-adaptive evolutionary extreme learning machine. Neural Processing Letters, 2012, 36(3): 285-305" proposed SaE-ELM, which optimizes the hidden-node parameters of SLFNs with a self-adaptive DE algorithm and then obtains the output connection weights of the network with the Moore-Penrose (MP) generalized inverse. SaE-ELM exhibits several very good characteristics, but it converges slowly and its generalization performance is not ideal.
Summary of the invention
The conventional extreme learning machine needs a large number of hidden-layer nodes to reach satisfactory precision, and ELM responds poorly to samples that did not appear in the training set, i.e., its generalization ability is insufficient. To address these shortcomings, the present invention proposes, on the basis of the conventional extreme learning machine, a learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm (SFSO-ELM), which effectively improves the stability and generalization of the traditional ELM.
The technical scheme of the invention is as follows:
Given N distinct samples (xi, yi), i = 1, 2, …, N, where xi and yi denote the input and output of the i-th sample, xi = (xi1, xi2, …, xin)T ∈ Rn, yi = (yi1, yi2, …, yim)T ∈ Rm, T denotes transposition, R is the set of real numbers, and n and m are the feature dimensions of the input and output; the activation function of the hidden neurons is g(·) and the number of hidden nodes is L. The method is characterized by comprising the following steps:
Step 1: Initialize the individuals of the population
The position of a pedestrian represents one solution of the optimization problem, and N pedestrians are initialized in the search space. Each pedestrian carries the following information: velocity v, current position p, historical trace h, and the social force F acting on it; the initial velocity, historical trace, and social force are all set to zero vectors at initialization. The current position pα = (pα,1, pα,2, …, pα,D) of pedestrian α is initialized according to the following formula:
pα,i=li+rand·(ui-li) (1)
where pα,i is the i-th component of pα, i = 1, 2, …, D, D is the dimension of the search space, rand is a random number in [0, 1], and ui and li are the upper and lower bounds of the i-th dimension of the search space.
The initialized population serves as the first generation; each individual is encoded as follows:
θi,G=[w1,(i,t),...,wL,(i,t),b1,(i,t),...,bL,(i,t)] (2)
where wj and bj denote the j-th input weight and hidden-layer bias of the individual, j = 1, …, L; wj and bj are generated randomly by formula (1); i = 1, 2, …, N; t denotes the iteration number.
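As a rough illustration of Step 1 and the encoding of formula (2), the following Python sketch draws each component by formula (1); the function names are our own, and the bounds li = −1, ui = 1 are taken from the bounds check described in Step 5:

```python
import random

def init_individual(D, lower=-1.0, upper=1.0):
    """Formula (1): p[i] = l_i + rand * (u_i - l_i), drawn per dimension."""
    return [lower + random.random() * (upper - lower) for _ in range(D)]

def init_population(N, L, n_inputs):
    """Formula (2): each individual encodes L input-weight vectors of
    n_inputs values each, followed by L hidden-layer biases."""
    D = L * n_inputs + L  # dimension of the search space
    return [init_individual(D) for _ in range(N)]

population = init_population(N=5, L=4, n_inputs=3)
```

Each individual is thus a flat vector of length L·n + L, which keeps the swarm operators independent of the network structure.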
Step 2: Compute the output weights and fitness values
In the extreme learning machine algorithm:
Hwo=Y (3)
where H is the hidden-layer output matrix of the network, wo is the output weight matrix, and Y is the network output matrix. The entry of H in row i and column j is g(wj·xi + bj), where g(·) is the activation function and wj and bj are the j-th input weight vector and hidden-layer bias, j = 1, …, L; L is the number of hidden nodes, N is the number of samples, m is the feature dimension of the sample outputs, and T denotes transposition, so H is an N×L matrix and Y is N×m.
Because the input weights and hidden-layer biases are assigned at random, the hidden-layer output matrix H becomes a fixed matrix, and training the feedforward network reduces to solving for the least-squares solution of the output weight matrix: once this least-squares solution is obtained, training is complete. For each individual, the corresponding minimum-norm output weight matrix is computed according to equation (4):
Wo=H+Y (4)
where H+ denotes the Moore-Penrose generalized inverse of the hidden-layer output matrix H and Y is the network output matrix.
For classification problems, the fitness value of each individual is computed by formula (5); for regression problems, the root-mean-square error of formula (6) is used as the fitness.
In formula (5), misclassCount is the number of validation-set samples whose predicted class differs from the true class, and nv is the number of validation samples.
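Step 2 can be sketched in Python as follows. This is a sketch under assumptions: the patent does not fix the activation function, so a sigmoid g is assumed, and the function and variable names are illustrative, not the patent's:

```python
import numpy as np

def elm_fitness(individual, X, Y, L, task="regression"):
    """Decode an individual (formula (2)), solve Hwo = Y via the
    Moore-Penrose pseudoinverse (formula (4)), and return the fitness:
    RMSE for regression (formula (6)) or the misclassification rate
    for classification (formula (5))."""
    n = X.shape[1]
    W = individual[:L * n].reshape(L, n)       # input weights
    b = individual[L * n:L * n + L]            # hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))   # sigmoid activation g(.)
    wo = np.linalg.pinv(H) @ Y                 # formula (4): wo = H+ Y
    pred = H @ wo
    if task == "classification":
        miss = np.sum(np.argmax(pred, axis=1) != np.argmax(Y, axis=1))
        return miss / len(Y), wo               # formula (5)
    return float(np.sqrt(np.mean((pred - Y) ** 2))), wo  # formula (6)
```

Because H is fixed once the individual is decoded, the only trained quantity is wo, which is why the swarm only searches over input weights and biases.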
Step 3: Target selection and population grouping
1) Each pedestrian selects a target from the historical traces by a probabilistic selection mechanism; the probability probα that the historical trace of pedestrian α is selected into the target set T is given by formula (7), where fitness(hα) is the fitness value of the historical trace of pedestrian α, and max(fitness(h)) and min(fitness(h)) are the maximum and minimum fitness values over the pedestrians' historical traces.
2) When rand ≤ probα, where rand is a random number in [0, 1], the historical trace hα of pedestrian α is selected into the target set T. After T is determined, pedestrian α computes its distance to each target in the set and chooses the nearest one as its target Tα; pedestrians with the same target are grouped into one sub-population Gk, k = 1, 2, …, |T|.
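The Step 3 selection-and-grouping logic can be sketched as follows. This is an assumption-laden sketch: the expression of formula (7) did not survive extraction, so a linear min-max scaling of the fitness (best trace gets probability 1, worst gets 0) is assumed, and all names are our own:

```python
import random

def select_targets(traces, fitness):
    """Probabilistic selection into the target set T (formula (7) assumed
    to be a linear min-max scaling; fitness is minimized)."""
    fmax, fmin = max(fitness), min(fitness)
    span = (fmax - fmin) or 1.0
    T = [h for h, f in zip(traces, fitness)
         if random.random() <= (fmax - f) / span]
    return T or [traces[fitness.index(fmin)]]  # keep at least the best trace

def assign_to_groups(positions, targets):
    """Each pedestrian joins the sub-population G_k of its nearest target."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [min(range(len(targets)), key=lambda k: dist(p, targets[k]))
            for p in positions]
```

Grouping by nearest target is what partitions the swarm into the sub-populations Gk used in Step 4.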
Step 4: Classify the individuals in population Gk and execute the corresponding search mechanisms
1) The probability ρα that a pedestrian in population Gk becomes a free individual is obtained by formula (8), where RDisα is the relative distance between pedestrian α and its target. When a random number rand ∈ [0, 1] satisfies rand ≤ ρα, pedestrian α becomes a free individual; otherwise it is a non-free individual.
2) After the classification is complete, the two kinds of individuals search the solution space with different mechanisms: a pedestrian α that has become a free individual abandons its current position pα and performs a random search by formula (1), which improves the global search ability of the algorithm, while a non-free individual searches toward its selected target under the drive of the social force.
Step 5: Update the velocities and positions of the non-free individuals
The velocity and position of each pedestrian under the social force are updated according to formulas (9) and (10). Each dimension of a pedestrian's position is restricted to [−1, 1]; a bounds check is performed on each individual, and any component outside [−1, 1] is re-assigned by formula (1).
1) The velocity of pedestrian α is updated by formula (9), where vα(t+Δt) and vα(t) are the velocities of pedestrian α at generations t+Δt and t, the two force terms are the expected force and the repulsive force acting on the pedestrian at generation t, t denotes the current iteration number, and Δt is 1.
2) The position of pedestrian α is updated as follows:
pα(t+Δt) = pα(t) + vα(t)Δt (10)
where pα(t+Δt) and pα(t) are the positions of pedestrian α at generations t+Δt and t, and vα(t) is its velocity at generation t.
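A minimal sketch of the Step 5 updates, under an assumption: formula (9) is not reproduced in the text, so the usual social-force form v(t+Δt) = v(t) + (Fexp + Frep)·Δt is assumed; the names are illustrative:

```python
import random

def update_pedestrian(p, v, f_exp, f_rep, dt=1.0):
    """Assumed formula (9): v(t+dt) = v(t) + (F_exp + F_rep)*dt.
    Formula (10): p(t+dt) = p(t) + v(t)*dt, using the OLD velocity.
    Components that leave [-1, 1] are re-drawn by formula (1)."""
    v_new = [vi + (fe + fr) * dt for vi, fe, fr in zip(v, f_exp, f_rep)]
    p_new = [pi + vi * dt for pi, vi in zip(p, v)]
    p_new = [x if -1.0 <= x <= 1.0 else -1.0 + 2.0 * random.random()
             for x in p_new]
    return p_new, v_new
```

Note that formula (10) advances the position with the velocity of generation t, so the position is moved before the new velocity takes effect.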
3) Definitions of the social expected force, the desired velocity, the repulsive force exerted by pedestrian β on α, and the radius r:
1. The expected force of pedestrian α is defined by formula (11), in which the first term is built from the desired velocity of pedestrian α at generation t, vα(t) is the actual velocity of pedestrian α at generation t, and τ is the relaxation time; eα(t) and the undirected desired speed (a scalar) of pedestrian α at generation t, i.e., its desired direction of motion and desired speed, are given by formulas (12) and (13), where u and l are the upper and lower bounds of the solution space and Vfac is the velocity factor.
In formula (12), Tα is the target position of pedestrian α and Pα is its position. In formula (13), ρ is the scaling factor; the remaining symbols are the control parameter of the velocity range, the distance from pedestrian α to its target Tk, and the maximum distance from an individual of the sub-group to the target.
2. The repulsive force exerted by pedestrian β on α is given by formula (14), where A and B are constants denoting, respectively, the interaction strength and range of action between pedestrian α and other pedestrians; rαβ = rα + rβ is the sum of the radii of the two interacting pedestrians; disα,β is the distance between pedestrians α and β; and the unit vector points from pedestrian β toward pedestrian α.
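The two force terms can be sketched as follows. This is a sketch under assumptions: formulas (11) and (14) are not reproduced in the text, so the classical Helbing-style social-force expressions are assumed (Fexp = (vdes·e − v)/τ and a repulsion of magnitude A·exp((rαβ − dis)/B) along the unit vector from β to α), and the names are our own:

```python
import math

def expected_force(p, v, target, v_desired, tau):
    """Assumed formula (11): F_exp = (v_des * e - v) / tau, where e is the
    unit vector from the pedestrian's position toward its target
    (formula (12))."""
    d = math.dist(target, p)
    e = [(t - x) / d for t, x in zip(target, p)] if d > 0 else [0.0] * len(p)
    return [(v_desired * ei - vi) / tau for ei, vi in zip(e, v)]

def repulsive_force(p_a, p_b, r_a, r_b, A, B):
    """Assumed formula (14): magnitude A*exp((r_ab - dis)/B) along the unit
    vector pointing from pedestrian beta toward alpha."""
    dis = math.dist(p_a, p_b)
    n = [(a - b) / dis for a, b in zip(p_a, p_b)] if dis > 0 else [0.0] * len(p_a)
    mag = A * math.exp(((r_a + r_b) - dis) / B)
    return [mag * ni for ni in n]
```

The exponential decay makes the repulsion negligible beyond a few multiples of B, which is what lets the expected force dominate far from other pedestrians.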
3. The radius r of a pedestrian is updated in weighted form:
rt+1 = (1−μ)rt + μ·rλ (15)
where rt is the pedestrian's radius at generation t, rλ is the radius adjustment factor, and μ is the weight factor.
The size of rλ depends on the standard deviations δh and δc of the pedestrians' historical-trace positions and of their current positions; the radius adjustment factor rλ is updated by formula (16), in which the quantities involved are the standard deviations, at generation t, of the current population positions and of the historical-trace positions relative to their initial-generation values.
The standard deviations are updated in weighted form by formulas (17) and (18), where p(t+1) and h(t+1) are the pedestrians' current positions and historical traces at generation t+1, and std(·) denotes the standard-deviation operation.
Step 6: Update the historical traces
After the position and velocity of a pedestrian are updated, the fitness on the validation set and the norm of the output weights jointly determine whether the pedestrian's historical trace, and the global trace hg, are updated.
In formula (19), f(hα) and f(hg) denote the fitness values of the best position of the α-th pedestrian and of the global best position in the population; γ is the tolerance rate (γ > 0); the remaining two quantities are the output weight vectors corresponding to the best position of the α-th pedestrian and to the global best position in the population.
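The Step 6 acceptance decision can be sketched as follows. Formula (19) is not reproduced in the text; the SaE-ELM-style rule cited in the background is assumed: accept when the validation fitness is clearly better, or when it is comparable within the tolerance γ but the output-weight norm is smaller. Names and the default γ are illustrative:

```python
def update_trace(f_new, f_old, norm_new, norm_old, gamma=0.02):
    """Assumed formula (19): a new position replaces the stored trace if
    its validation fitness improves by more than gamma*f_old, or if the
    fitnesses are within gamma*f_old of each other and the new
    output-weight norm is smaller (smaller norms tend to generalize
    better)."""
    if (f_old - f_new) > gamma * f_old:
        return True
    if abs(f_old - f_new) < gamma * f_old and norm_new < norm_old:
        return True
    return False
```

Tie-breaking on the output-weight norm is what injects the generalization criterion into the trace update, rather than fitness alone.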
Step 7: Cooperation stage
1) To strengthen information sharing between pedestrians, the historical traces are updated jointly by two cooperation modes, one-dimensional cooperation and multi-dimensional cooperation, given by formulas (20) and (21) respectively.
In formulas (20) and (21), i and j denote randomly selected dimensions; h'α,i is the i-th component of the historical trace updated by one-dimensional cooperation of pedestrian α, and h'α is the historical trace updated by multi-dimensional cooperation of pedestrian α; hα,i and hβ,i are the i-th components of the newly generated traces of pedestrians α and β; pβ,i is the i-th component of the current position of pedestrian β; hα and hβ denote the newly generated historical traces of pedestrians α and β; pβ is the current position of pedestrian β; φ is a random number in [−1, 1], and η and ψ are random numbers in [0, 1].
2) Two random numbers a, b ∈ [0, 1] are drawn; if a < b, one-dimensional cooperation is performed by formula (20), otherwise multi-dimensional cooperation is performed by formula (21). Afterwards a bounds check is applied to each individual: any component of the updated historical trace below −1 is set to −1, and any component above 1 is set to 1. Finally the historical traces are updated with an elitist retention strategy.
Step 8: Termination check
Steps 2 through 7 are repeated until the maximum number of iterations is reached or the optimal solution is found. The position of the individual with the minimum fitness value and the corresponding output weights are output, and the resulting optimal ELM is then applied to the test set.
In the technical scheme of the invention, the program flow of the learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm is as follows:
Step 1: Initialize the parameter values, including the population size N, the number of hidden-layer nodes L, the relaxation time τ, the maximum number of iterations Itermax, the velocity factor ρ, the pedestrian radius r, the desired-speed control parameter, the repulsive-force parameters A and B, and the weight factor μ.
Step 2: Initialize the velocity and position of each individual in the population.
Step 3: Compute the fitness value of each individual at generation t, where t is the current iteration number.
Step 4: In the target-selection stage, generate the target set T and group the individuals with the same target into sub-populations Gk.
Step 5: When a random number rand ∈ [0, 1] satisfies rand ≤ ρα, pedestrian α returns to Step 2 and re-initializes its velocity and position Vα(t), Pα(t), α = 1, 2, …, N; when rand > ρα, go to Step 6.
Step 6: Execute the social-force search mechanism and update the pedestrian velocities and positions Vα(t), Pα(t), α = 1, 2, …, N.
Step 7: Compute the fitness values and decide by probabilistic selection whether to update the historical traces; the iteration counter is then advanced, t = t + 1.
Step 8: If t > Itermax, go to Step 9; if t ≤ Itermax, go to Step 3.
Step 9: The iteration is complete; output the historical trace hg, whose row elements correspond to the optimal input weights and hidden-layer biases, giving the ELM with the best generalization performance.
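The overall flow above can be sketched as a control-flow skeleton. This is a greatly simplified sketch, not the patented method: the target-selection, social-force, and cooperation mechanics of Steps 4-7 are collapsed into "free individuals restart at random, the rest drift toward the best historical trace"; only the loop structure mirrors the flow, and all names are our own:

```python
import random

def sfso_minimize(objective, D, N=20, iters=60, tau=2.0, free_prob=0.1):
    """Skeleton of the Step 1-9 flow for minimizing a fitness function
    over [-1, 1]^D: initialize, then alternate random restarts ('free'
    individuals) with drift toward the global best trace, updating each
    trace and the global best whenever the fitness improves."""
    pop = [[-1 + 2 * random.random() for _ in range(D)] for _ in range(N)]
    traces = [p[:] for p in pop]
    fits = [objective(p) for p in pop]
    best = min(range(N), key=fits.__getitem__)
    h_g, f_g = traces[best][:], fits[best]           # global best trace
    for _ in range(iters):
        for a in range(N):
            if random.random() < free_prob:           # 'free' individual
                pop[a] = [-1 + 2 * random.random() for _ in range(D)]
            else:                                     # drift toward target
                pop[a] = [max(-1.0, min(1.0, x + (t - x) / tau))
                          for x, t in zip(pop[a], h_g)]
            f = objective(pop[a])
            if f < fits[a]:                           # trace update
                traces[a], fits[a] = pop[a][:], f
                if f < f_g:
                    h_g, f_g = pop[a][:], f
    return h_g, f_g
```

In the full method, the decoded h_g would be fed to the ELM solver of Step 2 to obtain the final network.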
Compared with existing algorithms, the learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm of the present invention has the following advantages:
1. The invention uses a social-force-model swarm optimization algorithm; through the expected force and the repulsive force the algorithm balances global search and local search well, effectively avoids being trapped in local optima, and has strong global optimality and generalization ability. 2. The invention combines one-dimensional cooperation with multi-dimensional cooperation, which improves the convergence speed and solution accuracy of the algorithm.
Brief description of the drawings
Fig. 1 is the program flow chart of the learning method for an extreme learning machine based on a social-force-model swarm optimization algorithm.
Fig. 2 is the box plot of the test accuracies of the six algorithms on the Diabetes data set.
Fig. 3 is the box plot of the test accuracies of the six algorithms on the Credit data set.
Fig. 4 is the box plot of the test accuracies of the six algorithms on the Wine data set.
Fig. 5 shows the output-weight norms of the six algorithms on the Diabetes data set.
Fig. 6 shows the output-weight norms of the six algorithms on the Credit data set.
Fig. 7 shows the output-weight norms of the six algorithms on the Wine data set.
Embodiment
The following examples demonstrate that the technical scheme of the invention is superior to the prior art. To test the performance of SFSO-ELM, we compare SFSO-ELM with ELM, PSO-ELM, IPSO-ELM, DE-ELM, and SaE-ELM on function fitting, regression prediction, and benchmark classification problems. The population size of all algorithms is 80 and the maximum number of iterations is 200; the other parameter settings of the algorithms are given in Table 1. All experimental results presented here are averaged over 30 runs. The running environment of all algorithms is Matlab 2010a.
Table 1. Parameter values used by the five algorithms in the experiments
Example 1: Function fitting
In this section we use the six algorithms (ELM, PSO-ELM, IPSO-ELM, DE-ELM, SaE-ELM, SFSO-ELM) to fit the Sinc function.
We choose 5000 training samples {xi, f(xi)} and 5000 test samples {xi, f(xi)}, where xi is uniformly distributed on [−10, 10]. To simulate a realistic environment, uniformly distributed noise drawn from [−0.2, 0.2] is added to all training samples, so the training set becomes the noisy pairs, while no noise is added to the test samples. For all algorithms, 50% of the test data set is used as the validation set.
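The data generation of Example 1 can be sketched as follows; the expression of the Sinc function did not survive extraction, so the standard definition sin(x)/x with sinc(0) = 1 is assumed, and the names are illustrative:

```python
import math
import random

def sinc(x):
    """Standard Sinc: sin(x)/x, with the removable singularity sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(x) / x

def make_sets(n_train=5000, n_test=5000, noise=0.2, lo=-10.0, hi=10.0):
    """Training targets get uniform noise from [-0.2, 0.2]; test targets
    are noise-free, as described in Example 1."""
    train = [(x, sinc(x) + random.uniform(-noise, noise))
             for x in (random.uniform(lo, hi) for _ in range(n_train))]
    test = [(x, sinc(x))
            for x in (random.uniform(lo, hi) for _ in range(n_test))]
    return train, test
```

Keeping the test set noise-free is what lets the reported RMSE measure how well each algorithm recovers the underlying function rather than the noise.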
The fitting results of the six ELM-based algorithms on the Sinc function are shown in Table 2.
Table 2. Results of the six ELM-based algorithms on Sinc function fitting
Table 2 shows the average fitting results of SFSO-ELM and the other five algorithms on the Sinc function. SFSO-ELM achieves the smallest test root-mean-square error (RMSE), 0.0042, with a standard deviation (Std) of 1.9843e-004 and the smallest output-weight norm (Norm), 1.0459e±4, while using the fewest hidden-layer nodes. This indicates that, compared with the other algorithms combined with ELM, SFSO-ELM learns faster and generalizes better.
Example 1 concretely comprises the following steps:
Step 1:Initialize the individual of population
The position of pedestrian represents a solution of optimization problem, and N number of pedestrian is initialized in search space;Pedestrian has There is following information:Speed v, current location p, historical trace h, the social force F of pedestrian;The initial velocity of pedestrian, historical trace with And social force is disposed as null vector in initialization;Wherein pedestrian α current location pα=(pα,1,pα,2,...,pα,D) according to Equation below is initialized:
pα,i=li+rand·(ui-li) (1)
P in formulaα,iFor pαI-th dimension component, i=1,2 ... D, wherein D be search space dimension, rand be [0,1] it Between random number, uiAnd liThe respectively bound of search space i-th dimension component, ui=1, li=-1.
Population after initialization is as follows as first generation population, the coded system of each individual:
θi,G=[w1,(i,t),...,wL,(i,t),b1,(i,t),...,bL,(i,t)] (2)
Wherein wjAnd bjJth dimension input weight and the hidden layer biasing of individual, j=1 ..., L are represented respectively;wjAnd bjPass through Formula (1) randomly generates;I=1,2 ... N;T represents iterative algebra.
Step 2:Calculate output weight and fitness value
To each individual, corresponding output weight matrix is calculated according to equation (4).Using root-mean-square error (RMSE) conduct The fitness of population, i.e., calculated according to formula (6):
Wo=H+Y (4)
N in formulavIt is inspection set sample size, nv=2500.
Step 3:Individual choice target and kind heap sort
1) pedestrian target is selected from historical trace using probability selection mechanism, the historical trace of pedestrian is selected into object set T probability is as follows:
Fitness (h in formulaα) be pedestrian's α historical traces fitness value, max (fitness (h)), min (fitness (h)) be respectively pedestrian's historical trace fitness value maximum and minimum value.
2) as rand≤probαWhen (random numbers of the rand between [0,1]), by pedestrian α historical trace hαIt is selected into target Collect in T.Object set T calculates pedestrian α and target tightening the distance between each target, the selection wherein most short work of distance after determining For α target Tα, and the pedestrian of same target is referred to a population Gk(k=1,2 ..., T).
Step 4:To population GkIn individual classified and perform corresponding search mechanisms
1) population G is obtained according to formula (8)kMiddle pedestrian turns into the probability of freely individual:
ρ in formulaαTurn into the probability of freely individual, RDis for pedestrianαFor pedestrian α and the relative distance of target.Work as random number rand≤ραWhen (random numbers of the rand between [0,1]), pedestrian α turn into freely individual, be otherwise non-free individual.
2) after the completion of group classification, different search mechanisms are performed respectively solution space is scanned for, turn into free individual Pedestrian can abandon current location pαAnd random search is performed by formula (1), random search can improve the global search energy of algorithm Power.And non-free individual can then scan under the driving of social force towards selected target.
Step 5:Update non-free individual speed and position
According to formula (9), (10) update speed and position of each pedestrian after by social force.Each pedestrian is per one-dimensional Position limitation carries out bounds checking between [- 1,1], to individual, and the individual beyond [- 1,1] border is entered again by (1) formula Row assignment.
1) pedestrian α speed more new formula is as follows:
In formula, vα(t+ Δ t) and vα(t) it is respectively speed of the pedestrian α in t+ Δ t and t generations,WithRespectively For pedestrian in expected force and repulsive force suffered by t generations, t represents current iteration algebraically, and Δ t is 1.
2) pedestrian α location updating formula is as follows:
pα(t+ Δs t)=pα(t)+vα(t)Δt (10)
Wherein pα(t+ Δ t) and pα(t) it is pedestrian α respectively in the position in t+ Δ t and t generations, vα(t) it is pedestrian α in t generations Speed.
3) social expectation powerDesired speedRepulsive forces of the pedestrian β to αAnd radius r definition:
1. pedestrian α expected forceIt is defined by the formula:
In formulaPeople α desired speed, v are functioned in an acting capacity of for tα(t) people's α actual speeds are functioned in an acting capacity of for t, τ is slack time; eα(t)、Respectively t functions in an acting capacity of people α desired motion direction and undirected desired speed (scalar), is given by formula (12), (13) Go out, u, l are the bound of solution space, and Vfac is velocity factor.
T in formula (12)αFor pedestrian α target location, PαFor pedestrian α position;ρ is zoom factor in formula (13),For The control parameter of velocity interval;It is pedestrian α away from target TkDistance,For in sub-group individual to target away from From maximum.
2. pedestrian β is given by α repulsive force:
A, B are constants in formula, represent pedestrian α and other pedestrians interaction strength and sphere of action respectively;rαβ=rα+ rβFor interaction two pedestrians radius and;disα,βFor the distance between pedestrian α and β;It is that pedestrian β refers to To pedestrian α unit vector.
3. the radius r of pedestrian itself is updated in the form of weighting:
rt+1=(1- μ) rt+μ·rλ (15)
R in formulatFor pedestrian's radius in t generations, r λ are radius Dynamic gene, and μ is weight factor.
R λ size and the standard deviation δ of pedestrian's historical trace position and current locationh、δcRelevant, radius Dynamic gene r λ are pressed Formula (16) is updated:
In formulaIt is t for the historical trace position of current population position, historical trace position and primary Standard deviation.
Standard deviationIt is updated by the way of weighting, such as formula (17), (18):
P in formula(t+1))And h(t+1)Respectively the pedestrian current location in (t+1) generation and historical trace, std () are to seek mark Quasi- difference operation.
Step 6:Update historical trace
Pedestrian is determined jointly after Position And Velocity renewal using the fitness of inspection set and the norm of output weight Determine pedestrian's historical trace hgRenewal.
F (h in formulaα), f (hg) represent respectively the α pedestrian's desired positions fitness value and population in global desired positions Fitness value;γ is tolerance rate, γ > 0;The output power corresponding to the α pedestrian's desired positions is represented respectively It is worth the output weight vector of global desired positions in vector sum population.
Step 7:The cooperation stage
1) in order to strengthen the information sharing between pedestrian, historical trace is updated jointly using two kinds of cooperation modes, i.e. one-dimensional is assisted Work cooperates with multidimensional;One-dimensional cooperation cooperates with multidimensional to be provided by (20), (21) respectively:
In formula (20), (21), i, j represent randomly selected dimension, h'α,iFor i-th dimension component after the cooperation of pedestrian α one-dimensionals more New historical trace, h'αThe historical trace updated after being cooperated for pedestrian α multidimensional, hα,iAnd hβ,iRespectively pedestrian α and β is newborn Into i-th dimension component, pβ,iFor the i-th dimension component of pedestrian β current location, hαRepresent the newly-generated historical traces of pedestrian α, pβ For pedestrian β current location, hβThe newly-generated historical traces of pedestrian β are represented, φ is the random number between [- 1,1], and η and ψ are [0,1] random number between.
3) two random numbers a, b, and a are given, b ∈ [0,1], if a < b, one-dimensional cooperation are carried out by formula (20), it is no Then multidimensional cooperation is carried out by formula (21);After the completion of to individual carry out bounds checking, if renewal after historical trace be less than -1 if take - 1, take 1 more than 1;The policy update historical trace finally retained using elite.
Step 8: Termination check
Repeat steps 2 through 7 until the maximum number of iterations is reached or the optimal solution is found. Output the position of the individual with the minimum fitness value and the corresponding output weights; the resulting optimal ELM is then applied to the test set.
Example 2: Regression on typical real data sets
In this section, the SFSO-ELM algorithm is compared with ELM, PSO-ELM, IPSO-ELM, DE-ELM and SaE-ELM on three real regression data sets obtained from the UCI machine learning repository: Servo, Autompg and ForestFires. Each data set is normalized to [-1, 1] before simulation and randomly divided into a training set and a test set, as shown in Table 3. In all algorithms, 50% of the test set serves as the validation set.
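The preprocessing described above — column-wise normalization to [-1, 1] and a random train/test split — can be sketched as follows. This is a generic sketch, not the experiment code; function names and the seed are assumptions.

```python
import numpy as np

def normalize_minmax(X, lo=-1.0, hi=1.0):
    """Column-wise min-max normalization to [lo, hi], as applied to every data set."""
    X = np.asarray(X, float)
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # guard against constant columns
    return lo + (X - xmin) / span * (hi - lo)

def random_split(X, train_frac, seed=None):
    """Random division of sample indices into a training part and a test part."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(round(train_frac * len(X)))
    return idx[:n_train], idx[n_train:]

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
Xn = normalize_minmax(X)
print(Xn)  # each column now spans exactly [-1, 1]
tr, te = random_split(X, 2 / 3, seed=0)
```

Half of the test indices would then be set aside again as the validation set used by the fitness functions.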
Table 3 Typical real regression data sets
Table 4 Comparison of six ELM algorithms on the Servo data set
Table 5 Comparison of six ELM algorithms on the Autompg data set
Table 6 Comparison of six ELM algorithms on the ForestFires data set
Tables 4-6 show the comparison between the proposed SFSO-ELM algorithm and the other algorithms on the three data sets; all results are averages over repeated trials. The minima in the first three columns of the three tables are shown in bold; the tables show that SFSO-ELM attains the smallest test-set RMSE and the smallest test-set standard deviation. Although its training-set RMSE does not reach the minimum, it is very close to it. The ELM algorithm has the shortest training time on all data sets; the other algorithms take similar times on Servo and Autompg, while SFSO-ELM takes the longest on ForestFires, which is related to its iterative optimization. Overall, the hybrid algorithms are more stable and accurate than the original ELM, and SFSO-ELM has better stability and generalization performance than the other algorithms.
The specific steps of Example 2 are identical to those of Example 1.
Example 3: Classification problems
The SFSO-ELM algorithm is analyzed on three standard classification problems. The classification data sets are obtained from the UCI machine learning repository and are detailed in Table 7; each data set is randomly divided into a training set, a validation set and a test set.
Table 7 Typical real classification data sets
In this experiment, the SFSO-ELM, ELM, PSO-ELM, IPSO-ELM, DE-ELM and SaE-ELM algorithms are each run 30 times on the three classification problems and the results are averaged; the experimental results are given in Table 8.
Table 8 Comparison of six algorithms on real classification data sets
On the Diabetes data set, the training accuracy of SFSO-ELM is similar to that of the other five algorithms, but its test accuracy is greatly improved. On Credit, the results of SFSO-ELM are better than those of all the other algorithms. On Wine, IPSO-ELM has the highest training accuracy, reaching 99.96%, while the test accuracy of SFSO-ELM is 97.65%, higher than that of all the other algorithms. As a whole, SFSO-ELM outperforms the other algorithms. Compared with ELM, the SFSO-ELM, SaE-ELM, DE-ELM, IPSO-ELM and PSO-ELM algorithms need more time to train the SLFNs, but ELM needs more hidden-layer nodes to reach an accuracy similar to that of the other algorithms.
To show the distribution of the 30 results on each data set more intuitively, box plots are used. Figs. 2, 3 and 4 show the box plots of the six ELM algorithms on the three data sets (Diabetes, Credit, Wine). The box length is determined by the 30 test accuracies; the thick line marks the median of the data and the small square marks the mean. The figures further show that, with the same number of hidden nodes, SFSO-ELM achieves higher accuracy than the other algorithms on the Diabetes, Credit and Wine data sets.
To examine the influence of SFSO-ELM on the norm of the output weights, the norm values of the six ELM algorithms over 30 runs of the corresponding classification problems are compared. Figs. 4 and 5 show that on the Diabetes and Credit data sets the curve of SFSO-ELM is smoother than those of the other five algorithms and its norm values are on the whole also lower, indicating that the generalization performance of SFSO-ELM on these data sets is better than that of the other algorithms. Fig. 6 shows that on the Wine data set DE-ELM has smaller and more stable norm values than SFSO-ELM, but SFSO-ELM still outperforms the remaining algorithms. As a whole, the generalization performance of SFSO-ELM is further improved over the conventional algorithms.
The specific steps of Example 3 are essentially the same as those of Example 1; the only difference is that formula (5) is used when computing the population fitness.
The above are only preferred embodiments of the present invention, and the invention is not limited to the above examples. It should be understood that other improvements and changes that those skilled in the art can derive directly or by association without departing from the spirit and concept of the present invention are considered to fall within the protection scope of the present invention.

Claims (2)

1. A learning method of an extreme learning machine based on a social force model swarm optimization algorithm, wherein N distinct samples (xi, yi), i=1,2,...,N, are given; xi and yi denote the input and output of the i-th sample, xi=(xi1,xi2,...,xin)^T ∈ R^n, yi=(yi1,yi2,...,yim)^T ∈ R^m, where T denotes transposition, R is the set of real numbers, and n and m are the feature dimensions of the input and output of a sample; the activation function of the hidden neurons is g(·) and the number of hidden nodes is L; characterized by comprising the following steps:
Step 1: Initialize the individuals of the population
The position of a pedestrian represents a solution of the optimization problem, and N pedestrians are initialized in the search space. Each pedestrian carries the following information: velocity v, current position p, historical trace h, and the social force F acting on the pedestrian. The initial velocity, historical trace and social force of each pedestrian are set to zero vectors at initialization. The current position pα=(pα,1,pα,2,...,pα,D) of pedestrian α is initialized according to the following formula:
pα,i = li + rand·(ui - li)   (1)
where pα,i is the i-th component of pα, i=1,2,...,D, D is the dimension of the search space, rand is a random number in [0,1], and ui and li are the upper and lower bounds of the i-th dimension of the search space;
The initialized population serves as the first generation; each individual is encoded as follows:
θi,G = [w1,(i,t),...,wL,(i,t), b1,(i,t),...,bL,(i,t)]   (2)
where wj and bj denote the j-th input weight and hidden-layer bias of the individual, j=1,...,L; wj and bj are generated randomly by formula (1); i=1,2,...,N; t denotes the generation (iteration) index;
Step 2: Compute the output weights and fitness values
In the extreme learning machine algorithm:
Hwo = Y   (3)
where H is the hidden-layer output matrix of the network, wo is the output weight matrix, and Y is the network output matrix:

H = [ g(w1·x1+b1)  ...  g(wL·x1+bL)
      ...               ...
      g(w1·xN+b1)  ...  g(wL·xN+bL) ]  (N×L)

wo = [ wo1^T; ...; woL^T ]  (L×m),   Y = [ y1^T; ...; yN^T ]  (N×m)

where g(·) is the activation function; wj and bj denote the j-th input weight and hidden-layer bias of the individual, j=1,...,L; L is the number of hidden nodes, N is the number of samples, m is the output dimension of a sample, and T denotes transposition;
Since the input weights and hidden-layer biases can be given randomly, the hidden-layer output matrix H becomes a fixed matrix, and training the feedforward neural network turns into the problem of solving the least-squares solution for the output weight matrix: the training of the network is completed simply by obtaining this least-squares solution. For each individual, the corresponding minimum-norm output weight matrix is computed according to equation (4):
wo = H+Y   (4)
where H+ denotes the Moore-Penrose generalized inverse of the hidden-layer output matrix H and Y is the network output matrix;
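The least-squares solution of equation (4) can be sketched in NumPy as follows. This is a minimal illustration under assumptions (sigmoid activation, random toy data), not the patented implementation.

```python
import numpy as np

def elm_output_weights(X, Y, W, b):
    """Solve Hwo = Y in the least-squares sense, per eqs. (3)-(4).

    X: (N, n) inputs, Y: (N, m) targets,
    W: (L, n) random input weights, b: (L,) hidden-layer biases.
    """
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # (N, L) hidden-layer output matrix
    wo = np.linalg.pinv(H) @ Y                # Moore-Penrose solution, eq. (4)
    return H, wo

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3))
Y = rng.uniform(-1, 1, (20, 2))
W = rng.uniform(-1, 1, (5, 3))   # L = 5 hidden nodes
b = rng.uniform(-1, 1, 5)
H, wo = elm_output_weights(X, Y, W, b)
# Least-squares property: the residual is orthogonal to the column space of H.
print(np.allclose(H.T @ (H @ wo - Y), 0, atol=1e-8))
```

In the swarm loop, each individual's encoded (W, b) pair yields its own H and wo; only wo needs solving, which is why each fitness evaluation stays cheap.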
For a classification problem, the fitness value of each individual is computed by formula (5); for a regression problem, the root-mean-square error of formula (6) is used as the fitness:

f(θ) = MisclassCount / nv   (5)

f(θ) = sqrt( Σ_{j=1}^{nv} || Σ_{i=1}^{L} woi·g(wi·xj + bi) - yj ||_2^2 / nv )   (6)

In formula (5), MisclassCount is the number of validation-set samples whose predicted class differs from the true class, and nv is the number of validation samples;
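The two fitness measures of formulas (5) and (6) can be sketched directly. This is a minimal sketch; the one-dimensional handling of targets in the RMSE is a simplifying assumption (formula (6) uses the squared 2-norm per sample for multi-output targets).

```python
import numpy as np

def fitness_classification(pred_labels, true_labels):
    """Eq. (5): misclassification rate on the n_v validation samples."""
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return np.sum(pred_labels != true_labels) / len(true_labels)

def fitness_regression(pred, target):
    """Eq. (6): root-mean-square error over the n_v validation samples."""
    pred = np.asarray(pred, float)
    target = np.asarray(target, float)
    return np.sqrt(np.sum((pred - target) ** 2) / len(target))

print(fitness_classification([0, 1, 1, 0], [0, 1, 0, 0]))  # 1 of 4 wrong -> 0.25
print(fitness_regression([1.0, 2.0], [1.0, 0.0]))          # sqrt(4/2) = sqrt(2)
```

Both measures are errors, so throughout the algorithm a lower fitness value means a better individual.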
Step 3: Target selection and sub-population partition
1) Each pedestrian selects a target from the historical traces by a probabilistic selection mechanism; the probability that a pedestrian's historical trace is selected into the target set T is:

probα = 0.9·(fitness(hα) - min(fitness(h))) / (max(fitness(h)) - min(fitness(h))) + 0.1   (7)

where fitness(hα) is the fitness value of pedestrian α's historical trace, and max(fitness(h)) and min(fitness(h)) are the maximum and minimum fitness values over all pedestrians' historical traces;
2) When rand ≤ probα, the historical trace hα of pedestrian α is selected into the target set T, where rand is a random number in [0,1]. After T is determined, pedestrian α computes its distance to every target in T and selects the nearest one as its target Tα; pedestrians sharing the same target are grouped into a sub-population Gk, k=1,2,...,|T|;
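The selection probability of formula (7) and the nearest-target grouping can be sketched as below. This is an illustrative sketch, not the patented code; the dictionary representation of the sub-populations is an assumption.

```python
import numpy as np

def target_probabilities(fit):
    """Eq. (7): selection probability for each pedestrian's historical trace."""
    fit = np.asarray(fit, float)
    spread = fit.max() - fit.min()
    if spread == 0:                       # all traces equally fit
        return np.full(len(fit), 0.1)
    return 0.9 * (fit - fit.min()) / spread + 0.1

def assign_to_subpopulations(positions, targets):
    """Each pedestrian joins the sub-population G_k of its nearest target."""
    groups = {}
    for idx, p in enumerate(positions):
        k = min(range(len(targets)), key=lambda j: np.linalg.norm(p - targets[j]))
        groups.setdefault(k, []).append(idx)
    return groups

print(target_probabilities([1.0, 2.0, 3.0]))  # [0.1, 0.55, 1.0]
positions = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
targets = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
print(assign_to_subpopulations(positions, targets))  # {0: [0, 1], 1: [2, 3]}
```

The additive 0.1 term in formula (7) guarantees every trace a nonzero chance of entering the target set, which keeps the target pool diverse.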
Step 4: Classify the individuals in sub-population Gk and execute the corresponding search mechanisms
1) The probability that a pedestrian in sub-population Gk becomes a free individual is obtained from formula (8):

ρα = (RDisα / 2)^3.6   (8)

where ρα is the probability that pedestrian α becomes a free individual and RDisα is the relative distance between pedestrian α and its target; when a random number rand ≤ ρα, pedestrian α becomes a free individual, otherwise it is a non-free individual, rand being a random number in [0,1];
2) After the classification is completed, different search mechanisms are executed to explore the solution space: a pedestrian α that becomes a free individual abandons its current position pα and performs a random search by formula (1), which improves the global search ability of the algorithm, while the non-free individuals search towards their selected targets under the drive of the social force;
Step 5: Update the velocities and positions of the non-free individuals
The velocity and position of each pedestrian subjected to the social force are updated according to formulas (9) and (10). Each component of a pedestrian's position is limited to [-1,1]; a bounds check is performed on each individual, and individuals that leave the [-1,1] range are re-assigned by formula (1);
1) The velocity update formula for pedestrian α is:

vα(t+Δt) = vα(t) + Fα^d(t)·Δt + Fαβ^γ(t)·Δt   (9)

where vα(t+Δt) and vα(t) are the velocities of pedestrian α at generations t+Δt and t, Fα^d and Fαβ^γ are the desired force and the repulsive force acting on the pedestrian at generation t, t denotes the current generation, and Δt is 1;
2) The position update formula for pedestrian α is:
pα(t+Δt) = pα(t) + vα(t)Δt   (10)
where pα(t+Δt) and pα(t) are the positions of pedestrian α at generations t+Δt and t, and vα(t) is the velocity of pedestrian α at generation t;
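The updates of formulas (9) and (10), together with the bounds check and re-initialization by formula (1), can be sketched as follows. This is a minimal sketch under assumptions (precomputed force vectors, NumPy arrays), not the patented code.

```python
import numpy as np

def update_pedestrian(p, v, F_desired, F_repulsive, lo=-1.0, hi=1.0, dt=1.0):
    """Eqs. (9)-(10): velocity and position update under the social force.

    Position components that leave [lo, hi] are re-initialized by eq. (1).
    """
    v_new = v + (F_desired + F_repulsive) * dt               # eq. (9)
    p_new = p + v * dt                                       # eq. (10): uses v at generation t
    out = (p_new < lo) | (p_new > hi)
    p_new[out] = lo + np.random.rand(out.sum()) * (hi - lo)  # eq. (1) re-initialization
    return p_new, v_new

p = np.array([0.5, -0.5])
v = np.array([0.2, 0.1])
p2, v2 = update_pedestrian(p, v, np.array([0.1, 0.0]), np.array([0.0, -0.05]))
print(p2, v2)  # [0.7, -0.4] and [0.3, 0.05]
```

Note that formula (10) advances the position with the old velocity vα(t); the new velocity from formula (9) only takes effect in the next generation.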
3) Definitions of the desired force Fα^d, the desired velocity vα^d, the repulsive force Fαβ^γ exerted on α by pedestrian β, and the radius r:
① The desired force of pedestrian α is defined by:

Fα^d(t) = (vα^d(t) - vα(t)) / τ = (eα(t)·vα^0(t)·(1 + Vfac·(u-l)) - vα(t)) / τ   (11)

where vα^d(t) is the desired velocity of pedestrian α at generation t, vα(t) is the actual velocity of pedestrian α at generation t, and τ is the relaxation time; eα(t) and vα^0(t) are the desired motion direction and the undirected desired speed (a scalar) of pedestrian α at generation t, given by formulas (12) and (13); u and l are the upper and lower bounds of the solution space, and Vfac is a velocity factor;

eα = (Tα - Pα) / |Tα - Pα|   (12)

In formula (12), Tα is the target position of pedestrian α and Pα is the position of pedestrian α. In formula (13), ρ is a scaling factor and θ is the control parameter of the velocity range; the remaining quantities in formula (13) are the distance from pedestrian α to target Tk and the maximum distance to the target among the individuals in the sub-group;
② The repulsive force exerted on pedestrian α by pedestrian β is given by:

Fαβ^γ(t) = A·exp((rαβ - disα,β) / B)·n⃗α,β   (14)

where A and B are constants representing the strength and range of the interaction between pedestrian α and the other pedestrians; rαβ = rα + rβ is the sum of the radii of the two interacting pedestrians; disα,β is the distance between pedestrians α and β; and n⃗α,β is the unit vector pointing from pedestrian β to pedestrian α;
③ The pedestrian's own radius r is updated in a weighted form:
rt+1 = (1-μ)·rt + μ·rλ   (15)
where rt is the pedestrian radius at generation t, rλ is the radius adjustment factor, and μ is a weight factor;
The size of rλ depends on the standard deviations δh and δc of the historical trace positions and the current positions; the radius adjustment factor rλ is updated by formula (16):

rλ = { min(δh^t/20, 0.3)·(δh^t/δc^t), if δh^t/δh^0 > 0.5
       max(δh^t/20, 5)·(δh^t/δc^t),   if δh^t/δh^0 ≤ 0.5 and δh^t > δc^t
       min(δh^t/20, 1)·(δh^t/δc^t),   if δh^t/δh^0 ≤ 0.5 and δh^t ≤ δc^t }   (16)

where δc^t, δh^t and δh^0 are the standard deviations of the current population positions at generation t, of the historical trace positions at generation t, and of the initial historical trace positions, respectively;
The standard deviations δc and δh are updated in a weighted manner, as in formulas (17) and (18):

δc^(t+1) = (1-μ)·δc^t + μ·std(p^(t+1))   (17)

δh^(t+1) = (1-μ)·δh^t + μ·std(h^(t+1))   (18)

where p^(t+1) and h^(t+1) are the pedestrians' current positions and historical traces at generation t+1, and std(·) denotes the standard deviation operation;
Step 6: Update the historical traces
After the position and velocity updates, the fitness on the validation set and the norm of the output weights jointly determine the update of the pedestrian's historical best position hg:

hg = { hα, if (f(hg) - f(hα) > γ·f(hg)) or (|f(hg) - f(hα)| < γ·f(hg) and ||wo_hα|| < ||wo_hg||)
       hg, otherwise }   (19)

In formula (19), f(hα) and f(hg) denote the fitness value of the α-th pedestrian's best position and of the global best position in the population, respectively; γ is the tolerance rate, γ > 0; wo_hα and wo_hg denote the output weight vectors corresponding to the α-th pedestrian's best position and to the global best position in the population, respectively;
Step 7: The cooperation stage
1) To strengthen information sharing between pedestrians, the historical traces are jointly updated using two cooperation modes, one-dimensional cooperation and multi-dimensional cooperation, given by formulas (20) and (21) respectively:

h'α,i = { hα,i + φ·(hα,i - pβ,i), if η < 0.3 and ψ < 0.5
          hβ,j,                   if η < 0.3 and ψ ≥ 0.5
          hα,i + φ·(hα,i - hβ,i), if η ≥ 0.3 }   (20)

h'α = { hα + φ·(hα - pβ), if η < ψ
        hα + φ·(hα - hβ), if η ≥ ψ }   (21)

In formulas (20) and (21), i and j denote randomly selected dimensions; h'α,i is the i-th component of pedestrian α's historical trace updated by one-dimensional cooperation; h'α is pedestrian α's historical trace updated by multi-dimensional cooperation; hα,i and hβ,i are the i-th components of the newly generated historical traces of pedestrians α and β; pβ,i is the i-th component of pedestrian β's current position; hα and hβ denote the newly generated historical traces of pedestrians α and β; pβ is pedestrian β's current position; φ is a random number in [-1,1], and η and ψ are random numbers in [0,1];
2) Two random numbers a, b ∈ [0,1] are generated; if a < b, one-dimensional cooperation is performed by formula (20), otherwise multi-dimensional cooperation is performed by formula (21); a bounds check is then applied to each individual: components of the updated historical trace below -1 are set to -1 and those above 1 are set to 1; finally the historical trace is updated with an elitist retention strategy;
Step 8: Termination check
Repeat steps 2 through 7 until the maximum number of iterations is reached or the optimal solution is found; output the position of the individual with the minimum fitness value and the corresponding output weights, and then apply the optimal ELM to the test set.
2. The learning method of the extreme learning machine based on the social force model colony optimization algorithm according to claim 1, characterized in that the program flow of the learning method comprises the following steps:
Step 1: First initialize the parameter values, including the population size N, the number of hidden-layer nodes L, the relaxation time τ, the maximum number of iterations Itermax, the velocity factor ρ, the pedestrian radius r, the desired control parameter θ, the repulsive-force parameters A and B, and the weight factor μ;
Step 2: Initialize the velocity and position of each individual in the population;
Step 3: Calculate the fitness value of each individual at generation t, where t is the current iteration number;
Step 4: In the target-selection stage, generate the target set T and assign individuals with the same target to sub-population Gk;
Step 5: When a random number rand ≤ ρα, where rand is a random number in [0, 1], return to Step 2: pedestrian α re-initializes its velocity and position Vα(t), Pα(t), α = 1, 2, …, N; when rand > ρα, execute Step 6;
Step 6: Execute the social force search mechanism and update each pedestrian's velocity and position Vα(t), Pα(t), α = 1, 2, …, N, where α is a natural number;
Step 7: Calculate the fitness values and decide by probabilistic selection whether to update the historical traces; then update the iteration count: t = t + 1;
Step 8: If t > Itermax, execute Step 9; if t ≤ Itermax, return to Step 3;
Step 9: The iteration is complete; output the historical trace hg, whose row elements correspond to the optimal input weights and hidden-layer biases, thereby obtaining the ELM with the best generalization performance.
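The nine-step flow above can be illustrated as a runnable loop skeleton. This is a hedged sketch of the loop structure only: the social-force velocity update of Step 6 is simplified here to a perturbation toward the best historical trace, and all names and parameter defaults are assumptions, not the patented mechanism.

```python
import numpy as np

def sfo_elm_search(fitness, dim, N=30, iter_max=100, rho=0.1, seed=0):
    """Skeleton of Steps 1-9: each individual encodes one candidate set of
    ELM input weights and biases in [-1, 1]; `fitness` maps a position to
    a training error to be minimised."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(-1, 1, (N, dim))          # Step 2: positions
    V = np.zeros((N, dim))                    # Step 2: velocities
    h = P.copy()                              # historical traces
    hf = np.array([fitness(p) for p in P])    # Step 3: fitness values
    for t in range(iter_max):                 # Steps 3-8: main loop
        g = h[np.argmin(hf)]                  # current best historical trace
        for a in range(N):
            if rng.uniform() <= rho:          # Step 5: re-initialise pedestrian
                P[a] = rng.uniform(-1, 1, dim)
                V[a] = 0.0
            else:                             # Step 6 (simplified): move toward g
                V[a] = 0.5 * V[a] + rng.uniform(0, 1, dim) * (g - P[a])
                P[a] = np.clip(P[a] + V[a], -1, 1)
            f = fitness(P[a])                 # Step 7: update historical trace
            if f < hf[a]:                     # elite retention
                h[a], hf[a] = P[a].copy(), f
    return h[np.argmin(hf)], float(hf.min())  # Step 9: output best trace hg
```

Used with an ELM training-error objective, the returned trace would be decoded into the input weights and hidden-layer biases of the final network.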
CN201710818504.8A 2017-09-12 2017-09-12 A kind of learning method of the extreme learning machine based on social force model colony optimization algorithm Pending CN107563518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710818504.8A CN107563518A (en) 2017-09-12 2017-09-12 A kind of learning method of the extreme learning machine based on social force model colony optimization algorithm

Publications (1)

Publication Number Publication Date
CN107563518A true CN107563518A (en) 2018-01-09

Family

ID=60980772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710818504.8A Pending CN107563518A (en) 2017-09-12 2017-09-12 A kind of learning method of the extreme learning machine based on social force model colony optimization algorithm

Country Status (1)

Country Link
CN (1) CN107563518A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109030378A (en) * 2018-06-04 2018-12-18 沈阳农业大学 Japonica rice canopy chlorophyll content inversion model method based on PSO-ELM
CN109300144A (en) * 2018-09-21 2019-02-01 中国矿业大学 Pedestrian trajectory prediction method integrating social force model and Kalman filtering
CN109300144B (en) * 2018-09-21 2021-12-24 中国矿业大学 Pedestrian trajectory prediction method integrating social force model and Kalman filtering
CN110263380A (en) * 2019-05-23 2019-09-20 东华大学 Spinning process cascade modeling subsection interval parameter configuration method
CN110263380B (en) * 2019-05-23 2020-11-24 东华大学 Spinning process cascade modeling subsection interval parameter configuration method
CN113128108A (en) * 2021-04-07 2021-07-16 汕头大学 Method for determining diameter of jet grouting pile based on differential evolution artificial intelligence


Similar Documents

Publication Publication Date Title
CN107563518A (en) A kind of learning method of the extreme learning machine based on social force model colony optimization algorithm
Li et al. T–S fuzzy model identification with a gravitational search-based hyperplane clustering algorithm
Ferreira et al. An approach to reservoir computing design and training
Papageorgiou et al. A weight adaptation method for fuzzy cognitive map learning
CN105976049A (en) Chaotic neural network-based inventory prediction model and construction method thereof
CN106529818A (en) Water quality evaluation prediction method based on fuzzy wavelet neural network
CN103473598A (en) Extreme learning machine based on length-changing particle swarm optimization algorithm
CN114217524A (en) Power grid real-time self-adaptive decision-making method based on deep reinforcement learning
Santos et al. Fuzzy systems for multicriteria decision making
Mao et al. Online sequential prediction of imbalance data with two-stage hybrid strategy by extreme learning machine
CN106570562A (en) Adaptive-DE-algorithm-based fuzzy modeling method for bridge crane
CN109146055A (en) Modified particle swarm optimization method based on orthogonalizing experiments and artificial neural network
Nagy et al. Photonic quantum policy learning in OpenAI Gym
CN106371321A (en) PID control method for fuzzy network optimization of coking-furnace hearth pressure system
Guoqiang et al. Study of RBF neural network based on PSO algorithm in nonlinear system identification
Li et al. An improved double hidden-layer variable length incremental extreme learning machine based on particle swarm optimization
Duan MCEDA: A novel many-objective optimization approach based on model and clustering
Wu et al. Forecasting stock market performance using hybrid intelligent system
CN101739565A (en) Large-capacity pattern recognition method
Nguyen et al. Fuzzy controllers using hedge algebra based semantics of vague linguistic terms
Ding et al. Evaluation of Innovation and Entrepreneurship Ability of Computer Majors based on Neural Network Optimized by Particle Swarm Optimization
Zeng et al. Modified bidirectional extreme learning machine with gram–Schmidt Orthogonalization method
Mahmudy et al. Genetic algorithmised neuro fuzzy system for forecasting the online journal visitors
Li et al. A dual-population evolutionary algorithm adapting to complementary evolutionary strategy
Krömer et al. Differential evolution with preferential interaction network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180109