CN112070200A - Harmonic group optimization method and application thereof - Google Patents

Harmonic group optimization method and application thereof

Info

Publication number
CN112070200A
CN112070200A (application CN201910497327.7A)
Authority
CN
China
Prior art keywords
sso
big data
variable
gbest
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910497327.7A
Other languages
Chinese (zh)
Other versions
CN112070200B (en)
Inventor
郝志峰
叶维彰
刘翔宇
王金海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan University
Original Assignee
Foshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan University
Priority to CN201910497327.7A
Publication of CN112070200A
Application granted
Publication of CN112070200B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a harmonic group optimization method and application thereof. A new single-variable update mechanism (UM1) and a novel harmonic step size strategy (HSS) are integrated into continuous simplified group optimization (SSO) to improve it. UM1 and the HSS balance the exploration and exploitation capabilities of continuous SSO on high-dimensional, multivariable and multimodal numerical continuous benchmark functions. The update mechanism UM1 needs to update only one variable, which is completely different from SSO, in which all variables must be updated; within UM1, the HSS enhances the exploitation capability by reducing the step size according to the harmonic sequence. Numerical experiments on 18 high-dimensional functions confirm the efficiency of the method provided by the present disclosure. The method improves the exploration and exploitation performance of conventional ABC and SSO, obtains a better balance between exploration and exploitation, has a wide application range, and can greatly improve the classification and prediction accuracy of newly obtained big data for artificial neural networks, support vector machines and the like.

Description

Harmonic group optimization method and application thereof
Technical Field
The disclosure relates to the technical field of soft computing application and big data processing, in particular to a harmonic group optimization method and application thereof.
Background
There are many optimization problems in practical applications, such as green supply chain management, big data, and network reliability problems. It is also common for real-world problems to add small variations (e.g., integer variables, nonlinear equations, and/or multiple objectives) to the optimization model. However, even when only integer variables are introduced, these real-life problems become very difficult, and solving them with conventional exact methods within an acceptable time is complicated and costly. Therefore, the emphasis of recent studies has shifted to producing high-quality solutions within an acceptable execution time using soft computing methods rather than exact solutions, see references [1-26].
More and more new soft computing methods are emerging, such as artificial neural networks [2,37], genetic algorithms (GA) [3,4,34], simulated annealing [5], tabu search [6], ant colony optimization [11], particle swarm optimization (PSO) [7,8,34], differential evolution [9,10], estimation of distribution algorithms [12], the artificial bee colony algorithm (ABC) [13-16], the simplified group optimization algorithm/simplified swarm algorithm (SSO) [17-26], the imperialist competitive algorithm [35], reinforcement learning algorithms [36], Bayesian networks [38], the hurricane optimization algorithm [39], the gravitational search algorithm [40], human swarming [41], the bat algorithm [42], and stochastic diffusion search [43]. These methods are inspired by natural phenomena and have been used to solve ever larger problems in science and technology in recent years. Therefore, soft computing, including swarm intelligence and evolutionary algorithms (EAs), has attracted considerable attention and has been applied to a range of real-world problems, see references [44] and [45], respectively.
In soft computing, exploitation based on local search means seeking the optimum in the neighborhood of known solutions, improving solution quality at the risk of being captured by a locally optimal solution [2-26]; global search, which emphasizes exploration, means searching for the optimum in unexplored solution space to avoid settling into a partial (local) optimum, at the cost of run time [2-26]. Exploration and exploitation (utilization) are opposite but complementary. It is worth noting that a balance between exploration and exploitation must be sought to ensure the performance of any soft computing method.
SSO (the simplified group optimization algorithm/simplified swarm algorithm), introduced by Yeh, see reference [17]: "Study on Quickest Path Networks with Dependent Components and Apply to RAP", Technical Report NSC97-2221-E-007-099-MY3, Distinguished Scholars Research Project granted by National Science Council, Taiwan, is one of the more recently proposed population-based soft computing methods. Previously reported numerical experiments show that SSO achieves very high effectiveness and efficiency [17-26]. Furthermore, SSO can flexibly cope with various realistic problems and has gradually been applied to different optimization applications, such as supply chain management [20,21], redundancy allocation problems [18,19,27], data mining [20,21], and other optimization problems [32,33].
The all-variable update mechanism (UMa) is the basis of all SSO variants, so all variables of every solution are updated. However, UMa always explores undiscovered solution space; even when the current solution is already close to the optimum [17-27,31,32], it may take additional time to reach the optimum.
References to which this disclosure relates:
[1]L.Zadeh,“Fuzzy Logic,Neural Networks,and Soft Computing”,Communication of the ACM,vol.37,pp.77-84,1994.
[2]W.McCulloch,and W.Pitts,“A Logical Calculus of Ideas Immanent in Nervous Activity”,Bulletin of Mathematical Biophysics,vol.5,pp.115-133,1943.
[3]A.Fraser,“Simulation of Genetic Systems by Automatic Digital Computers. I.Introduction”,Australian Journal of Biological Sciences,vol.10,pp.484-491, 1957.
[4]D.Goldberg,Algorithms in Search,Optimization and Machine Learning,Genetic, Reading,MA:Addison-Wesley Professional,1989.
[5]S.Kirkpatrick,C.Gelatt,and M.Vecchi,“Optimization by simulated annealing”,Science,vol.220,pp.671-680,1983.
[6]F.Glover,“Future Paths for Integer Programming and Links to Artificial Intelligence”,Computers and Operations Research,vol.13,pp.533–549,1986.
[7]J.Kennedy and R.Eberhard,“Particle swarm optimization”,Proceedings of IEEE International Conference on Neural Networks,Publishing,Piscataway,NJ,USA, pp.1942-1948,1995.
[8]M.F.Tasgetiren,Y.C.Liang,M.Sevkli,and G.Gencyilmaz,“A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem”,European Journal of Operational Research, vol.177,pp.1930-1947,2007.
[9]R.Storn,“On the usage of differential evolution for function optimization”, Proceedings of the 1996 Biennial Conference of the North American Fuzzy Information Processing Society,Publishing,pp.519-523,1996.
[10]R.Storn,and K.Price,“Differential evolution:A simple and efficient heuristic for global optimization over continuous spaces”,Journal of Global Optimization,vol.11,pp.341-359,1997.
[11]M.Dorigo and L.Gambardella,“Ant Colony System:A Cooperative Learning Approach to the Traveling Salesman Problem”,IEEE Transactions on Evolutionary Computation,vol.1,pp.53-66,1997.
[12]P.Lozano,Estimation of distribution algorithms:A new tool for evolutionary computation.Kluwer,Boston MA,2002.
[13]D.Karaboga,“An idea based on Honey Bee Swarm for Numerical Optimization”, Technical Report TR06,Engineering Faculty,Erciyes University,2005.
[14]F.Liu,Y.Sun,G.Wang,T.Wu,“An Artificial Bee Colony Algorithm Based on Dynamic Penalty and Lévy Flight for Constrained Optimization Problems”,Arabian Journal for Science and Engineering,pp.1-20,2018.
[15]D.Karaboga and B.Bastur,“On the performance of artificial bee colony(ABC) algorithm”,Applied Soft Computing,vol.8,pp.687–697,2008.
[16]Y.C.Liang,A.H.L.Chen,Y.H.Nien,“Artificial bee colony for workflow scheduling”,Evolutionary Computation(CEC),2014 IEEE Congress on,558-564,2014.
[17]W.C.Yeh,“Study on Quickest Path Networks with Dependent Components and Apply to RAP”,Technical Report NSC97-2221-E-007-099-MY3,Distinguished Scholars Research Project granted by National Science Council,Taiwan.
[18]W.C.Yeh,“A Two-Stage Discrete Particle Swarm Optimization for the Problem of Multiple Multi-Level Redundancy Allocation in Series Systems”,Expert Systems with Applications,vol.36,pp.9192-9200,2009.
[19]C.M.Lai,W.C.Yeh,and Y.C.Huang,“Entropic simplified swarm optimization for the task assignment problem”,Applied Soft Computing,vol.58,pp.115-127,2017.
[20]Y.Jiang,P.Tsai,W.C.Yeh,and L.Cao,“A honey-bee-mating based algorithm for multilevel image segmentation using Bayesian theorem”,Applied Soft Computing, vol.52,pp.1181-1190,2017.
[21]W.C.Yeh,“Optimization of the Disassembly Sequencing Problem on the Basis of Self-adaptive Simplified Swarm Optimization”,IEEE Transactions on Systems,Man, and Cybernetics--Part A:Systems and Humans,vol.42,pp.250-261,2012.
[22]W.C.Yeh,“Novel Swarm Optimization for Mining Classification Rules on Thyroid Gland Data”,Information Sciences,vol.197,pp.65-76,2012.
[23]W.C.Yeh,“A New Parameter-Free Simplified Swarm Optimization for Artificial Neural Network training and its Application in Prediction of Time-Series”,IEEE Transactions on Neural Networks and Learning Systems,vol.24,pp.661-665,2013.
[24]C.Bae,K.Kang,G.Liu,Y.Y.Chung,“A novel real time video tracking framework using adaptive discrete swarm optimization”,Expert Systems with Applications,vol.64,pp.385-399,2016.
[25]K.Kang,C.Bae,H.W.F.Yeung,and Y.Y.Chung,“A Hybrid Gravitational Search Algorithm with Swarm Intelligence and Deep Convolutional Feature for Object Tracking Optimization”,Applied Soft Computing,https://doi.org/10.1016/j.asoc.2018.02.037, 2018.
[26]P.C.Chang and W.C.Yeh,“Simplified Swarm Optimization with Differential Evolution Mutation Strategy for Parameter Search”,ICUIMC’13 Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication,Article No.25,ACM New York,NY,USA,ISBN:978-1-4503-1958-4,doi:10.1145/2448556.2448581,2013.
[27]C.L.Huang,“A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems”,Reliability Engineering&System Safety,vol.142,pp.221-230,2015.
[28]R.Azizipanah-Abarghooee,T.Niknam,M.Gharibzadeh and F.Golestaneh, “Robust,fast and optimal solution of practical economic dispatch by a new enhanced gradient-based simplified swarm optimisation algorithm”,Generation,Transmission &Distribution,vol 7,pp.620-635,2013.
[29]Hai Xie,Bao Qing Hu,New extended patterns of fuzzy rough set models on two universes,International Journal of General Systems,vol.43,pp.570–585,2014.
[30]F.Rossi,D.Velázquez,I.Monedero and F.Biscarri,“Artificial neural networks and physical modeling for determination of baseline consumption of CHP plants”,Expert Systems with Applications,vol.41,pp.4568-4669,2014.
[31]P.Chang and X.He,“Macroscopic Indeterminacy Swarm Optimization(MISO) Algorithm for Real-Parameter Search”,Proceedings of the 2014 IEEE Congress on Evolutionary Computation(CEC2014),Beijing,China,pp.1571-1578,2014.
[32]C.Chou,C.Huang,and P.Chang,“A RFID Network Design Methodology for Decision Problem in Health Care”,Proceedings of the 2014 IEEE Congress on Evolutionary Computation(CEC2014),Beijing,China,pp.1586-1592,2014.
[33]N.Esfandiari,M.Babavalian,A.Moghadam;and V.Tabar,“Knowledge discovery in medicine:Current issue and future trend”,Expert Systems with Applications,vol. 41,pp.4434-4463,2014.
[34]O.Abedinia,M.S.Naderi,A.Jalili,and A.Mokhtarpour,“A novel hybrid GA-PSO technique for optimal tuning of fuzzy controller to improve multi-machine power system stability”,International Review of Electrical Engineering,vol.6, no.2,pp.863-873,2011.
[35]O.Abedinia,N.Amjady,K.Kiani,H.A.Shayanfar,and A.Ghasemi, “Multiobjective environmental and economic dispatch using imperialist competitive algorithm”,International Journal on Technical and Physical Problems of Engineering, 2012.
[36]M.Bagheri,V.Nurmanova,O.Abedinia,and M.S.Naderi,“Enhancing Power Quality in Microgrids With a New Online Control Strategy for DSTATCOM Using Reinforcement Learning Algorithm”,IEEE Access,vol.6,pp.38986-38996,2018.
[37]O.Abedinia,N.Amjady,and N.Ghadimi,“Solar energy forecasting based on hybrid neural network and improved metaheuristic algorithm”,Computational Intelligence,vol.34,no.1,pp.241-260,2017.
[38]Y.Jiang,P.Tsai,W.C.Yeh,and L.Cao,“A Honey-bee-mating Based Algorithm for Multilevel Image Segmentation Using Bayesian theorem”,Applied Soft Computing, vol.52,pp.1181-1190,2017.
[39]R.M.Rizk-Allah,R.A.El-Sehiemy,G.G.Wang,“A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution”, Applied Soft Computing,vol.63,pp.206-222,2018.
[40]W.F.Yeung,G.Liu,Y.Y.Chung,E.Liu and W.C.Yeh,“Hybrid Gravitational Search Algorithm with Swarm Intelligence for Object Tracking”,The 23rd International Conference on Neural Information Processing(ICONIP 2016),pp.213-221,2016.
[41]L.B.Rosenberg,“Human swarming,a real-time method for parallel distributed intelligence”,Swarm/Human Blended Intelligence Workshop(SHBI),pp.1–7,2015.
[42]Q.Liu,L.Wu,W.Xiao,F.Wang,and L.Zhang,“A novel hybrid bat algorithm for solving continuous optimization problems”,Applied Soft Computing,vol.73,pp.67-82,2018.
[43]S.J.Nasuto,J.M.Bishop,and S.Lauria,“Time complexity analysis of the Stochastic Diffusion Search”,Proc.Neural Computation'98,Vienna,Austria,pp. 260-266,1998.
Disclosure of the Invention
The present disclosure provides a harmonic group optimization method and application thereof, which integrates a new single-variable update mechanism (UM1) and a novel harmonic step size strategy (HSS) into continuous SSO, i.e., continuous simplified group optimization (SSO) is improved by introducing UM1 and the HSS. UM1 and the HSS balance the exploration and exploitation capabilities of continuous SSO on high-dimensional, multivariable and multimodal numerical continuous benchmark functions; the update mechanism UM1 needs to update only one variable, which is completely different from SSO, where all variables must be updated; within UM1, the HSS enhances the exploitation capability by reducing the step size according to the harmonic sequence. Numerical experiments were performed on 18 high-dimensional functions to confirm the efficiency of the method provided by the present disclosure.
In order to achieve the above object, according to an aspect of the present disclosure, there is provided a harmonic group optimization method including the steps of:
Step 1, constructing a harmonic step size strategy HSS;
Step 2, establishing a single-variable update mechanism UM1;
Step 3, optimizing SSO with UM1 and the HSS to obtain the optimized SSO;
Step 4, applying the optimized SSO to big data processing.
Further, in step 1, the method for constructing the harmonic step size strategy HSS is as follows: to improve the exploration performance, an HSS based on the harmonic sequence is constructed as shown below:

[step size equation, shown as an image in the original]

where Nvar is the number of variables, Uk and Lk are the upper and lower limits of the kth variable, i is the current generation, k is the index of the current variable, and the sign ⌊·⌋ denotes the floor function. Referring to the sequence 1, 1/2, 1/3, 1/4, ... as the harmonic sequence, the HSS can be expressed in terms of the harmonic sequence as follows:

[harmonic-sequence form of the step size, shown as an image in the original]

If each genetic cycle lasts 50 generations, the value of the step size Δi,k decreases from one genetic cycle to the next.
After long runs or many generations, gBest and some solutions are already close to the optimum, and updates to these solutions need only small changes to move closer to the optimum without leaving the optimal region. Because the harmonic sequence decreases, the HSS adjusts the step size from longer steps in early generations to shorter steps in later generations, thereby overcoming this drawback of continuous SSO.
Further, in step 2, the method for establishing the single-variable update mechanism UM1 is as follows: in most soft computing methods, each solution is only slightly updated; in order to reduce the number of random numbers and change solutions gradually and stably, UM1 updates only one randomly selected variable of each solution. Let i be the current generation number, let xj,k be a randomly selected variable of the jth solution Xj, and let X*j be the temporary solution obtained by modifying xj,k of Xj. The following update equations are obtained:

[three-case single-variable update equations for x*j,k, shown as images in the original]

where σk,1 and σk,2 are uniform random variables generated in [-0.5, 0.5] and [0, 1], respectively; ρk is a uniform random variable generated in [0, 1]; gk denotes the kth variable of PgBest; and Lk and Uk are respectively the lower limit and the upper limit of the kth variable.
Note:
(1) The UM1 proposed in the present disclosure removes the first subscript of each solution used in UMa to reduce run time; for example, Xi,j and xi,j,k in UMa are simplified to Xj and xj,k in the proposed UM1.
(2) For simplicity and without loss of generality, the above equations are formulated for the minimization problem.
(3) If x*j,k is not feasible after updating, it is changed to its nearest boundary before X* is substituted to calculate F(X*).
For example, the following function is to be minimized:

[example benchmark function, shown as an image in the original]

where gen is the number of genetic generations.
The following property holds:
Attribute 1: when UM1 is implemented, the expected number of comparisons in each solution is reduced from 3·Nvar to 3, and the number of updated (and feasibility-tested) variables is reduced from Nvar to 1.
Proof: the number of feasibility tests per updated variable is one for both UMa and UM1; however, for each solution, UMa tests every updated variable, whereas UM1 tests only one. Furthermore, since checking the first, second and third terms of the update equation requires 1, 2 and 3 comparisons respectively, and cr < cw < cg with cr + cw + cg = 1, the expected number of comparisons per solution for UMa based on equation (4) satisfies
Nvar·(3cw + 2cg + cr) ≤ Nvar·(3cw + 3cg + 3cr) = 3·Nvar,
and for UM1,
(cg + 2cw + 3cr) ≤ (3cg + 3cw + 3cr) = 3.
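As a concrete illustration of Attribute 1, the bound can be evaluated with the parameter values used later in the experiments (cr = 0.45, cg = 0.40, cw = 0.15); the following worked computation is only an illustration and is not part of the claimed method:

```latex
\begin{aligned}
\mathrm{UM}_a:&\quad N_{var}\,(3c_w + 2c_g + c_r) = N_{var}\,(0.45 + 0.80 + 0.45) = 1.70\,N_{var} \le 3\,N_{var},\\
\mathrm{UM}_1:&\quad c_g + 2c_w + 3c_r = 0.40 + 0.30 + 1.35 = 2.05 \le 3 .
\end{aligned}
```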
further, in step 3, the UM is passed1And the method for optimizing the SSO by the HSS comprises the following steps: the improved continuous SSO process proposed by the present disclosure is described as follows:
and step 0, making i-j-gBest-1.
Step 1, by XjAnd calculating F (X)j);
Step 2, if F (X)gBest)<F(Xj) Let gBest be j;
step 3, if j<NsolLet j equal to j +1 and go to step 1;
step 4, let n*=1,N*50, and order
Figure BDA0002089054650000083
Wherein k is=1,2,…,Nvar
Step 5, making i ═ i +1 and j ═ 1;
step 6, from XjIn which a variable, e.g. x, is randomly selectedj,k
Step 7, let
Figure BDA0002089054650000091
Step 8, order
Figure BDA0002089054650000092
Step 9, if F (X)*)<F(Xj) Then let Xj=X*Turning to the step 10, otherwise, turning to the step 11;
step 10, if F (X)j)<F(XgBest) Let gBest be j;
if the current running time is equal to or greater than T, the process ends, step 11, and XgBestIs suitable for F (X)gBest) The final solution of (2);
step 12, if j<NsolLet j equal j +1 and go to step 6;
step 13, if i<N*Go to step 5;
step 14. mixing n*Increase by 1, N*Increase by 50, and order
Figure BDA0002089054650000093
Wherein k is 1,2, …, NvarAnd the process goes to step 5,
wherein,
Figure BDA0002089054650000094
for the optimum k variable, xi,j,kFor the current value of the kth variable in the jth solution,<<Δk,Δkis the step size, and ΔkFor example, if 100. delta. is equal tokIs the best case, xi,j,kWill require 100 generations to converge to
Figure BDA0002089054650000095
Further, in step 4, the method for applying the optimized SSO to big data processing through an artificial neural network is as follows:
Step A1, collecting big data conforming to the big data types;
Step A2, preprocessing and cleaning the collected big data, and extracting the data set obtained after cleaning;
Step A3, performing dimensionality reduction on the cleaned data set;
Step A4, dividing the dimension-reduced data set into a training set and a test set;
Step A5, determining a three-layer perceptron neural network with the structure 6-5-1 as the artificial neural network; this network has 41 parameters to be optimally designed, and the value range of each parameter is [-1, 1];
Step A6, determining the input variables of the artificial neural network as the big data conforming to the big data types, and the output variable of the artificial neural network as the big data output;
Step A7, determining the input variables and output variables of the artificial neural network;
Step A8, decoding the variables of each solution Xj in the optimized SSO into the parameters to be optimized of the artificial neural network, calculating the error of the trained neural network on the training set and/or the test set, and using the calculated error as the fitness F(Xj) input to the optimized SSO; decoding the final solution XgBest with fitness F(XgBest), obtained from running the optimized SSO (steps 0 to 14), into the parameters of the artificial neural network, and taking the obtained artificial neural network as the classification model;
Step A9, classifying newly collected big data conforming to the big data types through the classification model.
The big data conforming to the big data types include, but are not limited to, any big data of the types of traditional enterprise data, machine and sensor data, and social data. Traditional enterprise data includes customer data from CRM systems, traditional ERP data, inventory data, accounting data, and the like. Machine and sensor data includes call logs, smart meters, industrial equipment sensors, equipment logs, transaction data, and the like. Social data includes user behavior records, feedback data, and the like, from social media platforms such as Twitter and Facebook.
The big data output includes, but is not limited to, the data category confidence and the predicted value for any period of time in the future.
The application of the optimized SSO can greatly improve the classification accuracy or prediction capability of the artificial neural network on newly obtained big data. A sketch of the decoding in step A8 is given below.
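As an illustration of steps A5 and A8, the following Python sketch decodes a 41-dimensional solution Xj into the weights and biases of a 6-5-1 perceptron and evaluates its error as the fitness F(Xj); the sigmoid activation and the mean squared error are illustrative assumptions, since the patent does not fix them:

```python
import math

def decode_ann(X_j):
    """Split a 41-dimensional SSO solution (values in [-1, 1]) into the weights
    and biases of a 6-5-1 perceptron: 6*5 + 5 hidden parameters plus
    5*1 + 1 output parameters = 41, as stated in step A5."""
    W1 = [X_j[i * 6:(i + 1) * 6] for i in range(5)]   # 5 hidden neurons x 6 inputs
    b1 = X_j[30:35]
    W2, b2 = X_j[35:40], X_j[40]
    return W1, b1, W2, b2

def ann_fitness(X_j, samples):
    """Fitness F(X_j) used by the optimized SSO: the network's error on the
    training (and/or test) set; samples is a list of (six_inputs, target) pairs.
    Sigmoid activation and mean squared error are assumptions for this sketch."""
    W1, b1, W2, b2 = decode_ann(X_j)
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    err = 0.0
    for inputs, target in samples:
        hidden = [sig(sum(w * v for w, v in zip(W1[h], inputs)) + b1[h]) for h in range(5)]
        output = sig(sum(w * h for w, h in zip(W2, hidden)) + b2)
        err += (output - target) ** 2
    return err / len(samples)
```

In this scheme, ann_fitness plays the role of F in the optimized SSO, and the final solution XgBest is decoded with decode_ann into the classification or prediction model of step A8.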
Further, the method for applying the optimized SSO to big data processing through a support vector machine is as follows:
Step B1, collecting big data conforming to the big data types;
Step B2, preprocessing and cleaning the collected big data and extracting features to obtain the feature vectors of the big data;
Step B3, using the feature vectors of the big data as the training data set;
Step B4, randomly generating solutions Xj of uniform random variables, where the randomly selected variables in each Xj store the penalty factor C and the radial basis kernel parameter g of the support vector machine;
Step B5, calculating the fitness F(Xj) of Xj and inputting it into the optimized SSO; decoding the final solution XgBest with fitness F(XgBest), obtained from running the optimized SSO (steps 0 to 14), into the parameters of the support vector machine, and taking the obtained support vector machine as the classification model (see the sketch after this section);
Step B6, classifying newly collected big data conforming to the big data types through the classification model.
The big data conforming to the big data types include, but are not limited to, any big data of the types of traditional enterprise data, machine and sensor data, and social data. Traditional enterprise data includes customer data from CRM systems, traditional ERP data, inventory data, accounting data, and the like. Machine and sensor data includes call logs, smart meters, industrial equipment sensors, equipment logs, transaction data, and the like. Social data includes user behavior records, feedback data, and the like.
The big data output includes, but is not limited to, the data classification result and the category confidence.
The application of the optimized SSO can greatly improve the classification accuracy of the support vector machine on newly obtained big data.
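As an illustration of steps B4 and B5, the following Python sketch decodes two variables of a solution Xj into the penalty factor C and the RBF kernel parameter g and evaluates the classification error as the fitness F(Xj); the use of scikit-learn, the cross-validation scheme and the mapping of the raw variables onto parameter ranges are illustrative assumptions, not part of the patent:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svm_fitness(X_j, features, labels):
    """Fitness F(X_j) for the SVM application: X_j stores the penalty factor C
    and the RBF kernel parameter g (step B4).  The exponential mapping below,
    which places C and gamma in [2^-10, 2^10] for raw values in [-1, 1], is an
    assumption made only for this sketch."""
    C = 2.0 ** (X_j[0] * 10.0)
    g = 2.0 ** (X_j[1] * 10.0)
    model = SVC(C=C, gamma=g, kernel="rbf")
    accuracy = cross_val_score(model, features, labels, cv=5).mean()
    return 1.0 - accuracy            # classification error, minimized by the optimized SSO

# The final solution X_gBest returned by the optimized SSO (steps 0 to 14) is
# decoded the same way, and the SVC fitted with those (C, g) on the full
# training set is used as the classification model of step B5.
```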
The beneficial effects of the present disclosure are as follows: the present disclosure provides a harmonic group optimization method and application thereof, which improves the exploration and exploitation performance of conventional ABC and SSO and obtains a better balance point between exploration and exploitation. It has a wide application range and can be applied to methods that seek optimized solutions, such as artificial neural networks, genetic algorithms (GA), simulated annealing, tabu search, ant colony optimization, particle swarm optimization, differential evolution, estimation of distribution algorithms, the artificial bee colony algorithm (ABC), the imperialist competitive algorithm, reinforcement learning algorithms, Bayesian networks, the hurricane optimization algorithm, the gravitational search algorithm, human swarming, the bat algorithm and stochastic diffusion search. After parameters and other variables are optimized and adjusted by the disclosed method, it can be directly applied to fields such as big data processing, image processing, and audio and video recognition; it can greatly improve the classification and prediction accuracy of artificial neural networks, support vector machines and the like on newly obtained big data, and improve the accuracy and speed of image processing and audio and video recognition.
Drawings
The foregoing and other features of the present disclosure will become more apparent from the detailed description of the embodiments given below in conjunction with the drawings, in which like reference characters designate the same or similar elements throughout the several views. It is apparent that the drawings in the following description are merely exemplary of the present disclosure, and that other drawings may be derived from them by those skilled in the art without inventive effort. In the drawings:
FIG. 1 is a bar graph of the average success rate of SSO1 at different values of Δ and T in experiment 1;
FIG. 2 is a bar graph of the average success rate of SSO1 for different Δ values and benchmark problems in experiment 1;
FIG. 3 is a bar graph of the Fmin values obtained from ABC, GA, PSO, SSO1 and SSOa in experiment 2;
FIG. 4 is a bar graph of AIO(ABC, SSO1), AIO(SSOa, SSO1) and AIO(ABC, SSOa);
FIG. 5 is a bar graph of success rates at different T values in experiment 2;
FIG. 6 is a bar graph of the Fmin values of ABC, SSO1 and SSOa in experiment 2;
FIG. 7 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 1.25 in experiment 2;
FIG. 8 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 1.50 (a) in experiment 2;
FIG. 9 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 1.50 (b) in experiment 2;
FIG. 10 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 1.75 in experiment 2;
FIG. 11 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 2.00 in experiment 2;
FIG. 12 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 2.25 in experiment 2;
FIG. 13 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 2.50 in experiment 2;
FIG. 14 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 2.75 in experiment 2;
FIG. 15 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 3.00 in experiment 2;
FIG. 16 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 3.50 in experiment 2;
FIG. 17 is a box plot of the average fitness values of ABC, SSO1 and SSOa at T = 3.75 in experiment 2.
Detailed Description
The conception, specific structure and technical effects of the present disclosure will be clearly and completely described below in conjunction with the embodiments and the accompanying drawings to fully understand the objects, aspects and effects of the present disclosure. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Simplified group optimization (SSO) overview:
methods for continuous SSO treatment of variables are described in references [23-25 ].
On the basis of the conventional SSO, a new continuous SSO is proposed. Hereinafter, a conventional SSO is briefly introduced before a new SSO proposed by the present disclosure is proposed.
Conventional (discrete) SSO
Conventional (discrete) SSO very effectively solves discrete optimization problems (discrete variables only) [17-21,32] or continuous problems limited to a finite set of floating-point values [22], e.g., data mining problems in which the number of values each variable can take is limited.
The basic idea of all types of SSO is that each variable, such as xi+1,j,k, is updated according to the following equation:

[equation (1), the step-function update, shown as an image in the original]

where x is a randomly generated feasible value and cg, cp, cw are parameters; for details, see references [17-27,31,32].
Based on the step function shown in equation (1) and the generated random number ρk, discrete SSO updates each variable by checking the terms of the step function from first to last until the interval containing the generated random number is found.
Discrete SSO updates each variable to a value that has never occurred before only with a low probability cr. Thus, in most cases, discrete SSO updates variables only to a limited set of values, namely gBest, pBest and the variable's own current value, all of which are found among previous values; moreover, the update process is simple. These advantages make discrete SSO very effective in solving discrete problems, see references [17-22,32].
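Equation (1) appears only as an image in the original; the following Python sketch therefore assumes the standard SSO step-function form, in which each variable is copied from gBest, copied from pBest, kept unchanged, or redrawn at random according to where ρk falls among the cumulative thresholds cg, cp and cw (the numerical parameter values in the sketch are placeholders, not the patent's):

```python
import random

def discrete_sso_update(x, pbest, gbest, bounds, cg=0.70, cp=0.85, cw=0.95):
    """One all-variable discrete SSO update of solution x (a list of values).

    For each variable, a uniform random number rho decides whether the
    variable is copied from gBest (rho < cg), from pBest (rho < cp), kept
    unchanged (rho < cw), or replaced by a random feasible value otherwise."""
    new_x = []
    for k, xk in enumerate(x):
        rho = random.random()
        if rho < cg:
            new_x.append(gbest[k])
        elif rho < cp:
            new_x.append(pbest[k])
        elif rho < cw:
            new_x.append(xk)
        else:                       # low probability 1 - cw (= cr): random feasible value
            lo, hi = bounds[k]
            new_x.append(random.uniform(lo, hi))
    return new_x
```

Under this form, a variable takes a brand-new random value only with the small probability cr, which matches the behaviour described above.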
Continuous type SSO:
for a general continuous optimization problem, every variable of the final solution in the continuous optimization problem may never be found in all previous update processes. Therefore, there is a need to modify discrete SSOs for continuous problems without losing the simplicity and convenience of discrete SSOs [23-27,32 ]. Note that the advantages of continuous SSO are just the disadvantages of discrete SSO, and vice versa.
The basic flow of SSO variables has never changed, but is based on the step function of updating variables listed in equation (1), see references [17-27,31,32 ]. To date, the main trend in developing continuous SSO has been to add a step function to certain terms in equation (1) (see references [23-25]), or to combine SSO with other soft computing methods [26,27,31], such as differential evolution (see references [26,31]) and PSO (see references [ c.l. huang, "a particulate-based simplified timing optimization for reproducibility recovery schemes", Reliability Engineering & System Safety volume, 142, pp.221-230,2015 ]).
The all-variable update mechanism in SSO (e.g., equation (1)) can escape local optimization and explore unreachable/unrealized spaces. Therefore, the goal of continuous SSO is to explore a better approach in exploration, the main step is to add a random step size in some items of the update mechanism, which is obtained by multiplying some random number by the step size. As with the method identified by the present disclosure, we will explore better step sizes for the main purpose of the present disclosure.
Yeh first proposed a continuous SSO by adding a step size to the update mechanism for predicting time-series problems (W.C. Yeh, "A New Parameter-Free Simplified Swarm Optimization for Artificial Neural Network Training and its Application in Prediction of Time-Series", IEEE Transactions on Neural Networks and Learning Systems, vol. 24, pp. 661-665, 2013), as shown below:

[equation (2), shown as an image in the original]

where σk is a uniform random number generated in the range [-1, 1], and the step size Δj is the reciprocal of the number of generations for which the fitness of the jth solution has not improved.
Note that equation (2) also introduced the first adaptive-parameter concept: it uses the parameters cg (denoted cg,i,j in equation (2)) and cw (denoted cw,i,j in equation (2)) as variables, so that each solution has its own cg and cw in SSO, see the reference [W.C. Yeh, "A New Parameter-Free Simplified Swarm Optimization for Artificial Neural Network Training and its Application in Prediction of Time-Series", IEEE Transactions on Neural Networks and Learning Systems, vol. 24, pp. 661-665, 2013]. This differs from conventional SSO, in which all parameters are fixed from start to finish.
Because the value of the step size Δj in equation (2) is still too large even for larger generation numbers [23,24], Yeh proposed a new UM (referred to herein as UMa) that shortens the interval of all continuous random variables from [-1, 1] to [-0.5, 0.5], replaces Δj with

[Δk defined in equation (5), shown as an image in the original]

and further multiplies it by a random number in equation (2) (see reference [K. Kang, C. Bae, H.W.F. Yeung, and Y.Y. Chung, "A Hybrid Gravitational Search Algorithm with Swarm Intelligence and Deep Convolutional Feature for Object Tracking Optimization", Applied Soft Computing, https://doi.org/10.1016/j.asoc.2018.02.037, 2018]), as follows:

[equations (3), (4) and (5), shown as images in the original]

where cg and cr are parameters and Nvar is the number of variables; σ1,k and σ2,k are two uniform random variables generated in the range [-0.5, 0.5]; ρk is a uniform random variable generated in [0, 1]; and Uk and Lk are the upper and lower limits of the kth variable.
In equation (3), the solution can only be updated to a better value; otherwise, UMa keeps the original solution as it was before updating. Equation (4) is the most important part of UMa; the concept behind it is that each variable is updated to its own neighborhood, to the neighborhood of gBest, or to the interval between itself and gBest, according to whether ρk falls in the interval corresponding to cr (first term of equation (4)), cg (second term of equation (4)) or cw = 1 - cr - cg (third term of equation (4)).
Note:
1. Equation (4) corrects an error in reference [25], in which ρ2,var is missing.
2. In equation (4), if the updated variable is not feasible, it is set to the nearest boundary.
The disadvantages of current continuous SSO are as follows:
In the existing continuous SSO proposed in references [23-25], the step size is fixed, and every variable uses this step size for all generations.
From equation (2), the step term ρk·Δj has a minimum value of 1/Ngen, attained only when ρk = 1 and 1/Δj = Ngen, i.e., when the jth solution is never improved from start to finish. For example, if Ngen = 1000, then ρk·Δj = 0.001. For some problems this value is still too large, so the solution remains too far from the optimum.
In equations (4) and (5), if the step size Δk is too long near the end of the run, the update mechanism cannot converge to the optimal value; conversely, if the step size Δk is too short, the solution has insufficient momentum and takes a long time to reach the neighborhood of the optimum, especially in the early stage.
For example, let the current value of the kth variable in the jth solution be xi,j,k and let the optimal value of the kth variable be x̂k. If |x̂k - xi,j,k| << Δk (and Δk is further multiplied by two different random numbers generated in [-0.5, 0.5]), there is little likelihood that the next updated solution will be close to the optimum; on the other hand, if, in the best case, |x̂k - xi,j,k| = 100·Δk, then xi,j,k will require 100 generations to converge to x̂k.
Therefore, it is more reasonable to use an adaptive step size rather than a fixed step size in the search. Thus, the present disclosure proposes a new HSS-based continuous SSO update mechanism.
UM1 and HSS proposed by the present disclosure:
To improve the exploration performance, the harmonic-sequence-based HSS proposed by the present disclosure is given in equation (6) as follows:

[equation (6), shown as an image in the original]

where Nvar is the number of variables, Uk and Lk are the upper and lower limits of the kth variable, i is the current generation number, k is the index of the current variable, and the sign ⌊·⌋ denotes the floor function.
If the sequence 1, 1/2, 1/3, 1/4, ... is referred to as the harmonic sequence, the harmonic-sequence-based HSS proposed by the present disclosure can be expressed as follows:

[equation (7), shown as an image in the original]

If each genetic cycle lasts 50 generations, the value of the step size Δi,k decreases from one genetic cycle to the next. Table 1 shows the HSS-based step sizes for Uk - Lk = 10 and Nvar = 100.

Table 1. 20 step sizes
Period i:  1        2        3        4        5        6        7        8        9        10
Δi,k:      0.08000  0.04000  0.02667  0.02000  0.01600  0.01333  0.01143  0.01000  0.00889  0.00800
Period i:  11       12       13       14       15       16       17       18       19       20
Δi,k:      0.00727  0.00667  0.00615  0.00571  0.00533  0.00500  0.00471  0.00444  0.00421  0.00400

After long runs or many generations, gBest and some solutions are already close to the optimum. Updates to these solutions need only small changes to move closer to the optimum without leaving the optimal region. Because the harmonic sequence decreases, the HSS adjusts the step size from longer steps in early generations to shorter steps in later generations, thereby overcoming the above drawback of continuous SSO.
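The closed form of equations (6) and (7) is shown only as images in the original. The following Python sketch assumes the form Δi,k = (Uk - Lk)/(Δ·Nvar·n), with n = ⌈i/50⌉ the index of the current genetic cycle; this assumed form reproduces the values of Table 1 for Uk - Lk = 10, Nvar = 100 and Δ = 1.25:

```python
import math

def hss_step_size(i, U_k, L_k, n_var=100, delta=1.25, cycle=50):
    """Harmonic step size for generation i and variable k (a sketch under the
    assumed closed form stated in the lead-in).

    n = ceil(i / cycle) is the index of the current 50-generation genetic
    cycle, so the step follows the harmonic sequence 1, 1/2, 1/3, ..."""
    n = math.ceil(i / cycle)            # 1 for generations 1..50, 2 for 51..100, ...
    return (U_k - L_k) / (delta * n_var * n)

# Usage example: the first three cycles for U_k - L_k = 10, N_var = 100, delta = 1.25
print([round(hss_step_size(i, 10.0, 0.0), 5) for i in (1, 51, 101)])  # [0.08, 0.04, 0.02667]
```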
Single-variable UM (UM1):
In most soft computing methods, each solution is only slightly updated. For example, the UM of PSO is a vector-based UM given by the two equations listed below (see references [7,8,27]):

Vi+1,j = w·Vi,j + c1·σ1·(G - Xi,j) + c2·σ2·(Pi - Xi,j)   (8)
Xi+1,j = Xi,j + Vi+1,j   (9)

where Vi,j and Xi,j are respectively the velocity and position of the jth particle in the ith generation; w, c1 and c2 are constants; σ1 and σ2 are two uniform random variables generated in the interval [0, 1]; Pi is the pBest of solution i; and G = PgBest is gBest. Note: PgBest in SSO is equal to G in equation (8).
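A minimal Python sketch of the vector-based PSO update of equations (8) and (9) for one particle is given below; the velocity clamp to [-2, 2] and the values w = 0.9, c1 = c2 = 2.0 follow the PSO settings reported later for experiment 2:

```python
import random

def pso_update(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, v_max=2.0):
    """Update one particle's position and velocity per equations (8) and (9).

    x, v and pbest are the particle's position, velocity and personal best;
    gbest is the global best position G = P_gBest."""
    new_v, new_x = [], []
    for k in range(len(x)):
        sigma1, sigma2 = random.random(), random.random()   # uniform in [0, 1]
        vk = w * v[k] + c1 * sigma1 * (gbest[k] - x[k]) + c2 * sigma2 * (pbest[k] - x[k])
        vk = max(-v_max, min(v_max, vk))                     # clamp velocity to [-2, 2]
        new_v.append(vk)
        new_x.append(x[k] + vk)                              # equation (9)
    return new_x, new_v
```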
In ABC (see references [13-16]), one variable is randomly selected for each solution to be updated. The update operators in the conventional genetic algorithm (GA) (see references [3,4]) can change half of the variables by a one-point crossover and can update two variables by a two-point mutation. However, in SSO, all variables must be updated.
In order to reduce the number of random numbers and change solutions gradually and stably, UM1 updates only one randomly selected variable of each solution. Let i be the current generation number, let xj,k be a randomly selected variable of the jth solution Xj, and let X*j be the temporary solution obtained by modifying xj,k of Xj. The following update equations are obtained:

[the UM1 single-variable update equations, including equation (11), shown as images in the original]

where σk,1 and σk,2 are uniform random variables generated in [-0.5, 0.5] and [0, 1], respectively; ρk is a uniform random variable generated in [0, 1]; gk denotes the kth variable of PgBest; and Lk and Uk are respectively the lower limit and the upper limit of the kth variable.
Note:
(1) The UM1 proposed in the present disclosure removes the first subscript of each solution used in UMa to reduce run time; for example, Xi,j and xi,j,k in UMa are simplified to Xj and xj,k in the proposed UM1.
(2) For simplicity and without loss of generality, the above equations are formulated for the minimization problem.
(3) If x*j,k is not feasible after updating, it is changed to its nearest boundary before X* is substituted to calculate F(X*).
For example, the following function is to be minimized:

[example benchmark function, shown as an image in the original]

Table 2 shows examples of updating X7 by the proposed UM1 for different values of ρ1.

Table 2. Examples of UM1
[Table 2, shown as an image in the original]

The mark ! indicates that, because 1.216 > U1 = 1.0, the updated value is replaced by its nearest boundary.
The mark !! indicates that, because F(X7) = F(-0.41467, -0.3) = 0.008395 < F(XgBest) = F(-0.5, -0.9) = 0.0596, gBest = 3 is replaced by gBest = 7.
Therefore, the following property holds.
Attribute 1: when UM1 is implemented, the expected number of comparisons in each solution is reduced from 3·Nvar to 3, and the number of updated (and feasibility-tested) variables is reduced from Nvar to 1.
Proof: the number of feasibility tests per updated variable is one for both UMa and UM1; however, for each solution, UMa tests every updated variable, whereas UM1 tests only one. Furthermore, since checking the first, second and third terms of the update equation requires 1, 2 and 3 comparisons respectively, and cr < cw < cg with cr + cw + cg = 1, the expected number of comparisons per solution for UMa based on equation (4) satisfies
Nvar·(3cw + 2cg + cr) ≤ Nvar·(3cw + 3cg + 3cr) = 3·Nvar,   (15)
and for UM1,
(cg + 2cw + 3cr) ≤ (3cg + 3cw + 3cr) = 3.   (16)
If UM1 is implemented based on equation (11), then σvar,1, σvar,2 and ρvar are required for every variable in equation (4), but only for one variable in equation (11).
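The UM1 equations themselves are shown only as images in the original. The following Python sketch therefore assumes, consistently with the stated definitions of σk,1, σk,2, ρk, gk, Lk and Uk, that the selected variable moves into the gBest neighborhood with probability cg, into its own neighborhood with probability cw, and is redrawn uniformly from [Lk, Uk] with the remaining probability cr; infeasible values are clipped to the nearest boundary as in note (3):

```python
import random

def um1_update(X_j, gbest, bounds, delta, cg=0.40, cw=0.15):
    """Single-variable update UM1 (a sketch under the assumptions in the lead-in).

    X_j is the current solution, gbest is P_gBest, bounds[k] = (L_k, U_k), and
    delta[k] is the HSS step size for variable k.  Returns the index k of the
    selected variable and the tentative solution X* with only x_{j,k} changed."""
    k = random.randrange(len(X_j))                 # randomly selected variable
    L_k, U_k = bounds[k]
    sigma1 = random.uniform(-0.5, 0.5)
    sigma2 = random.random()
    rho = random.random()
    if rho < cg:
        x_star = gbest[k] + sigma1 * delta[k]      # gBest neighbourhood
    elif rho < cg + cw:
        x_star = X_j[k] + sigma1 * delta[k]        # own neighbourhood
    else:                                          # probability cr: redraw in [L_k, U_k]
        x_star = L_k + sigma2 * (U_k - L_k)
    x_star = min(max(x_star, L_k), U_k)            # clip to nearest boundary (note (3))
    X_star = list(X_j)
    X_star[k] = x_star
    return k, X_star
```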
The improved continuous SSO procedure proposed by the present disclosure is described as follows:
Step 0, let i = j = gBest = 1;
Step 1, create an arbitrary Xj and calculate F(Xj);
Step 2, if F(Xj) < F(XgBest), let gBest = j;
Step 3, if j < Nsol, let j = j + 1 and go to step 1;
Step 4, let n* = 1, N* = 50, and set the step size

[Δk computed by the HSS, shown as an image in the original]

where k = 1, 2, ..., Nvar;
Step 5, let i = i + 1 and j = 1;
Step 6, randomly select a variable, say xj,k, from Xj;
Step 7, let

[x*j,k updated according to UM1, shown as an image in the original]

Step 8, let

[X* formed from Xj and x*j,k, shown as an image in the original]

Step 9, if F(X*) < F(Xj), let Xj = X* and go to step 10; otherwise go to step 11;
Step 10, if F(Xj) < F(XgBest), let gBest = j;
Step 11, if the current running time is equal to or greater than T, the procedure ends, and XgBest is the final solution with fitness F(XgBest);
Step 12, if j < Nsol, let j = j + 1 and go to step 6;
Step 13, if i < N*, go to step 5;
Step 14, increase n* by 1, increase N* by 50, reset the step size

[Δk recomputed by the HSS, shown as an image in the original]

where k = 1, 2, ..., Nvar, and go to step 5.
Here x̂k denotes the optimal value of the kth variable, xi,j,k is the current value of the kth variable in the jth solution, and Δk is the step size; if |x̂k - xi,j,k| << Δk the update easily overshoots, whereas if, in the best case, |x̂k - xi,j,k| = 100·Δk, then xi,j,k requires 100 generations to converge to x̂k. A complete sketch of this procedure is given below.
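The following self-contained Python sketch assembles steps 0 to 14 using the assumed HSS step-size form and UM1 case order discussed above, with the parameter values reported for the experiments (cr = 0.45, cg = 0.40, cw = 0.15, Δ = 1.25); it is an illustration of the procedure, not a reproduction of the original implementation, which was written in C:

```python
import math, random, time

def sso1_minimize(F, bounds, T=1.25, n_sol=100, delta_param=1.25,
                  cg=0.40, cw=0.15, cycle=50):
    """Sketch of the improved continuous SSO (SSO1), steps 0 to 14.

    F is the fitness function to minimize, bounds a list of (L_k, U_k) pairs,
    and T the time limit in seconds used as the stopping criterion."""
    n_var = len(bounds)
    # Steps 0-3: random initial population and initial gBest
    X = [[random.uniform(L, U) for (L, U) in bounds] for _ in range(n_sol)]
    fit = [F(x) for x in X]
    gbest = min(range(n_sol), key=lambda j: fit[j])
    # Step 4: initial HSS step sizes (genetic cycle n* = 1)
    n_star, i = 1, 0
    delta = [(U - L) / (delta_param * n_var * n_star) for (L, U) in bounds]
    start = time.time()
    while time.time() - start < T:                       # step 11 stopping criterion
        i += 1
        if i > n_star * cycle:                           # steps 13-14: next genetic cycle
            n_star += 1
            delta = [(U - L) / (delta_param * n_var * n_star) for (L, U) in bounds]
        for j in range(n_sol):                           # steps 5-12
            k = random.randrange(n_var)                  # step 6: pick one variable
            L_k, U_k = bounds[k]
            rho, s1, s2 = random.random(), random.uniform(-0.5, 0.5), random.random()
            if rho < cg:                                 # step 7: UM1 (assumed case order)
                xk = X[gbest][k] + s1 * delta[k]
            elif rho < cg + cw:
                xk = X[j][k] + s1 * delta[k]
            else:
                xk = L_k + s2 * (U_k - L_k)
            xk = min(max(xk, L_k), U_k)                  # clip to nearest boundary
            X_star = X[j][:k] + [xk] + X[j][k + 1:]      # step 8: tentative solution
            f_star = F(X_star)
            if f_star < fit[j]:                          # step 9
                X[j], fit[j] = X_star, f_star
                if fit[j] < fit[gbest]:                  # step 10
                    gbest = j
            if time.time() - start >= T:                 # step 11
                break
    return X[gbest], fit[gbest]

# Usage example: minimize the 50-variable sphere function
if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)
    best_x, best_f = sso1_minimize(sphere, [(-10.0, 10.0)] * 50, T=1.0)
    print(best_f)
```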
Embodiment: performance evaluation:
In this embodiment, two experiments, experiment 1 and experiment 2, were performed based on 18 continuous numerical functions of 50 variables extended from benchmark problems, see references [13-16,26,25], as shown in Table A. The data type flags in Table A are C: characteristic data, U: unimodal data, M: multimodal data, S: separable data, N: non-separable data.

Table A. Experimental data
[Table A, shown as images in the original]
Experiment design:
for ease of identification, the SSO of UMa proposed in reference [25] (k.kang, c.bae, h.w.f.yuung, and y.y.chung, "a Hybrid qualitative Search Algorithm with switch intelligent and Deep conditional feed for Object Tracking Optimization", Applied Soft Computing, https:// doi.org/10.1016/j.asoc.2018.02.037, 2018) is referred to as SSOa, and the SSO implemented by HSS and UM1 in this example is referred to as SSO1 in this example.
In experiment 1, only the role of the HSS and the different associated step size strategies were tested to determine the optimal value of the step size in the proposed HSS. The step size with the best results in experiment 1 was used in SSO1 of experiment 2.
In experiment 2, the emphasis was shifted to comparing the performance of SSO1 with the other four algorithms: ABC [13-16], SSOa [25], GA [3,4] and PSO [7,8 ]; GA and PSO are two most advanced algorithms in evolutionary computation and population intelligence; ABC [13-16] and SSOa [25] are the most common algorithms of the 50 well-known benchmark problems, with variables less than or equal to 30, if the stopping criteria are fitness function evaluation number and run time, respectively.
Each algorithm tested in all experiments was written in the C programming language. Wherein, ABC code is adapted from http:// mf.
SSOa, GA and PSO are all from http:// integration and chromatography. Each test was applied to Intel Core i7-5960X CPU 3.00GHz, 16GB RAM and 64 bit Win10, run time units CPU seconds.
In all tests, all parameters applied in SSO1, SSOa and ABC during the experiment were directly from the reference [25]]For a more accurate comparison: c. Cr=0.45,cg=0.40,cw=0.15。
For ABC, all parameters follow references [13-16]: the colony size was 50, the number of food sources was 25, and if no improvement was observed after 750 updates, the solution was regenerated.
For GA, one-point crossover, two-point mutation and elitist selection were used, with a crossover rate of 0.7 and a mutation rate of 0.3, see references [3,4].
For PSO, w = 0.9 and c1 = c2 = 2.0 in equation (8); if the value of the velocity function is greater than 2 or less than -2, it is set to 2 or -2, respectively [7,8].
ABC may calculate fitness values more than once per solution in each generation, see references [13-16]; therefore, using the number of generations as the stopping criterion would be incorrect and unfair.
In addition, the second UM in ABC, the "onlooker" phase, takes extra time to update solutions.
Thus, for a fair comparison, a time limit (denoted by T) of 1.25, 1.50, ..., up to 3.75 CPU seconds was used as the stopping criterion for each algorithm in experiment 2, in order to observe the trend and change of each algorithm.
Note: every run is independent for all algorithms. For example, in the case of T = 1.50, each run of SSO1 must be restarted from 0 seconds rather than extended by 0.25 seconds from the corresponding run with T = 1.25.
For each benchmark function, the average run time is the time to obtain 50 gBest values. In all tables, each subscript represents the ranking of the value. Furthermore, Nrun = 55 and Nsol = 100.
In practical applications, algorithms are implemented and executed many times to find the best result; only the solution with the best result is kept and used. Therefore, most related published papers simply report and compare the best results obtained by the algorithms, see references [13-16,25,27]. Accordingly, the present disclosure also focuses on the best result Fmin in experiment 2.
Experiment 1 finding the optimal step size Δ:
experimental results of experiment 1, including Favg、Fmin、Fmax、FstdSuccess rate (percentage of cases that successfully solved the problem), and the step length based Δ of experiment 1 ═ 1.0The number of fitness function calculations of (1.25) and (1.50) of 55 runs over 3.70 seconds as shown in table 3 and fig. 1 and 2, fig. 1 is the average success rate of SSO1 at different values of Δ and T in experiment 1, and fig. 2 is the average success rate of SSO1 at different values of Δ and problems in experiment 1.
Although all correlation values in table 3 tend to decrease as T increases, some fluctuation is found, for example, F at T2.00avg0.04467947084586 improves to 0.03980129242576 at T2.25 and then decreases to 0.06607165055859 at T2.50 at Δ 1.25 due to the randomness of the soft calculations and independence of each run.
From Table 3, it can be seen that Δ ≧ 1.25 has the optimum F for T ≧ 2.00 (except T ≧ 3.50)minOptimum F for T ═ 1.50, 2.00,2.25 and 2.75avg、FmaxAnd Fstd(ii) a For T1.75, 3.25 and 3.50, Δ 1.00 has the best Favg、Fmin、FmaxAnd Fstd(except that F when T is 3.25min) (ii) a For Favg,FmaxAnd FstdΔ ═ 1.75 is the optimum of the minimum and maximum T, i.e., T ═ 1.25 and T ═ 3.75.
Table 3 results obtained from different delta values in experiment 1
Figure BDA0002089054650000211
Figure BDA0002089054650000221
Fig. 1 and 2 show the average success rates for different values of t and baseline problems, respectively. The success rate is defined as the percentage of the final gBest that is equal to the best solution. For example, a success rate of 62.020% means 100 · 55 · 18 ═ 99,000 for Δ ═ 1.25 and t ═ 1.25, and finally 62.020% of gBest equals the relevant optimization, where N is equal to Nsol=100,NrunWhere 18 is the number of benchmark questions.
From the above it can be observed that Δ 1.25 always has the best success rate on average for different T values and reference problems in fig. 1 and 2. Note that the difference between Δ ═ 1.25 and Δ ═ 1.00 and between Δ ═ 1.25 and Δ ═ 1.50 is up to 27%, i.e., the probability that Δ ═ 1.25 obtains the optimum value is 27% higher than that of Δ ═ 1.00 and Δ ═ 1.50. SSO1 at Δ ═ 1.25 can solve 14 of the 18 baseline problems, which is also the best of the three settings of Δ.
As can be seen from table 3 and fig. 1 and 2, Δ ═ 1.25 is superior to 1.00 and 1.50 in the quality of the solution. Therefore, Δ is used in the proposed HSS and experiment 2 as 1.25 without further optimization of Δ.
Experiment 2: comparison between ABC, GA, PSO, SSO1 and SSOa:
The averages of Fmin obtained from the five tested algorithms (i.e., ABC, GA, PSO, SSO1 and SSOa) over 55 runs, for time limits from T = 1.25 to T = 3.75 seconds, are shown in Fig. 3, which is a bar graph of the Fmin values obtained from ABC, GA, PSO, SSO1 and SSOa in experiment 2. The results obtained from ABC, SSO1 and SSOa are shown in Fig. 6 together with 11 box plots (Figs. 7 to 17) of the average fitness values of ABC, SSO1 and SSOa in experiment 2 for T = 1.25, 1.50 (a), 1.50 (b), 1.75, 2.00, 2.25, 2.50, 2.75, 3.00, 3.50 and 3.75, respectively; Fig. 6 is a bar graph of the Fmin values of ABC, SSO1 and SSOa and graphically depicts the overall results. Note: the 11 box plots also show the best results; furthermore, to show more detail about the essential part of the results (from the best quartile to the third quartile), the range from the third quartile to the worst fitness result is truncated.
The average Fmin improvement rates (AIO) of SSO1 over ABC and SSOa, i.e., AIO(ABC, SSO1) and AIO(SSOa, SSO1), are shown in Fig. 4, which is a bar graph of AIO(ABC, SSO1), AIO(SSOa, SSO1) and AIO(ABC, SSOa), where

[the definition of AIO(α, β), shown as an image in the original]

and α and β each denote a tested algorithm.
The associated success rates are summarized in Fig. 5, which is a bar graph of the success rates at different T values in experiment 2. The numbers of fitness function calculations are shown in Table 4.
In addition, some fluctuations are observed in some results of experiment 2; these fluctuations are caused by the random nature of soft computing and the fact that each run is independent.
Table 4. Number of fitness function calculations in experiment 2
[Table 4, shown as images in the original]
Comprehensive analysis of solution quality:
Figs. 3, 6 and 7 highlight the effectiveness (i.e., solution quality) of ABC, SSO1 and SSOa. As can be seen from Fig. 3, both the genetic algorithm and the particle swarm algorithm perform much worse than the other three algorithms; this observation has also been made in other studies [13-16]. Furthermore, as the running time increases, the weaknesses of the genetic algorithm and the particle swarm algorithm become more pronounced. Thus, the remainder of the disclosure focuses only on ABC, SSO1 and SSOa.
As shown in Figs. 7 to 17, which are box plots of the average fitness values of ABC, SSO1 and SSOa in experiment 2 at T = 1.25, 1.50, 1.75, 2.00, 2.25, 2.50, 2.75, 3.00, 3.50 and 3.75, respectively, the results come closer to the best solution as the running time increases. As can be seen from Fig. 6, the bar graph of the Fmin values of ABC, SSO1 and SSOa in experiment 2, ABC is better than SSO1 and SSOa for smaller run times, e.g., T = 1.25 and T = 1.50. However, thanks to modern computer technology, such small run-time differences are not important. In contrast, SSO1 outperforms ABC for T ≥ 2.75 seconds and outperforms SSOa in average Fmin for all T. For T = 3.75, the gap between SSO1 and ABC reaches almost 0.01, and the gap between SSO1 and SSOa is almost 0.005. Furthermore, as T increases, ABC does not improve Favg to the same extent as SSO1. These results provide evidence that ABC is easily trapped in a local optimum for large T.
As shown in Figs. 7 to 17, ABC produces better Fmax and Fstd values, whereas SSOa produces the worst Fmax and Fstd values. The reason is that the UMa used in SSOa must update all variables, whereas in ABC only one variable is selected and SSO1 uses UM1, for each solution of each generation. Another reason ABC always appears more stable than SSO1 and SSOa is that ABC easily falls into local optima and lacks the ability to escape them.
In summary, the SSO1 method proposed by the present disclosure has significant advantages over the other methods.
Average FminImprovement rate (AIO)
To measure the amount of improvement in average Fmin obtained from ABC, SSO1 and SSOa, the average Fmin improvement rate (AIO) is given in fig. 4, where AIO is defined as follows from ABC to SSO1, namely AIO (ABC, SSO 1):
Figure BDA0002089054650000242
measurement of F between ABC and SSO1 by AIO (ABC, SSO1)minTo improve one F in SSO1minThe higher the AIO (ABC, SSO1), the more effective the SSO1, and vice versa. For example, for T3.75 in fig. 4, AIO (ABC, SSO1) ═ 55.673% indicates the average F of SSO1minOne unit improvement of (A) will result in an average F of ABC and SSO1minThe difference between increases 0.55673. Likewise, AIO (SSOa, SSO1) and AIO (ABC, SSOa) are available.
As can be seen from FIG. 4, the value of AIO(ABC, SSO1) increases from -21.227% (worst) at T = 1.25 to 55.673% (best) at T = 3.75; AIO(ABC, SSOa) decreases from 48.396% (best) at T = 1.25 to 4.936% (worst) at T = 3.25, and then increases to 29.960% at T = 3.75. Thus, the following conclusions can be drawn:
1. SSO1 always improves much faster than ABC, i.e., as T increases, the F_avg of ABC does not improve to the same extent as that of SSO1.
2. From T = 1.25 to T = 3.25, the improvement in the average F_min of SSOa tends to be similar to that of SSO1. However, after T = 3.25, the performance of SSO1 improves more than that of SSOa.
3. The quality and robustness of the solutions of SSOa exceed those of ABC.
The reason for these conclusions is that the third UM of ABC, the "scout" phase, in which the current solution is randomly regenerated if it does not improve within a predetermined number of iterations, is not very effective in escaping local optima. In addition, the UMa used in SSOa needs to update all variables, which makes it more powerful in global search but causes the quality of its solutions to improve only gradually and slowly.
Success rate:
FIG. 5 summarizes the success rate, where a run is counted as successful if its obtained value exceeds the exact solution by at most 0.0001; this criterion is desirable because the obtained value should be as close to the exact value as possible. FIG. 5 shows observations similar to those in FIG. 7: for T < 2.75, ABC has a better success rate, but SSO1 always has a better F_min.
Efficiency:
Table 4 mainly shows the numbers of fitness function calculations, in order to compare the efficiency of ABC, SSO1 and SSOa.
N_ABC is the total number of fitness-function evaluations of ABC, and n_ABC is the number of evaluations needed to obtain the final best solution of ABC. The pairs N_SSO1 and n_SSO1, and N_SSOa and n_SSOa, are defined analogously to N_ABC and n_ABC, with both components of each pair derived from the same algorithm (N_SSO1 and n_SSO1 from SSO1, and N_SSOa and n_SSOa from SSOa).
In Table 4, the ratio n_ABC/N_ABC is less than 42%, i.e., after 42% of the fitness-function evaluations of ABC, the final best solution never changes. Furthermore, as the running time increases, the ratio n_ABC/N_ABC decreases. These two observations further demonstrate that ABC is well suited to local search, but its global search capability is weak. The ratios N_SSO1/N_ABC and N_SSOa/N_ABC are both greater than 918, and n_SSO1/n_ABC and n_SSOa/n_ABC are each at least 2,100. Therefore, UM1 and UMa are both at least 918 times faster than the UM implemented in ABC, i.e., UM1 and UMa are more efficient than ABC. In Table 4, the ratios n_SSOa/N_SSOa and n_SSO1/N_SSO1 are at least 95.5% and 95.47%, respectively. Thus, in contrast to ABC, SSO1 and SSOa continue to improve their solutions almost until the end of their respective runs, i.e., SSO1 and SSOa are very strong global search algorithms.
Based on these observations, SSO1 achieves a better balance between exploration and utilization than ABC and SSOa.
Conclusion:
In the embodiments of the present disclosure, UM1 updates one variable in each solution to improve the exploration capability of SSO, and a new HSS is introduced to replace the fixed step size to improve the utilization capability of SSO.
Through extensive experimental studies on 18 high-dimensional functions, the HSS with Δ = 1.25 achieved better performance than Δ = 1.00 and Δ = 1.50. Furthermore, the proposed UM1, using the proposed HSS with Δ = 1.25, achieves a better compromise between exploration and utilization than ABC, GA, PSO and SSOa.
Note: ABC always has a smaller deviation (F_std), which makes its results more consistent than those of SSO1 and SSOa, although, as discussed above, it is also more prone to being trapped in local optima.
However, the proposed algorithm of the present disclosure, like ABC, GA, PSO, SSOa and all current soft-computing methods, still has some limitations: a) for some of the analyzed benchmark problems, the configuration considered globally optimal does not have the highest performance; b) the proposed heuristic requires knowledge of the limits (upper and lower) of the optimal values. Therefore, the proposed algorithm must be improved to overcome the two main obstacles described above.
Although the results of the algorithm proposed by the present disclosure are superior to those of ABC and SSOa in both solution quality and efficiency, the proposed algorithm may be sensitive to its constant parameters. Therefore, considering the evolution process of SSO, this sensitivity will be explored in future work in order to adjust the step size.
Abbreviations/terms
Δ represents a step size (step length);
ρ_k represents a uniform random number in [0, 1] created for the kth variable;
ABC represents an artificial bee colony algorithm;
AIO denotes the average F_min improvement rate;
F(X_{i,j}) represents the fitness function of X_{i,j};
gBest represents the historically best solution;
pBest represents its own historically best solution;
F_avg represents the average fitness of the 50 best gBest;
F_max represents the worst fitness of the 50 best gBest;
F_min represents the best fitness of the 50 best gBest;
F_std represents the standard deviation of the fitness of the 50 best gBest;
GA represents a genetic algorithm;
g_k represents the kth variable of P_gBest;
N_● represents the mean number of fitness evaluations of algorithm ●;
n_● represents the mean number of fitness evaluations needed to find the final gBest of algorithm ●;
N_avg represents the average number of fitness evaluations;
n_avg represents the average number of fitness evaluations needed to find the final gBest;
N_gBest represents the number of gBest;
N_gen represents the number of generations;
N_run represents the number of independent runs;
N_sol represents the number of solutions;
N_var represents the number of variables;
P_gBest represents the current gBest (the historically best solution);
P_i represents the current pBest of the ith solution (its own historically best solution);
p_{i,j} represents the jth variable of P_i;
PSO represents a particle swarm optimization algorithm/a particle swarm algorithm;
SSO stands for simplified population optimization algorithm/simplified population algorithm;
T represents the running-time limit;
UM1 represents the univariate (single-variable) update mechanism;
UMa represents the all-variable update mechanism;
X_{i,j} denotes the jth solution of the ith generation;
x_{i,j,k} represents the kth variable of X_{i,j};
X_{gen,sol} represents the sol-th solution of the gen-th generation.

Claims (6)

1. A method of harmonic group optimization, the method comprising the steps of:
step 1, constructing a harmonic step length strategy HSS;
step 2, establishing a univariate update mechanism UM1;
step 3, optimizing the SSO by means of UM1 and the HSS to obtain an optimized SSO;
and step 4, applying the optimized SSO to big data processing.
2. The method of claim 1, wherein in step 1, the method for constructing the harmonic step size strategy HSS comprises: the construction of a harmonic sequence-based HSS is shown as follows:
[Equation image: definition of the step size Δ_{i,k}]
wherein N_var is the number of variables, U_k and L_k are the upper and lower limits of the kth variable, i is the current generation number, and k is the index of the current variable; if the sequence 1, 1/2, 1/3, 1/4, ... is referred to as a harmonic sequence, the HSS can be expressed based on the harmonic sequence as follows:
[Equation image: expression of Δ_{i,k} based on the harmonic sequence]
if each genetic cycle lasts 50 generations, the value of the step size Δ_{i,k} decreases from one 50-generation cycle to the next.
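Although the precise expressions are contained only in the equation images above, a minimal Python sketch of such a harmonic step-size schedule is given below for illustration; the function name harmonic_step_size, the starting value (U_k - L_k)/2 and the 1-based cycle index are assumptions of the sketch, not limitations of the claim:

    def harmonic_step_size(i, U, L, cycle_len=50):
        """Sketch of a harmonic step-size schedule (HSS).

        i         -- current generation number (1-based)
        U, L      -- lists of upper and lower bounds of the variables
        cycle_len -- generations per cycle (50 in the claim)

        The step size of every variable shrinks with the harmonic sequence
        1, 1/2, 1/3, ... as the cycle index grows, so exploration dominates
        early in the run and utilization dominates later.
        """
        cycle_index = (i - 1) // cycle_len + 1      # 1, 2, 3, ... every 50 generations
        return [(u - l) / (2.0 * cycle_index) for u, l in zip(U, L)]

For example, with U_k - L_k = 100 the sketched step size would fall from 50 in generations 1-50 to 25 in generations 51-100 and to about 16.7 in generations 101-150, which matches the qualitative behaviour described in the claim.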
3. The method of claim 1, wherein in step 2, the method for establishing the univariate update mechanism UM1 comprises: in UM1, only one randomly selected variable of each solution is updated; let i be the current generation number and x_{j,k} be a randomly selected variable of the jth solution X_j,
[Equation image: the temporary solution X*]
which is obtained by modifying the variable x_{j,k} of X_j; the resulting update is given by the following equations:
[Equation images: piecewise update rule for the selected variable, expressed in terms of g_k, x_{j,k}, the step size Δ, σ_{k,1}, σ_{k,2}, ρ_k, L_k and U_k]
σ_{k,1} and σ_{k,2} are uniform random variables generated in [-0.5, 0.5] and [0, 1], respectively;
ρ_k is a uniform random variable generated in [0, 1]; g_k represents the kth variable of P_gBest;
L_k and U_k are the lower limit and the upper limit of the kth variable, respectively.
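The piecewise rule itself is only available as equation images, so the following Python sketch merely illustrates the general shape of a single-variable update of this kind; the branch thresholds cg and cw, the choice of branches (move toward gBest, perturb the current value, or re-draw uniformly within the bounds) and the clipping to [L_k, U_k] are assumptions of the sketch rather than the claimed formula:

    import random

    def um1_update(X_j, gbest, step, L, U, cg=0.5, cw=0.9):
        """Sketch of the univariate update mechanism UM1: exactly one randomly
        selected variable of the solution X_j is modified.

        The thresholds cg/cw and the three branches below are illustrative
        assumptions; the claimed piecewise rule is given by the equation
        images of claim 3.
        """
        k = random.randrange(len(X_j))         # index of the randomly selected variable
        rho = random.random()                  # rho_k, uniform in [0, 1]
        sigma1 = random.uniform(-0.5, 0.5)     # sigma_{k,1}, uniform in [-0.5, 0.5]
        sigma2 = random.random()               # sigma_{k,2}, uniform in [0, 1]

        X_star = list(X_j)                     # temporary solution X*
        if rho < cg:                           # move toward the kth variable of gBest
            X_star[k] = gbest[k] + step[k] * sigma1
        elif rho < cw:                         # perturb the current value
            X_star[k] = X_j[k] + step[k] * sigma1
        else:                                  # re-draw uniformly within the bounds
            X_star[k] = L[k] + sigma2 * (U[k] - L[k])

        X_star[k] = min(max(X_star[k], L[k]), U[k])   # keep the new value in [L_k, U_k]
        return X_star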
4. The harmonic group optimization method of claim 2 or 3, wherein in step 3, the method for optimizing the SSO by means of UM1 and the HSS comprises the following steps:
step 0, let i = j = gBest = 1;
step 1, generate X_j and calculate F(X_j);
step 2, if F(X_j) < F(X_gBest), let gBest = j;
step 3, if j < N_sol, let j = j + 1 and go to step 1;
step 4, let n* = 1 and N* = 50, and set
[Equation image: initial value of the step size Δ_k]
wherein k = 1, 2, …, N_var;
step 5, let i = i + 1 and j = 1;
step 6, randomly select a variable x_{j,k} from X_j;
step 7, let
[Equation image: update of the selected variable according to UM1]
step 8, let
[Equation image: construction of the temporary solution X*]
step 9, if F(X*) < F(X_j), then let X_j = X* and go to step 10; otherwise, go to step 11;
step 10, if F(X_j) < F(X_gBest), let gBest = j;
step 11, if the current running time is equal to or greater than T, the process ends, and X_gBest, with fitness F(X_gBest), is the final solution;
step 12, if j < N_sol, let j = j + 1 and go to step 6;
step 13, if i < N*, go to step 5;
step 14, increase n* by 1 and N* by 50, set
[Equation image: updated value of the step size Δ_k]
wherein k = 1, 2, …, N_var, and go to step 5.
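Putting steps 0-14 together, a minimal Python sketch of the optimized SSO loop could look as follows; it reuses the hypothetical helpers harmonic_step_size and um1_update from the sketches above, assumes a minimization problem, and replaces the running-time limit T with a simple wall-clock check, all of which are assumptions of the sketch rather than a definitive reading of the claim:

    import random, time

    def harmonic_sso(fitness, L, U, n_sol=50, T=3.75, cycle_len=50):
        """Sketch of the optimized SSO of claim 4 (steps 0-14)."""
        n_var = len(L)
        # steps 0-3: random initial solutions and the initial gBest
        X = [[random.uniform(L[k], U[k]) for k in range(n_var)] for _ in range(n_sol)]
        F = [fitness(x) for x in X]
        gbest = min(range(n_sol), key=lambda j: F[j])

        i = 1
        start = time.time()
        while time.time() - start < T:                       # step 11: running-time limit
            step = harmonic_step_size(i, U, L, cycle_len)    # steps 4 / 14: harmonic step size
            for j in range(n_sol):                           # steps 5-12: one pass per solution
                X_star = um1_update(X[j], X[gbest], step, L, U)   # steps 6-8: UM1
                f_star = fitness(X_star)
                if f_star < F[j]:                            # step 9: accept the better solution
                    X[j], F[j] = X_star, f_star
                    if F[j] < F[gbest]:                      # step 10: update gBest
                        gbest = j
            i += 1                                           # next generation
        return X[gbest], F[gbest]                            # final gBest and its fitness

In this sketch the 50-generation cycle counters n* and N* of steps 4 and 14 are absorbed into harmonic_step_size through the generation index i.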
5. Application of the harmonic group optimization method of claim 4 in big data processing through an artificial neural network, wherein the method is applied as follows:
step A1, collecting big data conforming to the big data type;
step A2, preprocessing and cleaning the acquired big data, and extracting a data set obtained after the big data is cleaned;
step A3, performing dimension reduction on the cleaned data set;
step A4, dividing the dimensionality-reduced data set into a training set and a test set;
step A5, determining a three-layer perceptron neural network with an artificial neural network structure of 6-5-1, wherein the neural network requires the optimal design of 41 parameters, and the value range of the optimal design parameters of the neural network is [-1, 1];
step A6, determining the input variable of the artificial neural network as big data conforming to big data type, and the output variable of the artificial neural network as big data output;
step A7, determining input variables and output variables of the artificial neural network;
step A8, decoding the variables of each X_j in the optimized SSO into the parameters to be optimized of the artificial neural network, calculating the error of the neural network on the training set and/or the test set after training, using the calculated error as the fitness F(X_j) that is fed back into the optimized SSO, decoding the final solution X_gBest with fitness F(X_gBest), obtained from the run of the optimized SSO, into the parameters of the artificial neural network, and taking the obtained artificial neural network as the classification model;
step A9, classifying the newly collected big data which is in accordance with the big data type through a classification model.
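To illustrate step A8 only, the sketch below decodes a 41-dimensional SSO solution into a 6-5-1 perceptron and returns its training error as the fitness F(X_j); the parameter layout (30 input-to-hidden weights, 5 hidden biases, 5 hidden-to-output weights, 1 output bias), the tanh/sigmoid activations and the mean-squared-error measure are assumptions of the sketch, since the claim fixes only the 6-5-1 structure, the 41 parameters and the value range [-1, 1]:

    import numpy as np

    def ann_fitness(x, X_train, y_train):
        """Decode a 41-dimensional solution x into a 6-5-1 MLP and return its
        error on the training set (used as the SSO fitness F(X_j))."""
        x = np.asarray(x)
        W1 = x[:30].reshape(6, 5)                 # input -> hidden weights (6*5)
        b1 = x[30:35]                             # hidden biases (5)
        W2 = x[35:40].reshape(5, 1)               # hidden -> output weights (5*1)
        b2 = x[40]                                # output bias (1); 30+5+5+1 = 41

        hidden = np.tanh(X_train @ W1 + b1)                      # hidden layer
        output = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))       # sigmoid output

        return float(np.mean((output.ravel() - y_train) ** 2))   # MSE as fitness

With the harmonic_sso sketch above, step A8 would then amount to something like harmonic_sso(lambda x: ann_fitness(x, X_train, y_train), L=[-1.0]*41, U=[1.0]*41), after which the final X_gBest is decoded once more into the network used as the classification model.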
6. Application of the harmonic group optimization method of claim 4 in big data processing through a support vector machine, characterized in that the method is applied as follows:
step B1, collecting big data conforming to the big data type;
step B2, preprocessing and cleaning the acquired big data and extracting features to obtain a feature vector of the big data;
step B3, using the feature vector of the big data as a training data set;
step B4, randomly generating uniform random variables to form the solutions X_j, wherein each X_j stores the penalty factor C and the radial basis kernel parameter g of the support vector machine;
step B5, calculating the fitness F(X_j) of each X_j and feeding it into the optimized SSO, decoding the final solution X_gBest with fitness F(X_gBest), obtained from the run of the optimized SSO, into the parameters of the support vector machine, and taking the obtained support vector machine as the classification model;
and step B6, classifying the newly acquired big data which is in accordance with the big data type through a classification model.
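Analogously, steps B4-B5 can be sketched by treating each solution as the pair (C, g) and scoring it with cross-validated accuracy; the use of scikit-learn's SVC, 5-fold cross-validation, and the search bounds in the usage comment are assumptions of the sketch, since the claim only requires that C and g be stored in X_j and that F(X_j) be fed back into the optimized SSO:

    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def svm_fitness(x, X_train, y_train):
        """Decode a 2-dimensional solution x = (C, g) into an RBF-kernel SVM and
        return a minimizable fitness (here: 1 - cross-validated accuracy)."""
        C, g = float(x[0]), float(x[1])
        clf = SVC(C=C, gamma=g, kernel="rbf")
        acc = cross_val_score(clf, X_train, y_train, cv=5).mean()
        return 1.0 - acc

    # Illustrative use with the harmonic_sso sketch (bounds are assumptions):
    # best_x, best_f = harmonic_sso(lambda x: svm_fitness(x, X_train, y_train),
    #                               L=[0.01, 0.0001], U=[100.0, 10.0])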
CN201910497327.7A 2019-06-10 2019-06-10 Harmonic group optimization method and application thereof Active CN112070200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910497327.7A CN112070200B (en) 2019-06-10 2019-06-10 Harmonic group optimization method and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910497327.7A CN112070200B (en) 2019-06-10 2019-06-10 Harmonic group optimization method and application thereof

Publications (2)

Publication Number Publication Date
CN112070200A true CN112070200A (en) 2020-12-11
CN112070200B CN112070200B (en) 2024-04-02

Family

ID=73658186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910497327.7A Active CN112070200B (en) 2019-06-10 2019-06-10 Harmonic group optimization method and application thereof

Country Status (1)

Country Link
CN (1) CN112070200B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113141102A (en) * 2021-04-13 2021-07-20 李怡心 Specific harmonic elimination method based on improved hybrid particle swarm taboo algorithm
CN115830411A (en) * 2022-11-18 2023-03-21 智慧眼科技股份有限公司 Biological feature model training method, biological feature extraction method and related equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881703A (en) * 2015-05-20 2015-09-02 东北石油大学 Tent mapping improved bee colony algorithm for image threshold segmentation
CN108470018A (en) * 2018-02-22 2018-08-31 中国铁道科学研究院 Smoothing method and device based on the intrinsic mode functions that empirical mode decomposition decomposes
CN109816000A (en) * 2019-01-09 2019-05-28 浙江工业大学 A kind of new feature selecting and parameter optimization method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881703A (en) * 2015-05-20 2015-09-02 东北石油大学 Tent mapping improved bee colony algorithm for image threshold segmentation
CN108470018A (en) * 2018-02-22 2018-08-31 中国铁道科学研究院 Smoothing method and device based on the intrinsic mode functions that empirical mode decomposition decomposes
CN109816000A (en) * 2019-01-09 2019-05-28 浙江工业大学 A kind of new feature selecting and parameter optimization method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WEI-CHANG YEH: "New Parameter-Free Simplified Swarm Optimization for Artificial Neural Network Training and Its Application in the Prediction of Time Series", IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, vol. 24, no. 4, 30 April 2013 (2013-04-30), pages 661-665, XP011494122, DOI: 10.1109/TNNLS.2012.2232678 *
WEI-CHANG YEH: "A new harmonic continuous simplified swarm optimization", APPLIED SOFT COMPUTING JOURNAL, 18 June 2019 (2019-06-18), pages 1-10 *
GONG ZHONGSHENG ET AL.: "Harmonic balance analysis of nonlinear circuits based on an improved hybrid bee colony algorithm", Application Research of Computers, vol. 35, no. 7, 31 July 2018 (2018-07-31), pages 1970-1995 *
LV GANYUN ET AL.: "An interharmonic analysis method based on the particle swarm optimization algorithm", Transactions of China Electrotechnical Society, vol. 24, no. 12, 31 December 2009 (2009-12-31), pages 156-161 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113141102A (en) * 2021-04-13 2021-07-20 李怡心 Specific harmonic elimination method based on improved hybrid particle swarm taboo algorithm
CN115830411A (en) * 2022-11-18 2023-03-21 智慧眼科技股份有限公司 Biological feature model training method, biological feature extraction method and related equipment
CN115830411B (en) * 2022-11-18 2023-09-01 智慧眼科技股份有限公司 Biological feature model training method, biological feature extraction method and related equipment

Also Published As

Publication number Publication date
CN112070200B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
Xuan et al. Multi-model fusion short-term load forecasting based on random forest feature selection and hybrid neural network
Ishibuchi et al. Analysis of interpretability-accuracy tradeoff of fuzzy systems by multiobjective fuzzy genetics-based machine learning
Jia et al. An optimized RBF neural network algorithm based on partial least squares and genetic algorithm for classification of small sample
Li et al. Development and investigation of efficient artificial bee colony algorithm for numerical function optimization
Yang et al. Fast economic dispatch in smart grids using deep learning: An active constraint screening approach
Gholizadeh et al. Optimal design of structures subjected to time history loading by swarm intelligence and an advanced metamodel
Han et al. Information-utilization-method-assisted multimodal multiobjective optimization and application to credit card fraud detection
Bukharov et al. Development of a decision support system based on neural networks and a genetic algorithm
Wang et al. A hybrid optimization-based recurrent neural network for real-time data prediction
He et al. Optimising the job-shop scheduling problem using a multi-objective Jaya algorithm
CN106909933A A three-stage multi-view feature fusion method for electricity-theft classification and prediction
Zhang et al. Efficient and merged biogeography-based optimization algorithm for global optimization problems
Orouskhani et al. Evolutionary dynamic multi-objective optimization algorithm based on Borda count method
WO2020198520A1 (en) Process and system including an optimization engine with evolutionary surrogate-assisted prescriptions
CN113409898B (en) Molecular structure acquisition method and device, electronic equipment and storage medium
CN112070200B (en) Harmonic group optimization method and application thereof
CN106453294A (en) Security situation prediction method based on niche technology with fuzzy elimination mechanism
CN115185804A (en) Server performance prediction method, system, terminal and storage medium
Han et al. An efficient genetic algorithm for optimization problems with time-consuming fitness evaluation
Cao et al. Differential evolution algorithm with dynamic multi-population applied to flexible job shop schedule
Zhang et al. A convolutional neural network based on an evolutionary algorithm and its application
Li et al. Learning high-order fuzzy cognitive maps via multimodal artificial bee colony algorithm and nearest-better clustering: Applications on multivariate time series prediction
Wang Enhanced differential evolution with generalised opposition–based learning and orientation neighbourhood mining
Yu [Retracted] Research on Optimization Strategy of Task Scheduling Software Based on Genetic Algorithm in Cloud Computing Environment
CN110689320A (en) Large-scale multi-target project scheduling method based on co-evolution algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 528000 No. 18, Jiangwan Road, Chancheng District, Guangdong, Foshan

Patentee after: Foshan University

Country or region after: China

Address before: 528000 No. 18, Jiangwan Road, Chancheng District, Guangdong, Foshan

Patentee before: FOSHAN University

Country or region before: China

CP03 Change of name, title or address