CN112070200A - A Harmonic Group Optimization Method and Its Application - Google Patents
- Publication number: CN112070200A
- Application number: CN201910497327.7A
- Authority: CN (China)
- Prior art keywords: sso, big data, variable, gbest, harmonic
- Legal status: Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
Abstract
The present disclosure provides a harmonic swarm optimization method and its application. A new continuous SSO is proposed that integrates a single-variable update mechanism (UM1) and a novel harmonic step-size strategy (HSS) to improve continuous simplified swarm optimization (SSO). UM1 and HSS balance the exploration and exploitation abilities of continuous SSO on high-dimensional, multivariate, and multimodal numerical continuous benchmark functions. The update mechanism UM1 updates only one variable per solution, entirely unlike SSO, which must update all variables. Within UM1, HSS strengthens exploitation by reducing the step size according to a harmonic sequence. Numerical experiments on 18 high-dimensional functions confirm the efficiency of the method provided by this disclosure. The method improves the exploration and exploitation performance of traditional ABC and SSO, strikes an excellent balance between the two, and has broad applicability: it can greatly improve the classification and prediction accuracy of artificial neural networks, support vector machines, and similar models on newly acquired big data.
Description
Technical Field
The present disclosure relates to the technical fields of soft computing applications and big data processing, and in particular to a harmonic swarm optimization method and its applications.
Background
Practical applications abound with optimization problems, such as green supply chain management, big data, and network reliability. It is also common for real-world problems to introduce small variations into the optimization model (e.g., integer variables, nonlinear equations, and/or multiple objectives). However, even with integer variables alone, such real-life problems become very difficult to solve, and solving them with traditional methods within an acceptable time is far more complex and costly. Consequently, the focus of much recent research has shifted to soft computing methods that produce high-quality solutions within acceptable execution times rather than exact solutions; see references [1-26].
More and more new soft computing methods keep emerging, such as artificial neural networks [2, 37], genetic algorithms (GA) [3, 4, 34], simulated annealing [5], tabu search [6], ant colony optimization [11], particle swarm optimization (PSO) [7, 8, 34], differential evolution [9, 10], estimation of distribution algorithms [12], the artificial bee colony algorithm (ABC) [13-16], the simplified swarm optimization algorithm (SSO) [17-26], the imperialist competitive algorithm [35], reinforcement learning algorithms [36], Bayesian networks [38], the hurricane optimization algorithm [39], the gravitational search algorithm [40], human swarming [41], the bat algorithm [42], and stochastic diffusion search [43]. These methods are inspired by natural phenomena and have in recent years been used to solve larger problems in science and technology. Soft computing has therefore attracted notable attention and has been applied to a range of real-world problems, for example through swarm intelligence and evolutionary algorithms (EAs); see references [44] and [45], respectively.
In soft computing, exploitation, based on local search, seeks the optimum within the neighborhood of known solutions, improving solution quality at the risk of being captured by a local optimum [2-26]; exploration, the focus of global search, searches the unexplored solution space for the optimum to avoid getting stuck in local optima, at the cost of running time [2-26]. Exploitation and exploration are opposites, yet they complement each other. Notably, a balance between exploration and exploitation must be sought to guarantee the performance of any soft computing method.
SSO (the simplified swarm optimization algorithm), introduced by Yeh [17], is one of the more recently proposed population-based soft computing methods. SSO has demonstrated high efficiency and effectiveness in known numerical experiments [17-26]. Moreover, SSO can flexibly handle various real-world problems and has gradually been applied to different optimization applications, such as supply chain management [20, 21], redundancy allocation problems [18, 19, 27], data mining [20, 21], and other optimization problems [32, 33].
The all-variable update mechanism (UMa) is the basis of all SSO variants, so every variable is updated in each solution. However, UMa always explores undiscovered solution space, and even when gBest is close to the current solution [17-27, 31, 32], it may take extra time to reach the optimum.
References cited in this disclosure:
[1] L. Zadeh, "Fuzzy Logic, Neural Networks, and Soft Computing", Communications of the ACM, vol. 37, pp. 77-84, 1994.
[2] W. McCulloch and W. Pitts, "A Logical Calculus of Ideas Immanent in Nervous Activity", Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943.
[3] A. Fraser, "Simulation of Genetic Systems by Automatic Digital Computers. I. Introduction", Australian Journal of Biological Sciences, vol. 10, pp. 484-491, 1957.
[4] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Reading, MA: Addison-Wesley Professional, 1989.
[5] S. Kirkpatrick, C. Gelatt, and M. Vecchi, "Optimization by simulated annealing", Science, vol. 220, pp. 671-680, 1983.
[6] F. Glover, "Future Paths for Integer Programming and Links to Artificial Intelligence", Computers and Operations Research, vol. 13, pp. 533-549, 1986.
[7] J. Kennedy and R. Eberhart, "Particle swarm optimization", Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, pp. 1942-1948, 1995.
[8] M. F. Tasgetiren, Y. C. Liang, M. Sevkli, and G. Gencyilmaz, "A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem", European Journal of Operational Research, vol. 177, pp. 1930-1947, 2007.
[9] R. Storn, "On the usage of differential evolution for function optimization", Proceedings of the 1996 Biennial Conference of the North American Fuzzy Information Processing Society, pp. 519-523, 1996.
[10] R. Storn and K. Price, "Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces", Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[11] M. Dorigo and L. Gambardella, "Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem", IEEE Transactions on Evolutionary Computation, vol. 1, pp. 53-66, 1997.
[12] P. Lozano, Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Kluwer, Boston, MA, 2002.
[13] D. Karaboga, "An Idea Based on Honey Bee Swarm for Numerical Optimization", Technical Report TR06, Engineering Faculty, Erciyes University, 2005.
[14] F. Liu, Y. Sun, G. Wang, and T. Wu, "An Artificial Bee Colony Algorithm Based on Dynamic Penalty and Lévy Flight for Constrained Optimization Problems", Arabian Journal for Science and Engineering, pp. 1-20, 2018.
[15] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm", Applied Soft Computing, vol. 8, pp. 687-697, 2008.
[16] Y. C. Liang, A. H. L. Chen, and Y. H. Nien, "Artificial bee colony for workflow scheduling", 2014 IEEE Congress on Evolutionary Computation (CEC), pp. 558-564, 2014.
[17] W. C. Yeh, "Study on Quickest Path Networks with Dependent Components and Apply to RAP", Technical Report NSC97-2221-E-007-099-MY3, Distinguished Scholars Research Project granted by National Science Council, Taiwan.
[18] W. C. Yeh, "A Two-Stage Discrete Particle Swarm Optimization for the Problem of Multiple Multi-Level Redundancy Allocation in Series Systems", Expert Systems with Applications, vol. 36, pp. 9192-9200, 2009.
[19] C. M. Lai, W. C. Yeh, and Y. C. Huang, "Entropic simplified swarm optimization for the task assignment problem", Applied Soft Computing, vol. 58, pp. 115-127, 2017.
[20] Y. Jiang, P. Tsai, W. C. Yeh, and L. Cao, "A honey-bee-mating based algorithm for multilevel image segmentation using Bayesian theorem", Applied Soft Computing, vol. 52, pp. 1181-1190, 2017.
[21] W. C. Yeh, "Optimization of the Disassembly Sequencing Problem on the Basis of Self-adaptive Simplified Swarm Optimization", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 42, pp. 250-261, 2012.
[22] W. C. Yeh, "Novel Swarm Optimization for Mining Classification Rules on Thyroid Gland Data", Information Sciences, vol. 197, pp. 65-76, 2012.
[23] W. C. Yeh, "A New Parameter-Free Simplified Swarm Optimization for Artificial Neural Network Training and its Application in Prediction of Time-Series", IEEE Transactions on Neural Networks and Learning Systems, vol. 24, pp. 661-665, 2013.
[24] C. Bae, K. Kang, G. Liu, and Y. Y. Chung, "A novel real time video tracking framework using adaptive discrete swarm optimization", Expert Systems with Applications, vol. 64, pp. 385-399, 2016.
[25] K. Kang, C. Bae, H. W. F. Yeung, and Y. Y. Chung, "A Hybrid Gravitational Search Algorithm with Swarm Intelligence and Deep Convolutional Feature for Object Tracking Optimization", Applied Soft Computing, https://doi.org/10.1016/j.asoc.2018.02.037, 2018.
[26] P. C. Chang and W. C. Yeh, "Simplified Swarm Optimization with Differential Evolution Mutation Strategy for Parameter Search", ICUIMC '13: Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication, Article No. 25, ACM, New York, NY, USA, ISBN 978-1-4503-1958-4, doi: 10.1145/2448556.2448581, 2013.
[27] C. L. Huang, "A particle-based simplified swarm optimization algorithm for reliability redundancy allocation problems", Reliability Engineering & System Safety, vol. 142, pp. 221-230, 2015.
[28] R. Azizipanah-Abarghooee, T. Niknam, M. Gharibzadeh, and F. Golestaneh, "Robust, fast and optimal solution of practical economic dispatch by a new enhanced gradient-based simplified swarm optimisation algorithm", Generation, Transmission & Distribution, vol. 7, pp. 620-635, 2013.
[29] H. Xie and B. Q. Hu, "New extended patterns of fuzzy rough set models on two universes", International Journal of General Systems, vol. 43, pp. 570-585, 2014.
[30] F. Rossi, D. Velázquez, I. Monedero, and F. Biscarri, "Artificial neural networks and physical modeling for determination of baseline consumption of CHP plants", Expert Systems with Applications, vol. 41, pp. 4568-4669, 2014.
[31] P. Chang and X. He, "Macroscopic Indeterminacy Swarm Optimization (MISO) Algorithm for Real-Parameter Search", Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC2014), Beijing, China, pp. 1571-1578, 2014.
[32] C. Chou, C. Huang, and P. Chang, "A RFID Network Design Methodology for Decision Problem in Health Care", Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC2014), Beijing, China, pp. 1586-1592, 2014.
[33] N. Esfandiari, M. Babavalian, A. Moghadam, and V. Tabar, "Knowledge discovery in medicine: Current issue and future trend", Expert Systems with Applications, vol. 41, pp. 4434-4463, 2014.
[34] O. Abedinia, M. S. Naderi, A. Jalili, and A. Mokhtarpour, "A novel hybrid GA-PSO technique for optimal tuning of fuzzy controller to improve multi-machine power system stability", International Review of Electrical Engineering, vol. 6, no. 2, pp. 863-873, 2011.
[35] O. Abedinia, N. Amjady, K. Kiani, H. A. Shayanfar, and A. Ghasemi, "Multiobjective environmental and economic dispatch using imperialist competitive algorithm", International Journal on Technical and Physical Problems of Engineering, 2012.
[36] M. Bagheri, V. Nurmanova, O. Abedinia, and M. S. Naderi, "Enhancing Power Quality in Microgrids With a New Online Control Strategy for DSTATCOM Using Reinforcement Learning Algorithm", IEEE Access, vol. 6, pp. 38986-38996, 2018.
[37] O. Abedinia, N. Amjady, and N. Ghadimi, "Solar energy forecasting based on hybrid neural network and improved metaheuristic algorithm", Computational Intelligence, vol. 34, no. 1, pp. 241-260, 2017.
[38] Y. Jiang, P. Tsai, W. C. Yeh, and L. Cao, "A Honey-bee-mating Based Algorithm for Multilevel Image Segmentation Using Bayesian theorem", Applied Soft Computing, vol. 52, pp. 1181-1190, 2017.
[39] R. M. Rizk-Allah, R. A. El-Sehiemy, and G. G. Wang, "A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution", Applied Soft Computing, vol. 63, pp. 206-222, 2018.
[40] W. F. Yeung, G. Liu, Y. Y. Chung, E. Liu, and W. C. Yeh, "Hybrid Gravitational Search Algorithm with Swarm Intelligence for Object Tracking", The 23rd International Conference on Neural Information Processing (ICONIP 2016), pp. 213-221, 2016.
[41] L. B. Rosenberg, "Human swarming, a real-time method for parallel distributed intelligence", Swarm/Human Blended Intelligence Workshop (SHBI), pp. 1-7, 2015.
[42] Q. Liu, L. Wu, W. Xiao, F. Wang, and L. Zhang, "A novel hybrid bat algorithm for solving continuous optimization problems", Applied Soft Computing, vol. 73, pp. 67-82, 2018.
[43] S. J. Nasuto, J. M. Bishop, and S. Lauria, "Time complexity analysis of the Stochastic Diffusion Search", Proc. Neural Computation '98, Vienna, Austria, pp. 260-266, 1998.
Summary of the Invention
The present disclosure provides a harmonic swarm optimization method and its application. A new continuous SSO is proposed that integrates a single-variable update mechanism (UM1) and a novel harmonic step-size strategy (HSS), improving continuous simplified swarm optimization (SSO). UM1 and HSS balance the exploration and exploitation abilities of continuous SSO on high-dimensional, multivariate, and multimodal numerical continuous benchmark functions. The update mechanism UM1 updates only one variable, entirely unlike SSO, which must update all variables. Within UM1, HSS strengthens exploitation by reducing the step size according to a harmonic sequence. Numerical experiments on 18 high-dimensional functions confirm the efficiency of the method provided by this disclosure.
To achieve the above objective, according to one aspect of the present disclosure, a harmonic swarm optimization method is provided, comprising the following steps:

Step 1: construct the harmonic step-size strategy HSS;

Step 2: establish the single-variable update mechanism UM1;

Step 3: improve continuous simplified swarm optimization SSO with UM1 and HSS to obtain the optimized SSO;

Step 4: apply the optimized SSO to big data processing.
Further, in Step 1, the harmonic step-size strategy HSS is constructed as follows. To improve exploitation performance, the harmonic-sequence-based HSS is given by

Δi,k = (Uk − Lk)/(Nvar·(⌊i/50⌋ + 1)),

where Nvar is the number of variables, Uk and Lk are the upper and lower bounds of the k-th variable, i is the current generation, k is the index of the current variable, and ⌊·⌋ is the floor function. Calling the sequence 1, 1/2, 1/3, 1/4, ... the harmonic sequence, the harmonic-sequence-based HSS can equivalently be written as the base step (Uk − Lk)/Nvar scaled by the n-th harmonic term 1/n, where n = ⌊i/50⌋ + 1. With each cycle lasting 50 generations, the value of the step size Δi,k decreases from one 50-generation cycle to the next.
gBest and some solutions come closer to the optimum after long runs or many generations; updates to these solutions need only small changes to approach the optimum further without leaving the optimal region. Because the harmonic sequence decreases, the step size is adjusted from longer in the early generations to shorter in the later generations of HSS, thereby overcoming the defect of continuous SSO.
Further, in Step 2, the single-variable update mechanism UM1 is established as follows. In the main soft computing methods, each solution is updated only slightly. To reduce the number of random values and change solutions gradually, only one randomly selected variable is updated in UM1 for each solution. Let i be the current generation number, let xj,k be a variable randomly selected from the j-th solution Xj, and let X* be the temporary solution obtained by modifying xj,k of Xj. The following update is obtained:

x*k = gk + σk,1·Δi,k, if ρk ∈ [0, cg)
x*k = xj,k + σk,1·Δi,k, if ρk ∈ [cg, cg + cw)
x*k = Lk + σk,2·(Uk − Lk), if ρk ∈ [cg + cw, 1]

where σk,1 and σk,2 are uniform random variables generated in [-0.5, 0.5] and [0, 1], respectively; ρk is a uniform random variable generated in [0, 1]; gk denotes the k-th variable of PgBest; and Lk and Uk are the lower and upper bounds of the k-th variable, respectively.
Notes:

(1) In the proposed UM1, the first subscript of each solution and of the variables in UMa is dropped to reduce running time; for example, Xi,j and xi,j,k in UMa are simplified to Xj and xj,k in the proposed UM1.

(2) For simplicity and without loss of generality, the above equation is formulated for minimization problems.

(3) If the above value of x*k is infeasible after the update, it must be changed to its nearest bound before being substituted into X* to compute F(X*).
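As an illustration, the following minimal Python sketch implements this single-variable update under the piecewise rule given above. The clamping of an infeasible value to its nearest bound follows note (3); the function name um1_update and the argument layout are assumptions made for readability.

```python
import random

def um1_update(x, gbest, k, delta, cg, cw, L, U):
    """UM1: update only the k-th variable of solution x.

    With probability cg the variable moves near gBest's k-th variable,
    with probability cw it moves near its own current value, and
    otherwise it is re-drawn uniformly within [L[k], U[k]].
    """
    rho = random.random()                    # rho_k in [0, 1]
    sigma1 = random.uniform(-0.5, 0.5)       # sigma_{k,1} in [-0.5, 0.5]
    sigma2 = random.random()                 # sigma_{k,2} in [0, 1]
    if rho < cg:
        xk = gbest[k] + sigma1 * delta[k]    # gBest's neighborhood
    elif rho < cg + cw:
        xk = x[k] + sigma1 * delta[k]        # own neighborhood
    else:
        xk = L[k] + sigma2 * (U[k] - L[k])   # random re-draw (exploration)
    xk = min(max(xk, L[k]), U[k])            # note (3): clamp to nearest bound
    x_star = list(x)
    x_star[k] = xk
    return x_star
```

Only one random index, one ρk, and at most two σ values are generated per solution, which is what drives the cost reduction stated in Property 1 below.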
For example, consider minimizing a benchmark function F, where gen denotes the number of generations. The proposed UM1 has the following properties:

Property 1: After implementing UM1, the expected number of comparisons in each solution is reduced from 3·Nvar to 3, and the number of random values from Nvar to 1.
Proof: The number of feasibility tests for each updated variable is one for both UMa and UM1. However, for each solution, UMa tests every updated variable, whereas UM1 tests only one. Moreover, for each solution, the expected number of comparisons for UMa based on Eq. (4) is

Nvar·(cg + 2cw + 3cr) ≤ Nvar·(3cg + 3cw + 3cr) = 3·Nvar,

because cr < cw < cg, cr + cw + cg = 1, and 1, 2, and 3 comparisons are made with probabilities cg, cw, and cr in the first, second, and third terms of Eq. (4), respectively. Likewise, for UM1,

(cg + 2cw + 3cr) ≤ (3cg + 3cw + 3cr) = 3.
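For instance, with illustrative parameter values cg = 0.7, cw = 0.2, and cr = 0.1 (satisfying cr < cw < cg), the expected number of comparisons per updated variable is 0.7·1 + 0.2·2 + 0.1·3 = 1.4 ≤ 3; UMa incurs this cost for all Nvar variables of a solution, whereas UM1 incurs it only once.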
Further, in Step 3, the optimized SSO obtained by improving continuous simplified swarm optimization SSO with UM1 and HSS is described as follows:
Step 0: Let i = j = gBest = 1.

Step 1: Generate Xj randomly and compute F(Xj).

Step 2: If F(Xj) < F(XgBest), let gBest = j.

Step 3: If j < Nsol, let j = j + 1 and go to Step 1.

Step 4: Let n* = 1, N* = 50, and Δk = (Uk − Lk)/(n*·Nvar), where k = 1, 2, ..., Nvar.

Step 5: Let i = i + 1 and j = 1.

Step 6: Randomly select a variable from Xj, say xj,k.

Step 7: Let x*k be the value obtained by updating xj,k according to UM1.

Step 8: Let X* = (xj,1, ..., x*k, ..., xj,Nvar).

Step 9: If F(X*) < F(Xj), let Xj = X* and go to Step 10; otherwise go to Step 11.

Step 10: If F(Xj) < F(XgBest), let gBest = j.

Step 11: If the current running time is equal to or greater than T, the process ends, and XgBest is the final solution with fitness F(XgBest).

Step 12: If j < Nsol, let j = j + 1 and go to Step 6.

Step 13: If i < N*, go to Step 5.

Step 14: Increase n* by 1, increase N* by 50, let Δk = (Uk − Lk)/(n*·Nvar), where k = 1, 2, ..., Nvar, and go to Step 5.
Here, the optimal value of the k-th variable is denoted x°k, xi,j,k is the current value of the k-th variable in the j-th solution, Δk is the step size, and δ measures the gap between them. Regarding the relation between δ and Δk: if δ << Δk, the next update is unlikely to land near the optimum; conversely, even in the best case with δ = 100·Δk, xi,j,k will need 100 generations to converge to x°k.
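Assembling Steps 0 to 14 into a runnable loop gives the sketch below, which calls the um1_update sketch shown earlier for Steps 7 and 8. A fixed evaluation budget max_evals stands in for the wall-clock limit T of Step 11, and the default parameter values are illustrative assumptions.

```python
import random

def harmonic_sso(F, L, U, n_sol=50, cg=0.7, cw=0.2, max_evals=100_000):
    """Sketch of the improved continuous SSO (Steps 0-14), calling the
    um1_update sketch given earlier for Steps 7-8. A fixed evaluation
    budget replaces the wall-clock limit T of Step 11."""
    n_var = len(L)
    # Steps 0-3: random initial population, fitness values, and gBest
    X = [[random.uniform(L[k], U[k]) for k in range(n_var)] for _ in range(n_sol)]
    fit = [F(x) for x in X]
    gbest = min(range(n_sol), key=lambda j: fit[j])
    evals, i, n, N_star = n_sol, 1, 1, 50
    delta = [(U[k] - L[k]) / (n * n_var) for k in range(n_var)]      # Step 4
    while evals < max_evals:
        i += 1                                                       # Step 5
        for j in range(n_sol):
            k = random.randrange(n_var)                              # Step 6
            x_star = um1_update(X[j], X[gbest], k, delta, cg, cw, L, U)  # Steps 7-8
            f_star = F(x_star)
            evals += 1
            if f_star < fit[j]:                                      # Step 9
                X[j], fit[j] = x_star, f_star
                if fit[j] < fit[gbest]:                              # Step 10
                    gbest = j
        if i >= N_star:                                              # Steps 13-14
            n, N_star = n + 1, N_star + 50
            delta = [(U[k] - L[k]) / (n * n_var) for k in range(n_var)]
    return X[gbest], fit[gbest]
```

For instance, harmonic_sso(lambda x: sum(v * v for v in x), [-5.0] * 30, [5.0] * 30) minimizes a 30-dimensional sphere function.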
Further, in Step 4, the method of applying the optimized SSO to big data processing through an artificial neural network is:

Step A1: collect big data conforming to a big data type;

Step A2: preprocess and clean the collected big data, then extract the data set obtained after cleaning;

Step A3: perform dimensionality reduction on the cleaned data set;

Step A4: divide the reduced data set into a training set and a test set;

Step A5: fix the artificial neural network structure as a three-layer 6-5-1 perceptron; the network requires 41 parameters to be optimized, each taking values in [-1, 1];

Step A6: set the input variables of the artificial neural network to the big data conforming to the big data type, and the output variable to the big data output;

Step A7: determine the input and output variables of the artificial neural network;

Step A8: decode the variables of each Xj in the optimized SSO into the parameters to be optimized of the artificial neural network; compute the network's error after training on the training set and/or the test set and feed this error into the optimized SSO as the fitness F(Xj); decode the final solution XgBest with fitness F(XgBest), produced by running the optimized SSO (Steps 0 to 14), into the parameters of the artificial neural network; and take the resulting network as the classification model (a sketch of this decoding follows the list below);

Step A9: classify newly collected big data conforming to the big data type with the classification model.

Here, big data conforming to a big data type includes, but is not limited to, traditional enterprise data, machine and sensor data, and social data. Traditional enterprise data includes consumer data from CRM systems, traditional ERP data, inventory data, accounting data, and so on. Machine and sensor data includes call records, smart meters, industrial equipment sensors, equipment logs, transaction data, and more. Social data includes user behavior records, feedback data, and the like, from social media platforms such as Twitter and Facebook.

The big data output includes, but is not limited to, the confidence of the data category and predicted values for any future period.

This application of the optimized SSO can greatly improve the classification accuracy or prediction ability of artificial neural networks on newly acquired big data.
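A minimal sketch of the decoding and fitness evaluation of Step A8 follows, assuming a sigmoid hidden layer and the mean squared error as the fitness; the 41-parameter split (30 input-to-hidden weights, 5 hidden biases, 5 hidden-to-output weights, and 1 output bias) follows the 6-5-1 structure of Step A5, while the function names and the choice of activation are assumptions.

```python
import numpy as np

def decode_ann(x):
    """Decode a 41-dimensional solution X_j into the 6-5-1 perceptron:
    30 + 5 + 5 + 1 = 41 parameters, each in [-1, 1]."""
    x = np.asarray(x)
    W1 = x[:30].reshape(6, 5)     # input -> hidden weights
    b1 = x[30:35]                 # hidden biases
    W2 = x[35:40].reshape(5, 1)   # hidden -> output weights
    b2 = x[40:41]                 # output bias
    return W1, b1, W2, b2

def ann_fitness(x, data, target):
    """F(X_j): mean squared error of the decoded network on the data set."""
    W1, b1, W2, b2 = decode_ann(x)
    h = 1.0 / (1.0 + np.exp(-(data @ W1 + b1)))   # hidden layer (sigmoid)
    out = (h @ W2 + b2).ravel()                   # linear output unit
    return float(np.mean((out - target) ** 2))
```

Passing lambda x: ann_fitness(x, train_X, train_y) as F to harmonic_sso with L = [-1.0] * 41 and U = [1.0] * 41 then trains the network, and the decoded XgBest serves as the classification model of Step A9.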
Further, the method of applying the optimized SSO to big data processing through a support vector machine is:

Step B1: collect big data conforming to a big data type;

Step B2: preprocess and clean the collected big data and extract features to obtain the feature vectors of the big data;

Step B3: use the feature vectors of the big data as the training data set;

Step B4: randomly generate j uniform random solutions, forming the solution set Xj; each randomly generated solution in Xj stores the penalty factor C and the radial basis kernel parameter g of the support vector machine;

Step B5: compute the fitness F(Xj) of Xj and feed it into the optimized SSO; decode the final solution XgBest with fitness F(XgBest), produced by running the optimized SSO (Steps 0 to 14), into the parameters of the support vector machine, and take the resulting support vector machine as the classification model (a sketch of such a fitness function follows the list below);

Step B6: classify newly collected big data conforming to the big data type with the classification model.

Here, big data conforming to a big data type includes, but is not limited to, traditional enterprise data, machine and sensor data, and social data. Traditional enterprise data includes consumer data from CRM systems, traditional ERP data, inventory data, accounting data, and so on. Machine and sensor data includes call records, smart meters, industrial equipment sensors, equipment logs, transaction data, and more. Social data includes user behavior records, feedback data, and the like.

The big data output includes, but is not limited to, data classification results and category confidence.

This application of the optimized SSO can greatly improve the classification accuracy of support vector machines on newly acquired big data.
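A minimal sketch of the fitness of Step B5, assuming scikit-learn's SVC and five-fold cross-validated accuracy; the patent only states that each solution stores the penalty factor C and the RBF kernel parameter g, so treating one minus the cross-validated accuracy as the error to minimize is an assumption.

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_fitness(x, features, labels):
    """F(X_j): classification error of an RBF-kernel SVM whose penalty
    factor C and kernel parameter g are decoded from the solution x."""
    C, g = x                                    # decode X_j = (C, g)
    clf = SVC(C=C, kernel="rbf", gamma=g)
    acc = cross_val_score(clf, features, labels, cv=5).mean()
    return 1.0 - acc                            # error to be minimized
```

Running harmonic_sso(lambda x: svm_fitness(x, feats, y), L=[0.01, 1e-4], U=[100.0, 10.0]) then searches the (C, g) plane, with the bounds chosen purely for illustration.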
The beneficial effects of the present disclosure are as follows: the present invention provides a harmonic swarm optimization method and its application, which improves the exploration and exploitation performance of traditional ABC and SSO and strikes an excellent balance between exploitation and exploration. It is broadly applicable: methods that seek optimal solutions, such as artificial neural networks, genetic algorithms (GA), simulated annealing, tabu search, ant colony optimization, particle swarm optimization, differential evolution, estimation of distribution algorithms, the artificial bee colony algorithm (ABC), the imperialist competitive algorithm, reinforcement learning algorithms, Bayesian networks, the hurricane optimization algorithm, the gravitational search algorithm, human swarming, the bat algorithm, and stochastic diffusion search, can have their parameters and other variables tuned by the disclosed method and then be applied directly to big data processing, image processing, audio and video recognition, and related fields. This can greatly improve the classification and prediction accuracy of artificial neural networks, support vector machines, and similar models on newly acquired big data, and improves the accuracy and speed of image processing and of audio and video recognition.
Description of the Drawings

The above and other features of the present disclosure will become more apparent from the detailed description of the embodiments given in conjunction with the accompanying drawings, in which the same reference numerals denote the same or similar elements. The drawings described below are only some embodiments of the present disclosure; those of ordinary skill in the art may derive other drawings from them without creative effort. In the drawings:
Figure 1 is a bar chart of the average success rate of SSO1 under different Δ and T values in Experiment 1;

Figure 2 is a bar chart of the average success rate of SSO1 under different Δ values and problems in Experiment 1;

Figure 3 is a bar chart of the Fmin values obtained by ABC, GA, PSO, SSO1, and SSOa in Experiment 2;

Figure 4 is a bar chart of AIO(ABC, SSO1), AIO(SSOa, SSO1), and AIO(ABC, SSOa);

Figure 5 is a bar chart of the success rates under different T values in Experiment 2;

Figure 6 is a bar chart of the Fmin values of ABC, SSO1, and SSOa in Experiment 2;

Figure 7 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 1.25 in Experiment 2;

Figure 8 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 1.50 (a) in Experiment 2;

Figure 9 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 1.50 (b) in Experiment 2;

Figure 10 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 1.75 in Experiment 2;

Figure 11 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 2.00 in Experiment 2;

Figure 12 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 2.25 in Experiment 2;

Figure 13 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 2.50 in Experiment 2;

Figure 14 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 2.75 in Experiment 2;

Figure 15 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 3.00 in Experiment 2;

Figure 16 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 3.50 in Experiment 2;

Figure 17 is a box plot of the average fitness values of ABC, SSO1, and SSOa at T = 3.75 in Experiment 2.
Detailed Description

The concept, specific structure, and technical effects of the present disclosure are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that the purpose, solutions, and effects of the present disclosure can be fully understood. It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with each other.
Overview of simplified swarm optimization (SSO):

The way continuous SSO handles variables is described in references [23-25].

On the basis of traditional SSO, a new continuous SSO is proposed. Before presenting the new SSO of this disclosure, traditional SSO is briefly introduced.
Traditional (discrete) SSO

Traditional (discrete) SSO is very effective at solving discrete optimization problems (discrete variables only) [17-21, 32] or continuous problems with a limited number of floating-point values [22]; for example, the number of values each feature can take is limited in data mining problems.
The basic idea of all types of SSO is that every variable, say xi+1,j,k, is updated according to the following equation:

xi+1,j,k = gk, if ρk ∈ [0, cg)
xi+1,j,k = pj,k, if ρk ∈ [cg, cg + cp)
xi+1,j,k = xi,j,k, if ρk ∈ [cg + cp, cg + cp + cw)
xi+1,j,k = x, if ρk ∈ [cg + cp + cw, 1]     (1)

where x is a randomly generated feasible value, pj,k is the k-th variable of the pBest of the j-th solution, and cg, cp, cw are parameters. Details can be found in references [17-27, 31, 32].

Based on the step function shown in Eq. (1) and a generated random number ρk, discrete SSO updates each variable by checking the terms of the step function from first to last until the interval containing the generated random number is found.

Discrete SSO updates a variable to a value never seen before only with a low probability, namely cr = 1 − cg − cp − cw. Hence, in most cases, discrete SSO can only update a variable to a limited number of values, namely those of gBest, pBest, and itself, which are found among all previous update processes. These advantages make discrete SSO very effective in solving discrete problems; see references [17-22, 32].
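A minimal Python sketch of this step-function update follows, based on the four-interval form of Eq. (1) above; representing each variable's discrete domain by feasible_values is an assumption for illustration.

```python
import random

def discrete_sso_update(x, pbest, gbest, cg, cp, cw, feasible_values):
    """Traditional (discrete) SSO: every variable is updated by the step
    function of Eq. (1): copy gBest's value, copy pBest's value, keep the
    current value, or draw a new random feasible value with probability
    cr = 1 - cg - cp - cw."""
    new_x = []
    for k in range(len(x)):
        rho = random.random()
        if rho < cg:
            new_x.append(gbest[k])                           # gBest's value
        elif rho < cg + cp:
            new_x.append(pbest[k])                           # pBest's value
        elif rho < cg + cp + cw:
            new_x.append(x[k])                               # keep itself
        else:
            new_x.append(random.choice(feasible_values[k]))  # random feasible x
    return new_x
```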
Continuous SSO:

For general continuous optimization problems, each variable of the final solution may never appear in any previous update process. Therefore, discrete SSO must be modified for continuous problems without losing its simplicity and convenience [23-27, 32]. Note that the advantages of continuous SSO are exactly the disadvantages of discrete SSO, and vice versa.

The basic flow of SSO has never changed; only the step function used to update variables, as listed in Eq. (1), varies; see references [17-27, 31, 32]. So far, the main trends in developing continuous SSO have been to add a step size to some terms of Eq. (1) (see references [23-25]) or to combine SSO with other soft computing methods [26, 27, 31], such as differential evolution (see references [26, 31]) and PSO (see reference [27]).
The all-variable update mechanism in SSO (e.g., Eq. (1)) can escape local optima and explore unreached space. The goal of continuous SSO is therefore to improve exploitation; the main approach is to add a random step to some terms of the update mechanism, obtained by multiplying the step size by a random number. Accordingly, the main purpose of this disclosure is to find a better step size.
Yeh first proposed a continuous SSO by adding a step length to the update mechanism, for predicting time-series problems; see reference [23]. In that update, denoted Eq. (2), σk is a uniform random number generated in the range [-1, 1], and the step size Δj is the reciprocal of the number of generations over which the fitness of the j-th solution has not improved.

Note that Eq. (2) also introduced the first self-adaptive parameter concept: it treats the parameters cg (written cg,i,j in Eq. (2)) and cw (written cw,i,j in Eq. (2)) as variables, so that each solution has its own cg and cw in SSO; see reference [23]. This concept differs from traditional SSO, in which all parameters are fixed from start to finish.
Because the value of the step size Δj was still too large [23, 24] even for larger generation numbers in Eq. (2), Yeh proposed a new UM (here called UMa) that updates all continuous variables by shortening the interval [-1, 1] to [-0.5, 0.5], changing Δj to the fixed step Δk of Eq. (5), and multiplying by one more random number in Eq. (2); see reference [25]. It can be written as

Xi+1,j = X* if F(X*) < F(Xi,j); otherwise Xi+1,j = Xi,j     (3)

x*k = xi,j,k + σ1,k·Δk, if ρk ∈ [0, cr)
x*k = gk + σ1,k·Δk, if ρk ∈ [cr, cr + cg)
x*k = xi,j,k + σ2,k·(gk − xi,j,k), if ρk ∈ [cr + cg, 1]     (4)

where

Δk = (Uk − Lk)/Nvar     (5)

cg and cr are parameters, Nvar is the number of variables, σ1,k and σ2,k are two uniform random variables generated in [-0.5, 0.5], ρk is a uniform random variable generated in [0, 1], and Uk and Lk are the upper and lower bounds of the k-th variable.

In Eq. (3), a solution can only be updated to a better value; otherwise, UMa retains the original solution before the update. Eq. (4) is the most important part of UMa; the concept behind it is that each variable is updated into its own neighborhood, gBest's neighborhood, or the interval between itself and gBest, according to whether ρk falls in the interval associated with cr (the first term of Eq. (4)), cg (the second term), or cw = 1 − cr − cg (the third term).
Notes:

1. Eq. (4) corrects an error in reference [25], namely the loss of ρ2,var.

2. In Eq. (4), if the updated variable is infeasible, it is set to the nearest bound.
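For later comparison with UM1, the following Python sketch implements the all-variable update UMa under Eqs. (3)-(5) as written above; the function name and argument layout are assumptions.

```python
import random

def uma_update(x, gbest, delta, cr, cg, L, U):
    """UM_a: update every variable of solution x per Eq. (4). Each variable
    moves in its own neighborhood (prob. cr), in gBest's neighborhood
    (prob. cg), or toward gBest (prob. cw = 1 - cr - cg)."""
    x_star = []
    for k in range(len(x)):
        rho = random.random()
        s1 = random.uniform(-0.5, 0.5)            # sigma_{1,k}
        s2 = random.uniform(-0.5, 0.5)            # sigma_{2,k}
        if rho < cr:
            v = x[k] + s1 * delta[k]              # own neighborhood
        elif rho < cr + cg:
            v = gbest[k] + s1 * delta[k]          # gBest's neighborhood
        else:
            v = x[k] + s2 * (gbest[k] - x[k])     # between itself and gBest
        x_star.append(min(max(v, L[k]), U[k]))    # note 2: clamp to bound
    return x_star
```

Per Eq. (3), the caller keeps x_star only if it improves the fitness; otherwise the original solution is retained.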
The shortcomings of current continuous SSO are as follows:

In the existing continuous SSOs proposed in references [23-25], the step sizes are all fixed, and each variable uses the same step size throughout all generations.

In Eq. (2), the minimum magnitude of the random step σk·Δj is 1/Ngen, reached only when σk = -1 and 1/Δj = Ngen, i.e., when the j-th solution has not improved from start to finish. For example, if Ngen = 1000, then σk·Δj = -0.001. For some special problems this value is still too large and therefore too far from the optimum.

In Eqs. (4) and (5), when the current generation approaches the end of the run, a step size Δk that is too long prevents the update mechanism from exploiting, so the solution cannot converge to the optimum. Conversely, if the step size Δk is too short, the solution lacks momentum and takes a long time to reach the vicinity of the optimum, especially in the early stages.

For example, let the current value of the k-th variable in the j-th solution be xi,j,k, and let the optimal k-th variable be x°k with gap δ. If δ << Δk, or even if Δk requires two different random numbers generated within [-0.5, 0.5], there is little chance that the next updated solution will be close to the optimum. However, even if δ = 100·Δk in the best case, xi,j,k will need 100 generations to converge to x°k.

Therefore, an adaptive step size rather than a fixed one is more reasonable for exploitation. Accordingly, this disclosure proposes a new HSS-based continuous SSO update mechanism.
The proposed UM1 and HSS:
To improve exploitation performance, the harmonic-sequence-based HSS proposed in this disclosure is presented in Eq. (6) as follows:

Δi,k = (Uk − Lk)/(Nvar·(⌊i/50⌋ + 1))     (6)

where Nvar is the number of variables, Uk and Lk are the upper and lower bounds of the k-th variable, i is the current generation, k is the index of the current variable, and ⌊·⌋ is the floor function.

Calling the sequence 1, 1/2, 1/3, 1/4, ... the harmonic sequence, the proposed harmonic-sequence-based HSS can be expressed as the base step of Eq. (5) scaled by the n-th harmonic term:

Δi,k = (1/n)·(Uk − Lk)/Nvar, where n = ⌊i/50⌋ + 1     (7)

Each cycle lasts 50 generations, and the value of the step size Δi,k decreases from one cycle to the next. Table 1 shows the step sizes based on HSS (with Uk − Lk = 10 and Nvar = 100).

Table 1. The 20 step sizes
gBest和一些解在长期运行或多代后更接近最佳状态。这些解的更新只需稍微改变即可更 接近最佳状态而不会离开最佳区域。由于谐波序列减小,步长从早期生成的较长时间调整到HSS的后期生成中较短时间,从而克服了连续SSO的缺陷。gBest and some solutions are closer to optimal after long runs or multiple generations. The updates of these solutions only need to change slightly to get closer to the optimal state without leaving the optimal region. Due to the reduced harmonic sequence, the step size is adjusted from a longer time in the early generation to a shorter time in the later generation of the HSS, thus overcoming the defect of continuous SSO.
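As a concrete illustration, the following Python sketch reproduces the schedule implied by Steps 4 and 14 below and by Table 1. Since Eq. (6) itself is not reproduced in this excerpt, the exact formula, the function name harmonic_step, and the defaults delta = 1.25 and cycle = 50 are assumptions for illustration only.

def harmonic_step(i, U_k, L_k, N_var, delta=1.25, cycle=50):
    # n* = floor(i / cycle) + 1 grows by one every `cycle` generations,
    # so the step follows the harmonic sequence 1, 1/2, 1/3, ...
    n_star = i // cycle + 1
    return delta * (U_k - L_k) / (N_var * n_star)

# For U_k - L_k = 10, N_var = 100 and delta = 1.0, generations 0..999
# produce the 20 distinct step values 0.1, 0.05, 0.0333, ..., 0.005,
# matching the 20 entries of Table 1.
steps = sorted({harmonic_step(i, 10, 0, 100, delta=1.0) for i in range(1000)}, reverse=True)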
Univariate UM (UM1):
In mainstream soft-computing methods, each solution is updated only slightly. For example, the UM of PSO is a vector-based UM given by the two equations below (see Refs. [7, 8, 27]):
Vi+1,j = w·Vi,j + c1·σ1·(G - Xi,j) + c2·σ2·(Pi - Xi,j) (8)
Xi+1,j = Xi,j + Vi+1,j (9)
where Vi,j and Xi,j are the velocity and position of the jth particle in the ith generation, respectively;
w, c1, and c2 are constants; σ1 and σ2 are two uniform random variables generated in the interval [0, 1]; Pi is the pBest of solution i;
G = PgBest is gBest. Note: PgBest in SSO equals G in Eq. (8).
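The two PSO equations can be sketched in Python as follows. Whether σ1 and σ2 are drawn once per particle or once per dimension is not specified in this excerpt, so drawing them once per update is an assumption, and the clipping bound v_max = 2 anticipates the experimental setting described later.

import random

def pso_update(V, X, P, G, w=0.9, c1=2.0, c2=2.0, v_max=2.0):
    # Eq. (8): new velocity from inertia, gBest term, and pBest term.
    sigma1, sigma2 = random.random(), random.random()
    V_next = [w*v + c1*sigma1*(g - x) + c2*sigma2*(p - x)
              for v, x, p, g in zip(V, X, P, G)]
    # Clip each component to [-v_max, v_max].
    V_next = [max(-v_max, min(v_max, v)) for v in V_next]
    # Eq. (9): new position.
    X_next = [x + v for x, v in zip(X, V_next)]
    return V_next, X_next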
In ABC (see Refs. [13-16]), one randomly selected variable is updated for each solution. The update operators in the traditional genetic algorithm (GA) (see Refs. [3, 4]) can change half of the variables through a one-cut-point crossover or update two variables through mutation. In SSO, however, all variables must be updated.
To reduce the number of random numbers and change solutions gradually and stably, only one randomly selected variable is updated in UM1 for each solution. Suppose i is the current generation number, xj,k is a variable randomly selected from the jth solution Xj, and X* is the temporary solution obtained by modifying xj,k of Xj. The following equation is obtained:
where σk,1 and σk,2 are uniform random variables generated in [-0.5, 0.5] and [0, 1], respectively;
ρk is a uniform random variable generated in [0, 1];
Lk and Uk are the lower and upper bounds of the kth variable, respectively.
Note:
(1) In the proposed UM1, the first subscript of each solution and of each variable used in UMa is removed to reduce the running time; e.g., Xi,j and xi,j,k in UMa are simplified to Xj and xj,k in the proposed UM1.
(2) For simplicity and without loss of generality, the above equations are formulated for minimization problems.
(3) If the updated value of x*k is infeasible, it is changed to its nearest boundary before X* is substituted to compute F(X*).
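A minimal Python sketch of the single-variable update follows. Eq. (11) is not reproduced in this excerpt, so the three-interval structure, its ordering (cg, then cw, then cr = 1 - cg - cw, matching the comparison counts in the proof of Property 1 below), and the function name um1 are assumptions based on the surrounding description of UMa and of σk,1, σk,2, and ρk.

import random

def um1(X_j, X_gbest, step, L, U, cg=0.40, cw=0.15):
    # Pick one variable x_{j,k} (Step 6) and build a temporary solution X*
    # (Step 7) by moving x_{j,k} toward gBest's neighborhood (prob. cg),
    # its own neighborhood (prob. cw), or redrawing it in [L_k, U_k]
    # (prob. cr = 1 - cg - cw).
    X_star = list(X_j)
    k = random.randrange(len(X_j))
    rho = random.random()                 # rho_k in [0, 1]
    sigma1 = random.uniform(-0.5, 0.5)    # sigma_{k,1} in [-0.5, 0.5]
    sigma2 = random.random()              # sigma_{k,2} in [0, 1]
    if rho < cg:
        X_star[k] = X_gbest[k] + sigma1 * step[k]
    elif rho < cg + cw:
        X_star[k] = X_j[k] + sigma1 * step[k]
    else:
        X_star[k] = L[k] + sigma2 * (U[k] - L[k])
    # Note (3): an infeasible value is moved to the nearest boundary.
    X_star[k] = min(max(X_star[k], L[k]), U[k])
    return X_star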
For example, consider minimizing the following function:
Table 2 lists the updated X7 based on different ρ1 values for the proposed UM1.
Table 2. An example of UM1
The mark ! indicates that the updated value 1.216 > U1 = 1.0 is replaced with the nearest bound U1.
The mark !! indicates that gBest = 3 is replaced with gBest = 7 because F(X7) = F(-0.41467, -0.3) = 0.008395 < F(XgBest) = F(-0.5, -0.9) = 0.0596.
Therefore, the following property holds.
Property 1: After implementing UM1, the expected numbers of comparisons and random values in each solution decrease from 3·Nvar to 3, and the number of feasibility tests decreases from Nvar to 1.
Proof: The number of feasibility tests per updated variable is one for both UMa and UM1. However, for each solution, UMa tests every updated variable, whereas UM1 tests only one. Furthermore, for each solution, the expected number of comparisons of UMa based on Eq. (4) is calculated as follows:
Nvar·(cg + 2cw + 3cr) ≤ Nvar·(3cg + 3cw + 3cr) = 3·Nvar, (15)
because cr + cw + cg = 1 and the first, second, and third terms of Eq. (4), with probabilities cg, cw, and cr, require 1, 2, and 3 comparisons, respectively. For each variable, this yields:
(cg + 2cw + 3cr) ≤ (3cg + 3cw + 3cr) = 3, (16)
For instance, with the experimental values cr = 0.45, cg = 0.40, and cw = 0.15 used below, the expected number of comparisons per variable is 0.40 + 2·0.15 + 3·0.45 = 2.05 ≤ 3. If UM1 is implemented based on Eq. (11), every variable requires σvar,1, σvar,2, and ρvar in Eq. (4), but only one variable per solution requires them in Eq. (11).
The improved continuous SSO method proposed by the present disclosure proceeds as follows:
Step 0: Let i = j = gBest = 1.
Step 1: Create a random Xj and compute F(Xj).
Step 2: If F(Xj) < F(XgBest), let gBest = j.
Step 3: If j < Nsol, let j = j + 1 and go to Step 1.
Step 4: Let n* = 1 and N* = 50, and set the step size Δk according to the proposed HSS, where k = 1, 2, …, Nvar.
Step 5: Let i = i + 1 and j = 1.
Step 6: Randomly select one variable from Xj, say xj,k.
Step 7: Let X* = Xj.
Step 8: Update x*k according to Eq. (11).
Step 9: If F(X*) < F(Xj), let Xj = X* and go to Step 10; otherwise, go to Step 11.
Step 10: If F(Xj) < F(XgBest), let gBest = j.
Step 11: If the current running time is equal to or greater than T, the process ends, and XgBest is the final solution with fitness F(XgBest).
Step 12: If j < Nsol, let j = j + 1 and go to Step 6.
Step 13: If i < N*, go to Step 5.
Step 14: Increase n* by 1, increase N* by 50, update Δk accordingly for k = 1, 2, …, Nvar, and go to Step 5.
Here, x*k is the optimal kth variable, xi,j,k is the current value of the kth variable in the jth solution, Δk is the step size, and δ is the distance between xi,j,k and x*k; for example, even in the best case where δ = 100·Δk, xi,j,k needs 100 generations to converge to x*k.
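Putting Steps 0-14 together, a compact Python sketch of the SSO1 loop might read as follows. It reuses the harmonic_step schedule and the um1 update sketched above, and the wall-clock check stands in for the time limit T, so all names here are illustrative assumptions rather than the disclosure's reference implementation (which, per the experiments below, is written in C).

import random, time

def sso1(F, L, U, N_sol=100, T=3.75, delta=1.25):
    # Steps 0-3: random initial solutions and initial gBest.
    N_var = len(L)
    sols = [[random.uniform(L[k], U[k]) for k in range(N_var)] for _ in range(N_sol)]
    fits = [F(x) for x in sols]
    gbest = min(range(N_sol), key=lambda j: fits[j])
    # Step 4: first harmonic step size (n* = 1, N* = 50).
    i, n_star, N_star = 0, 1, 50
    step = [delta * (U[k] - L[k]) / (N_var * n_star) for k in range(N_var)]
    start = time.time()
    while time.time() - start < T:            # Step 11: time limit T
        i += 1                                # Step 5
        for j in range(N_sol):                # Steps 6-10 and 12
            X_star = um1(sols[j], sols[gbest], step, L, U)
            f_star = F(X_star)
            if f_star < fits[j]:              # Step 9
                sols[j], fits[j] = X_star, f_star
                if fits[j] < fits[gbest]:     # Step 10
                    gbest = j
        if i >= N_star:                       # Steps 13-14: next harmonic cycle
            n_star, N_star = n_star + 1, N_star + 50
            step = [delta * (U[k] - L[k]) / (N_var * n_star) for k in range(N_var)]
    return sols[gbest], fits[gbest]

For instance, sso1(lambda x: sum(v*v for v in x), [-5.0]*50, [5.0]*50, T=1.0) would run the sketch on a 50-variable sphere function.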
Performance evaluation of the embodiment:
In this embodiment, two experiments, Experiment 1 and Experiment 2, were conducted based on 18 50-variable continuous numerical functions extended from benchmark problems (see Refs. [13-16, 26, 25]), as shown in Table A. The data-type labels in Table A are C: characteristic, U: unimodal, M: multimodal, S: separable, and N: non-separable.
Table A. Experimental data
Experimental design:
For ease of identification, the SSO with the UMa proposed in Ref. [25] (K. Kang, C. Bae, H. W. F. Yeung, and Y. Y. Chung, "A Hybrid Gravitational Search Algorithm with Swarm Intelligence and Deep Convolutional Feature for Object Tracking Optimization", Applied Soft Computing, https://doi.org/10.1016/j.asoc.2018.02.037, 2018) is called SSOa, and the SSO implementing the proposed HSS and UM1 is called SSO1 in this embodiment.
In Experiment 1, only the effect of HSS and the related step-size strategies are tested to determine the best step-size value for the proposed HSS. The step size yielding the best results in Experiment 1 is then used in SSO1 in Experiment 2.
In Experiment 2, the focus shifts to comparing the performance of SSO1 with four other algorithms: ABC [13-16], SSOa [25], GA [3, 4], and PSO [7, 8]. GA and PSO are two of the most established algorithms in evolutionary computation and swarm intelligence, and ABC [13-16] and SSOa [25] are the algorithms most often applied to the 50 well-known benchmark problems with at most 30 variables when the stopping criteria are the number of fitness-function evaluations and the running time, respectively.
Every algorithm tested in the experiments is written in the C programming language; the ABC code is adapted from http://mf.erciyes.edu.tr/abc/.
SSOa, GA, and PSO are all taken from http://integrationandcollaboration.org/SSO.html. Every test was run on an Intel Core i7-5960X CPU at 3.00 GHz with 16 GB RAM under 64-bit Windows 10, and running times are reported in CPU seconds.
In all tests, the parameters applied in SSO1, SSOa, and ABC were taken directly from Ref. [25] for a more accurate comparison: cr = 0.45, cg = 0.40, cw = 0.15.
For ABC, all parameters follow Refs. [13-16]: the colony size is 50, the number of food sources is 25, and a solution is regenerated if it shows no improvement after 750 updates.
For GA, one-point crossover, two-point mutation, and elitist selection are implemented with a crossover rate of 0.7 and a mutation rate of 0.3, respectively (see Refs. [3, 4]).
For PSO, w = 0.9 and c1 = c2 = 2.0 in Eq. (8); if a velocity component is greater than 2 or less than -2, it is clipped to 2 or -2, respectively (see Refs. [7, 8]).
In each generation, ABC may evaluate the fitness function more than once (see Refs. [13-16]); therefore, using the number of generations as the stopping criterion would be incorrect and unfair.
In addition, the second UM of ABC, the onlooker, spends extra time updating solutions.
Therefore, for a fair comparison, time limits (denoted by T) of 1.25, 1.50, …, 3.75 CPU seconds are used as the stopping criterion for every algorithm in Experiment 2 to observe the trend and variation of each algorithm.
Note: Every run is independent for all algorithms. For example, for T = 1.50, each run of SSO1 must restart from 0 seconds rather than simply extending a run obtained at T = 1.25 by another 0.25 seconds.
For each benchmark function, the average running time is the time required to obtain the 50 gBest values. In all tables, a subscript indicates the rank of the value. Furthermore, Nrun = 55 and Nsol = 100.
In practical applications, an algorithm is implemented and executed many times to find the best result, and only the solution with the best result is kept and used. Accordingly, most related publications simply report and compare the best results obtained from the algorithms (see Refs. [13-16, 25, 27]). Therefore, Experiment 2 of the present disclosure also focuses on the best result Fmin.
Experiment 1: finding the best step size Δ:
The results of Experiment 1, including Favg, Fmin, Fmax, Fstd, the success rate (the percentage of cases in which the problem is solved successfully), and the number of fitness-function evaluations, obtained with step sizes Δ = 1.00, 1.25, and 1.50 over 55 runs for T = 1.25, 1.50, …, 3.75 seconds, are shown in Table 3 and Figures 1 and 2. Figure 1 shows the average success rate of SSO1 for the different Δ and T values in Experiment 1, and Figure 2 shows the average success rate of SSO1 for the different Δ values and problems in Experiment 1.
Although all related values in Table 3 tend to decrease as T increases, some fluctuations are found; for example, for Δ = 1.25, Favg = 0.04467947084586 at T = 2.00 improves to 0.03980129242576 at T = 2.25 and then worsens to 0.06607165055859 at T = 2.50. Such fluctuation is due to the randomness of soft computing and the fact that every run is independent.
According to Table 3, Δ = 1.25 has the best Fmin for T ≥ 2.00 (except T = 3.50) and the best Favg, Fmax, and Fstd for T = 1.50, 2.00, 2.25, and 2.75; Δ = 1.00 has the best Favg, Fmin, Fmax, and Fstd for T = 1.75, 3.25, and 3.50 (except Fmin at T = 3.25); and Δ = 1.50 has the best Favg, Fmax, and Fstd for the smallest and largest T, i.e., T = 1.25 and T = 3.75.
Table 3. Results obtained with different Δ values in Experiment 1
Figures 1 and 2 show the average success rates for different T values and benchmark problems, respectively. The success rate is defined as the percentage of final gBest values that equal the optimal solution. For example, a success rate of 62.020% for Δ = 1.25 and T = 1.25 means that 62.020% of the Nsol·Nrun·18 = 100·55·18 = 99,000 final gBest values equal the corresponding optimum, where Nsol = 100, Nrun = 55, and 18 is the number of benchmark problems.
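As a sketch of how such a rate could be computed (the pooling over runs, solutions, and problems, the tolerance argument, and the function name are illustrative assumptions; Figure 5 below uses a tolerance of 0.0001):

def success_rate(final_gbest_fits, optima, tol=0.0):
    # Fraction (in %) of final gBest fitness values that match the known
    # optimum of their problem, pooled over runs, solutions, and problems
    # (100 * 55 * 18 = 99,000 values in the text's example).
    hits = sum(1 for f, opt in zip(final_gbest_fits, optima) if abs(f - opt) <= tol)
    return 100.0 * hits / len(final_gbest_fits)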
From the above, Δ = 1.25 always has the best average success rate over the different T values and benchmark problems in Figures 1 and 2. Note that the gaps between Δ = 1.25 and Δ = 1.00 and between Δ = 1.25 and Δ = 1.50 are up to 27%; i.e., Δ = 1.25 has up to a 27% higher probability of obtaining the best value than Δ = 1.00 and Δ = 1.50. SSO1 with Δ = 1.25 solves 14 of the 18 benchmark problems, which is also the best among the three settings of Δ.
From Table 3 and Figures 1 and 2, Δ = 1.25 outperforms 1.00 and 1.50 in terms of solution quality. Therefore, Δ = 1.25 is used in the proposed HSS and in Experiment 2 without further optimization of Δ.
Experiment 2: comparison among ABC, GA, PSO, SSO1, and SSOa:
The average Fmin values obtained from the five tested algorithms (ABC, GA, PSO, SSO1, and SSOa) over 55 runs for T = 1.25, 1.50, …, 3.75 seconds are shown in Figure 3. The results obtained from ABC, SSO1, and SSOa are also displayed in Figures 7-17, box plots of the average fitness values of ABC, SSO1, and SSOa in Experiment 2 for T = 1.25, 1.50 (a), 1.50 (b), 1.75, 2.00, 2.25, 2.50, 2.75, 3.00, 3.50, and 3.75, respectively; these 11 box plots are shown together with Figure 6, a bar chart of the Fmin values of ABC, SSO1, and SSOa in Experiment 2 that depicts all results graphically. Note: The 11 box plots also show the best results. In addition, to show more detail of the essential part of the results (from the best quartile to the third quartile of the average), the part from the third quartile to the average worst result is truncated.
The average Fmin improvement rates (AIO) of SSO1 over ABC and SSOa, namely AIO(ABC, SSO1) and AIO(SSOa, SSO1), are shown in Figure 4, a bar chart of AIO(ABC, SSO1), AIO(SSOa, SSO1), and AIO(ABC, SSOa),
where α and β denote tested algorithms.
The related success rates are summarized in Figure 5, a bar chart of the success rates at different T values in Experiment 2. The numbers of fitness-function evaluations are listed in Table 4.
In addition, some fluctuations are observed in some results of Experiment 2. These fluctuations are caused by the stochastic nature of soft computing and the fact that every run is independent.
Table 4. Numbers of fitness-function evaluations in Experiment 2
Comprehensive analysis of solution quality:
Figures 3, 6, and 7 highlight the effectiveness (i.e., the solution quality) of ABC, SSO1, and SSOa. As seen in Figure 3, both GA and PSO perform much worse than the other three algorithms; the same observation has been made in other studies [13-16]. Moreover, the weaknesses of GA and PSO become more pronounced as the running time increases. Therefore, the remainder of the present disclosure focuses only on ABC, SSO1, and SSOa.
As shown in Figures 7-17, the box plots of the average fitness values of ABC, SSO1, and SSOa in Experiment 2 for T = 1.25, 1.50, 1.50, 1.75, 2.00, 2.25, 2.50, 2.75, 3.00, 3.50, and 3.75, the results come closer to the optimal solution as the running time increases. From Figure 6, the bar chart of the Fmin values of ABC, SSO1, and SSOa in Experiment 2, ABC outperforms SSO1 and SSOa for small running times, e.g., T = 1.25 and T = 1.50. With modern computer technology, however, such small running times are of little practical relevance. In contrast, SSO1 outperforms ABC for T ≥ 2.75 seconds and outperforms SSOa in average Fmin for all T. For T = 3.75, the gap between SSO1 and ABC reaches almost 0.01, and the gap between SSO1 and SSOa almost 0.005. Furthermore, as T increases, ABC does not improve Favg to the same degree as SSO1. These results provide evidence that ABC easily becomes trapped in local optima for large T.
As also shown in Figures 7-17, ABC produces the best Fmax and Fstd values, whereas SSOa produces the worst. The reason is that the UMa used in SSOa must update all variables, whereas only one variable is selected in ABC, and UM1 updates one variable per solution per generation in SSO1. Another reason ABC always appears more robust than SSO1 and SSOa is that ABC easily falls into local traps and lacks the ability to escape local optima.
In summary, the SSO1 method proposed in the present disclosure has clear advantages over the other methods.
Average Fmin improvement rate (AIO):
To measure the improvement in average Fmin among ABC, SSO1, and SSOa, the average Fmin improvement rate (AIO) is given in Figure 4, where the AIO from ABC to SSO1, i.e., AIO(ABC, SSO1), is defined as follows:
AIO(ABC, SSO1) measures the difference in Fmin between ABC and SSO1 per unit improvement of Fmin in SSO1; the higher AIO(ABC, SSO1) is, the more effective SSO1 is, and vice versa. For example, for T = 3.75 in Figure 4, AIO(ABC, SSO1) = 55.673% indicates that a one-unit improvement in the average Fmin of SSO1 increases the difference between the average Fmin of ABC and that of SSO1 by 0.55673. AIO(SSOa, SSO1) and AIO(ABC, SSOa) are obtained analogously.
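The defining equation for AIO is not reproduced in this excerpt; one reading consistent with the signs reported below (negative when ABC beats SSO1 at T = 1.25, positive once SSO1 leads) is the relative gap between the two algorithms' average Fmin values, sketched here purely as an assumption:

def aio(avg_fmin_alpha, avg_fmin_beta):
    # Assumed form: relative difference between the average Fmin of
    # algorithm alpha and algorithm beta, in percent; e.g. aio(ABC, SSO1)
    # is negative whenever ABC attains the smaller (better) average Fmin.
    return 100.0 * (avg_fmin_alpha - avg_fmin_beta) / avg_fmin_beta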
As seen in Figure 4, the value of AIO(ABC, SSO1) increases from -21.227% (worst) at T = 1.25 to 55.673% (best) at T = 3.75, while AIO(ABC, SSOa) drops from 48.396% (best) at T = 1.25 to 4.936% (worst) at T = 3.25 and then rises to 29.960% at T = 3.75. Therefore, the following conclusions can be drawn:
1. SSO1 always improves much faster than ABC; i.e., as T increases, ABC does not improve Favg to the same degree as SSO1.
2. From T = 1.25 to T = 3.25, the average Fmin improvement of SSOa tends to be similar to that of SSO1. After T = 3.25, however, SSO1 improves more than SSOa.
3. The solution quality and robustness of SSOa exceed those of ABC.
The reason for these conclusions is that the third UM of ABC, the scout, is not very effective at escaping local optima, because ABC is designed to regenerate the current solution randomly only if it has not improved within a predetermined number of iterations. Moreover, the UMa used in SSOa must update all variables, which makes it stronger in global search but improves its solution quality only gradually and slowly.
Success rate:
Figure 5 shows the success rates for which the obtained value exceeds the exact solution by at most 0.0001, which is desirable because the obtained value should be as close as possible to the exact one. Figure 5 yields observations similar to those from Figure 7: ABC has a better success rate for T < 2.75, but SSO1 always has a better success rate in terms of Fmin.
Efficiency:
Table 4 lists the numbers of fitness-function evaluations to compare the efficiency of ABC, SSO1, and SSOa.
NABC is the total number of fitness-function evaluations of ABC, and nABC is the number of evaluations at which ABC obtains its best final solution. The pairs NSSO1 and nSSO1 and NSSOa and nSSOa parallel NABC and nABC in the sense that the two components of each pair are derived with the same algorithm (NSSO1 and nSSO1 from SSO1, and NSSOa and nSSOa from SSOa).
In Table 4, the ratio nABC/NABC is less than 42%; i.e., after 42% of ABC's fitness-function evaluations, its final best solution never changes. Moreover, the ratio nABC/NABC decreases as the running time increases. These two observations further confirm that ABC is very suitable for local search but weak in global search. The ratios NSSO1/NABC and NSSOa/NABC are both greater than 918, and nSSO1/nABC and nSSOa/nABC are both at least 2,100. Hence, UM1 and UMa are at least 918 times faster than the UM implemented in ABC; i.e., UM1 and UMa are more efficient than ABC. The ratios nSSOa/NSSOa and nSSO1/NSSO1 in Table 4 are at least 95.5% and 95.47%, respectively. Therefore, in contrast to ABC, SSO1 and SSOa keep improving their solutions almost until the end of their runs; i.e., SSO1 and SSOa are very strong global search algorithms.
Based on these observations, SSO1 achieves a better balance between exploration and exploitation than ABC and SSOa.
Conclusions:
In the embodiments of the present disclosure, UM1 updates only one variable in each solution to improve the exploration ability of SSO, and a new HSS is introduced to replace the fixed step size to improve the exploitation ability of SSO.
An extensive experimental study on 18 high-dimensional functions shows that the HSS with Δ = 1.25 achieves better performance than Δ = 1.00 and Δ = 1.50. Moreover, the proposed UM1 together with the proposed HSS at Δ = 1.25 achieves a better trade-off between exploration and exploitation than ABC, GA, PSO, and SSOa.
Note: ABC always has a smaller deviation, which makes it more robust than SSO1 and SSOa; as discussed above, however, this stability stems from its tendency to remain in local optima.
Nevertheless, some limitations remain for the proposed algorithm, as they do for ABC, GA, PSO, SSOa, and all current soft-computing methods: a) for some of the analyzed benchmark problems, the configuration considered globally optimal does not achieve the best performance; b) the proposed heuristic requires knowledge of the limits (upper and lower bounds) of the optimal value. Therefore, the proposed algorithm must be improved to overcome these two main obstacles.
Although the results of the proposed algorithm are superior to those of ABC and SSOa in both solution quality and efficiency, the proposed algorithm may be sensitive to its constant parameters. Therefore, taking the evolutionary process of SSO into account, its sensitivity will be explored in future work to adjust the step size.
Abbreviations/Terms
Δ denotes the step size (step length);
ρk denotes a uniform random number generated in [0, 1] for the kth variable;
ABC denotes the artificial bee colony algorithm;
AIO denotes the average Fmin improvement rate;
F(Xi,j) denotes the fitness function of Xi,j;
gBest denotes the best solution in history;
pBest denotes a solution's own best solution in history;
Favg denotes the average fitness of the 50 best gBests;
Fmax denotes the worst fitness among the 50 best gBests;
Fmin denotes the best fitness among the 50 best gBests;
Fstd denotes the standard deviation of the fitness of the 50 best gBests;
GA denotes the genetic algorithm;
gk denotes the kth variable of PgBest;
N● denotes the average number of fitness evaluations of algorithm ●;
n● denotes the average number of fitness evaluations at which the final gBest of algorithm ● is found;
Navg denotes the average number of fitness evaluations;
navg denotes the average number of fitness evaluations at which the final gBest is found;
NgBest denotes the number of gBests;
Ngen denotes the number of generations;
Nrun denotes the number of independent runs;
Nsol denotes the number of solutions;
Nvar denotes the number of variables;
PgBest denotes the current gBest (the best solution in history);
Pi denotes the current pBest of the ith solution (its own best solution in history);
pi,j denotes the jth variable of Pi;
PSO denotes the particle swarm optimization algorithm;
SSO denotes the simplified swarm optimization algorithm;
T denotes the running-time limit;
UM1 denotes the univariate update mechanism;
UMa denotes the all-variable update mechanism;
Xi,j denotes the jth solution of the ith generation;
xi,j,k denotes the kth variable of Xgen,sol;
Xgen,sol denotes the sol-th solution of the gen-th generation.
Address after: 528000 No. 18, Jiangwan Road, Chancheng District, Guangdong, Foshan Patentee after: Foshan University Country or region after: China Address before: 528000 No. 18, Jiangwan Road, Chancheng District, Guangdong, Foshan Patentee before: FOSHAN University Country or region before: China |