CN111709511A - Harris eagle optimization algorithm based on random unscented Sigma point variation - Google Patents

Info

Publication number
CN111709511A
CN111709511A (application CN202010377624.0A)
Authority
CN
China
Prior art keywords
random, individual, eagle, strategy, algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010377624.0A
Other languages
Chinese (zh)
Inventor
郭文艳
许�鹏
戴芳
赵凤群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202010377624.0A priority Critical patent/CN111709511A/en
Publication of CN111709511A publication Critical patent/CN111709511A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/004 — Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 — Computer-aided design [CAD]
    • G06F 30/20 — Design optimisation, verification or simulation
    • G06F 30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention discloses a Harris hawk optimization algorithm based on random unscented Sigma point variation, which comprises the following steps: generate the initial positions of N hawks in the feasible region by random initialization; record the optimal hawk position and apply random unscented Sigma point mutation to it; update the escape energy with a nonlinear strategy; select and execute the corresponding exploration or exploitation strategy according to the escape energy and a random number, updating individuals within the exploitation strategies with quasi-opposite and quasi-reflected solutions where the conditions apply; calculate the fitness of the population individuals and determine the current optimal solution and optimal target value by the greedy principle; repeat steps S3-S7 until the maximum number of iterations T is reached; and return the optimal solution and optimal target value. The quasi-opposite and quasi-reflection learning strategies of the invention enhance population diversity and accelerate the convergence rate of the algorithm; the logarithmic nonlinear convergence factor balances the exploration and exploitation capabilities of the algorithm; and the random symmetric Sigma point strategy prevents the algorithm from falling into local optima.

Description

Harris eagle optimization algorithm based on random unscented Sigma point variation
Technical Field
The invention belongs to the technical field of swarm intelligence optimization algorithms, and relates to a Harris hawk optimization algorithm based on random unscented Sigma point variation.
Background
In recent years, with the continuous development of artificial intelligence, urban traffic planning, engineering design, complex networks, data processing and related fields, people face increasingly complex optimization problems. Traditional optimization methods are often ineffective or impractical for such problems, and swarm-based stochastic optimization algorithms provide a useful complement. A swarm stochastic optimization method is a solution algorithm inspired by cooperative group behavior in biology or nature and built on mechanisms such as behavioral rules and survival criteria. As a relatively new class of methods, these algorithms offer high solution efficiency, simple structure and easy implementation, and have attracted the attention and research of many scholars. The ant colony optimization (ACO) algorithm, a bio-inspired evolutionary algorithm proposed by Dorigo et al. in 1992, is a meta-heuristic modeled on the pheromone-based information transfer of foraging ant colonies. Its wide application to various optimization problems accelerated the development of swarm intelligence algorithms, and scholars have since proposed many others, such as particle swarm optimization, the artificial bee colony algorithm, the grey wolf optimizer, the sine cosine algorithm (SCA) and the whale optimization algorithm. These algorithms simulate the foraging behavior of different species, updating individuals in specific yet randomized ways through information sharing within the population. This cooperative behavior drives the population to evolve toward the global optimum, and such methods are widely applied in numerous fields.
The Harris hawk optimization algorithm (HHO) is a new class of population optimization algorithm proposed in 2019 by Heidari et al. Its design is based on the predatory behavior of Harris hawk populations in nature, and the algorithmic model is constructed by simulating the hawk population's processes of searching for and attacking prey. The Harris hawk population serves as a set of feasible solutions that search the solution space in parallel to seek a satisfactory solution.
As a novel intelligent algorithm, HHO has been applied successfully in many fields, but, like other population optimization algorithms, it suffers from limited solution precision and difficulty in coordinating its exploration and exploitation capabilities.
For a swarm intelligence algorithm, balancing exploration and exploitation is particularly important. In the early stage of evolution, strong global exploration is needed to maintain population diversity and avoid being trapped in local optima that prevent reaching a better global optimum. In the later stage, strong exploitation is needed so that the algorithm can perform accurate local search, improving both convergence speed and solution precision. For HHO, the escape energy E of the prey is the key parameter governing the transition from the exploration phase to the exploitation phase and controls the balance between them. The original definition of E is

E = 2E0·(1 - t/T)

When |E| ≥ 1, the hawk population expands its search range to better locate the prey, and the algorithm is in the exploration phase. When |E| < 1, the hawk population narrows the search range and launches the final attack on the prey, and the algorithm enters the exploitation phase. The escape energy E is thus determined jointly by the prey's initial energy E0 and the iteration number t, and the factor (1 - t/T) decreases linearly from 1 to 0 as t increases. This linear decrease makes the energy change ΔE identical at every iteration, i.e. all iterations are treated equally. However, a swarm intelligence algorithm generally needs strong exploration early in the iteration and stronger exploitation later. The mathematical expression of the transition parameter E is therefore improved.
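For concreteness, the original linear schedule above can be sketched as follows (the function name is ours):

```python
import random

def escape_energy_linear(t, T, E0=None):
    """Original HHO schedule: E = 2 * E0 * (1 - t/T); the initial energy
    E0 is drawn uniformly from [-1, 1] when not supplied."""
    if E0 is None:
        E0 = random.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

# The exploration threshold |E| >= 1 is only reachable while 2*(1 - t/T) >= 1,
# i.e. during the first half of the run.
print(escape_energy_linear(0, 100, E0=1.0))    # 2.0
print(escape_energy_linear(50, 100, E0=1.0))   # 1.0
print(escape_energy_linear(100, 100, E0=1.0))  # 0.0
```

Because ΔE per iteration is constant under this schedule, every iteration is weighted equally, which is the behavior the invention sets out to improve.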
Meanwhile, when the HHO algorithm updates the population positions, it requires the current population to learn from the current optimal individual. Once that optimal individual falls into a local optimum and cannot escape, the algorithm may stagnate or yield low optimization precision. It is therefore necessary to apply a random mutation operation to the current optimal individual to keep it from being trapped in a local optimum.
Disclosure of Invention
The invention aims to provide a Harris hawk optimization algorithm based on random unscented Sigma point variation, which addresses the problem that the HHO algorithm requires the current population to learn from the current optimal individual during position-update iterations: once the optimal individual falls into a local optimum and cannot escape, the algorithm may stagnate or yield low optimization precision.
The invention adopts the following technical scheme: a Harris hawk optimization algorithm based on random unscented Sigma point variation, implemented according to the following steps:
S1: Initialize the population size N and set the maximum number of iterations T;
S2: Randomly generate the initial positions of N Harris hawks in the solution space [LB_i, UB_i] as the initial population;
S3: Calculate the fitness of each individual and record the current optimal individual position as the position of the food source X*(t);
S4: Perform random unscented Sigma point mutation on the current optimal individual position;
S5: Update the nonlinear decreasing factor a(t) and the prey energy E;
S6: Select and execute the corresponding update strategy according to the relative sizes of the prey energy E and a random number r, obtaining a new population;
S7: Judge whether the maximum number of iterations T has been reached; if not, repeat steps S3-S7 until the iterations are finished;
S8: Return the optimal solution X*(t) and the optimal target value F(X*(t)).
The present invention is also characterized in that,
Optionally, in step S2, the initial positions of the population are recorded as P(0) = {X_1(0), X_2(0), ..., X_N(0)}, with

X_i(0) = LB_i + (UB_i - LB_i) × rand()

where i = 1, 2, ..., N indexes the i-th Harris hawk, LB_i and UB_i are respectively the lower and upper limits of the value range of the i-th solution variable, and rand() is a real number generated randomly on the interval [0, 1].
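The initialization formula above can be sketched as follows (a minimal sketch; the function name is ours):

```python
import random

def init_population(N, LB, UB, seed=0):
    """X_i(0) = LB_j + (UB_j - LB_j) * rand(), element-wise."""
    rng = random.Random(seed)
    D = len(LB)
    return [[LB[j] + (UB[j] - LB[j]) * rng.random() for j in range(D)]
            for _ in range(N)]

pop = init_population(N=5, LB=[-10.0, -10.0], UB=[10.0, 10.0])
print(len(pop), len(pop[0]))  # 5 2
```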
Optionally, in step S3, the positions of the population individuals are substituted into the objective function as variables, and the individual with the minimum fitness value is retained as X*(t), that is

X*(t) = arg min_{1≤i≤N} F(X_i(t))
Optionally, in step S4, a random unscented transformation is applied to the current optimal solution so that the high-quality region around it is exploited further, improving the computational accuracy of the algorithm. The current optimal individual X*(t) is mutated to generate 2D+1 random mutation points:

χ_0(t) = X*(t)
χ_i(t) = X*(t) + r6·δ(t)·(√((D+κ)P_X))_i, i = 1, 2, ..., D
χ_{i+D}(t) = X*(t) - r6·δ(t)·(√((D+κ)P_X))_i, i = 1, 2, ..., D

where t is the current iteration, T is the maximum number of iterations, r6 is a random number on (0,1), δ(t) is a monotonically decreasing scale factor, P_X is the covariance matrix of the current population, and (√((D+κ)P_X))_i is the i-th column of the square root matrix of (D+κ)P_X. The fitness values of the 2D+1 mutated individuals are calculated, and the current optimal individual is updated by greedy search, that is

X*(t) = arg min{F(X*(t)), F(χ_0(t)), F(χ_1(t)), ..., F(χ_2D(t))}
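The mutation step S4 can be sketched as follows. The explicit scale factor δ(t) = 1 - t/T and the use of a Cholesky factor as the matrix square root are our assumptions; the function name is ours.

```python
import math
import random

def sigma_point_mutation(best, pop, t, T, alpha=1e-2, rng=random):
    """Random unscented Sigma point mutation around the current best (a sketch).

    Generates 2D+1 points: best itself plus D symmetric pairs
    best +/- r6 * delta(t) * (i-th column of sqrt((D + kappa) * P_X)),
    where P_X is the population covariance.
    """
    D, N = len(best), len(pop)
    kappa = alpha ** 2 * (D + (3 - D)) - D   # lambda = 3 - D, so D + kappa = 3*alpha**2
    mean = [sum(x[j] for x in pop) / N for j in range(D)]
    P = [[sum((x[a] - mean[a]) * (x[b] - mean[b]) for x in pop) / N
          for b in range(D)] for a in range(D)]              # covariance P_X
    S = [[(D + kappa) * P[a][b] + (1e-12 if a == b else 0.0)  # diagonal jitter
          for b in range(D)] for a in range(D)]
    L = [[0.0] * D for _ in range(D)]                        # Cholesky: S = L L^T
    for a in range(D):
        for b in range(a + 1):
            s = sum(L[a][k] * L[b][k] for k in range(b))
            L[a][b] = math.sqrt(S[a][a] - s) if a == b else (S[a][b] - s) / L[b][b]
    delta = 1.0 - t / T                      # assumed decreasing scale factor
    pts = [best[:]]
    for i in range(D):
        r6 = rng.random()
        col = [L[a][i] for a in range(D)]    # i-th column of the square root
        pts.append([best[j] + r6 * delta * col[j] for j in range(D)])
        pts.append([best[j] - r6 * delta * col[j] for j in range(D)])
    return pts                               # 2D+1 candidates for greedy selection

rng = random.Random(0)
pop = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
best = min(pop, key=lambda x: sum(v * v for v in x))
pts = sigma_point_mutation(best, pop, t=10, T=100, rng=rng)
print(len(pts))  # 7
```

The caller then keeps whichever of the 2D+1 points (or the old best) has the smallest objective value, as in the greedy update above.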
Further, in step S5, the update strategy based on the logarithmic nonlinear convergence factor and the prey energy E is described as

E = 2E0·a(t), a(t) = 1 - ln(1 + (e - 1)·t/T)

where e is the natural constant, t is the current iteration, T is the maximum number of iterations, and E0 is the initial escape energy; the convergence factor a(t) decreases nonlinearly from 1 to 0 as the number of iterations increases.
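A sketch of this nonlinear schedule follows. The logarithmic form used here is our assumption, chosen to match the stated properties (it involves the natural constant e and decreases from 1 at t = 0 to 0 at t = T):

```python
import math
import random

def a_log(t, T):
    """Assumed logarithmic convergence factor: a(0) = 1 and a(T) = 0,
    since ln(1 + (e - 1)) = ln(e) = 1."""
    return 1.0 - math.log(1.0 + (math.e - 1.0) * t / T)

def escape_energy(t, T, rng=random):
    """E = 2 * E0 * a(t) with initial escape energy E0 uniform on [-1, 1]."""
    return 2.0 * rng.uniform(-1.0, 1.0) * a_log(t, T)

print(a_log(0, 100))  # 1.0
```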
Further, in step S6, the exploration or exploitation strategy is executed according to the size of the prey energy E, specifically:
(1) When the escape energy |E| ≥ 1, the prey is outside the visual reach of the hawk population and the hawks search for it randomly; HHO enters the exploration phase, and the positions of the hawk individuals are updated in two ways according to a probability q:

X_i(t+1) = X_rand(t) - r1·|X_rand(t) - 2r2·X_i(t)|, q ≥ 0.5
X_i(t+1) = (X*(t) - X_m(t)) - r3·(lb_i + r4·(ub_i - lb_i)), q < 0.5

where X_i(t+1) is the position of the i-th individual at the next iteration, X_rand(t) is the position of an individual randomly selected from the current population, X*(t) denotes the position of the current optimal solution, and X_m(t) denotes the average position of the current hawk population, calculated as

X_m(t) = (1/N)·Σ_{i=1}^{N} X_i(t)

r1, r2, r3, r4 are random numbers uniformly distributed on (0,1), and ub_i, lb_i denote respectively the upper and lower bounds of the variable X_i(t).
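The two exploration branches above can be sketched as one function (the name and argument layout are ours):

```python
import random

def exploration_move(i, X, best, Xm, lb, ub, rng=random):
    """|E| >= 1 update: one of the two probability-q branches (a sketch)."""
    D = len(best)
    q = rng.random()
    r1, r2, r3, r4 = (rng.random() for _ in range(4))
    if q >= 0.5:
        # perch relative to a randomly selected hawk
        Xr = X[rng.randrange(len(X))]
        return [Xr[j] - r1 * abs(Xr[j] - 2.0 * r2 * X[i][j]) for j in range(D)]
    # perch relative to the best hawk and the population mean
    return [(best[j] - Xm[j]) - r3 * (lb[j] + r4 * (ub[j] - lb[j]))
            for j in range(D)]

rng = random.Random(3)
X = [[rng.uniform(-10.0, 10.0) for _ in range(4)] for _ in range(6)]
best = min(X, key=lambda x: sum(v * v for v in x))
Xm = [sum(x[j] for x in X) / len(X) for j in range(4)]
new_pos = exploration_move(0, X, best, Xm, lb=[-10.0] * 4, ub=[10.0] * 4, rng=rng)
print(len(new_pos))  # 4
```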
(2) When the escape energy |E| < 1, the prey has entered the hawks' visual range, and the hawk population corrects its positions according to the prey's escape ability in four situations governed by the probability r, a random number on (0,1); HHO now enters the exploitation phase. The opposition-based learning strategy, an effective method for improving population diversity that is widely used in the improvement of intelligent algorithms, is applied here.
(2a) When r ≥ 0.5 and |E| ≥ 0.5, the prey's escape ability is strong and whether it can be caught is somewhat random; the hawk individuals execute the soft besiege strategy, with the position update

X_i(t+1) = (X*(t) - X_i(t)) - E·|J_i·X*(t) - X_i(t)|
J_i = 2(1 - r5)

where J_i is a random number on (0,2) and r5 is a random number on (0,1).
The quasi-opposite strategy is then further executed: when r ≥ 0.5 and |E| ≥ 0.5, the hawk individuals need to search over a larger range to capture the prey. To expand the search range, a quasi-opposition learning strategy is applied to the i-th individual X_i^old(t+1) obtained from the HHO position update, generating a quasi-opposite solution in the neighborhood farther from the candidate solution:

x̂_ij(t+1) = rand(c_j, x̄_ij(t+1)), j = 1, 2, ..., D

The better of the quasi-opposite solution and the original individual is selected into the next generation by a greedy selection strategy, that is

X_i(t+1) = X̂_i(t+1), if F(X̂_i(t+1)) < F(X_i^old(t+1)); otherwise X_i(t+1) = X_i^old(t+1)

where

x̄_ij(t+1) = lb_j + ub_j - x_ij^old(t+1)
c_j = (lb_j + ub_j)/2

and rand(a, b) denotes a random number uniformly distributed between a and b.
(2b) When r ≥ 0.5 and |E| < 0.5, the hawk population can capture the prey, and the hawk individuals execute the hard besiege strategy, with the individual position update formula

X_i(t+1) = X*(t) - E·|X*(t) - X_i(t)|

The quasi-reflection strategy is then further executed: when r ≥ 0.5 and |E| < 0.5, the hawk population can capture the prey within a smaller range. To further strengthen exploitation, a quasi-reflection learning strategy is applied to the i-th individual X_i^old(t+1) obtained from the HHO update, generating a quasi-reflected solution in the neighborhood close to the candidate solution; greedy selection is then performed between the quasi-reflected solution and the original individual, keeping the better one for the next iteration, that is

X_i(t+1) = X̃_i(t+1), if F(X̃_i(t+1)) < F(X_i^old(t+1)); otherwise X_i(t+1) = X_i^old(t+1)

where

x̃_ij(t+1) = rand(x_ij^old(t+1), c_j), j = 1, 2, ..., D
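The quasi-opposite and quasi-reflected points used in (2a) and (2b) can be sketched per dimension as uniform draws between the interval centre, the opposite point, and the candidate itself (function names are ours):

```python
import random

def quasi_opposite(x, lb, ub, rng=random):
    """Quasi-opposite point: uniform between the centre c_j = (lb_j + ub_j)/2
    and the opposite point lb_j + ub_j - x_j, per dimension."""
    out = []
    for j in range(len(x)):
        c = (lb[j] + ub[j]) / 2.0
        xo = lb[j] + ub[j] - x[j]            # opposite point
        out.append(rng.uniform(min(c, xo), max(c, xo)))
    return out

def quasi_reflected(x, lb, ub, rng=random):
    """Quasi-reflected point: uniform between x_j and the centre c_j."""
    out = []
    for j in range(len(x)):
        c = (lb[j] + ub[j]) / 2.0
        out.append(rng.uniform(min(x[j], c), max(x[j], c)))
    return out

rng = random.Random(7)
lb, ub, x = [0.0, 0.0], [10.0, 4.0], [9.0, 1.0]
qo = quasi_opposite(x, lb, ub, rng=rng)
qr = quasi_reflected(x, lb, ub, rng=rng)
print(all(l <= v <= u for v, l, u in zip(qo, lb, ub)))  # True
```

As the definitions imply, the quasi-reflected point stays near the candidate while the quasi-opposite point lands farther away, which is what differentiates the soft-besiege and hard-besiege refinements.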
the quasi-reverse and quasi-reflection strategies are randomly adopted, and an appropriate search strategy is selected according to the behaviors of different individuals, so that the optimization efficiency of the algorithm in the later development stage can be accelerated, and the global convergence capability of the algorithm is improved.
(2c) When r < 0.5 and |E| ≥ 0.5, the hawk individuals execute the soft besiege strategy with progressive rapid dives, with the position update formula

X_i(t+1) = Y_i(t), if F(Y_i(t)) < F(X_i(t)); Z_i(t), if F(Z_i(t)) < F(X_i(t))

where Y_i(t) = X*(t) - E·|J_i·X*(t) - X_i(t)|, Z_i(t) = Y_i(t) + S_i·LF(D), S_i is a random vector of dimension D, LF(D) denotes a D-dimensional Lévy-distributed random vector, and S_i·LF(D) denotes point-to-point multiplication. The 1-dimensional Lévy distribution is

LF(x) = 0.01 × (μ × σ)/|υ|^(1/β)
σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β-1)/2))]^(1/β)

where μ, υ are random numbers on the interval [0,1], β is a constant preset to 1.5, and Γ(·) is the gamma function.
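The Lévy flight vector LF(D) can be sketched directly from the formulas above (the zero-draw guard is our addition):

```python
import math
import random

def levy(D, beta=1.5, rng=random):
    """D-dimensional Levy flight vector LF(D), following the text:
    LF = 0.01 * mu * sigma / |v|**(1/beta), with mu, v uniform on [0, 1]."""
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma = (num / den) ** (1.0 / beta)
    out = []
    for _ in range(D):
        mu = rng.random()
        v = rng.random() or 1e-12  # guard against an exact zero draw
        out.append(0.01 * mu * sigma / abs(v) ** (1.0 / beta))
    return out

step = levy(5, rng=random.Random(11))
print(len(step))  # 5
```

The heavy tail of 1/|υ|^(1/β) produces occasional long jumps, which is what gives the rapid-dive strategies their extra escape capability.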
(2d) When r < 0.5 and |E| < 0.5, the hawk individuals execute the hard besiege strategy with progressive rapid dives, with the position update

X_i(t+1) = Y_i(t), if F(Y_i(t)) < F(X_i(t)); Z_i(t), if F(Z_i(t)) < F(X_i(t))

where

Y_i(t) = X*(t) - E·|J_i·X*(t) - X_m(t)|
Z_i(t) = Y_i(t) + S_i·LF(D)
Further, in step S7, it is judged whether the current iteration t has reached the maximum number of iterations T; if t < T, steps S3-S7 are repeated, otherwise the algorithm goes to S8.
Further, in step S8, when the algorithm finishes, the optimal solution X*(t) and the optimal target value F(X*(t)) are returned, that is

X* = X*(T), F* = F(X*(T))
The Harris hawk optimization algorithm based on random unscented Sigma point variation has the following advantages. Quasi-opposition and quasi-reflection learning strategies are introduced into the HHO algorithm according to probability, and quasi-opposite and quasi-reflected operations are applied to population individuals, so that the characteristics of these operators enhance population diversity and improve the convergence speed and computational accuracy of the algorithm. A newly designed nonlinear convergence factor replaces the linear convergence factor of the original algorithm, allowing the algorithm to better balance the transition between the exploration and exploitation stages. To fully exploit the leading role of the globally optimal individual, the random unscented Sigma point mutation strategy, which draws on the unscented transformation's second-order accuracy in estimating the moments of a function of a random variable, applies a mutation operation to the globally optimal individual and improves the algorithm's ability to escape local optima. Numerical experiments and engineering constrained optimization problems verify the numerical optimization capability, effectiveness, robustness and practicability of the algorithm.
Drawings
FIG. 1 is a flow chart of the implementation of the Harris eagle optimization algorithm based on random unscented Sigma point variation according to the present invention;
FIG. 2 is a schematic diagram of the value ranges of the quasi-opposite and quasi-reflected points in two-dimensional space according to the present invention;
FIG. 3 is a graph of the change of the logarithmic nonlinear convergence factor of the present invention;
FIG. 4 is a distribution diagram of Sigma sample points in three-dimensional space according to the present invention;
FIG. 5 is a comparison of convergence curves of 9 intelligent algorithms on some of the benchmark functions according to the present invention;
FIG. 6 is a graph of the number of times each function is improved by the unscented Sigma point mutation strategy over 500 iterations of the present invention;
FIG. 7 is a graph of the number of functions whose optimal individual is improved after each iteration by the unscented Sigma point mutation strategy over 500 iterations of the present invention;
FIG. 8 is a schematic illustration of the I-beam design problem (engineering optimization example one) according to the present invention;
FIG. 9 is a schematic illustration of the speed reducer design problem (engineering optimization example two) of the present invention;
FIG. 10 is a schematic diagram of the cantilever beam optimization problem (engineering optimization example three) of the present invention.
Detailed Description
The following detailed description of the embodiments of the invention is provided in connection with the accompanying drawings.
Opposition-based learning strategy: the main idea of opposition-based learning is to consider the merits of a candidate solution and its opposite solution simultaneously and to select the better individual into the next-generation population, which improves the convergence rate and solution precision of the algorithm.
Defining: let X be (X)1,x2,…,xD) Being a point in D-dimensional space, xi∈[lbi,ubi]1, 2.. D, the reverse point of point X is defined as
Figure BDA0002480778150000091
The quasi-reversal point is
Figure BDA0002480778150000092
The point of pseudo-reflection is
Figure BDA0002480778150000093
Wherein
Figure BDA0002480778150000094
Figure BDA0002480778150000095
Figure BDA0002480778150000096
Two-dimensional space midpoint X ═ X1,x2) The value ranges of the pseudo-inversion point and the pseudo-reflection point are shown in FIG. 2, wherein the region A represents the pseudo-reflection point of the point X
Figure BDA0002480778150000097
Area B represents the quasi-reversal point of point X
Figure BDA0002480778150000098
The value range of (a).
As can be seen from FIG. 2, the quasi-opposite point X̂ lies farther from the candidate solution than the quasi-reflected point X̃, and can therefore explore search space that the candidate solution cannot reach.
Adjusting the nonlinear convergence factor. For the HHO algorithm, the escape energy E of the prey is the key parameter for the transition from the exploration phase to the exploitation phase and controls the balance between the two. When |E| ≥ 1, the hawk population expands the search range to better explore the position of the prey, and the algorithm is in the exploration phase; when |E| < 1, the hawk population narrows the search range and launches the final attack on the prey, and the algorithm enters the exploitation phase. E is determined jointly by the prey's initial energy E0 and the iteration number t. The factor (1 - t/T) in the original algorithm decreases linearly from 1 to 0 as t increases, so the energy change ΔE is the same at every iteration, i.e. all iterations are treated equally. The invention therefore provides an update strategy based on a logarithmic nonlinear convergence factor, described as

E = 2E0·a(t), a(t) = 1 - ln(1 + (e - 1)·t/T)

where e is the natural constant, t is the current iteration, T is the maximum number of iterations, and E0 is the initial escape energy. The convergence factor a(t) decreases nonlinearly from 1 to 0 as the number of iterations increases; the change curves of a(t) and (1 - t/T) are shown in FIG. 3.
As can be seen from FIG. 3, compared with the linear change of (1 - t/T), the nonlinear change of a(t) makes the energy decrease faster in the early stage of iteration and more slowly in the later stage. Because the population diversity of the HHO algorithm is better early in the iteration while stronger local exploitation must be ensured later, the adjusted parameter provides a smoother transition between early exploration and late exploitation, so that the algorithm can balance the two and further improve the convergence speed and solution precision of the HHO algorithm.
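The faster early decrease described for FIG. 3 can be checked numerically under our assumed logarithmic form of a(t):

```python
import math

def a_log(t, T):
    """Assumed logarithmic form of the nonlinear convergence factor."""
    return 1.0 - math.log(1.0 + (math.e - 1.0) * t / T)

T = 500
# ln is concave and ln(1 + (e-1)*x) meets x at x = 0 and x = 1, so the
# logarithmic factor sits strictly below the linear factor inside the run,
# i.e. the energy shrinks faster early on:
faster_early = all(a_log(t, T) < 1.0 - t / T for t in range(1, T))
print(faster_early)  # True
```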
Random unscented Sigma point mutation strategy. The essence of the mutation is fine exploitation within the visual range of the current optimal solution, achieving further optimization inside the current exploration range. The unscented transformation, given a random variable x with mean x̄ and variance P_x, generates 2D+1 points for estimating the mean and variance of a given function of the random variable. The symmetric unscented Sigma points are generated as follows:

χ_0 = x̄
χ_i = x̄ + (√((D+κ)P_x))_i, i = 1, 2, ..., D
χ_{i+D} = x̄ - (√((D+κ)P_x))_i, i = 1, 2, ..., D

where κ = α²(D + λ) - D, λ = 3 - D, α is a very small positive number (taken as 1 × 10⁻² in the invention), D is the dimension of the problem, x̄ is the mean of the variable x, P_x is the covariance matrix of x, and (√((D+κ)P_x))_i is the i-th column of the square root matrix of (D+κ)P_x. The unscented transformation ensures that the estimated mean and variance of the random variable function are accurate to at least second order; FIG. 4 shows a diagram of the Sigma points when D = 3.
The improved Harris hawk optimization algorithm (IHHO), based on the opposition-based learning, nonlinear adjustment and random unscented Sigma point mutation strategies, comprises the following steps:
S1: For the D-dimensional minimization problem with decision variables x_1, x_2, ..., x_D and objective function F(x_1, x_2, ..., x_D), randomly generate a population P(0) = {X_1(0), X_2(0), ..., X_N(0)} containing the initial positions of N Harris hawks, where the position of the i-th Harris hawk is X_i(0) = (x_i1(0), x_i2(0), ..., x_iD(0)), i = 1, 2, ..., N, with x_ij(0) = lb_j + rand·(ub_j - lb_j), j = 1, 2, ..., D; rand denotes a random number uniformly distributed on [0,1], and ub_j, lb_j denote respectively the upper and lower limits of the j-th dimension.
S2: Calculate the objective function value F(X_i(t)), i = 1, 2, ..., N, of each hawk. The position with the smallest objective function value is recorded as the position of the food source:

X*(t) = arg min_{1≤i≤N} F(X_i(t))

where t is the current iteration number.
S3: and carrying out random unscented mutation on the current globally optimal individual X (t). Note the book
Figure BDA0002480778150000114
2D +1 random variation points are generated.
Figure BDA0002480778150000115
Wherein κ is α2(D + λ) -D, λ -3-D, α is a very small positive number, 1 × 10-2D is the dimension of the problem, PXCovariance matrix for current population P (t)
Figure BDA0002480778150000116
Xm(t) is the average position of the current population,
Figure BDA0002480778150000117
is (D + kappa) PXColumn i of the square root matrix of (2).
Figure BDA0002480778150000121
The method is a monotonously decreasing scale factor, and ensures that the algorithm can be finely developed in a small range of the optimal solution along with the increase of the iteration times, wherein T is the current iteration, and T is the maximum iteration time.
Calculating objective function values of 2D +1 variant individuals, and updating the current optimal individual by greedy search, namely
Figure BDA0002480778150000122
S4: updating escape energy E-2E by utilizing nonlinear strategy0a(t)。
Wherein the content of the first and second substances,
Figure BDA0002480778150000123
e is a natural constant, T is the current iteration, T is the maximum iteration number, E0For the initial escape energy, take [ -1,1]A random value of (c).
S5: for the ith individual XiAnd (t) selecting and executing a corresponding exploration or development updating strategy according to the E and the random number r.
(1) When the escape energy | E | ≧ 1, which indicates that the prey is outside the visual reach of the eagle group, the eagle randomly searches for the prey, at this time, the IHHO enters the exploration phase, and the position of the eagle group individual is updated by the following two ways according to the random number on [0,1 ]:
Figure BDA0002480778150000124
wherein, Xi(t +1) is the position of the ith individual in the next iteration, Xrand(t) randomly selecting the position of an individual in the current population, wherein X (t) represents the position of the current optimal solution, and Xm(t) represents the average position of the current eagle population,
Figure BDA0002480778150000125
r1,r2,r3,r4are all random numbers uniformly distributed on (0,1) UBi,LBiRespectively represent an individual XiUpper and lower bounds of (t), UBi=[ubi1,ubi2,…,ubiD],LBi=[lbi1,lbi2,…,lbiD],ubij,lbijRespectively represent an individual Xi(t) upper and lower bounds of the jth dimension.
(2) When the escape energy |E| < 1, the prey has entered the hawks' visual range, and the hawk population corrects its positions according to the prey's escape ability in four situations governed by the probability r; IHHO now enters the exploitation phase.
(2a) When r ≥ 0.5 and |E| ≥ 0.5, the prey has a strong escape ability and whether it can be caught is somewhat random; the hawk individual X_i(t) executes the soft besiege strategy, with the position update

X_i^old(t+1) = (X*(t) - X_i(t)) - E·|J_i·X*(t) - X_i(t)|
J_i = 2(1 - r5)

where X*(t) is the current optimal position, J_i is a random number on (0,2), and r5 is a random number on (0,1).
The quasi-opposite strategy is then further executed on the corrected individual. When r ≥ 0.5 and |E| ≥ 0.5, the hawk individual needs to search over a larger range to capture the prey; to enlarge the search range, a quasi-opposition learning strategy is applied to the i-th individual X_i^old(t+1) obtained from the HHO position update, generating a quasi-opposite solution in the neighborhood farther from the candidate solution:

x̂_ij(t+1) = rand(c_j, x̄_ij(t+1)), j = 1, 2, ..., D

The better of the quasi-opposite solution and the original individual is selected into the next generation by the greedy selection strategy, that is

X_i(t+1) = X̂_i(t+1), if F(X̂_i(t+1)) < F(X_i^old(t+1)); otherwise X_i(t+1) = X_i^old(t+1)

where

x̄_ij(t+1) = lb_j + ub_j - x_ij^old(t+1)
c_j = (lb_j + ub_j)/2

and rand(a, b) denotes a random number uniformly distributed between a and b.
(2b) When r ≥ 0.5 and |E| < 0.5, the eagle population can capture the prey, and the eagle individuals execute the hard besiege strategy; the individual position update formula is:
Xi(t+1)=X*(t)-E|X*(t)-Xi(t)|
A quasi-reflection strategy is then applied. When r ≥ 0.5 and |E| < 0.5, the eagle population can capture the prey within a smaller range; to further strengthen exploitation, a quasi-reflection learning strategy is applied to the ith individual Xi^old(t+1) produced by the HHO update, generating a quasi-reflected solution in a neighborhood close to the candidate solution; greedy selection between the quasi-reflected solution and the original individual retains the better one for the next iteration, namely
Xi(t+1) = Xi^qr(t+1) if F(Xi^qr(t+1)) < F(Xi^old(t+1)); otherwise Xi(t+1) = Xi^old(t+1),
where the quasi-reflected solution is xij^qr(t+1) = rand(cj, xij^old(t+1)), j = 1, 2, …, D, with cj = (lbj + ubj)/2.
The quasi-opposite and quasi-reflection strategies are adopted at random, and an appropriate search strategy is selected according to the behavior of different individuals, which accelerates the optimization efficiency of the algorithm in the later exploitation stage and improves its global convergence ability.
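The two learning operators and the greedy selection can be sketched as follows (a minimal sketch assuming the standard quasi-opposition-based learning definitions, since the patent's equation images are not reproduced here; all function names are illustrative):

```python
import numpy as np

def quasi_opposite(x, lb, ub, rng):
    """Quasi-opposite point: random between the interval centre and the
    opposite point lb+ub-x (a neighbourhood farther from the candidate)."""
    c = (lb + ub) / 2.0
    opp = lb + ub - x
    lo, hi = np.minimum(c, opp), np.maximum(c, opp)
    return rng.uniform(lo, hi)

def quasi_reflect(x, lb, ub, rng):
    """Quasi-reflected point: random between the candidate and the centre
    (a neighbourhood close to the candidate)."""
    c = (lb + ub) / 2.0
    lo, hi = np.minimum(c, x), np.maximum(c, x)
    return rng.uniform(lo, hi)

def greedy_select(x_old, x_new, f):
    """Keep the individual with the smaller objective value."""
    return x_new if f(x_new) < f(x_old) else x_old
```

By construction, greedy selection never makes the retained individual worse, which is why the operators can only help convergence.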
(2c) When r < 0.5 and |E| ≥ 0.5, the eagle individuals execute the soft besiege strategy with rapid dives, and the position update formula is:
Xi(t+1) = Yi(t+1) if F(Yi(t+1)) < F(Xi(t)); Xi(t+1) = Zi(t+1) if F(Zi(t+1)) < F(Xi(t)),
where
Yi(t+1) = X*(t) − E|Ji·X*(t) − Xi(t)|,
X*(t) is the current optimal position, Zi(t+1) = Yi(t+1) + Si·LF(D), Si is a D-dimensional random vector, LF(D) denotes a D-dimensional Lévy-distributed random vector, Si·LF(D) denotes element-wise multiplication, and the 1-dimensional Lévy distribution is
LF(x) = 0.01 × μσ/|υ|^(1/β),
where μ and υ are random numbers on the interval [0,1], β is a preset constant (β = 1.5), Γ(·) is the gamma function, and
σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β−1)/2))]^(1/β).
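The Lévy step LF(D) can be generated as below (a sketch following the Mantegna scheme used in the original HHO paper; it draws μ and υ from normal distributions, a common implementation choice, whereas the text above places them on [0,1]):

```python
import math
import numpy as np

def levy_flight(D, rng, beta=1.5):
    """D-dimensional Levy-distributed step LF(D), Mantegna scheme.

    The 0.01 scaling follows the HHO-style rapid-dive update."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)   # numerator sample, variance sigma^2
    v = rng.normal(0.0, 1.0, D)     # denominator sample
    return 0.01 * u / np.abs(v) ** (1 / beta)
```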
(2d) When r < 0.5 and |E| < 0.5, the eagle individuals execute the hard besiege strategy with rapid dives, and their positions are updated as:
Xi(t+1) = Yi(t+1) if F(Yi(t+1)) < F(Xi(t)); Xi(t+1) = Zi(t+1) if F(Zi(t+1)) < F(Xi(t)),
where
Yi(t+1) = X*(t) − E|Ji·X*(t) − Xm(t)|,
X*(t) is the current optimal position, Zi(t+1) = Yi(t+1) + Si·LF(D), and Si and LF(D) have the same meanings as in (2c).
S6: Judge whether the current iteration t has reached the maximum number of iterations T; if t < T, set t = t + 1 and repeat steps S2–S6; otherwise go to S7.
S7: Return the optimal solution X*(t) and the optimal objective value F(X*(t)); the algorithm ends.
(1) The invention introduces a quasi-opposite learning strategy and a quasi-reflection learning strategy, triggered by the HHO algorithm's own probabilities, and performs quasi-opposite and quasi-reflection operations on population individuals, thereby enhancing population diversity and using the characteristics of the two operators to improve the convergence speed and computational accuracy of the algorithm.
(2) The invention designs a new nonlinear convergence factor to replace the linear convergence factor of the original algorithm, so that the algorithm better balances the transition between the exploration and exploitation stages.
(3) To give full play to the leading role of the globally optimal individual, the invention exploits the fact that the unscented transform estimates the moments of a function of a random variable with second-order accuracy, and introduces a random unscented Sigma point mutation strategy to mutate the globally optimal individual, improving the algorithm's ability to escape local optima.
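As an illustration of point (3), the random Sigma point mutation of the best individual might be sketched as follows (a hedged sketch: the 2D+1 Sigma points are built from the population covariance as in the standard unscented transform, but the patent's exact random scaling and parameter values are in its equation images and may differ; the function name, the default alpha, and the linearly decreasing scale are assumptions):

```python
import numpy as np

def sigma_point_mutation(X, x_best, f, t, T, alpha=0.01, rng=None):
    """Mutate the best individual with 2D+1 Sigma points drawn from the
    population covariance; keep the mutant only if it improves f (greedy)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = X.shape
    lam = 3.0 - D
    kappa = alpha ** 2 * (D + lam) - D
    P = np.cov(X, rowvar=False) + 1e-12 * np.eye(D)  # population covariance P_X
    S = np.linalg.cholesky((D + kappa) * P)          # square-root matrix
    scale = 1.0 - t / T                              # decreasing scale (illustrative)
    pts = [x_best.copy()]
    for j in range(D):
        pts.append(x_best + scale * rng.random() * S[:, j])  # random +Sigma point
        pts.append(x_best - scale * rng.random() * S[:, j])  # random -Sigma point
    vals = [f(p) for p in pts]
    best = pts[int(np.argmin(vals))]
    return best if min(vals) < f(x_best) else x_best
```

Because the perturbation scale follows the population covariance, the mutation range shrinks automatically as the population converges.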
To verify the effectiveness of the IHHO algorithm on complex optimization problems and the influence of each individual improvement strategy, three groups of experiments are carried out. The first group runs numerical experiments on the complex functions of the CEC2014 standard test set; the second group tests 15 high-dimensional standard functions; the third group performs global optimization on three engineering examples, namely the I-beam design problem, the speed reducer structural design problem, and the cantilever beam optimization problem, verifying the practicality of the algorithm. IHHO is compared with the HHO variant adding only the quasi-opposite learning strategy (denoted OHHO), the HHO variant adding only the random unscented Sigma point mutation strategy (denoted MHHO), HHO itself, particle swarm optimization (denoted PSO), the grey wolf optimizer (denoted GWO), the sine cosine algorithm (denoted SCA), the whale optimization algorithm (denoted WOA), and the salp swarm algorithm (denoted SSA). The algorithm parameter settings used in the experiments are shown in Table 1. The simulation experiments are based on Matlab 2016b; the hardware is a Windows 7 (64-bit) operating system with an Intel(R) Core(TM) i5-3470 CPU, a 3.20 GHz clock, and 4 GB of memory.
TABLE 1 Algorithm parameter set
Figure BDA0002480778150000151
Figure BDA0002480778150000161
In the first set of experiments, the 30 benchmark test functions of CEC2014 are selected, covering test functions with different characteristics such as unimodal, multimodal, hybrid, and composition functions; the variable ranges and theoretical optimal values of the test functions are given in Table 2.
TABLE 2 CEC2014 benchmark test functions
Figure BDA0002480778150000162
Figure BDA0002480778150000171
For fairness, all algorithms use the same experimental parameters: population size N = 50, dimension Dim = 30, and a maximum of Dim × 10000 function evaluations. Considering the randomness of swarm intelligence algorithms, each algorithm is run 30 times independently, and the mean (Ave) and standard deviation (Std) of the errors between the 30 best values and the theoretical values are recorded in Table 3, where bold data represent the best values obtained among the 9 algorithms.
TABLE 3 Comparison of the results of IHHO and the comparison algorithms on test functions F1–F30
Figure BDA0002480778150000172
Figure BDA0002480778150000181
Figure BDA0002480778150000191
As can be seen from Table 3, IHHO achieves a better optimization effect than the other 8 algorithms on the unimodal functions (F1–F3) and performs best on several of the 13 multimodal functions (F4–F16); on the 6 more complex hybrid functions (F17–F22) and the last 8 composition functions (F23–F30), IHHO also finds better results. OHHO, which adds only the quasi-opposite strategy, and MHHO, which adds only the random unscented Sigma point mutation strategy, both solve the complex functions better than HHO, while HHO and PSO each rank first on a few individual functions. Taken together, the data in the table show that IHHO obtains the better result on 21 of the functions, ranks first in overall performance, achieves higher solution accuracy than the other comparison algorithms, and can effectively solve most of the problems.
The second set of experiments is simulated using 15 standard test functions, including unimodal functions f1–f5, multimodal functions f6–f10, and fixed-dimension functions f11–f15. The unimodal functions test the exploitation performance of the algorithm, the multimodal functions test its ability to escape local optima, and the fixed-dimension functions test its ability to search and exploit stably.
TABLE 4 15 high-dimensional benchmark test functions
Figure BDA0002480778150000201
In this experiment, the population size is 30, the maximum number of iterations is 500, and the dimension is set to 100 except for the fixed-dimension functions; each algorithm performs 30 independent runs, and the mean and standard deviation of the 30 results are shown in Table 5.
TABLE 5 Comparison of the optimization results of IHHO and the other algorithms on the high-dimensional benchmark functions
Figure BDA0002480778150000211
As can be seen from Table 5, the IHHO algorithm converges to the theoretical optimum on functions f1, f2, f3, f4, f6, f8, and f13–f15; OHHO reaches the optimum on f1, f3, f6, f8, and f13; the HHO algorithm reaches the theoretical optimum on f6 and f8. SSA comes close to the theoretical optimum on f11, and GWO comes close to the optimum on f14, although SSA's solution accuracy over the 15 functions is otherwise poor, with large errors from the theoretical optima; f6 and f8 are also solved to the theoretical optimum by some of the comparison algorithms, and PSO is near the theoretical optimum on f11 and f12. In the last row of the table, the symbols "+/=/−" indicate, under the Wilcoxon rank-sum test, the number of functions on which each comparison algorithm performs better than, equivalently to, and worse than IHHO. Table 6 gives the P values of the Wilcoxon rank-sum test on all the benchmark functions at the significance level α = 0.05.
TABLE 6 P values of the Wilcoxon rank-sum test on the benchmark functions
Figure BDA0002480778150000221
From Table 6, the optimization effect of IHHO on f10 and f12 differs little from that of OHHO, MHHO, and HHO, but is better than that of the other comparison algorithms.
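The rank-sum comparison at α = 0.05 used for Table 6 can be reproduced with SciPy, for example (the error samples below are synthetic placeholders, not the patent's experimental data):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)
# 30 independent-run errors per algorithm (synthetic illustration)
ihho_errors = rng.normal(0.10, 0.02, 30)
other_errors = rng.normal(0.25, 0.05, 30)

stat, p = ranksums(ihho_errors, other_errors)
alpha = 0.05
significant = p < alpha  # True if the two result samples differ significantly
```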
FIG. 5 compares the convergence curves of IHHO and the other algorithms on some of the test functions; for ease of observation, log10(f(x)) is used as the ordinate in convergence plots (a)–(g). As shown in FIG. 5, when solving the unimodal functions, the convergence speed and solution accuracy of IHHO are greatly improved over HHO and the other algorithms, and IHHO quickly converges to a better value.
Effectiveness analysis of the unscented Sigma point mutation strategy. To examine the influence of the proposed unscented Sigma point mutation strategy on algorithm performance, the degree to which the optimal individual is improved over 500 iterations is counted for each of the 15 standard test functions. FIG. 6 gives, for each function, the number of iterations (out of 500) in which the strategy improved the optimal individual, and FIG. 7 gives, for each of the 500 iterations, the number of functions whose optimal individual was improved after applying the strategy.
As can be seen in FIG. 6, the unscented Sigma point mutation strategy has a marked effect on f5 and f12, affects more than half of the iterations on f9, f10, f13, f14, and f15, and, apart from f6, f7, and f8, also influences f1, f2, f3, f4, and f11. From FIG. 7 it can be seen that in the early iterations almost all functions can further improve the optimal individual through the mutation strategy, while the number of improved functions decreases in the middle and late iterations: late in the search, the per-dimension variation of the population individuals is small, so the population covariance matrix shrinks accordingly and the mutation perturbation of the optimal individual gradually decreases, yielding fewer improved mutants. Together, the improvement counts in FIG. 6 and FIG. 7 demonstrate the effectiveness of the unscented Sigma point mutation strategy on the 15 functions.
The third set of experiments selects three examples to verify the application of the IHHO algorithm to engineering constrained optimization problems.
(3.1) I-beam design problem. As shown in FIG. 8, the width b, the height h, and the thicknesses tw and tf of the two flanges of the I-beam are the four structural parameters of this problem, which minimizes the vertical deflection of the beam by optimizing these parameters.
Let [x1 x2 x3 x4] = [b h tw tf]; the mathematical model of the I-beam problem is:
min f(x) = 5000 / [ x3(x2 − 2x4)³/12 + x1x4³/6 + 2x1x4((x2 − x4)/2)² ]
s.t.g(x)=2x1x3+x3(x2-2x4)-300≤0
10≤x1≤50,10≤x2≤80,0.9≤x3≤5,0.9≤x4≤5.
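The I-beam model can be evaluated as below (a sketch assuming the standard vertical-deflection benchmark formulation for the objective, since the patent's equation image is not reproduced; the constraint is the cross-section area limit given above):

```python
def ibeam_objective(x):
    """Vertical deflection of the I-beam, x = (b, h, tw, tf).

    Standard I-beam benchmark formulation (assumed here)."""
    b, h, tw, tf = x
    inertia = (tw * (h - 2 * tf) ** 3) / 12 \
        + (b * tf ** 3) / 6 \
        + 2 * b * tf * ((h - tf) / 2) ** 2
    return 5000.0 / inertia

def ibeam_constraint(x):
    """g(x) <= 0: cross-section area limited to 300 cm^2."""
    b, h, tw, tf = x
    return 2 * b * tw + tw * (h - 2 * tf) - 300.0
```

For a feasible point such as (b, h, tw, tf) = (40, 60, 1, 1), the constraint is satisfied and the deflection is a small positive value.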
The I-beam problem is solved using IHHO, HHO, the cuckoo search algorithm (denoted CS), the moth-flame optimization algorithm (denoted MFO), the adaptive response surface method (denoted ARSM), and symbiotic organisms search (denoted SOS); the comparison results are shown in Table 7.
TABLE 7 Comparison of the results of IHHO and other algorithms on the I-beam design problem
Figure BDA0002480778150000241
As can be seen from Table 7, the solution results for IHHO are optimal.
(3.2) Speed reducer design problem. As shown in FIG. 9, the weight of the reducer is affected by factors such as the bending stress of the gears, the surface stress, the transverse deflection of the shafts, and the stresses in the shafts. x1, x2, …, x7 denote, respectively, the face width b, the gear module m, the number of pinion teeth z, the length l1 of the first shaft between bearings, the length l2 of the second shaft between bearings, the first shaft diameter d1, and the second shaft diameter d2. The design task is to choose reasonable parameters to minimize the total weight of the reducer; this is a typical mixed-variable problem with 11 constraints. The objective function and constraints are described as follows:
min f(x) = 0.7854x1x2²(3.3333x3² + 14.9334x3 − 43.0934) − 1.508x1(x6² + x7²) + 7.4777(x6³ + x7³) + 0.7854(x4x6² + x5x7²)
s.t. g1(x) = 27/(x1x2²x3) − 1 ≤ 0
g2(x) = 397.5/(x1x2²x3²) − 1 ≤ 0
g3(x) = 1.93x4³/(x2x3x6⁴) − 1 ≤ 0
g4(x) = 1.93x5³/(x2x3x7⁴) − 1 ≤ 0
g5(x) = [(745x4/(x2x3))² + 16.9×10⁶]^(1/2)/(110x6³) − 1 ≤ 0
g6(x) = [(745x5/(x2x3))² + 157.5×10⁶]^(1/2)/(85x7³) − 1 ≤ 0
g7(x) = x2x3/40 − 1 ≤ 0
g8(x) = 5x2/x1 − 1 ≤ 0
g9(x) = x1/(12x2) − 1 ≤ 0
g10(x) = (1.5x6 + 1.9)/x4 − 1 ≤ 0
g11(x) = (1.1x7 + 1.9)/x5 − 1 ≤ 0
where 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.
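A feasibility check for the reducer model can be sketched as follows (the objective and the 11 constraints follow the standard speed reducer benchmark formulation, which is assumed here since the patent's equation images are not reproduced):

```python
import math

def reducer_objective(x):
    """Total weight of the speed reducer (standard benchmark, assumed)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def reducer_constraints(x):
    """The 11 constraint values g_i(x); feasible when all are <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2 ** 2 * x3) - 1,
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1,
        1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1,
        1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1,
        math.sqrt((745 * x4 / (x2 * x3)) ** 2 + 16.9e6) / (110 * x6 ** 3) - 1,
        math.sqrt((745 * x5 / (x2 * x3)) ** 2 + 157.5e6) / (85 * x7 ** 3) - 1,
        x2 * x3 / 40 - 1,
        5 * x2 / x1 - 1,
        x1 / (12 * x2) - 1,
        (1.5 * x6 + 1.9) / x4 - 1,
        (1.1 * x7 + 1.9) / x5 - 1,
    ]
```

At the widely reported near-optimal design x ≈ (3.5, 0.7, 17, 7.3, 7.715, 3.3502, 5.2867), the weight is about 2994.47 and all constraints are active or satisfied to within numerical tolerance.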
The results of solving the reducer structural design problem with 6 algorithms, namely IHHO, HHO, the artificial bee colony algorithm (denoted ABC), the cuckoo search algorithm (denoted CS), the sociobehavioural simulation model (denoted SBSM), and the simple evolutionary algorithm (denoted SEA), are shown in Table 8.
TABLE 8 Comparison of the results of IHHO and other algorithms on the speed reducer structural design problem
Figure BDA0002480778150000257
As can be seen from Table 8, the IHHO results are close to those of HHO, and IHHO attains the highest solution accuracy compared with the other algorithms.
(3.3) Cantilever beam optimization problem. The cantilever shown in FIG. 10 consists of five elements, each with a hollow square cross-section of constant thickness; the beam is rigidly supported at node 1, and an external vertical force acts at node 6. The mathematical description of the problem is:
min f(x)=0.0624(x1+x2+x3+x4+x5)
s.t. g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0
0.01≤xj≤100,j=1,2,3,4,5.
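The cantilever model can be evaluated as below (a sketch: the weight objective is the one given above, and the constraint follows the standard cantilever beam benchmark, assumed here since the patent's equation image is not reproduced):

```python
def cantilever_objective(x):
    """Beam weight 0.0624*(x1+...+x5)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """g(x) <= 0 (standard cantilever benchmark, assumed):
    61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    coeffs = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0
```

At the widely reported near-optimal design x ≈ (6.016, 5.309, 4.494, 3.502, 2.153), the weight is about 1.340 and the constraint is essentially active.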
The calculation results of the 6 algorithms IHHO, HHO, CS, MFO, SOS, and SEA on the cantilever beam optimization problem are shown in Table 9.
TABLE 9 Comparison of the results of IHHO and other algorithms on the cantilever beam optimization problem
Figure BDA0002480778150000262
The calculation results in Table 9 indicate that IHHO achieves the highest solution accuracy.
On the basis of the Harris eagle optimization algorithm, the invention introduces quasi-opposite and quasi-reflection learning strategies to enhance population diversity and accelerate the convergence speed and solution accuracy of the algorithm; the adjustment of a nonlinear convergence factor effectively balances the algorithm's exploration and exploitation abilities; and random unscented Sigma point mutation is adopted to perturb the current optimal individual during iteration, effectively preventing the algorithm from falling into local optima and enabling it to find the global optimum. The resulting Harris eagle optimization algorithm based on the random unscented Sigma point mutation strategy (IHHO) exploits the fact that the unscented transform estimates the mean and variance of a function of a random variable with second-order accuracy, and designs a new random symmetric unscented transform, so that the algorithm explores randomly while retaining theoretical support, overcoming to a certain extent the theoretical deficiencies of stochastic optimization algorithms. Experimental results on the 30 benchmark test functions of CEC2014, 15 classical test functions, and 3 engineering constrained optimization problems show that IHHO has strong global exploration and local exploitation ability and outperforms the comparison algorithms in solution accuracy and convergence speed.

Claims (8)

1. A Harris eagle optimization algorithm based on random unscented Sigma point variation is characterized by comprising the following steps:
S1: initializing the population size N and setting the maximum number of iterations T;
S2: randomly generating the initial positions of N Harris eagles in the solution space [LBi, UBi] as the initial population;
S3: calculating the fitness of each eagle and recording the position X*(t) of the current optimal individual;
S4: performing random unscented Sigma point mutation on the current optimal individual position;
S5: updating the nonlinear decreasing factor a(t) and the prey energy E;
S6: selecting and executing the corresponding update strategy according to the relative sizes of the prey energy E and the random number r to obtain a new population;
S7: judging whether the iterations of S3–S6 have reached the maximum number of iterations T, and if not, repeating steps S3–S6 until they have;
S8: returning the optimal solution X*(t) and the optimal objective value F(X*(t)), whereupon the algorithm ends.
2. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein in step 2, for a D-dimensional minimization problem with decision variables x1, x2, …, xD and objective function F(x1, x2, …, xD), a population P(0) = {X1(0), X2(0), …, XN(0)} containing the initial positions of N Harris eagles is randomly generated, the position of the ith Harris eagle being Xi(0) = (xi1(0), xi2(0), …, xiD(0)), i = 1, 2, …, N, where xij(0) = lbj + rand·(ubj − lbj), j = 1, 2, …, D, rand denotes a random number uniformly distributed on [0,1], and ubj, lbj respectively represent the upper and lower bounds of the jth dimension.
3. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein in step 3, the objective function value F(Xi(t)), i = 1, 2, …, N, of each eagle is calculated, and the position with the smallest objective function value is recorded as the optimal position
X*(t) = arg min F(Xi(t)), 1 ≤ i ≤ N,
where t is the current iteration number.
4. The harris eagle optimization algorithm based on random unscented Sigma point variation according to claim 1, characterized in that the step 4 is specifically implemented as the following steps:
S4.1: denote
Figure FDA0002480778140000022
and generate 2D + 1 random mutation points as
Figure FDA0002480778140000023
where κ = α²(D + λ) − D, λ = 3 − D, α is a very small positive number (e.g., 1×10⁻²), D is the dimension, and PX is the covariance matrix of the current population P(t),
Figure FDA0002480778140000024
Xm(t) is the average position of the current population,
Xm(t) = (1/N)·ΣXi(t), i = 1, 2, …, N,
Figure FDA0002480778140000026
is the ith column of the square root matrix of (D + κ)PX, and
Figure FDA0002480778140000027
is a monotonically decreasing scale factor, so that as the number of iterations increases the algorithm can exploit finely in a small neighborhood of the optimal solution; t is the current iteration and T is the maximum number of iterations;
S4.2: calculating the objective function values of the 2D + 1 mutated individuals, and updating the current optimal individual by greedy search, namely
Figure FDA0002480778140000028
5. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein in step 5, the escape energy E = 2E0·a(t) is updated using a nonlinear strategy, where
Figure FDA0002480778140000029
e is the natural constant, t is the current iteration, T is the maximum number of iterations, and E0 is the initial escape energy, which takes a random value in [−1, 1].
6. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein in step 6, for the ith individual Xi(t), the corresponding exploration or exploitation update strategy is selected and executed according to E and the random number r, specifically implemented as the following steps:
S6.1: when the escape energy |E| ≥ 1, IHHO enters the exploration phase, and the positions of the eagle individuals are updated according to a random number on [0,1] in the following two ways:
Figure FDA0002480778140000031
where Xi(t+1) is the position of the ith individual in the next iteration, Xrand(t) is the position of a randomly selected individual in the current population, X*(t) represents the position of the current optimal solution, and Xm(t) represents the average position of the current eagle population,
Xm(t) = (1/N)·ΣXi(t), i = 1, 2, …, N,
r1, r2, r3, r4 are random numbers uniformly distributed on (0,1); UBi and LBi denote the upper and lower bounds of individual Xi(t), UBi = [ubi1, ubi2, …, ubiD], LBi = [lbi1, lbi2, …, lbiD], where ubij and lbij are the upper and lower bounds of the jth dimension of Xi(t);
S6.2: let r be a random number in (0,1); when the escape energy |E| < 1, the eagle population corrects its positions according to the prey's escape ability in four cases determined by the probability r, and IHHO enters the exploitation stage; the specific corrections are as follows:
(1.1) when r ≥ 0.5 and |E| ≥ 0.5, the eagle individual Xi(t) executes the soft besiege strategy, and the position update method is:
Xi old(t+1)=X*(t)-Xi(t)-E|Ji·X*(t)-Xi(t)|
where X*(t) is the current optimal position, Ji = 2(1 − r5) is a random number in (0,2), and r5 is a random number in (0,1);
(1.2) a quasi-opposite strategy is further executed on the corrected ith individual Xi^old(t+1), generating a quasi-opposite solution in a neighborhood farther from the candidate solution:
xij^qo(t+1) = rand(cj, lbj + ubj − xij^old(t+1)), j = 1, 2, …, D, where cj = (lbj + ubj)/2;
the better of the quasi-opposite solution and the original individual is selected into the next generation by a greedy selection strategy, namely
Xi(t+1) = Xi^qo(t+1) if F(Xi^qo(t+1)) < F(Xi^old(t+1)); otherwise Xi(t+1) = Xi^old(t+1),
where rand(a, b) denotes a random number uniformly distributed between a and b, and F(·) is the objective function;
(2.1) when r ≥ 0.5 and |E| < 0.5, the eagle individuals execute the hard besiege strategy, and the individual position update formula is:
Xi(t+1)=X*(t)-E|X*(t)-Xi(t)|;
(2.2) a quasi-reflection strategy is further applied to the ith individual Xi^old(t+1) produced by the HHO update, generating a quasi-reflected solution in a neighborhood close to the candidate solution; greedy selection between the quasi-reflected solution and the original individual retains the better one for the next iteration, namely
Xi(t+1) = Xi^qr(t+1) if F(Xi^qr(t+1)) < F(Xi^old(t+1)); otherwise Xi(t+1) = Xi^old(t+1),
where the quasi-reflected solution is xij^qr(t+1) = rand(cj, xij^old(t+1)), j = 1, 2, …, D, with cj = (lbj + ubj)/2;
(3) when r < 0.5 and |E| ≥ 0.5, the eagle individuals execute the soft besiege strategy with rapid dives, and the position update formula is:
Xi(t+1) = Yi(t+1) if F(Yi(t+1)) < F(Xi(t)); Xi(t+1) = Zi(t+1) if F(Zi(t+1)) < F(Xi(t)),
where
Yi(t+1) = X*(t) − E|Ji·X*(t) − Xi(t)|,
X*(t) is the current optimal position, Zi(t+1) = Yi(t+1) + Si·LF(D), Si is a D-dimensional random vector, LF(D) denotes a D-dimensional Lévy-distributed random vector, Si·LF(D) denotes element-wise multiplication, and the 1-dimensional Lévy distribution is
LF(x) = 0.01 × μσ/|υ|^(1/β),
where μ and υ are random numbers on the interval [0,1], β is a preset constant, Γ(·) is the gamma function, and
σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β−1)/2))]^(1/β);
(4) when r < 0.5 and |E| < 0.5, the eagle individuals execute the hard besiege strategy with rapid dives, and their positions are updated as:
Xi(t+1) = Yi(t+1) if F(Yi(t+1)) < F(Xi(t)); Xi(t+1) = Zi(t+1) if F(Zi(t+1)) < F(Xi(t)),
where
Yi(t+1) = X*(t) − E|Ji·X*(t) − Xm(t)|,
X*(t) is the current optimal position, Zi(t+1) = Yi(t+1) + Si·LF(D), and Si and LF(D) have the same meanings as in (3).
7. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein in step 7, it is judged whether the current iteration t is less than the maximum number of iterations T; if t < T, t = t + 1 is set and steps S3–S7 are repeated; otherwise, the method goes to S8.
8. The Harris eagle optimization algorithm based on random unscented Sigma point variation as claimed in claim 1, wherein when the algorithm ends, the optimal solution X*(t) and the optimal objective value F(X*(t)) are returned, namely
Figure FDA0002480778140000055
CN202010377624.0A 2020-05-07 2020-05-07 Harris eagle optimization algorithm based on random unscented Sigma point variation Pending CN111709511A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010377624.0A CN111709511A (en) 2020-05-07 2020-05-07 Harris eagle optimization algorithm based on random unscented Sigma point variation


Publications (1)

Publication Number Publication Date
CN111709511A true CN111709511A (en) 2020-09-25

Family

ID=72536587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010377624.0A Pending CN111709511A (en) 2020-05-07 2020-05-07 Harris eagle optimization algorithm based on random unscented Sigma point variation

Country Status (1)

Country Link
CN (1) CN111709511A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577507A (en) * 2020-11-04 2021-03-30 杭州电子科技大学 Electric vehicle path planning method based on Harris eagle optimization algorithm
CN113326912A (en) * 2021-05-28 2021-08-31 南京邮电大学 Information sharing Harris eagle optimization-based ultra-wideband positioning method
CN113722853A (en) * 2021-08-30 2021-11-30 河南大学 Intelligent calculation-oriented group intelligent evolutionary optimization method
CN114117907A (en) * 2021-11-24 2022-03-01 大连大学 Reducer design method based on TQA algorithm
CN116432872A (en) * 2023-06-13 2023-07-14 中国人民解放军战略支援部队航天工程大学 HHO algorithm-based multi-constraint resource scheduling method and system
CN116089063B (en) * 2022-12-06 2023-10-03 广东工业大学 Northern hawk optimization WNGO algorithm and similar integer code service combination optimization method based on guidance of prey generation by using whale optimization algorithm

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222812A (en) * 2019-06-14 2019-09-10 辽宁工程技术大学 A kind of improvement chestnut wing hawk optimization algorithm for the regulating strategy that periodically successively decreases with energy


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577507A (en) * 2020-11-04 2021-03-30 杭州电子科技大学 Electric vehicle path planning method based on Harris eagle optimization algorithm
CN113326912A (en) * 2021-05-28 2021-08-31 南京邮电大学 Information sharing Harris eagle optimization-based ultra-wideband positioning method
CN113326912B (en) * 2021-05-28 2022-08-09 南京邮电大学 Information sharing Harris eagle optimization-based ultra-wideband positioning method
CN113722853A (en) * 2021-08-30 2021-11-30 河南大学 Intelligent calculation-oriented group intelligent evolutionary optimization method
CN113722853B (en) * 2021-08-30 2024-03-05 河南大学 Group intelligent evolutionary engineering design constraint optimization method for intelligent computation
CN114117907A (en) * 2021-11-24 2022-03-01 大连大学 Reducer design method based on TQA algorithm
CN114117907B (en) * 2021-11-24 2024-04-16 大连大学 Speed reducer design method based on TQA algorithm
CN116089063B (en) * 2022-12-06 2023-10-03 广东工业大学 Northern hawk optimization WNGO algorithm and similar integer code service combination optimization method based on guidance of prey generation by using whale optimization algorithm
CN116432872A (en) * 2023-06-13 2023-07-14 中国人民解放军战略支援部队航天工程大学 HHO algorithm-based multi-constraint resource scheduling method and system
CN116432872B (en) * 2023-06-13 2023-09-22 中国人民解放军战略支援部队航天工程大学 HHO algorithm-based multi-constraint resource scheduling method and system

Similar Documents

Publication Publication Date Title
CN111709511A (en) Harris eagle optimization algorithm based on random unscented Sigma point variation
CN111262858B (en) Network security situation prediction method based on SA _ SOA _ BP neural network
Mohamed et al. Optimal power flow using moth swarm algorithm
CN107330902B (en) Chaotic genetic BP neural network image segmentation method based on Arnold transformation
CN108038507A (en) Local receptor field extreme learning machine image classification method based on particle group optimizing
CN110110380B (en) Piezoelectric actuator hysteresis nonlinear modeling method and application
CN112577507A (en) Electric vehicle path planning method based on Harris eagle optimization algorithm
CN114708479B (en) Self-adaptive defense method based on graph structure and characteristics
CN110766125A (en) Multi-target weapon-target allocation method based on artificial fish swarm algorithm
CN116454863A (en) Optimal weight determining method of wind power combination prediction model based on improved hawk optimization algorithm
CN111832911A (en) Underwater combat effectiveness evaluation method based on neural network algorithm
CN110222816B (en) Deep learning model establishing method, image processing method and device
CN114880806A (en) New energy automobile sales prediction model parameter optimization method based on particle swarm optimization
CN108876038B (en) Big data, artificial intelligence and super calculation synergetic material performance prediction method
CN114091745A (en) Industry power consumption prediction method based on improved multi-storage pool echo state network
CN116933948A (en) Prediction method and system based on improved seagull algorithm and back propagation neural network
CN110210072B (en) Method for solving high-dimensional optimization problem based on approximate model and differential evolution algorithm
Xu et al. Elman neural network for predicting aero optical imaging deviation based on improved slime mould algorithm
Asaduzzaman et al. Faster training using fusion of activation functions for feed forward neural networks
CN116522747A (en) Two-stage optimized extrusion casting process parameter optimization design method
CN114330119B (en) Deep learning-based extraction and storage unit adjusting system identification method
CN113722853B (en) Group intelligent evolutionary engineering design constraint optimization method for intelligent computation
CN112132259B (en) Neural network model input parameter dimension reduction method and computer readable storage medium
CN114662638A (en) Mobile robot path planning method based on improved artificial bee colony algorithm
CN114492800A (en) Countermeasure sample defense method and device based on robust structure search

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200925