CN110046710A - Nonlinear function extremum optimization method and system based on a neural network - Google Patents

Nonlinear function extremum optimization method and system based on a neural network

Info

Publication number
CN110046710A
CN110046710A CN201910289721.1A CN201910289721A
Authority
CN
China
Prior art keywords
neural network
individual
value
initial
population
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910289721.1A
Other languages
Chinese (zh)
Inventor
张婕
张永胜
段佳希
黄晓翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201910289721.1A priority Critical patent/CN110046710A/en
Publication of CN110046710A publication Critical patent/CN110046710A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/12Computing arrangements based on biological models using genetic models
    • G06N3/126Evolutionary algorithms, e.g. genetic algorithms or genetic programming

Abstract

The invention discloses a nonlinear function extremum optimization method and system based on a neural network, which effectively improves the solution accuracy of nonlinear function extremum seeking. The method includes the following steps: computing all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm; training the neural network on training data according to the optimized weights and thresholds, feeding prediction data into the trained network, and obtaining the network's predicted output values; and, according to those predicted outputs, searching for the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm.

Description

Nonlinear function extremum optimization method and system based on a neural network
Technical field
This disclosure relates to the field of neural network optimization, and in particular to a nonlinear function extremum optimization method and system for a BP neural network based on a sinusoidal adaptive genetic algorithm.
Background technique
As research on neural networks deepens and their advantages in handling complex problems become apparent, more and more scholars have joined the field, and neural networks have become a hot research topic in academia.
The purpose of an optimization algorithm is to find the best experimental conditions and results. When the objective equation is known, the optimum is easy to find, but practical engineering applications often pose harder problems, such as controller parameter optimization during chemical reactions, aircraft power-model problems and robot path planning in aerospace, and experiments that can only be run a limited number of times because of time and funding constraints. These are all complex nonlinear models that are difficult to express accurately with a mathematical model: the only known conditions are some discrete input-output data, and the model usually has multiple local extrema, so the extremum is hard to find.
For an unknown nonlinear function, it is difficult to locate the extremum accurately from a limited number of input-output samples alone. According to current research, an artificial neural network (ANN) is a multi-unit combinatorial network model built by imitating the neural structure of the human brain, with practical applications in mathematics, weather forecasting, pattern recognition, and other fields. Neural networks can generally be divided into supervised, unsupervised, and hybrid learning, and are mainly used to solve nonlinear mapping and classification problems in pattern recognition; they have good adaptive ability, associative memory, and nonlinear mapping capability. Among them, the BP neural network is the core of feedforward networks and an essential part of the field: because it can realize arbitrarily complex nonlinear mappings, it has become one of the most successful prediction models in applied research. However, although the local search ability of a BP network is strong, its learning algorithm is slow and easily trapped in local extrema.
The genetic algorithm (GA) is a global search algorithm based on the mechanism of biological evolution, often used to compensate for the weak search mechanism, insufficient robustness, and local-extremum trapping of traditional neural networks. Combining the characteristics of GA and the BP algorithm yields the GA-BP combined algorithm, which is widely applied in optimization and prediction. A genetic algorithm can be computed in parallel and, being an effective heuristic search, also has high search efficiency. For the function-extremum problem above, existing methods include: (1) improving the PSO algorithm with an adaptive inertia-weight adjustment strategy; (2) optimizing the initial pheromone of an ant colony algorithm with a genetic algorithm. The inventors found during development that these methods have the advantage that the algorithm's initial values are not set randomly but optimized by another algorithm; their shortcoming is that the functional equation must be known, whereas in many studies it is difficult to express or derive. (3) Combining a BP neural network with a genetic algorithm for extremum seeking exploits the nonlinear approximation ability of the BP network and the global search ability of the GA, removing the requirement that the equation be known; however, the inventors found during development that the initial weights of the BP network are chosen randomly, which can cause network instability, slow convergence, and trapping in local optima.
Summary of the invention
To overcome the above deficiencies of the prior art, the present disclosure provides a nonlinear function extremum optimization method and system for a BP neural network based on a sinusoidal adaptive genetic algorithm, which effectively improves the solution accuracy of nonlinear function extremum seeking.
The technical solution of the nonlinear function extremum optimization method of a neural network in the first aspect of the disclosure is:
A nonlinear function extremum optimization method of a neural network, comprising the following steps:
computing all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm;
training the neural network on training data according to all optimized weights and thresholds, feeding prediction data into the trained network, and obtaining the network's predicted output values;
according to the predicted output values of the neural network, searching for the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm.
The technical solution of the nonlinear function extremum optimization system of a neural network in the second aspect of the disclosure is:
A nonlinear function extremum optimization system of a neural network, the system comprising:
a weight-threshold optimization module, for computing all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm;
a neural-network training and fitting module, for training the neural network on training data according to all optimized weights and thresholds, feeding prediction data into the trained network, and obtaining its predicted output values;
an extremum-seeking module, for searching for the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm according to the network's predicted output values.
The technical solution of the computer-readable storage medium of the third aspect of the disclosure is:
A computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the nonlinear function extremum optimization method of a neural network described above.
The technical solution of the computer device of the fourth aspect of the disclosure is:
A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor; when executing the program, the processor implements the steps of the nonlinear function extremum optimization method of a neural network described above.
Through the above technical solutions, the beneficial effects of the disclosure are:
(1) The disclosure first sets the crossover and mutation rates dynamically with a sinusoidal adaptive scheme, solving the genetic algorithm's tendency to fall into local optima; it then uses the improved genetic algorithm SAGA to optimize the weights and thresholds of the BP neural network for model fitting, and finally performs extremum seeking with SAGA again.
(2) The disclosure is compared in simulation with the traditional GA-BP method, the GA doubly optimized BP neural network method, and the SAGA doubly optimized BP neural network method; the simulation results show that the disclosure effectively improves the solution accuracy of nonlinear function extremum seeking.
(3) The disclosure applies the improved genetic algorithm both to weight-threshold initialization and to global search, so it can perform accurate extremum seeking on an unknown function with very good results.
Detailed description of the invention
The accompanying drawings, which constitute a part of this disclosure, are provided for further understanding; the illustrative embodiments and their descriptions explain the application and do not unduly limit the disclosure.
Fig. 1 is the adaptive crossover-rate sine curve of Embodiment 1;
Fig. 2 is the adaptive mutation-rate sine curve of Embodiment 1;
Fig. 3 is the three-layer BP neural network structure of Embodiment 1;
Fig. 4 is the flow chart of the nonlinear function extremum optimization method of the BP neural network of Embodiment 1;
Fig. 5 is the BP network prediction-error graph of the fifth experiment of Embodiment 1;
Fig. 6 is the extremum-seeking curve of the fifth experiment of Embodiment 1;
Fig. 7 is the BP network prediction-error graph of the ninth experiment of Embodiment 1;
Fig. 8 is the extremum-seeking curve of the ninth experiment of Embodiment 1;
Fig. 9 is the BP network prediction-error graph of the tenth experiment of Embodiment 1;
Fig. 10 is the extremum-seeking curve of the tenth experiment of Embodiment 1;
Fig. 11 is a schematic of the fit between predicted output and desired output in Embodiment 1;
Fig. 12 is the structural block diagram of the nonlinear function extremum optimization system of Embodiment 2.
Specific embodiment
The disclosure is further described below with reference to the accompanying drawings and embodiments.
It is noted that the following detailed description is illustrative and intended to provide further explanation of the disclosure. Unless otherwise indicated, all technical and scientific terms used in the disclosure have the same meanings commonly understood by those of ordinary skill in the art to which the disclosure belongs.
It should be noted that the terms used herein merely describe specific embodiments and are not intended to limit the illustrative embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular is also intended to include the plural; additionally, it should be understood that the terms "comprising" and/or "including" indicate the presence of features, steps, operations, devices, components, and/or combinations thereof.
Explanation of terms:
(1) SAGA: sinusoidal adaptive genetic algorithm.
Embodiment one
This embodiment provides a nonlinear function extremum optimization method for a BP neural network based on a sinusoidal adaptive genetic algorithm. It first sets the crossover and mutation rates dynamically with a sinusoidal adaptive scheme to keep the genetic algorithm from falling into local optima, then fits the model using the weights and thresholds produced by the improved genetic algorithm SAGA for the BP neural network, and finally performs extremum seeking with SAGA again.
(1) Improved sinusoidal adaptive genetic algorithm
A genetic algorithm mimics natural evolution: its basic principle is to encode genes on chromosomes, initialize a population, then apply selection, crossover, and mutation, and seek the optimal solution according to a fitness function.
The basic procedure of a traditional genetic algorithm is as follows:
a) Population initialization: set the evolution-generation counter to t = 0, set the maximum number of generations T, and randomly generate M individuals as the initial population P(0).
b) Compute fitness values: if a fitness value meets the termination condition, the run ends; otherwise continue.
c) Selection: eliminate low-fitness individuals and retain high-fitness ones.
d) Crossover: generate new individuals and explore new regions of the solution space.
e) Mutation: maintain the diversity of the population. Selection, crossover, and mutation together produce the new population P(t+1).
f) Compute fitness values.
g) Judge whether the run ends: if t <= T and no fitness value meets the termination condition, set t = t + 1 and jump to step c) to start a new round; if t > T or a fitness value meets the termination condition, the run ends.
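The loop a)-g) above can be sketched in code. The following is a minimal real-coded GA with an illustrative toy objective; it is a sketch, not the patent's SAGA, and the truncation selection and parameter values are assumptions for demonstration only.

```python
import random

def genetic_algorithm(fitness, bounds, dim=2, M=30, T=50, pc=0.8, pm=0.05, seed=0):
    """Minimal real-coded GA following steps a)-g): initialize a population,
    then repeat selection, crossover, and mutation for T generations."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(M)]  # a)
    best = min(pop, key=fitness)                                         # b)
    for t in range(T):                                                   # g) loop
        parents = sorted(pop, key=fitness)[:M // 2]                      # c) keep fitter half
        best = min(best, parents[0], key=fitness)
        children = []
        while len(children) < M:
            p1, p2 = rng.sample(parents, 2)
            if rng.random() < pc:                                        # d) arithmetic crossover
                b = rng.random()
                child = [b * a + (1 - b) * c for a, c in zip(p1, p2)]
            else:
                child = p1[:]
            if rng.random() < pm:                                        # e) uniform mutation
                child[rng.randrange(dim)] = rng.uniform(lo, hi)
            children.append(child)
        pop = children                                                   # f) next P(t+1)
    return min([best] + pop, key=fitness)

# toy run: minimize the sphere function x0^2 + x1^2 on [-5, 5]^2
best = genetic_algorithm(lambda x: x[0] ** 2 + x[1] ** 2, (-5.0, 5.0))
```

Fitness is minimized here because the embodiment later defines fitness as a training error, where smaller is better.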
The choice of crossover rate and mutation rate is one of the cores of a genetic algorithm, affecting the convergence and search speed of genetic evolution. A traditional genetic algorithm uses fixed values, which must be tuned by repeated experiments for each problem; the results are often unsatisfactory, and even within one problem the demands on the crossover and mutation rates differ across genetic phases. Srinivas therefore proposed the adaptive genetic algorithm (AGA).
The basic idea of AGA is to adjust the crossover and mutation rates according to the variation of fitness: when the individuals are converging toward each other or stuck at a local optimum, increase the rates; when fitness values are dispersed, decrease them. Meanwhile, for individuals with fitness above the population average, decrease the rates to preserve excellent individuals; for individuals below the average, increase the rates to eliminate poor individuals and introduce new ones. The crossover-rate and mutation-rate adjustment formulas of the adaptive genetic algorithm are as follows:
In the formulas, K1-K4 are adaptive control parameters, pc is the crossover rate, pm is the mutation rate, fmax is the maximum fitness value in the population, favg is the average fitness value of all individuals, f' is the larger fitness value of the two individuals participating in crossover, and f is the fitness value of the mutating individual.
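Formulas (1) and (2) themselves do not survive in this text. For reference, the standard Srinivas-Patnaik adaptive rules, which match the symbols defined above, are:

```latex
p_c = \begin{cases} K_1 \dfrac{f_{\max} - f'}{f_{\max} - f_{avg}}, & f' \ge f_{avg} \\[4pt] K_3, & f' < f_{avg} \end{cases}
\qquad
p_m = \begin{cases} K_2 \dfrac{f_{\max} - f}{f_{\max} - f_{avg}}, & f \ge f_{avg} \\[4pt] K_4, & f < f_{avg} \end{cases}
```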
The inventors found during development that AGA is flawed: from formulas (1) and (2), when an individual's fitness approaches the maximum fitness, i.e., fmax - f' ≈ 0, pc and pm approach 0. This is appropriate late in the genetic algorithm's run, but very unfavorable early on, where it easily causes premature convergence.
To address this problem, Ren Ziwu et al. proposed the improved adaptive genetic algorithm (IAGA), which adds an elite-retention strategy on the basis of AGA; the formulas are as follows:
In the formulas, pc1 is the fixed maximum of the crossover probability, pc2 its fixed minimum, pm1 the fixed maximum of the mutation probability, and pm2 its fixed minimum. In this experiment, pc1 = 0.9, pc2 = 0.6, pm1 = 0.1, and pm2 = 0.001.
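Formulas (3) and (4) are likewise missing from this text. IAGA rules of the usual form, consistent with the fixed bounds pc1, pc2, pm1, pm2 defined above, are:

```latex
p_c = \begin{cases} p_{c1} - \dfrac{(p_{c1} - p_{c2})(f' - f_{avg})}{f_{\max} - f_{avg}}, & f' \ge f_{avg} \\[4pt] p_{c1}, & f' < f_{avg} \end{cases}
\qquad
p_m = \begin{cases} p_{m1} - \dfrac{(p_{m1} - p_{m2})(f_{\max} - f)}{f_{\max} - f_{avg}}, & f \ge f_{avg} \\[4pt] p_{m1}, & f < f_{avg} \end{cases}
```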
The inventors found during development that this algorithm still has problems: when a sizeable part of the population has fitness close to the average fitness, the crossover and mutation probabilities become very large, while individuals close to the maximum fitness receive very small probabilities. IAGA's adaptive crossover and mutation probabilities thus change very steeply, causing local convergence. The sinusoidal adaptive genetic algorithm (SAGA) addresses this problem: according to the distribution of the population and its fitness values, it adaptively changes the crossover and mutation rates of the whole population so that their variation trend gradually stabilizes from oscillation. The design gives evolution a larger crossover and mutation rate early on, to enhance search ability, and lower rates later, to settle on the best individuals.
The formulas of the sinusoidal adaptive genetic algorithm (SAGA) are as follows:
Figs. 1 and 2 show the curves of formulas (5) and (6); both are sinusoidal. Because -1 < sin α < 1, the sine weakens the over- or under-sized operator probabilities caused by fitness approaching the average or maximum fitness, so the sinusoidal adaptive genetic algorithm can guarantee that the crossover and mutation rates do not change steeply but vary in a stable manner.
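The patent's exact formulas (5) and (6) are not reproduced in this text. The sketch below only illustrates the stated idea, a sine-shaped mapping from relative fitness to the crossover rate that stays bounded between pc2 and pc1 and avoids IAGA's steep linear ramp; the exact functional form here is an assumption, not the patent's.

```python
import math

PC1, PC2 = 0.9, 0.6  # the fixed crossover-rate bounds used in this embodiment

def sinusoidal_pc(f_prime, f_avg, f_max, pc1=PC1, pc2=PC2):
    """Illustrative sine-shaped adaptive crossover rate. NOTE: an assumed
    form, not the patent's formula (5); it demonstrates only the stated idea
    that pc slides smoothly between pc1 and pc2 along a sine wave."""
    if f_max <= f_avg or f_prime <= f_avg:
        return pc1                                # below average: keep the high rate
    u = (f_prime - f_avg) / (f_max - f_avg)       # 0..1, closeness to the best
    return pc2 + (pc1 - pc2) * math.sin(math.pi / 2 * (1.0 - u))

rates = [sinusoidal_pc(f, 1.0, 2.0) for f in (0.8, 1.25, 1.5, 2.0)]
```

Because the sine flattens near its extremes, the rate changes gently both near the average and near the best individual, matching the "stable, not steep" behavior described above.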
(2) Determining the BP neural network structure
The topology of a BP neural network comprises an input layer, one or more hidden layers, and an output layer; Fig. 3 shows a three-layer BP network. Topologically, a BP network can be regarded as a nonlinear function whose input and output values play the roles of the function's independent and dependent variables. The network's activation function is the Sigmoid function, and training uses the Levenberg-Marquardt (LM) algorithm. The core of the BP network is the BP algorithm, which consists of two parts: forward signal propagation and error backpropagation. In forward propagation, the input signal passes from the input layer through the hidden layers to the output layer; if the output does not meet expectations, the process switches to error backpropagation. In backpropagation, the output-layer error is propagated backward to derive the error of each layer's neurons, and the connection weights and thresholds of the network are corrected according to the error size; the ultimate goal is to reduce the output error until the desired result is achieved.
Theorem 1 (Kolmogorov): given any continuous function f: R^m -> R^n, y = f(x), where R here is the closed interval [0,1], f can be approximated arbitrarily well by a BP network.
Theorem 2: given any ε and, under the L2 norm, f: [0,1]^m -> R^n, there exists a three-layer BP neural network that can approximate f within any squared error less than ε.
Specifically, the BP neural network structure is determined as follows:
(1) Determine the numbers of input-layer and output-layer nodes.
The numbers of input and output nodes depend on the inputs and outputs of the practical problem.
(2) Determine the number of hidden layers.
By Theorems 1 and 2, a three-layer BP neural network can approximate a function within any squared error less than ε; a three-layer BP network is thus a universal function approximator and is very widely used.
(3) Determine the number of hidden-layer nodes.
The number of hidden nodes has a large influence on a BP network: with too few nodes the network cannot learn well; with too many, training time becomes very long and overfitting may occur. A "trisection" algorithm has been proposed, but it is relatively complicated and unsuitable for this system. The optimal number of hidden nodes can refer to the following empirical formula:
where n is the number of input nodes, m the number of hidden nodes, l the number of output nodes, and α a constant between 0 and 10. In actual calculation, the general range is first determined from the formula, and the optimal number of nodes is then fixed by trial; generally, the error of a BP network first decreases and then increases as the number of nodes grows.
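The empirical formula itself does not survive in this text; the commonly used rule matching the symbols n, m, l, and α defined above is:

```latex
m = \sqrt{n + l} + \alpha, \qquad 0 < \alpha < 10
```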
(3) The nonlinear function extremum optimization method of the BP neural network
Specifically, the weights and thresholds of the BP neural network are optimized with the improved sinusoidal adaptive genetic algorithm, and network fitting training and extremum seeking are then performed.
Referring to Fig. 4, the nonlinear function extremum optimization method of the BP neural network proceeds as follows:
S101: optimize all weights and thresholds of the BP neural network with the improved sinusoidal adaptive genetic algorithm.
Each individual in the population of the sinusoidal adaptive genetic algorithm contains all weights and thresholds of the neural network. The fitness of each individual is computed by the fitness function; an individual that meets the requirement is obtained through selection, crossover, and mutation, and decoding it yields the initial weights and thresholds of the neural network.
Specifically, step S101 is implemented as follows:
S1011: obtain the input and output data of the nonlinear function and normalize them.
Specifically, the input and output data of the nonlinear function are normalized to the range -1 to 1 with the mapminmax function in MATLAB, both to fit the output range of the BP neural network and to reduce the magnitude of weight adjustments. Part of the normalized input-output data of the nonlinear function is chosen as training data, and the remaining data serve as prediction data.
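The mapminmax call above scales a sequence linearly to [-1, 1]. An equivalent sketch in Python, using mapminmax's default formula applied to one sequence, is:

```python
def mapminmax(xs, ymin=-1.0, ymax=1.0):
    """Linear min-max scaling to [ymin, ymax], matching MATLAB mapminmax's
    default formula y = (ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin."""
    xmin, xmax = min(xs), max(xs)
    if xmax == xmin:                     # degenerate constant sequence
        return [ymin for _ in xs]
    return [(ymax - ymin) * (x - xmin) / (xmax - xmin) + ymin for x in xs]

scaled = mapminmax([0.0, 5.0, 10.0])    # -> [-1.0, 0.0, 1.0]
```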
S1012: obtain all initial weights and thresholds of the BP neural network and encode them.
Because the weights and thresholds of a BP network all lie in (-1, 1), binary coding would make the chromosomes too long, so this embodiment uses real coding. Each individual is a string of real numbers composed of four parts: input-hidden connection weights, hidden-output connection weights, hidden-layer thresholds, and output-layer thresholds.
Specifically, all input-hidden connection weights, hidden-output connection weights, hidden-layer thresholds, and output-layer thresholds of the BP network are obtained and encoded with real coding, giving a real-valued matrix or vector containing these four parts.
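For an n-m-l network, each real-coded individual therefore holds n*m + m*l + m + l genes. A sketch of computing that length and unpacking the four parts (function names are illustrative, not from the patent):

```python
def chrom_length(n, m, l):
    """Genes per individual: input-hidden weights, hidden-output weights,
    hidden-layer thresholds, output-layer thresholds."""
    return n * m + m * l + m + l

def decode(chrom, n, m, l):
    """Split a flat real-valued chromosome into the four parameter groups."""
    assert len(chrom) == chrom_length(n, m, l)
    i = 0
    w1 = [chrom[i + k * n:i + (k + 1) * n] for k in range(m)]; i += n * m
    w2 = [chrom[i + k * m:i + (k + 1) * m] for k in range(l)]; i += m * l
    b1 = chrom[i:i + m]; i += m
    b2 = chrom[i:i + l]
    return w1, w2, b1, b2

# the embodiment's 2-5-1 network: 2*5 + 5*1 + 5 + 1 = 21 genes
assert chrom_length(2, 5, 1) == 21
```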
S1013: compute the training error of the BP neural network as the initial fitness value of each individual.
Specifically, with the obtained initial weights and thresholds, the BP network is trained on the training data, the system output is then predicted with the trained network, and the absolute differences between the network's predicted outputs and the desired outputs of each node are computed to give the training error of the BP network, which serves as the initial fitness value of each individual.
Specifically, the steps of training the BP neural network are:
(1) assign the obtained initial weights and thresholds as the initialization values of the BP network;
(2) randomly select a large number of input-output samples of the nonlinear function as training data;
(3) construct the BP network with the network-parameter-setting function newff;
(4) set the training parameters: maximum number of training epochs 20, learning rate 0.1, training goal precision 0.00001;
(5) train the BP network on the training data with the network-training function train;
(6) when the goal precision 0.00001 is met or the maximum number of epochs is exceeded, training ends, and the trained BP network is obtained.
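The newff/train steps above can be approximated outside MATLAB. Below is a small pure-Python stand-in under stated assumptions: tanh hidden units, plain gradient descent instead of the Levenberg-Marquardt training the embodiment uses, and a toy fitting target; the step (4) hyper-parameters (20 epochs, learning rate 0.1, goal 1e-5) are kept.

```python
import math, random

def train_bp(data, n_hidden=5, epochs=20, lr=0.1, goal=1e-5, seed=0):
    """Tiny single-hidden-layer BP network (tanh hidden units, linear output)
    trained by plain gradient descent; a rough stand-in for newff/train."""
    rnd = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [rnd.uniform(-1, 1) for _ in range(n_hidden)]
    w2 = [rnd.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = rnd.uniform(-1, 1)

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(w1, b1)]
        return h, sum(w * hi for w, hi in zip(w2, h)) + b2

    for _ in range(epochs):                       # step (4): at most 20 epochs
        sse = 0.0
        for x, y in data:
            h, out = forward(x)
            err = out - y                         # forward pass, then backprop
            sse += err * err
            for j in range(n_hidden):
                grad_h = err * w2[j] * (1 - h[j] * h[j])
                w2[j] -= lr * err * h[j]
                b1[j] -= lr * grad_h
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
            b2 -= lr * err
        if sse / len(data) < goal:                # step (6): goal precision met
            break

    return lambda x: forward(x)[1]

# fit y = x0 + x1 on a small grid of samples
data = [((a / 4.0, b / 4.0), a / 4.0 + b / 4.0) for a in range(5) for b in range(5)]
net = train_bp(data)
```

Passing a decoded chromosome as the initial weights (step (1)) would replace the random initialization here; that wiring is omitted to keep the sketch short.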
S1014: select new individuals to form a new population according to each individual's initial fitness value.
New individuals are selected according to the initial fitness values obtained in step S1013. This embodiment uses roulette-wheel selection, i.e., a selection strategy based on fitness proportion; the selection probability pi of each individual i is computed with the following formula:
where Fi is the fitness value of individual i. Since smaller fitness values are better, the reciprocal of the fitness value is taken before computing the selection probability, i.e., Fi = k/fi, where k is a coefficient; in this embodiment k = 1.
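Roulette-wheel selection as described, inverting each error-based fitness with Fi = k/fi and sampling in proportion to Fi, can be sketched as follows (with k = 1 as in this embodiment):

```python
import random

def roulette_select(errors, rng, k=1.0):
    """Pick one index with probability p_i = F_i / sum(F), where F_i = k / f_i
    inverts the error so smaller training error means higher selection odds."""
    F = [k / f for f in errors]
    total = sum(F)
    r = rng.random() * total
    acc = 0.0
    for i, Fi in enumerate(F):
        acc += Fi
        if r <= acc:
            return i
    return len(F) - 1          # guard against floating-point round-off

rng = random.Random(42)
errors = [0.1, 0.2, 0.4]       # individual 0 is fittest (smallest error)
picks = [roulette_select(errors, rng) for _ in range(10000)]
```

With these errors the selection probabilities are roughly 0.57, 0.29, and 0.14, so individual 0 is drawn most often.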
S1015: perform crossover on the individuals in the new population; the new individuals obtained after crossover form a new population.
Since this embodiment uses real coding, the crossover operation uses the real-valued (arithmetic) crossover method. The crossover formula of individual xi and individual xj at position k is:
where b is a random number in [0,1], xik is the k-th gene of the i-th individual, and xjk is the k-th gene of the j-th individual.
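The crossover formula itself is not reproduced in this text; the sketch below uses the common arithmetic form consistent with the description (a convex combination of the two parents' genes at position k, weighted by the random b):

```python
import random

def crossover(xi, xj, k, rng):
    """Arithmetic real-valued crossover at gene position k:
    x_ik' = x_ik*(1-b) + x_jk*b,  x_jk' = x_jk*(1-b) + x_ik*b,  b in [0,1]."""
    b = rng.random()
    xi, xj = xi[:], xj[:]                  # children replace copies, not parents
    xi[k], xj[k] = (xi[k] * (1 - b) + xj[k] * b,
                    xj[k] * (1 - b) + xi[k] * b)
    return xi, xj

c1, c2 = crossover([0.0, 1.0], [0.0, -1.0], 1, random.Random(7))
```

Because each child gene is a convex combination of two in-range genes, the offspring automatically stay inside the (-1, 1) weight range, one reason real coding suits this problem.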
S1016: perform mutation on the individuals in the new population.
Specifically, mutation is applied to the individuals in the new population, and the mutated new individuals form a new population. The mutation formula for the j-th gene xij of the i-th individual is as follows:
where xmax and xmin are the upper and lower bounds of gene xij, Gmax is the maximum number of generations of the genetic algorithm, g is the current iteration number of the algorithm, and r and r' are random numbers in [0,1].
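The mutation formula is likewise missing from this text. The sketch below uses a common nonuniform form consistent with the symbols above, in which the step shrinks as g approaches Gmax; the exact expression is an assumption, not the patent's.

```python
import random

def mutate(x, j, g, g_max, x_min, x_max, rng):
    """Nonuniform mutation of gene j: perturb toward x_max or x_min (chosen by
    a coin flip r) with step r' * (1 - g/g_max)**2, which shrinks in later
    generations so early search is wide and late search is fine-grained."""
    x = x[:]
    shrink = rng.random() * (1.0 - g / g_max) ** 2   # r' times the decay factor
    if rng.random() >= 0.5:
        x[j] += (x_max - x[j]) * shrink
    else:
        x[j] += (x_min - x[j]) * shrink
    return x

child = mutate([0.2, -0.4], 0, g=10, g_max=100, x_min=-1.0, x_max=1.0,
               rng=random.Random(3))
```

Moving toward a bound by a fraction of the remaining distance keeps the mutated gene inside [x_min, x_max] by construction.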
S1017: compute the optimal fitness value.
Specifically, the optimal fitness of each individual in the new population is computed with the fitness function, and from these values all optimal weights and thresholds of the BP network are determined.
S1018: judge whether the termination condition is met.
Specifically, this judgment is implemented as follows:
when the maximum number of iterations of the genetic algorithm is reached, the termination condition is met, and all optimal weights and thresholds of the BP network are output;
when the maximum number of iterations has not been reached, the termination condition is not met, and steps S1013-S1018 are repeated until it is met.
S102: train the BP neural network with the optimal weights and thresholds.
A suitable neural network structure is constructed according to the characteristics of the function to be optimized, and the BP network is trained on the input-output data of the nonlinear function to obtain a good function fit.
Specifically, step S102 is implemented as follows:
S1021: determine the topology of the BP neural network.
The topology of the BP neural network comprises the numbers of input-layer and output-layer nodes, the number of hidden layers, and the number of hidden-layer nodes.
S1022: initialize the weight and threshold string length of the BP neural network.
Since the fitted function has two inputs and one output, the number of hidden nodes is set to 5, so the BP network structure is 2-5-1 and the initial weight-and-threshold string consists of 5*2 + 5*1 + 5*1 + 1 = 21 real numbers.
S1023: obtain the optimal weights and thresholds, choose training data, and train the BP neural network.
Specifically, the training process of the BP neural network is as follows:
(1) assign the optimal weights and thresholds obtained in step S101 as the initialization values of the BP network;
(2) randomly select a large number of input-output samples of the nonlinear function as training data;
(3) construct the BP network with the network-parameter-setting function newff;
(4) set the training parameters: maximum number of training epochs 20, learning rate 0.1, training goal precision 0.00001;
(5) train the BP network on the training data with the network-training function train;
(6) when the goal precision 0.00001 is met or the maximum number of epochs is exceeded, training ends, and the trained BP network is obtained.
S1024: Choose prediction data, input it into the trained BP neural network, and obtain the predicted values of the BP neural network.
Specifically, the input-output data of the nonlinear function not used as training data serve as prediction data. The input values of the prediction data are fed into the trained BP neural network, which predicts the corresponding outputs, yielding the prediction output values.
S103: Find the optimal extremum of the nonlinear function with the improved sinusoidal adaptive genetic algorithm.
Specifically, the values predicted by the BP neural network are used as the fitness values of the sinusoidal adaptive genetic algorithm, and the optimal extremum of the nonlinear function is then found through selection, crossover, mutation, and related operations.
Specifically, step S103 is implemented as follows:
S1031: Initialize the population.
M individuals are generated at random as the initial population.
S1032: Calculate the fitness value of each individual in the initial population.
Specifically, the fitness value of each individual in the population is calculated with the fitness function: the error between the desired output of each node of the BP neural network and the prediction output of the BP neural network obtained in step S102 is taken as the individual's fitness value,

F = k * Σ_{i=1}^{L} |d_i − o_i|

where k is a coefficient (k = 1 in this embodiment), L is the number of output nodes of the BP network, d_i is the desired output of the i-th node of the BP neural network, and o_i is the corresponding prediction output of the BP neural network.
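A minimal sketch of this fitness, assuming the common GA-BP form F = k · Σ|d_i − o_i| over the L output nodes:

```python
def fitness(desired, predicted, k=1.0):
    """GA fitness of one individual: k times the summed absolute prediction
    error of the BP network over its output nodes (smaller is better)."""
    return k * sum(abs(d - o) for d, o in zip(desired, predicted))

print(fitness([0.5, 1.0], [0.4, 1.2]))  # |0.5-0.4| + |1.0-1.2| ≈ 0.3
```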
S1033: Select new individuals to form a new population according to the fitness value of each individual.
New individuals are selected according to the fitness values computed in step S1032 to form a new population. This embodiment uses the roulette-wheel method, i.e., a selection strategy based on fitness proportion. The selection probability p_i of each individual i is calculated as

p_i = f_i / Σ_{j=1}^{M} f_j

where F_i is the fitness value of individual i. Since smaller fitness values are better, the reciprocal is taken before computing the selection probability, i.e., f_i = k / F_i, where k is a coefficient (k = 1 in this embodiment).
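Roulette-wheel selection for this minimisation setting can be sketched as follows, with f_i = k / F_i and p_i = f_i / Σ_j f_j as assumed above:

```python
import random

def roulette_select(fitnesses, k=1.0, rng=random):
    """Fitness-proportionate (roulette-wheel) selection for a minimisation GA:
    invert each raw fitness F_i as f_i = k / F_i, then return an index i
    drawn with probability p_i = f_i / sum_j f_j."""
    inv = [k / F for F in fitnesses]    # smaller F -> larger slice of the wheel
    total = sum(inv)
    r = rng.random() * total            # spin the wheel
    acc = 0.0
    for i, f in enumerate(inv):
        acc += f
        if r <= acc:
            return i
    return len(inv) - 1                 # guard against float rounding
```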
S1034: Perform crossover on the individuals in the new population to obtain new individuals and form a new population.
Since this embodiment uses real-number coding, the crossover operation uses the real-valued (arithmetic) crossover method. The crossover of individual x_i and individual x_j at position k is

x_ik' = b * x_ik + (1 − b) * x_jk
x_jk' = b * x_jk + (1 − b) * x_ik

where b is a random number on [0, 1], x_ik is the k-th gene of the i-th individual, and x_jk is the k-th gene of the j-th individual.
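A sketch of this arithmetic real-valued crossover, assuming the usual symmetric form shown above:

```python
import random

def real_crossover(xi, xj, k, rng=random):
    """Arithmetic crossover of two real-coded individuals at gene position k:
    x_ik' = b*x_ik + (1-b)*x_jk and x_jk' = b*x_jk + (1-b)*x_ik, b ~ U[0,1].
    Returns new copies; the parents are left unchanged."""
    b = rng.random()
    xi, xj = list(xi), list(xj)
    # Both right-hand sides are evaluated before assignment, so the
    # original gene values are used in both expressions.
    xi[k], xj[k] = (b * xi[k] + (1 - b) * xj[k],
                    b * xj[k] + (1 - b) * xi[k])
    return xi, xj
```

A property worth noting: the sum of the two genes at position k is preserved by the operator.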
S1035: Perform mutation on the individuals in the new population.
Specifically, mutation is applied to the individuals in the new population to obtain new individuals and form a new population. The mutation of the j-th gene x_ij of the i-th individual is

x_ij' = x_ij + (x_max − x_ij) * f(g),  r ≥ 0.5
x_ij' = x_ij − (x_ij − x_min) * f(g),  r < 0.5
f(g) = r' * (1 − g / G_max)^2

where x_max and x_min are the upper and lower bounds of gene x_ij, G_max is the maximum number of evolution generations of the genetic algorithm, g is the current iteration number of the algorithm, and r and r' are random numbers on [0, 1].
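The quantities listed (x_max, x_min, G_max, g, r, r') match the standard non-uniform mutation operator; a sketch under that assumption, in which the perturbation shrinks as evolution proceeds and the gene stays within its bounds:

```python
import random

def mutate_gene(x, x_min, x_max, g, g_max, rng=random):
    """Non-uniform mutation of one real-coded gene x in [x_min, x_max]:
      f(g) = r' * (1 - g/G_max)^2
      x'   = x + (x_max - x) * f(g)   if r >= 0.5
      x'   = x - (x - x_min) * f(g)   if r <  0.5
    with r, r' uniform on [0, 1]. At g = G_max the perturbation vanishes."""
    r, r_prime = rng.random(), rng.random()
    f = r_prime * (1 - g / g_max) ** 2
    if r >= 0.5:
        return x + (x_max - x) * f   # perturb toward the upper bound
    return x - (x - x_min) * f       # perturb toward the lower bound
```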
S1036: Judge whether the termination condition is satisfied.
Specifically, the judgment is made as follows:
When the genetic algorithm reaches its maximum number of iterations, the termination condition is met, and the minimum fitness value in the population is taken as the optimal extremum.
When the maximum number of iterations has not been reached, the termination condition is not satisfied, and steps (3-2) to (3-6) are repeated until it is met.
In this embodiment, the maximum number of iterations of the genetic algorithm is set to 100.
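Steps S1031 to S1036 can be sketched as an outer loop, with the operators passed in as functions (`evaluate`, `select`, `crossover`, and `mutate` stand for the operations described above):

```python
def run_ga(init_pop, evaluate, select, crossover, mutate, max_gen=100):
    """Outer loop of the genetic algorithm: evaluate each individual, build a
    new population by selection, crossover and mutation, and stop at the
    iteration cap, keeping the smallest fitness seen as the optimal extremum."""
    pop = list(init_pop)
    best = min(evaluate(ind) for ind in pop)
    for _ in range(max_gen):
        pop = mutate(crossover(select(pop)))          # one generation
        best = min(best, min(evaluate(ind) for ind in pop))
    return best
```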
This embodiment also provides an experimental verification of the above nonlinear function extremum-optimization method based on the BP neural network. Simulation experiments were conducted under Matlab R2018a to compare the traditional BPGA method, the GA dual-optimization BP neural network method, and the SAGA dual-optimization BP neural network method proposed in this embodiment.
(1) Experiment preparation
To ensure consistency of the experimental environment, a fitting function is selected whose global minimum, 0, is easily obtained and is attained at the coordinate (0, 0). Although the extreme point is readily seen from the function expression, finding the extremum is difficult when the function equation is unknown. 4000 groups of input-output data of the fitting function are taken at random as experimental data; 3900 groups are then randomly selected as training data and the remaining 100 groups as prediction data. In this experiment, the maximum number of iterations of the genetic algorithm is set to 100 and the population size to 20. Based on the two-input, one-output form of the experimental function, the numbers of input-layer and output-layer nodes of the BP neural network are determined to be 2 and 1, respectively; by empirical formula and trial, the number of hidden-layer nodes is set to 5.
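The random 3900/100 split of the 4000 sampled input-output groups might be done as follows (a sketch only; the experiment itself was run in MATLAB):

```python
import random

def split_samples(samples, n_train=3900, rng=random):
    """Random train/prediction split used in the experiment: from the sampled
    input-output pairs, draw n_train for training; the rest become the
    prediction (held-out) data."""
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    train = [samples[i] for i in idx[:n_train]]
    predict = [samples[i] for i in idx[n_train:]]
    return train, predict
```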
(2) Comparative experiments
Table 1 shows ten experimental results of extremum optimization with the traditional BPGA method. The mean of the absolute values of the ten results is 0.068, and the running time is about 45 seconds, of which 3 seconds are used for network training and fitting and the remaining 42 seconds for the genetic algorithm's extremum search. The results are clearly unstable, however, with large jumps. Since the mean is closest to the fifth experimental result, the fifth experiment is analyzed. Fig. 5 and Fig. 6 show, respectively, the BP network prediction error and the extremum-search curve of the fifth experiment (in Fig. 5 the x-axis is the sample index and the y-axis the error; in Fig. 6 the x-axis is the evolution generation and the y-axis the fitness). The prediction error of the trained BP network lies between −0.5 and 0.3, and the optimal extremum is −0.063. Overall, the network fitting and extremum-search performance of the traditional BPGA algorithm is mediocre.
Table 1. Extremum-optimization results of the traditional BPGA method
Table 2 shows ten experimental results of the GA dual-optimization BP neural network algorithm and of the SAGA dual-optimization BP neural network algorithm proposed in this embodiment. The GA dual-optimization algorithm re-optimizes the initial weights and thresholds of the BP network with a GA on top of the traditional BPGA method; the proposed SAGA dual-optimization algorithm additionally optimizes the crossover rate and mutation rate of the genetic algorithm on top of the GA dual-optimization algorithm. The mean of the ten results of the GA dual-optimization algorithm is 0.0438, and that of the proposed SAGA dual-optimization algorithm is 0.0084. The running time, however, is between 400 and 600 seconds, of which about 400 seconds are spent on network training and fitting and the remaining 40 seconds or so on the genetic algorithm's extremum search; the complexity of the improved algorithm is thus much higher. In terms of performance, however, it improves significantly on the traditional BPGA algorithm, and its optimization results are very stable.
Table 2. Experimental results of the improved algorithms
Since the mean of the ten experiments of the GA dual-optimization BP neural network algorithm is closest to the ninth experimental result, the ninth experiment is examined. Figs. 7 and 8 show, respectively, the BP network prediction error and the extremum-search curve of the ninth experiment. The BP network prediction error lies between −0.15 and 0.1, and the extremum is 0.0491.
Similarly, the mean of the ten experiments of the proposed SAGA dual-optimization BP neural network method is closest to the tenth experiment; Figs. 9 and 10 show the BP network prediction error and the extremum-search curve of the tenth experiment. The BP network prediction error lies between −0.008 and 0.008, and the extremum 0.0069 is obtained at (0.001, 0.0044). Among the ten results, the first is especially good, accurate to 0.00007; Fig. 11 shows the fit between the predicted output and the desired output.
Table 3 shows the statistics of the prediction errors and extrema of the three methods. As seen from the table, the SAGA dual-optimization BP neural network extremum-optimization algorithm fits the function and solves for the extremum more accurately than the other two algorithms.
Table 3. Average prediction errors and extremum statistics of the three methods
In conclusion traditional BPGA algorithm can complete extreme value optimizing task, runing time is short, and extreme value can be accurate To -0.063, but the initialization weight and threshold value of BP network are set at random, the crossing-over rate and aberration rate of genetic algorithm It is fixed and invariable, so operational effect is very unstable.GA algorithm double optimization BP network algorithm is on the basis of traditional algorithm Solve BP netinit weight and Threshold with genetic algorithm again, by predicting that the smallest principle of error makes BP network A proper initial value has been determined, but algorithm complexity increases, extreme value has reached 0.0491, and network is steady It is qualitative to enhance.Finally, SAGA algorithm double optimization BP network algorithm by SIN function solve genetic algorithm crossing-over rate and The select permeability of aberration rate, although runing time increases again, extreme value can be accurate to 0.0069, and the fitting effect of network Fruit and optimizing effect are very stable.
This embodiment compares the traditional GABP method, the GA dual-optimization BP neural network method, and the SAGA dual-optimization BP neural network method by simulation. The simulation results show that the proposed algorithm effectively improves the solving precision of nonlinear function extremum optimization: the network prediction error reaches −0.008 to 0.008, the average extremum is accurate to 0.0084, and the best extremum to 0.00007.
Embodiment two
Referring to Fig. 12, this embodiment provides a nonlinear function extremum-optimization system of a neural network, the system comprising:
a weight-threshold optimization module, for calculating all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm;
a neural network training and fitting module, for training the neural network with training data according to all the optimized weights and thresholds, inputting prediction data into the trained neural network for prediction, and obtaining the prediction output values of the neural network;
an extremum-search module, for finding the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm according to the prediction output values of the neural network.
Specifically, the weight-threshold optimization module is configured to:
obtain the input and output data of the nonlinear function and normalize them;
select a part of the normalized data as training data and the remaining data as prediction data;
obtain all initial weights and thresholds of the neural network and encode them;
train the neural network with the training data according to all initial weights and thresholds, input the prediction data into the trained neural network for prediction, and obtain the initial prediction output values of the neural network;
calculate the absolute value of the difference between the initial prediction output value of the neural network and the desired output of each node, obtain the training error of the neural network, and use it as the initial fitness value of each individual;
select individuals to form a population according to the initial fitness values, apply crossover and mutation operations to the individuals in turn, and obtain new individuals forming a new population;
calculate the best fitness of each individual in the new population with the fitness function, decode the best fitness of each individual, and obtain all the best weights and thresholds of the neural network.
Specifically, the extremum-search module is configured to:
generate multiple individuals at random as an initial population;
calculate the fitness value of each individual in the initial population with the fitness function;
select individuals to form a population according to the initial fitness values, apply crossover and mutation operations to the individuals in turn, and obtain new individuals forming a new population;
judge whether the set maximum number of iterations has been reached, and if so, take the minimum fitness value in the new population as the optimal extremum of the nonlinear function.
Embodiment three
This embodiment provides a computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps in the nonlinear function extremum-optimization method of a neural network described above are realized.
Example IV
This embodiment provides a computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the program, realizes the steps in the nonlinear function extremum-optimization method of a neural network described above.
Although the specific embodiments of the present disclosure have been described above in conjunction with the accompanying drawings, they do not limit the protection scope of the disclosure. Those skilled in the art should understand that, on the basis of the technical solutions of the disclosure, various modifications or variations that can be made without creative labor still fall within the protection scope of the disclosure.

Claims (10)

1. A nonlinear function extremum-optimization method of a neural network, characterized in that the method comprises the following steps:
calculating all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm;
training the neural network with training data according to all optimized weights and thresholds, inputting prediction data into the trained neural network for prediction, and obtaining the prediction output values of the neural network;
finding the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm according to the prediction output values of the neural network.
2. The nonlinear function extremum-optimization method of a neural network according to claim 1, characterized in that the step of calculating all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm comprises:
obtaining the input and output data of the nonlinear function and normalizing them;
selecting a part of the normalized data as training data and the remaining data as prediction data;
obtaining all initial weights and thresholds of the neural network and encoding them;
training the neural network with the training data according to all initial weights and thresholds, inputting the prediction data into the trained neural network for prediction, and obtaining the initial prediction output values of the neural network;
calculating the absolute value of the difference between the initial prediction output value of the neural network and the desired output of each node, obtaining the initial training error of the neural network, and using it as the initial fitness value of each individual;
selecting individuals to form a population according to the initial fitness values, applying crossover and mutation operations to the individuals in turn, and obtaining new individuals forming a new population;
calculating the best fitness of each individual in the new population with the fitness function, decoding the best fitness of each individual, and obtaining all the best weights and thresholds of the neural network.
3. The nonlinear function extremum-optimization method of a neural network according to claim 1, characterized in that the step of training the neural network with training data according to all optimized weights and thresholds comprises:
determining the topological structure of the neural network;
taking all the obtained best weights and thresholds as the initialization values of the neural network;
setting the training parameters and training the neural network with the training data;
ending training when the maximum number of training epochs is exceeded, and obtaining the trained neural network.
4. The nonlinear function extremum-optimization method of a neural network according to claim 1, characterized in that the topological structure of the neural network comprises the numbers of input-layer and output-layer nodes, the number of hidden layers, and the number of hidden-layer nodes.
5. The nonlinear function extremum-optimization method of a neural network according to claim 1, characterized in that the step of finding the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm comprises:
generating multiple individuals at random as an initial population;
calculating the fitness value of each individual in the initial population with the fitness function according to the prediction output values of the neural network;
selecting individuals to form a population according to the initial fitness values, applying crossover and mutation operations to the individuals in turn, and obtaining new individuals forming a new population;
judging whether the set maximum number of iterations has been reached, and if so, taking the minimum fitness value in the new population as the optimal extremum of the nonlinear function.
6. A nonlinear function extremum-optimization system of a neural network, characterized in that the system comprises:
a weight-threshold optimization module, for calculating all weights and thresholds of the neural network with a sinusoidal adaptive genetic algorithm;
a neural network training and fitting module, for training the neural network with training data according to all optimized weights and thresholds, inputting prediction data into the trained neural network for prediction, and obtaining the prediction output values of the neural network;
an extremum-search module, for finding the optimal extremum of the nonlinear function with the sinusoidal adaptive genetic algorithm according to the prediction output values of the neural network.
7. The nonlinear function extremum-optimization system of a neural network according to claim 6, characterized in that the weight-threshold optimization module is configured to:
obtain the input and output data of the nonlinear function and normalize them;
select a part of the normalized data as training data and the remaining data as prediction data;
obtain all initial weights and thresholds of the neural network and encode them;
train the neural network with the training data according to all initial weights and thresholds, input the prediction data into the trained neural network for prediction, and obtain the initial prediction output values of the neural network;
calculate the absolute value of the difference between the initial prediction output value of the neural network and the desired output of each node, obtain the initial training error of the neural network, and use it as the initial fitness value of each individual;
select individuals to form a population according to the initial fitness values, apply crossover and mutation operations to the individuals in turn, and obtain new individuals forming a new population;
calculate the best fitness of each individual in the new population with the fitness function, decode the best fitness of each individual, and obtain all the best weights and thresholds of the neural network.
8. The nonlinear function extremum-optimization system of a neural network according to claim 6, characterized in that the extremum-search module is configured to:
generate multiple individuals at random as an initial population;
calculate the fitness value of each individual in the initial population with the fitness function according to the prediction output values of the neural network;
select individuals to form a population according to the initial fitness values, apply crossover and mutation operations to the individuals in turn, and obtain new individuals forming a new population;
judge whether the set maximum number of iterations has been reached, and if so, take the minimum fitness value in the new population as the optimal extremum of the nonlinear function.
9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the steps in the nonlinear function extremum-optimization method of a neural network according to any one of claims 1 to 5 are realized.
10. A computer device comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the program, realizes the steps in the nonlinear function extremum-optimization method of a neural network according to any one of claims 1 to 5.
CN201910289721.1A 2019-04-11 2019-04-11 A kind of the nonlinear function Extremal optimization method and system of neural network Pending CN110046710A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910289721.1A CN110046710A (en) 2019-04-11 2019-04-11 A kind of the nonlinear function Extremal optimization method and system of neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910289721.1A CN110046710A (en) 2019-04-11 2019-04-11 A kind of the nonlinear function Extremal optimization method and system of neural network

Publications (1)

Publication Number Publication Date
CN110046710A true CN110046710A (en) 2019-07-23

Family

ID=67276835

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910289721.1A Pending CN110046710A (en) 2019-04-11 2019-04-11 A kind of the nonlinear function Extremal optimization method and system of neural network

Country Status (1)

Country Link
CN (1) CN110046710A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610261A (en) * 2019-08-23 2019-12-24 广东奥博信息产业股份有限公司 Water body dissolved oxygen prediction method based on neural network
CN110705756A (en) * 2019-09-07 2020-01-17 创新奇智(重庆)科技有限公司 Electric power energy consumption optimization control method based on input convex neural network
CN111126560A (en) * 2019-11-07 2020-05-08 云南民族大学 Method for optimizing BP neural network based on cloud genetic algorithm
CN111859807A (en) * 2020-07-23 2020-10-30 润电能源科学技术有限公司 Initial pressure optimizing method, device, equipment and storage medium for steam turbine
CN111882041A (en) * 2020-07-31 2020-11-03 国网重庆市电力公司电力科学研究院 Power grid attack detection method and device based on improved RNN (neural network)
CN112446157A (en) * 2020-12-14 2021-03-05 广东省科学院智能制造研究所 Method and device for predicting residual life of traffic equipment
CN112564138A (en) * 2020-11-18 2021-03-26 国网山西省电力公司晋城供电公司 Three-phase unbalanced reactive power optimization method and system thereof
CN112580771A (en) * 2020-12-25 2021-03-30 核工业北京地质研究院 Inversion method for obtaining total phosphorus content of black soil by using emissivity data
CN113033781A (en) * 2021-03-26 2021-06-25 南京信息工程大学 Nonlinear equalization compensation method based on self-evolution optimization BP neural network
CN113159299A (en) * 2021-04-30 2021-07-23 杭州电子科技大学 Fractional order depth BP neural network optimization method based on extremum optimization
CN113255887A (en) * 2021-05-25 2021-08-13 上海机电工程研究所 Radar error compensation method and system based on genetic algorithm optimization BP neural network
CN115270506A (en) * 2022-08-16 2022-11-01 青岛理工大学 Method and system for predicting passing time of people going upstairs along stairs
US11628848B2 (en) 2020-03-31 2023-04-18 Toyota Research Institute, Inc. Systems and methods for training a neural network for estimating a trajectory of a vehicle



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190723)