CN109918749A - Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm - Google Patents

Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm

Info

Publication number
CN109918749A
CN109918749A · CN201910139400.3A
Authority
CN
China
Prior art keywords
learning rate
variable
value
rate changing
layer
Prior art date
Legal status
Pending
Application number
CN201910139400.3A
Other languages
Chinese (zh)
Inventor
徐英杰
许亮峰
刘成
徐美金
高健飞
吕乔榕
Current Assignee
BEIJING PICOHOOD TECHNOLOGY Co Ltd
Original Assignee
BEIJING PICOHOOD TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by BEIJING PICOHOOD TECHNOLOGY Co Ltd filed Critical BEIJING PICOHOOD TECHNOLOGY Co Ltd
Priority to CN201910139400.3A priority Critical patent/CN109918749A/en
Publication of CN109918749A publication Critical patent/CN109918749A/en
Pending legal-status Critical Current


Abstract

A multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm comprises the following steps. Step 1: the fan pressure and air flow are given values, and efficiency and cost are the target variables; data samples of the structure variables and target variables are obtained by experiment. Step 2: taking the structure variables as the input variables and the target variables as the output variables, train on the data samples and complete the establishment of the variable-learning-rate network model, in which the weights and thresholds are updated with a variable-learning-rate method. Step 3: establish the second-generation (NSGA-II) algorithm model, in which the operators are designed using the non-dominated sorting operator and an elitism strategy. Step 4: predict the energy consumption and cost of the fan with the established variable-learning-rate network, and use the predicted values as the objective-function values in the second-generation genetic algorithm model, thereby obtaining the Pareto front. Finally, the de-normalized structure-variable values are applied to the actual fan design. The present invention considers the objectives comprehensively and achieves higher precision.

Description

Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm
Technical field
The invention belongs to the field of fan operating-parameter design and the simulation of industrial processes, and relates to a multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm.
Background technique
A fan is a fluid machine whose function is to compress and transport gas. The operation of a fan involves disordered fluid flow and a complicated energy-transfer process.
Changing any structure variable of a fan affects the fan as a whole: when a structure variable changes, the target variables (typically efficiency and cost) do not all vary in the same direction. Since the optimization concerns not only efficiency but also cost, we need to obtain, under the actual working conditions, an optimal combination of design variables.
Traditional multi-objective computation is in fact a weighted single-objective computation, and the weights depend heavily on the experience of the designer, so it is difficult to achieve an accurate and effective design. Simulating the fluid-mechanics problem with CFD (computational fluid dynamics), i.e. using the computer as a tool and various discretized mathematical methods, is not sufficiently accurate and is therefore not suitable here. Current intelligent optimization algorithms, including the genetic algorithm and the particle swarm algorithm, possess fast global search ability and are widely used to solve multi-objective optimization problems.
Summary of the invention
To overcome the shortcomings of existing fan multi-objective optimization design methods, whose objectives are not comprehensive and whose precision is low, the present invention provides a multi-objective optimization method for fan design, based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm, whose objectives are comprehensive and whose precision is higher.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm comprises the following steps:
Step 1: collect the structure variables that strongly affect fan operating efficiency and cost; the fan pressure and air flow are given values, and efficiency and cost are the target variables; data samples of the structure variables and target variables can be obtained by experiment.
Step 2: take the structure variables as the input variables and the target variables as the output variables, train on the data samples, and complete the establishment of the variable-learning-rate network model, in which the weights and thresholds are updated with the variable-learning-rate method.
Step 3: establish the second-generation (NSGA-II) algorithm model, in which the operators are designed using the non-dominated sorting operator and an elitism strategy.
Step 4: predict the energy consumption and cost of the fan with the established variable-learning-rate network, and use the predicted values as the objective-function values in the second-generation genetic algorithm model, thereby obtaining the Pareto front. Finally, the de-normalized structure-variable values are applied to the actual fan design.
In step 1, the input variables are chosen as follows: select the blade outlet installation angle, the number of blades, and the impeller outlet width as the structure variables and take them as the input variables of the neural network model; take the group of target variables, efficiency and cost, as the output variables of the neural network model.
In step 2, the establishment, initialization, and training of the variable-learning-rate network model proceed as follows: first, the sample data are preprocessed; then the processed data are used to compute the inputs and outputs of the hidden-layer and output-layer nodes of the neural network; finally, according to the update formulas for the weights and thresholds in the variable-learning-rate network, when the training error approaches the training target, the learning rate of the neural network is reduced according to a mathematical model so that the weight adjustments become smaller; when the error falls below the training target, the variable-learning-rate network modeling is complete.
The processing steps of step 2 are as follows:
2.1 Data processing
Collect the relevant parameters from step 1, i.e., the impeller outlet installation angle, the number of blades, the impeller outlet width, the efficiency, and the pressure or air flow, and normalize the impeller outlet installation angle, number of blades, impeller outlet width, and total pressure to the interval [0,1] with the following formula:
k = (x − xmin) / (xmax − xmin)
where k is the normalized value, x is the value to be normalized, xmin is the minimum of the data to be normalized, and xmax is the maximum of the data to be normalized;
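For illustration only, a minimal sketch of this min-max normalization and its inverse (the inverse is used again in step 4) is given below; the function names and the column-wise handling of a sample matrix are assumptions, not part of the patent.

```python
import numpy as np

def normalize(x):
    """Min-max normalize raw samples to [0, 1]: k = (x - xmin) / (xmax - xmin)."""
    x = np.asarray(x, dtype=float)
    xmin, xmax = x.min(axis=0), x.max(axis=0)   # per-column extremes for a sample matrix
    return (x - xmin) / (xmax - xmin), xmin, xmax

def denormalize(k, xmin, xmax):
    """Invert the normalization (step 4): x = k * (xmax - xmin) + xmin."""
    return k * (xmax - xmin) + xmin
```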
2.2 Data splitting
The data set obtained after processing is divided into two parts: 70% of the data set is randomly selected as the training set, and the remaining 30% of the data is used as the test set;
2.3 Network initialization
Assume the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes. There is a single hidden layer, and its number of nodes is obtained from the empirical formula l = √(n + m) + a, where n and m are the numbers of input and output nodes and a is a constant between 1 and 10. The number of training cycles is set to N. The initial value of every layer's weights is a random number in [−1, 1]; the weights from the input layer to the hidden layer are w_ij, the weights from the hidden layer to the output layer are w_jk, the thresholds of the hidden-layer nodes are a_j, and the thresholds of the output-layer nodes are b_k. The learning rate η is generally 0.1–0.2, the training target is generally 10^−3–10^−6, the number of cycles is N (N > 200), and the activation function g(x) is taken as the Sigmoid function g(x) = 1 / (1 + e^(−x)).
2.4 Training the neural network; the process is as follows:
2.4.1 Forward propagation of the signal
2.4.1.1 Output of the hidden layer:
H_j = g( Σ_{i=1..n} w_ij·x_i − a_j ),  j = 1, ..., l
where x_i are the input-layer data and H_j is the output of hidden-layer node j;
2.4.1.2 Output of the output layer:
O_k = g( Σ_{j=1..l} H_j·w_jk − b_k )
where O_k is the output of output-layer node k;
2.4.1.3 Calculation of the error:
e_k = (Y_k − O_k)·O_k·(1 − O_k)
In the formulas above, i = 1...n, j = 1...l, k = 1...m, and Y_k is the actual (sample) output data;
By comparing the actual output of the output layer with the desired output, the error between the two is obtained. If the error is not within the required error range, error back-propagation is carried out;
2.4.2 Back-propagation of the signal (error)
2.4.2.1 Update of the weights:
According to the error e_k, the weights ω_ij between the input layer and the hidden layer and the weights ω_jk between the hidden layer and the output layer are updated as follows:
ω_ij = ω_ij + η·H_j(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k
ω_jk = ω_jk + η·H_j·e_k
2.4.2.2 Update of the thresholds:
According to the error e_k, the node thresholds a and b are updated:
a_j = a_j + η·H_j(1 − H_j)·Σ_{k=1..m} ω_jk·e_k
b_k = b_k + η·e_k
2.4.3 The adjusted weights and thresholds are assigned back to the BP neural network and steps 2.4.1–2.4.2 are repeated. When the computed error falls to twice the training target, the cycle index st at that moment is recorded, and from then on the learning rate η used in the M-th cycle is reduced as a function of M and st, so that the weight adjustments become smaller;
2.5 Data test
After the data training of steps 2.4.1–2.4.3 has been completed for all of the training set, the neural network is tested with the test-set data. If the error is below the training target, the neural network modeling is complete, i.e., the inputs and outputs of the neural network model satisfy the mapping relation.
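As a concrete illustration of steps 2.3–2.5, the following is a minimal sketch of a single-hidden-layer BP network trained with a variable learning rate. The decay rule applied after the recorded cycle st is an assumption for illustration (the patent only states that η is reduced from st onward), and the function names, shapes, and default values are likewise hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, Y, l=6, eta0=0.1, target=1e-3, N=1000, seed=0):
    """Single-hidden-layer BP network trained with a variable learning rate (steps 2.3-2.5).

    X: normalized structure variables, shape (samples, n); Y: normalized targets, shape (samples, m).
    Weights/thresholds follow the notation above: w_ij, a_j (hidden), w_jk, b_k (output)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], Y.shape[1]
    w_ij = rng.uniform(-1, 1, (n, l)); a_j = rng.uniform(-1, 1, l)
    w_jk = rng.uniform(-1, 1, (l, m)); b_k = rng.uniform(-1, 1, m)
    eta, st = eta0, None
    for cycle in range(1, N + 1):
        sq_errs = []
        for x, y in zip(X, Y):
            H = sigmoid(x @ w_ij - a_j)                # 2.4.1.1 hidden-layer output
            O = sigmoid(H @ w_jk - b_k)                # 2.4.1.2 output-layer output
            e = (y - O) * O * (1 - O)                  # 2.4.1.3 error term e_k
            delta_h = H * (1 - H) * (w_jk @ e)         # hidden-layer error signal
            w_ij += eta * np.outer(x, delta_h)         # 2.4.2.1 weight updates
            w_jk += eta * np.outer(H, e)
            a_j -= eta * delta_h                       # 2.4.2.2 threshold updates
            b_k -= eta * e                             # (signs follow the "net - threshold" convention)
            sq_errs.append(np.mean((y - O) ** 2))
        mse = float(np.mean(sq_errs))
        if mse < target:                               # 2.5: training target reached
            break
        if st is None and mse < 2 * target:            # 2.4.3: record cycle index st
            st = cycle
        if st is not None:
            eta = eta0 / (1.0 + (cycle - st))          # assumed decay law; the patent only states that eta shrinks after st
    return w_ij, a_j, w_jk, b_k
```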
The steps of the second-generation (NSGA-II) algorithm in step 3 are as follows:
3.1 Generate the parent population Pt with population size N;
3.2 Compute the non-domination rank and crowding distance of every individual in the parent population Pt;
3.3 Perform selection, crossover, and mutation to generate the first offspring population Qt;
3.4 Merge the offspring population Qt with the parent population Pt to form a new parent population Pt+1 of size 2N;
3.5 Compute the non-domination rank and crowding distance of every individual in the new parent population Pt+1;
3.6 Apply selection, crossover, and mutation to the new parent population Pt+1 to generate the new offspring population Qt+1 of size N;
3.7 Judge whether the evolution generation is still less than the maximum generation G: if not, output Qt+1; if so, set t = t + 1 and return to step 3.4 to continue the loop.
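A minimal skeleton of this loop (steps 3.1–3.7) is sketched below. The helpers passed in — init_population, evaluate, make_offspring (selection, crossover, mutation), and select_next (fast non-dominated sorting plus crowding-distance truncation, i.e. the elitism step) — are assumed to implement the operators described in this section; only the control flow is shown.

```python
def nsga2(init_population, evaluate, make_offspring, select_next, N, G):
    """Second-generation (NSGA-II) main loop: breed, merge parent and offspring, keep the best N."""
    P = init_population(N)                      # 3.1 parent population Pt, size N
    F = [evaluate(ind) for ind in P]            # objective values (here: network predictions)
    for t in range(G):                          # 3.7 loop until the maximum generation G
        Q = make_offspring(P, F, N)             # 3.3/3.6 selection, crossover, mutation
        R = P + Q                               # 3.4 merged population, size 2N
        FR = F + [evaluate(ind) for ind in Q]
        P, F = select_next(R, FR, N)            # 3.2/3.5 rank + crowding distance, keep N (elitism)
    return P, F                                 # non-dominated members approximate the Pareto front
```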
Further, in step 3.1, the population is encoded with real-number coding, the population size is set to N, and the evolution generation counter is t; the crossover probability is set to 20%–45% and the mutation probability to 1%–8%.
Further, in steps 3.2 and 3.5, the fast non-dominated sorting operator and the crowding-degree comparison operator are used; at the same time, the predicted values of the neural network model are taken as the objective-function values computed in the second-generation algorithm.
The principle of fast non-dominated sorting is as follows: first find the non-dominated solution set of the population and record it as the first non-dominated layer F1, with non-domination rank irank = 1; then remove F1 and find the non-dominated solution set of the remaining population, recorded as F2; and so on. Individuals with a smaller (better) non-domination rank are selected preferentially.
Crowding degree: the distance, in objective space, between the two individuals i+1 and i−1 adjacent to individual i. Within the same non-dominated layer F(i), these two attributes serve as the criterion by which individuals win selection; this lets the individuals in the quasi-Pareto region spread uniformly over the entire Pareto region and maintains the diversity of the population.
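The crowding-degree computation can be sketched as follows. It assumes a list of objective vectors for one non-dominated layer and normalizes each objective span, which is a common NSGA-II convention rather than something stated explicitly in this section.

```python
import numpy as np

def crowding_distance(F):
    """Crowding degree for one non-dominated layer F, shape (n_individuals, n_objectives)."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        d[order[0]] = d[order[-1]] = np.inf          # boundary individuals are always kept
        span = F[order[-1], k] - F[order[0], k]
        if span == 0:
            continue
        for idx in range(1, n - 1):
            i = order[idx]
            # distance between the neighbours i+1 and i-1 along objective k
            d[i] += (F[order[idx + 1], k] - F[order[idx - 1], k]) / span
    return d
```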
Further, in steps 3.3 and 3.6, selection means selecting individuals by means of the fast non-dominated sorting operator and the crowding-degree comparison operator; crossover means producing new individuals through the crossover combination of chromosomes; mutation means selecting an individual from the group at random and mutating a segment of the coding of the selected chromosome to produce a better individual. Crossover is carried out with the real-number intermediate (interpolation) method, and mutation is applied to the j-th gene a_ij of the i-th individual as follows:
a_ij = a_ij + (a_ij − a_max)·f(g),   r > 0.5
a_ij = a_ij + (a_min − a_ij)·f(g),   r ≤ 0.5
where r is a random number in [0,1], a_max is the upper bound of gene a_ij, a_min is its lower bound, f(g) = rand × (1 − g/G_max)², rand is a random number in [0,1], g is the current iteration number, and G_max is the maximum number of evolutions.
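A sketch of the real-coded crossover and mutation described above is given below. The per-gene mixing coefficient of the intermediate crossover and the per-gene application of the mutation rule are assumptions for illustration; the mutation branches follow the two formulas above, and the individuals, bounds, and random generator are hypothetical numpy arrays/objects.

```python
import numpy as np

rng = np.random.default_rng()

def crossover(p1, p2):
    """Real-number intermediate crossover: children are convex combinations of the two parents."""
    b = rng.random(p1.shape)                        # assumed per-gene mixing coefficient in [0,1]
    return b * p1 + (1 - b) * p2, b * p2 + (1 - b) * p1

def mutate(ind, amin, amax, g, Gmax):
    """Mutate each gene a_ij with strength f(g) = rand * (1 - g/Gmax)^2, decaying over generations."""
    out = ind.copy()
    for j in range(out.size):
        f = rng.random() * (1 - g / Gmax) ** 2
        if rng.random() > 0.5:                      # r > 0.5 branch
            out[j] = out[j] + (out[j] - amax[j]) * f
        else:                                       # r <= 0.5 branch
            out[j] = out[j] + (amin[j] - out[j]) * f
    return out
```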
Still further, in step 3.4, the operator is designed with an elitism strategy: the parent population is combined with the offspring population it generates, and selection is carried out with the fast non-dominated sorting operator and the crowding-degree comparison operator to produce the next-generation population. This helps keep the superior individuals of the parent generation in the next generation and guarantees that the best individuals in the population are not lost.
The processing of step 4 is as follows: first, the variable-learning-rate network model is established by training on the data samples; this model fits a good mapping between the input and output variables. The predicted values obtained from the neural network are then used to evaluate the objective-function values in the second-generation algorithm. Finally, the second-generation algorithm carries out a global search and finds the most desirable points of fan operating efficiency and pressure, or of efficiency and air flow, i.e., the Pareto optimal points. Since the values produced by the model are normalized values, the structure-variable values corresponding to the Pareto optimal points must be de-normalized and converted back to real values with the formula x = k·(xmax − xmin) + xmin.
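Putting steps 2–4 together, the objective function handed to the NSGA-II loop simply runs the trained network on a candidate (normalized) structure-variable vector and returns the predicted efficiency and cost; the Pareto-optimal designs are then de-normalized with the formula above. The sketch below assumes two outputs (efficiency, cost) and negates efficiency so that both objectives are minimized, which is a common GA convention rather than something stated in the patent; the weight arguments are those returned by the hypothetical train_bp sketch earlier.

```python
import numpy as np

def make_objective(w_ij, a_j, w_jk, b_k):
    """Wrap the trained variable-learning-rate network as the GA's objective function (step 4)."""
    def evaluate(x):
        x = np.asarray(x, dtype=float)                        # normalized structure-variable vector
        H = 1.0 / (1.0 + np.exp(-(x @ w_ij - a_j)))           # hidden-layer output
        eff, cost = 1.0 / (1.0 + np.exp(-(H @ w_jk - b_k)))   # predicted (normalized) efficiency and cost
        return (-eff, cost)                                   # minimize negated efficiency and cost
    return evaluate
```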
In the present invention, besides choosing an appropriate algorithm, a prediction model must also be established. An artificial neural network is a simple model abstracted from biological neural networks by bionics; it has strong nonlinear mapping ability, self-learning ability, and so on, and has been applied in many fields. The variable-learning-rate network, moreover, eliminates the complicated computation of the error-gradient magnitude in the BP neural network. Combining the genetic algorithm with the neural network can effectively improve the convergence speed of the algorithm and the accuracy of the variable parameters, so that the optimal structure-variable values are obtained quickly.
The fan structure-variable values obtained in this way cover the objectives comprehensively and have higher precision; widely applied in fan design, they not only bring practical benefit to enterprises but also contribute to energy saving and emission reduction in industrial processes.
The beneficial effects of the present invention are mainly as follows:
A variable-learning-rate network model is established. It reduces problems such as oscillation or divergence in prediction, realizes a fast and effective learning convergence process, and efficiently fits the mapping relation between the input and output variables.
A second-generation (NSGA-II) algorithm model is established. The second-generation algorithm has good global search ability, resolves the conflicting relations among multiple optimization objectives, and can obtain the Pareto optimal solutions.
The local search ability of the variable-learning-rate network is combined with the global search ability of the second-generation algorithm, so that the Pareto front and the corresponding optimal structure-variable values, i.e. the optimal rotational speed and moving-blade installation angle, can be obtained accurately, realizing high reliability and high precision in fan design.
Brief description of the drawings
Fig. 1 is the flow chart of the multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm according to the present invention.
Specific embodiment
The invention will be further described below in conjunction with the accompanying drawings.
Referring to Fig. 1, a multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm comprises the following steps:
Step 1: collect the structure variables that strongly affect fan operating efficiency and cost; the fan pressure and air flow are given values, and efficiency and cost are the target variables; data samples of the structure variables and target variables can be obtained by experiment.
Step 2: take the structure variables as the input variables and the target variables as the output variables, train on the data samples, and complete the establishment of the variable-learning-rate network model, in which the weights and thresholds are updated with the variable-learning-rate method.
Step 3: establish the second-generation (NSGA-II) algorithm model, in which the operators are designed using the non-dominated sorting operator and an elitism strategy.
Step 4: predict the energy consumption and cost of the fan with the established variable-learning-rate network, and use the predicted values as the objective-function values in the second-generation genetic algorithm model, thereby obtaining the Pareto front. Finally, the de-normalized structure-variable values are applied to the actual fan design.
In step 1, the structure variables include the blade outlet installation angle, the number of blades, the impeller outlet width, the blade tip clearance, etc. The input variables are chosen as follows: select the blade outlet installation angle, the number of blades, and the impeller outlet width as the structure variables and take them as the input variables of the neural network model; take the group of target variables, efficiency and cost, as the output variables of the neural network model.
In step 2, the establishment, initialization, and training of the variable-learning-rate network model proceed as follows: first, the sample data are preprocessed; then the processed data are used to compute the inputs and outputs of the hidden-layer and output-layer nodes of the neural network; finally, according to the update formulas for the weights and thresholds in the variable-learning-rate network, when the training error approaches the training target, the learning rate of the neural network is reduced according to a mathematical model so that the weight adjustments become smaller; when the error falls below the training target, the variable-learning-rate network modeling is complete.
The processing steps of step 2 are as follows:
2.1 Data processing
Collect the relevant parameters from step 1, i.e., the impeller outlet installation angle, the number of blades, the impeller outlet width, the efficiency, and the pressure or air flow, and normalize the impeller outlet installation angle, number of blades, impeller outlet width, and total pressure to the interval [0,1] with the following formula:
k = (x − xmin) / (xmax − xmin)
where k is the normalized value, x is the value to be normalized, xmin is the minimum of the data to be normalized, and xmax is the maximum of the data to be normalized;
2.2 Data splitting
The data set obtained after processing is divided into two parts: 70% of the data set is randomly selected as the training set, and the remaining 30% of the data is used as the test set;
2.3 Network initialization
Assume the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes. There is a single hidden layer, and its number of nodes is obtained from the empirical formula l = √(n + m) + a, where n and m are the numbers of input and output nodes and a is a constant between 1 and 10. The number of training cycles is set to N. The initial value of every layer's weights is a random number in [−1, 1]; the weights from the input layer to the hidden layer are w_ij, the weights from the hidden layer to the output layer are w_jk, the thresholds of the hidden-layer nodes are a_j, and the thresholds of the output-layer nodes are b_k. The learning rate η is generally 0.1–0.2, the training target is generally 10^−3–10^−6, the number of cycles is N (N > 200), and the activation function g(x) is taken as the Sigmoid function g(x) = 1 / (1 + e^(−x)).
2.4 Training the neural network; the process is as follows:
2.4.1 Forward propagation of the signal
2.4.1.1 Output of the hidden layer:
H_j = g( Σ_{i=1..n} w_ij·x_i − a_j ),  j = 1, ..., l
where x_i are the input-layer data and H_j is the output of hidden-layer node j;
2.4.1.2 Output of the output layer:
O_k = g( Σ_{j=1..l} H_j·w_jk − b_k )
where O_k is the output of output-layer node k;
2.4.1.3 Calculation of the error:
e_k = (Y_k − O_k)·O_k·(1 − O_k)
In the formulas above, i = 1...n, j = 1...l, k = 1...m, and Y_k is the actual (sample) output data;
By comparing the actual output of the output layer with the desired output, the error between the two is obtained. If the error is not within the required error range, error back-propagation is carried out;
2.4.2 Back-propagation of the signal (error)
2.4.2.1 Update of the weights:
According to the error e_k, the weights ω_ij between the input layer and the hidden layer and the weights ω_jk between the hidden layer and the output layer are updated as follows:
ω_ij = ω_ij + η·H_j(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k
ω_jk = ω_jk + η·H_j·e_k
2.4.2.2 Update of the thresholds:
According to the error e_k, the node thresholds a and b are updated:
a_j = a_j + η·H_j(1 − H_j)·Σ_{k=1..m} ω_jk·e_k
b_k = b_k + η·e_k
2.4.3 The adjusted weights and thresholds are assigned back to the BP neural network and steps 2.4.1–2.4.2 are repeated. When the computed error falls to twice the training target, the cycle index st at that moment is recorded, and from then on the learning rate η used in the M-th cycle is reduced as a function of M and st, so that the weight adjustments become smaller;
2.5 Data test
After the data training of steps 2.4.1–2.4.3 has been completed for all of the training set, the neural network is tested with the test-set data. If the error is below the training target, the neural network modeling is complete, i.e., the inputs and outputs of the neural network model satisfy the mapping relation.
The steps of the second-generation (NSGA-II) algorithm in step 3 are as follows:
3.1 Generate the parent population Pt with population size N;
3.2 Compute the non-domination rank and crowding distance of every individual in the parent population Pt;
3.3 Perform selection, crossover, and mutation to generate the first offspring population Qt;
3.4 Merge the offspring population Qt with the parent population Pt to form a new parent population Pt+1 of size 2N;
3.5 Compute the non-domination rank and crowding distance of every individual in the new parent population Pt+1;
3.6 Apply selection, crossover, and mutation to the new parent population Pt+1 to generate the new offspring population Qt+1 of size N;
3.7 Judge whether the evolution generation is still less than the maximum generation G: if not, output Qt+1; if so, set t = t + 1 and return to step 3.4 to continue the loop.
Further, in step 3.1, the population is encoded with real-number coding, the population size is set to N, and the evolution generation counter is t; the crossover probability is set to 20%–45% and the mutation probability to 1%–8%.
Further, in steps 3.2 and 3.5, the fast non-dominated sorting operator and the crowding-degree comparison operator are used; at the same time, in this method the predicted values of the neural network model are taken as the objective-function values computed in the second-generation algorithm.
The principle of fast non-dominated sorting is as follows: first find the non-dominated solution set of the population and record it as the first non-dominated layer F1, with non-domination rank irank = 1; then remove F1 and find the non-dominated solution set of the remaining population, recorded as F2; and so on. Individuals with a smaller (better) non-domination rank are selected preferentially.
Crowding degree: the distance, in objective space, between the two individuals i+1 and i−1 adjacent to individual i. Within the same non-dominated layer F(i), these two attributes serve as the criterion by which individuals win selection; this lets the individuals in the quasi-Pareto region spread uniformly over the entire Pareto region and maintains the diversity of the population.
Further, in steps 3.3 and 3.6, selection means selecting individuals by means of the fast non-dominated sorting operator and the crowding-degree comparison operator; crossover means producing new individuals through the crossover combination of chromosomes; mutation means selecting an individual from the group at random and mutating a segment of the coding of the selected chromosome to produce a better individual. Crossover is carried out with the real-number intermediate (interpolation) method, and mutation is applied to the j-th gene a_ij of the i-th individual as follows:
a_ij = a_ij + (a_ij − a_max)·f(g),   r > 0.5
a_ij = a_ij + (a_min − a_ij)·f(g),   r ≤ 0.5
where r is a random number in [0,1], a_max is the upper bound of gene a_ij, a_min is its lower bound, f(g) = rand × (1 − g/G_max)², rand is a random number in [0,1], g is the current iteration number, and G_max is the maximum number of evolutions.
Still further, in step 3.4, the operator is designed with an elitism strategy: the parent population is combined with the offspring population it generates, and selection is carried out with the fast non-dominated sorting operator and the crowding-degree comparison operator to produce the next-generation population. This helps keep the superior individuals of the parent generation in the next generation and guarantees that the best individuals in the population are not lost.
The processing of step 4 is as follows: first, the variable-learning-rate network model is established by training on the data samples; this model fits a good mapping between the input and output variables. The predicted values obtained from the neural network are then used to evaluate the objective-function values in the second-generation algorithm. Finally, the second-generation algorithm carries out a global search and finds the most desirable points of fan operating efficiency and pressure, or of efficiency and air flow, i.e., the Pareto optimal points. Since the values produced by the model are normalized values, the structure-variable values corresponding to the Pareto optimal points must be de-normalized and converted back to real values with the formula x = k·(xmax − xmin) + xmin.

Claims (10)

1. A multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm, characterized in that the method comprises the following steps:
Step 1: collect the structure variables that strongly affect fan operating efficiency and cost; the fan pressure and air flow are given values, and efficiency and cost are the target variables; data samples of the structure variables and target variables can be obtained by experiment;
Step 2: take the structure variables as the input variables and the target variables as the output variables, train on the data samples, and complete the establishment of the variable-learning-rate network model, in which the weights and thresholds are updated with the variable-learning-rate method;
Step 3: establish the second-generation (NSGA-II) algorithm model, in which the operators are designed using the non-dominated sorting operator and an elitism strategy;
Step 4: predict the energy consumption and cost of the fan with the established variable-learning-rate network, use the predicted values as the objective-function values in the second-generation genetic algorithm model to obtain the Pareto front, and finally apply the de-normalized structure-variable values to the actual fan design.
2. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 1, characterized in that, in step 1, the input variables are chosen as follows: the blade outlet installation angle, the number of blades, and the impeller outlet width are selected as the structure variables and taken as the input variables of the neural network model; the group of target variables, efficiency and cost, is taken as the output variables of the neural network model.
3. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 1 or 2, characterized in that, in step 2, the establishment, initialization, and training of the variable-learning-rate network model proceed as follows: first, the sample data are preprocessed; then the processed data are used to compute the inputs and outputs of the hidden-layer and output-layer nodes of the neural network; finally, according to the update formulas for the weights and thresholds in the variable-learning-rate network, when the training error approaches the training target the learning rate of the neural network is reduced according to a mathematical model so that the weight adjustments become smaller; when the error falls below the training target, the variable-learning-rate network modeling is complete.
4. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 3, characterized in that the processing steps of step 2 are as follows:
2.1 Data processing
Collect the relevant parameters from step 1, i.e., the impeller outlet installation angle, the number of blades, the impeller outlet width, the efficiency, and the pressure or air flow, and normalize the impeller outlet installation angle, number of blades, impeller outlet width, and total pressure to the interval [0,1] with the following formula:
k = (x − xmin) / (xmax − xmin)
where k is the normalized value, x is the value to be normalized, xmin is the minimum of the data to be normalized, and xmax is the maximum of the data to be normalized;
2.2 Data splitting
The data set obtained after processing is divided into two parts: 70% of the data set is randomly selected as the training set, and the remaining 30% of the data is used as the test set;
2.3 Network initialization
Assume the input layer has n nodes, the hidden layer has l nodes, and the output layer has m nodes; there is a single hidden layer, and its number of nodes is obtained from the empirical formula l = √(n + m) + a, where n and m are the numbers of input and output nodes and a is a constant between 1 and 10; the number of training cycles is set to N; the initial value of every layer's weights is a random number in [−1, 1]; the weights from the input layer to the hidden layer are w_ij, the weights from the hidden layer to the output layer are w_jk, the thresholds of the hidden-layer nodes are a_j, and the thresholds of the output-layer nodes are b_k; the learning rate η is 0.1–0.2, the training target is 10^−3–10^−6, the number of cycles is N, and the activation function g(x) is taken as the Sigmoid function g(x) = 1 / (1 + e^(−x));
2.4 Train the neural network;
2.5 Data test
After the data training of steps 2.4.1–2.4.3 has been completed for all of the training set, the neural network is tested with the test-set data; if the error is below the training target, the neural network modeling is complete, i.e., the inputs and outputs of the neural network model satisfy the mapping relation.
5. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 4, characterized in that, in step 2.4, the training of the neural network proceeds as follows:
2.4.1 Forward propagation of the signal
2.4.1.1 Output of the hidden layer:
H_j = g( Σ_{i=1..n} w_ij·x_i − a_j ),  j = 1, ..., l
where x_i are the input-layer data and H_j is the output of hidden-layer node j;
2.4.1.2 Output of the output layer:
O_k = g( Σ_{j=1..l} H_j·w_jk − b_k )
where O_k is the output of output-layer node k;
2.4.1.3 Calculation of the error:
e_k = (Y_k − O_k)·O_k·(1 − O_k)
In the formulas above, i = 1...n, j = 1...l, k = 1...m, and Y_k is the actual (sample) output data;
By comparing the actual output of the output layer with the desired output, the error between the two is obtained; if the error is not within the required error range, error back-propagation is carried out;
2.4.2 Back-propagation of the signal
2.4.2.1 Update of the weights:
According to the error e_k, the weights ω_ij between the input layer and the hidden layer and the weights ω_jk between the hidden layer and the output layer are updated as follows:
ω_ij = ω_ij + η·H_j(1 − H_j)·x_i·Σ_{k=1..m} ω_jk·e_k
ω_jk = ω_jk + η·H_j·e_k
2.4.2.2 Update of the thresholds:
According to the error e_k, the node thresholds a and b are updated:
a_j = a_j + η·H_j(1 − H_j)·Σ_{k=1..m} ω_jk·e_k
b_k = b_k + η·e_k
2.4.3 The adjusted weights and thresholds are assigned back to the BP neural network and steps 2.4.1–2.4.2 are repeated; when the computed error falls to twice the training target, the cycle index st at that moment is recorded, and from then on the learning rate η used in the M-th cycle is reduced as a function of M and st.
6. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 1 or 2, characterized in that the steps of the second-generation (NSGA-II) algorithm in step 3 are as follows:
3.1 Generate the parent population Pt with population size N;
3.2 Compute the non-domination rank and crowding distance of every individual in the parent population Pt;
3.3 Perform selection, crossover, and mutation to generate the first offspring population Qt;
3.4 Merge the offspring population Qt with the parent population Pt to form a new parent population Pt+1 of size 2N;
3.5 Compute the non-domination rank and crowding distance of every individual in the new parent population Pt+1;
3.6 Apply selection, crossover, and mutation to the new parent population Pt+1 to generate the new offspring population Qt+1 of size N;
3.7 Judge whether the evolution generation is still less than the maximum generation G: if not, output Qt+1; if so, set t = t + 1 and return to step 3.4 to continue the loop.
7. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 1 or 2, characterized in that, in step 3.1, the population is encoded with real-number coding, the population size is set to N, the evolution generation counter is t, the crossover probability is set to 20%–45%, and the mutation probability is set to 1%–8%.
8. The multi-objective optimization method for fan design based on a second-generation genetic algorithm according to claim 1 or 2, characterized in that, in steps 3.2 and 3.5, the fast non-dominated sorting operator and the crowding-degree comparison operator are used; at the same time, the predicted values of the neural network model are taken as the objective-function values computed in the second-generation algorithm.
9. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 8, characterized in that, in steps 3.3 and 3.6, selection means selecting individuals by means of the fast non-dominated sorting operator and the crowding-degree comparison operator; crossover means producing new individuals through the crossover combination of chromosomes; mutation means selecting an individual from the group at random and mutating a segment of the coding of the selected chromosome to produce a better individual; crossover is carried out with the real-number intermediate (interpolation) method, and mutation is applied to the j-th gene a_ij of the i-th individual as follows:
a_ij = a_ij + (a_ij − a_max)·f(g),   r > 0.5
a_ij = a_ij + (a_min − a_ij)·f(g),   r ≤ 0.5
where r is a random number in [0,1], a_max is the upper bound of gene a_ij, a_min is its lower bound, f(g) = rand × (1 − g/G_max)², rand is a random number in [0,1], g is the current iteration number, and G_max is the maximum number of evolutions.
10. The multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation genetic algorithm according to claim 1 or 2, characterized in that, in step 3.4, the operator is designed with an elitism strategy: the parent population is combined with the offspring population it generates, and selection is carried out with the fast non-dominated sorting operator and the crowding-degree comparison operator to produce the next-generation population, which helps keep the superior individuals of the parent generation in the next generation and guarantees that the best individuals in the population are not lost;
The processing of step 4 is as follows: first, the variable-learning-rate network model is established by training on the data samples, and this model fits a good mapping between the input and output variables; the predicted values obtained from the neural network are then used to evaluate the objective-function values in the second-generation algorithm; finally, the second-generation algorithm carries out a global search and finds the most desirable points of fan operating efficiency and pressure, or of efficiency and air flow, i.e., the Pareto optimal points; since the values produced by the model are normalized values, the structure-variable values corresponding to the Pareto optimal points are de-normalized and converted back to real values with the formula x = k·(xmax − xmin) + xmin.
CN201910139400.3A 2019-02-25 2019-02-25 Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm Pending CN109918749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910139400.3A CN109918749A (en) 2019-02-25 2019-02-25 Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910139400.3A CN109918749A (en) 2019-02-25 2019-02-25 Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm

Publications (1)

Publication Number Publication Date
CN109918749A true CN109918749A (en) 2019-06-21

Family

ID=66962203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910139400.3A Pending CN109918749A (en) Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm

Country Status (1)

Country Link
CN (1) CN109918749A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606014A (en) * 2013-10-18 2014-02-26 国家电网公司 Multi-target-based island distributed generation optimization method
CN106681146A (en) * 2016-12-31 2017-05-17 浙江大学 Blast furnace multi-target optimization control algorithm based on BP neural network and genetic algorithm
CN106951983A (en) * 2017-02-27 2017-07-14 浙江工业大学 Injector performance Forecasting Methodology based on the artificial neural network using many parent genetic algorithms

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103606014A (en) * 2013-10-18 2014-02-26 国家电网公司 Multi-target-based island distributed generation optimization method
CN106681146A (en) * 2016-12-31 2017-05-17 浙江大学 Blast furnace multi-target optimization control algorithm based on BP neural network and genetic algorithm
CN106951983A (en) * 2017-02-27 2017-07-14 浙江工业大学 Injector performance Forecasting Methodology based on the artificial neural network using many parent genetic algorithms

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
詹书俊 et al.: "基于NSGA-II的供水管网模型校核" [Calibration of a water supply network model based on NSGA-II], 《给水排水》 (Water & Wastewater Engineering) *

Similar Documents

Publication Publication Date Title
Qiao et al. Nature-inspired hybrid techniques of IWO, DA, ES, GA, and ICA, validated through a k-fold validation process predicting monthly natural gas consumption
CN109932903A (en) The air-blower control Multipurpose Optimal Method of more parent optimization networks and genetic algorithm
Wang et al. Stochastic economic emission load dispatch through a modified particle swarm optimization algorithm
Kang et al. Application of BP neural network optimized by genetic simulated annealing algorithm to prediction of air quality index in Lanzhou
CN108846526A (en) A kind of CO2 emissions prediction technique
CN109214449A (en) A kind of electric grid investment needing forecasting method
CN109084415A (en) Central air-conditioning operating parameter optimization method based on artificial neural network and genetic algorithms
CN106650920A (en) Prediction model based on optimized extreme learning machine (ELM)
Zhu et al. Coke price prediction approach based on dense GRU and opposition-based learning salp swarm algorithm
Deng et al. A novel improved whale optimization algorithm for optimization problems with multi-strategy and hybrid algorithm
CN113361761A (en) Short-term wind power integration prediction method and system based on error correction
CN112069656A (en) Durable concrete mix proportion multi-objective optimization method based on LSSVM-NSGAII
CN109886448A (en) Using learning rate changing BP neural network and the heat pump multiobjective optimization control method of NSGA-II algorithm
CN107400935A (en) Adjusting method based on the melt-spinning technology for improving ELM
Yang et al. The stochastic decision making framework for long-term multi-objective energy-water supply-ecology operation in parallel reservoirs system under uncertainties
Bekker Applying the cross-entropy method in multi-objective optimisation of dynamic stochastic systems
Zhou et al. Advances in teaching-learning-based optimization algorithm: A comprehensive survey
CN109829244A (en) The blower optimum design method of algorithm optimization depth network and three generations's genetic algorithm
Kubuś et al. A new learning approach for fuzzy cognitive maps based on system performance indicators
CN110033118A (en) Elastomeric network modeling and the blower multiobjective optimization control method based on genetic algorithm
Hu et al. Hybrid prediction model for the interindustry carbon emissions transfer network based on the grey model and general vector machine
Fountas et al. Single and multi-objective optimization methodologies in CNC machining
CN109918749A (en) Multi-objective optimization method for fan design based on variable-learning-rate network modeling and a second-generation (NSGA-II) genetic algorithm
Zhao et al. Application of LDHA-BP in prediction of atmospheric pm2. 5 concentration
CN114819151A (en) Biochemical path planning method based on improved agent-assisted shuffled frog leaping algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190621

RJ01 Rejection of invention patent application after publication