CN109782606A - Recurrent wavelet Elman neural network based on a modified gravitational search method - Google Patents
Recurrent wavelet Elman neural network based on a modified gravitational search method
- Publication number
- CN109782606A
- Authority
- CN
- China
- Prior art keywords
- layer
- wavelet
- elman
- recurrent
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Feedback Control In General (AREA)
Abstract
The present invention discloses a recurrent wavelet Elman neural network based on a modified gravitational search method. The recurrent wavelet Elman neural network has a five-layer configuration, namely an input layer, a hidden layer, a context layer, an output-feedback layer and an output layer. The modified gravitational search method comprises the following steps: initialization, fitness evaluation, wavelet mutation, population update, calculation of the total force in different directions, acceleration calculation, update of particle velocities and positions, and repetition of steps two to seven until a stopping criterion is reached. The recurrent wavelet Elman neural network of the present invention, applied together with the modified gravitational search algorithm to the Wells turbine blade angle and the generator speed, achieves wave power generation and allows a grid-connected power system with a high share and high penetration of renewable energy to maintain good dynamic characteristics and transient stability.
Description
Technical field
The present invention relates to recurrent wavelet Elman neural networks, and specifically to a recurrent wavelet Elman neural network based on a modified gravitational search method.
Background technique
The output power of a wave power device depends primarily on the magnitude of the wave speed; apart from the turbine speed, however, the output power also depends on several other factors, including the density of the water, the blade radius and the torque coefficient. The torque coefficient is in turn a function of the blade angle, and this relationship can be expressed as a characteristic curve. The characteristic curve has a best operating point: if the wave power device is operated at this point, it runs in a relatively high output-power state and delivers its best output. Operating at the optimum torque coefficient can be achieved by adjusting the blade angle. However, the output power of the wave power device is proportional to the square of the wave height, a nonlinear relationship, so an existing linear control method cannot maintain the same control performance over a wide operating range and therefore cannot achieve a satisfactory control result. Moreover, the learning rate of a conventional wavelet Elman neural network can only be tuned manually according to the actual response; poor tuning easily causes the network to diverge, so the wave power generator cannot maintain a good control dynamic.
Summary of the invention
The purpose of the present invention is to provide a recurrent wavelet Elman neural network based on a modified gravitational search method. The recurrent wavelet Elman neural network based on the modified gravitational search algorithm is applied to the Wells turbine blade angle and the generator speed to achieve wave power generation, allowing a grid-connected power system with a high share and high penetration of renewable energy to maintain good dynamic characteristics and transient stability. The recurrent wavelet Elman neural network of the invention converges faster than a conventional wavelet Elman neural network, whose learning rate can only be tuned manually according to the actual response; poor tuning easily causes the network to diverge, so that the wave power generator cannot maintain a good control dynamic. The optimal learning rates of the recurrent wavelet Elman neural network are found using the modified gravitational search algorithm.
The purpose of the present invention can be achieved through the following technical solutions:
A recurrent wavelet Elman neural network based on a modified gravitational search method: the recurrent wavelet Elman neural network has a five-layer configuration, namely an input layer, a hidden layer, a context layer, an output-feedback layer and an output layer. The recurrent wavelet Elman neural network is initialized, supervised learning based on gradient descent is used to train the network, and the parameters of the recurrent wavelet Elman neural network are adjusted using training patterns. Through recursive application of the chain rule, the error term of each layer is calculated first, and the energy function E is expressed as:
The training process of the network is as follows. Update of the fourth-layer weight Wlj: the propagation of the fourth-layer error term is calculated as follows:
The update formula for Wlj is:
Wlj(N+1)=Wlj(N)+η1ΔWlj
η1 is the learning rate of the fourth-layer weight Wlj.
Update of the third-layer weight Wkj: in the third layer, by applying the chain rule, the update rule for Wkj is:
Wkj(N+1)=Wkj(N)+η2ΔWkj
η2 is the learning rate of the third-layer weight Wkj.
Update of the second-layer weight Wij: in the second layer, by applying the chain rule, the update rule for Wij is:
Wij(N+1)=Wij(N)+η3ΔWij
η3 is the learning rate of the second-layer weight Wij.
The update rules for the translation parameter aj and the dilation parameter bj are:
aj(N+1)=aj(N)+ηaΔaj
bj(N+1)=bj(N)+ηbΔbj
ηa and ηb are the learning rates of the translation and dilation parameters.
The modified gravitational search method adjusts the learning rates (η1, η2, η3, ηa, ηb) online to optimize the recurrent wavelet Elman neural network, and comprises the following steps:
Step 1: initialization
The position of the i-th particle is defined by a vector whose d-th component is the value of the d-th dimension of the i-th particle; the population size is N.
Step 2: fitness function
The fitness value of each particle is evaluated; FIT denotes the fitness function used for this evaluation.
Step 3: wavelet mutation
Based on the wavelet transform, the vectors in the population are mutated, yielding a wavelet mutation operation F with good fine-tuning properties; the Morlet wavelet is used as the mother wavelet and substituted into the following formula:
Step 4: population update
The gravitational constant G is a function of the initial value G0 and the time t; it is initialized at the start and decreases over time to control the search precision, as follows:
α is a constant, Iter is the iteration count, Itermax is the maximum number of iterations, and the gravitational constant and the masses are calculated from the fitness evaluation.
Step 5: calculation of the total force in different directions
After the mass of each particle has been calculated, its acceleration is obtained. According to Newton's law, the total force exerted on each object by the set of heavier masses is calculated as follows:
Step 6: acceleration calculation
Based on Newton's law, the acceleration is calculated as follows:
Rij(t) is the Euclidean distance between particles i and j, and Kbest is the set of the first K objects.
Step 7: update of particle velocities and positions
To update the particle positions in the optimization method, the velocity of each particle is calculated; the velocity and new position of each particle are computed according to the following equations:
xi(t+1)=xi(t)+vi(t+1)
c1 and c2 are learning factors, rand1() and rand2() are two random numbers between 0 and 1, and pbest(t) and gbest(t) are the individual and global best values.
Step 8: steps two to seven are repeated until the stopping criterion is reached.
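To make the object of the search concrete, the following minimal Python sketch (an illustration only, not the patent's implementation) shows how a five-dimensional particle position can encode the learning rates (η1, η2, η3, ηa, ηb) that the modified gravitational search tunes; the function and variable names are assumptions introduced here.

```python
import numpy as np

def decode_learning_rates(position):
    """Map a 5-dimensional particle position onto the learning rates
    (eta1, eta2, eta3, eta_a, eta_b) used by the network's update rules."""
    eta1, eta2, eta3, eta_a, eta_b = np.asarray(position, dtype=float)
    return {"eta1": eta1, "eta2": eta2, "eta3": eta3,
            "eta_a": eta_a, "eta_b": eta_b}
```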
Further, the first layer of the recurrent wavelet Elman neural network is the input layer: the quantities given are the i-th input and output values of the first layer, respectively, and N denotes the N-th iteration.
Further, the second layer of the recurrent wavelet Elman neural network is the hidden layer, whose input and output are as follows:
the quantities given are the j-th input and output values of the second layer, respectively; the translation and dilation parameters of the wavelet function are aj and bj; Wij, Wkj and Wlj are the connection weights between the input neurons and the hidden neurons, between the context neurons and the hidden neurons, and between the output-feedback neurons and the hidden neurons, respectively; the sum of the activation strengths of the input layer, the context layer and the output-feedback layer is denoted by θj:
The Mexican hat wavelet, derived from the Gaussian function, is selected as the mother wavelet φ(x),
φ(x) = (1 - x²)exp[-(1/2)x²]
Further, the third layer is the context layer, where the quantities given are the k-th input and output values of the third layer, respectively, and α is the self-connection feedback gain.
Further, the fourth layer is the output-feedback layer. The output-feedback neurons are nonlinear neurons with an activation function of the form exp(-x²), which gives a higher learning efficiency; the input and output of the fourth-layer nodes are expressed as follows:
the quantities given are the l-th input and output values of the fourth layer, respectively, and the remaining quantity is the output of the fifth layer.
Further, the fifth layer is the output layer, where the quantities given are the o-th input and output values of the fifth layer, respectively; Wjo, the connection weight between the hidden layer and the output layer, is set to 1; and the output of the recurrent wavelet Elman neural network is the control force of the proposed method.
Further, the quantities given are the reference rotational speed and the actual rotational speed ωr of the generator, and e is the error of the speed response.
Further, the population update is performed by the following formulas:
Mi(t) and FITi(t) are the mass function and fitness function of the i-th particle, respectively, and best and worst are the maximum and minimum fitness values of the population, expressed respectively as:
Beneficial effects of the present invention:
1. The recurrent wavelet Elman neural network of the present invention, based on the modified gravitational search algorithm, is applied to the Wells turbine blade angle and the generator speed to achieve wave power generation, allowing a grid-connected power system with a high share and high penetration of renewable energy to maintain good dynamic characteristics and transient stability;
2. The recurrent wavelet Elman neural network of the present invention converges faster than a conventional wavelet Elman neural network; the optimal learning rates of the recurrent wavelet Elman neural network are found using the modified gravitational search algorithm.
Description of the drawings
The present invention will be further described below with reference to the drawings.
Fig. 1 is a block diagram of the neural network configuration of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. It is evident that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
A recurrent wavelet Elman neural network based on a modified gravitational search method: as shown in Fig. 1, the recurrent wavelet Elman neural network has a five-layer configuration, namely an input layer, a hidden layer, a context layer, an output-feedback layer and an output layer, where z⁻¹ denotes a time delay serving as a feedback unit. The output of the hidden layer is fed back to the context layer through a time delay; the output of the output layer is fed back to the hidden layer through a time delay, and the context layer is fed back through another time delay.
First layer: input layer
The quantities given are the i-th input and output values of the first layer, respectively, and N denotes the N-th iteration.
The second layer: hidden layer
Each neuron uses a different set of wavelet functions, and the different translations and dilations significantly enlarge the search space. Because of the similarity between the derivative of the Gaussian function and a wavelet, it is taken as the mother wavelet; in the hidden layer, each node generates its own wavelet from its wavelet function. The input and output of the hidden layer are as follows:
where the quantities given are the j-th input and output values of the second layer, respectively; the translation and dilation parameters of the wavelet function are aj and bj. Meanwhile, Wij, Wkj and Wlj are the connection weights between the input neurons and the hidden neurons, between the context neurons and the hidden neurons, and between the output-feedback neurons and the hidden neurons, respectively. The sum of the activation strengths of the input layer, the context layer and the output-feedback layer is denoted by θj:
In addition, the Mexican hat wavelet, derived from the Gaussian function, is selected as the mother wavelet φ(x),
φ(x) = (1 - x²)exp[-(1/2)x²]
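The following short Python sketch illustrates the Mexican hat mother wavelet given above and a hidden node evaluated at its translated and dilated net input; it assumes the common form φ((θj − aj)/bj) with translation aj and dilation bj, since the layer equations appear only as images in the original text.

```python
import numpy as np

def mexican_hat(x):
    """Mother wavelet phi(x) = (1 - x^2) * exp(-x^2 / 2)."""
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def wavelet_node(theta_j, a_j, b_j):
    """Hidden-node output: the mother wavelet evaluated at the net input
    theta_j after translation by a_j and dilation by b_j (assumed form)."""
    return mexican_hat((theta_j - a_j) / b_j)
```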
Third layer: context layer
The quantities given are the k-th input and output values of the third layer, respectively, and α is the self-connection feedback gain.
Fourth layer: output-feedback layer
The output-feedback neurons are nonlinear neurons with an activation function of the form exp(-x²), which gives a higher learning efficiency. The input and output of the fourth-layer nodes are expressed as follows:
The quantities given are the l-th input and output values of the fourth layer, respectively, and the remaining quantity is the output of the fifth layer.
Fifth layer: output layer
The quantities given are the o-th input and output values of the fifth layer, respectively. Wjo, the connection weight between the hidden layer and the output layer, is set to 1, and the output of the recurrent wavelet Elman neural network is the control force of the proposed method.
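Because the layer equations are reproduced only as images in the original text, the following Python sketch of one forward pass through the five layers is an assumption-laden illustration: the class name, the initial states and the exact wiring of the delayed feedback paths are choices made here, guided by the textual description (wavelet hidden nodes, a context layer with self-connection gain α, an exp(−x²) output-feedback layer and hidden-to-output weights fixed at 1).

```python
import numpy as np

class RecurrentWaveletElmanNN:
    """Five-layer recurrent wavelet Elman network: input, wavelet hidden,
    context, output-feedback and output layers (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, n_out=1, alpha=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W_ij = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
        self.W_kj = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
        self.W_lj = rng.normal(scale=0.1, size=(n_hidden, n_out))     # output feedback -> hidden
        self.a = np.zeros(n_hidden)        # translation parameters a_j
        self.b = np.ones(n_hidden)         # dilation parameters b_j
        self.alpha = alpha                 # self-connection feedback gain of the context layer
        self.ctx = np.zeros(n_hidden)      # delayed context-layer state
        self.fb = np.zeros(n_out)          # delayed output held by the output-feedback layer
        self.n_out = n_out                 # hidden-to-output weights W_jo are fixed at 1

    def forward(self, x):
        # Layers 1-2: sum the activation strengths of the input, context and
        # output-feedback layers (theta_j), then apply the Mexican hat wavelet
        # with translation a_j and dilation b_j.
        theta = self.W_ij @ x + self.W_kj @ self.ctx + self.W_lj @ np.exp(-self.fb**2)
        z = (theta - self.a) / self.b
        h = (1.0 - z**2) * np.exp(-0.5 * z**2)
        # Layer 3: context layer with self-connection gain alpha (one-step delay).
        self.ctx = self.alpha * self.ctx + h
        # Layer 5: output layer with unit hidden-to-output weights.
        y = np.full(self.n_out, h.sum())
        # Layer 4: output-feedback layer stores the delayed output for the next step.
        self.fb = y
        return y
```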
The recurrent wavelet Elman neural network is initialized, and supervised learning based on gradient descent is used to train the network; the parameters of the recurrent wavelet Elman neural network are adjusted using training patterns. Through recursive application of the chain rule, the error term of each layer is calculated first, and the energy function E is expressed as:
where the quantities given are the reference rotational speed and the actual rotational speed ωr of the generator, and e is the error of the speed response. The training process of the network is as follows:
Update of the fourth-layer weight Wlj
The propagation of the error term of this layer is calculated as follows:
and the update formula for Wlj is:
Wlj(N+1)=Wlj(N)+η1ΔWlj
where η1 is the learning rate of the fourth-layer weight Wlj;
Update of the third-layer weight Wkj
In the third layer, by applying the chain rule, the update rule for Wkj is:
Wkj(N+1)=Wkj(N)+η2ΔWkj
where η2 is the learning rate of the third-layer weight Wkj;
Update of the second-layer weight Wij
In the second layer, by applying the chain rule, the update rule for Wij is:
Wij(N+1)=Wij(N)+η3ΔWij
where η3 is the learning rate of the second-layer weight Wij;
The update rules for the translation parameter aj and the dilation parameter bj are:
aj(N+1)=aj(N)+ηaΔaj
bj(N+1)=bj(N)+ηbΔbj
where ηa and ηb are the learning rates of the translation and dilation parameters.
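As a compact restatement of the update rules above, the sketch below applies one gradient-descent step to each parameter group with its own learning rate; it assumes the increments ΔW have already been obtained by back-propagating the energy function E (for example E = ½e²) through the network, since the explicit chain-rule expressions are given only as images in the original.

```python
def apply_updates(params, grads, etas):
    """One update step W(N+1) = W(N) + eta * dW for each parameter group.
    `params` and `grads` are dicts keyed by 'W_lj', 'W_kj', 'W_ij', 'a', 'b';
    `etas` holds the five learning rates tuned by the modified gravitational
    search (eta1, eta2, eta3, eta_a, eta_b).  Names are illustrative."""
    key_to_eta = {"W_lj": "eta1", "W_kj": "eta2", "W_ij": "eta3",
                  "a": "eta_a", "b": "eta_b"}
    for k, ek in key_to_eta.items():
        params[k] = params[k] + etas[ek] * grads[k]
    return params
```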
The modified gravitational search method adjusts the learning rates (η1, η2, η3, ηa, ηb) online to optimize the recurrent wavelet Elman neural network, and comprises the following steps:
Step 1: initialization
The position of the i-th particle is defined by a vector whose d-th component is the value of the d-th dimension of the i-th particle; the population size is N.
Step 2: fitness function
The fitness value of each particle is evaluated; FIT denotes the fitness function used for this evaluation.
Step 3: wavelet mutation
Based on the wavelet transform, the vectors in the population are mutated, yielding a wavelet mutation operation F with good fine-tuning properties; the Morlet wavelet is used as the mother wavelet and substituted into the following formula:
Step 4: population update
The gravitational constant G is a function of the initial value G0 and the time t; it is initialized at the start and decreases over time to control the search precision, as follows:
where α is a constant, Iter is the iteration count and Itermax is the maximum number of iterations.
The gravitational constant and the masses are calculated from the fitness evaluation and are updated by the following formulas:
Mi(t) and FITi(t) are the mass function and fitness function of the i-th particle, respectively. In addition, best and worst are the maximum and minimum fitness values of the population, expressed respectively as:
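The gravitational-constant and mass formulas are shown only as images in the original, so the Python sketch below assumes the standard gravitational search forms G(t) = G0·exp(−α·Iter/Itermax), mi = (FITi − worst)/(best − worst) and Mi = mi/Σmj, which match the symbols named in the surrounding text.

```python
import numpy as np

def gravitational_constant(G0, alpha, it, it_max):
    """Assumed standard GSA decay of the gravitational constant."""
    return G0 * np.exp(-alpha * it / it_max)

def masses(fit):
    """Normalised masses from the population fitness values (larger fitness
    = better), with best = max(fit) and worst = min(fit)."""
    best, worst = fit.max(), fit.min()
    m = (fit - worst) / (best - worst + 1e-12)   # small constant avoids 0/0
    return m / (m.sum() + 1e-12)
```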
Step 5: calculation of the total force in different directions
After the mass of each particle has been calculated, its acceleration is obtained. According to Newton's law, the total force exerted on each object by the set of heavier masses is calculated as follows:
where randj() is a random number between 0 and 1 and ε is a small constant.
Step 6: acceleration calculation
Based on Newton's law, the acceleration is calculated as follows:
where Rij(t) is the Euclidean distance between particles i and j, and Kbest is the set of the first K objects, which have the best fitness values and the largest masses; the value of K decreases with each iteration.
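Again assuming the standard gravitational search expressions behind the images (Fij = G(t)·Mi·Mj/(Rij + ε)·(xj − xi), a randomly weighted sum over the Kbest heaviest agents, and ai = Fi/Mi), a minimal Python sketch of steps five and six is:

```python
import numpy as np

def forces_and_accelerations(X, M, G, kbest, eps=1e-9, rng=None):
    """Total force on each agent from the Kbest heaviest agents, then a = F/M.
    X: (N, D) positions, M: (N,) normalised masses, kbest: indices of the
    K agents allowed to attract the others (assumed standard GSA form)."""
    rng = rng or np.random.default_rng()
    N, D = X.shape
    F = np.zeros((N, D))
    for i in range(N):
        for j in kbest:
            if j == i:
                continue
            R = np.linalg.norm(X[i] - X[j])              # Euclidean distance R_ij
            F[i] += rng.random() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])
    return F / (M[:, None] + 1e-12)                      # acceleration a_i = F_i / M_i
```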
Step 7: update of particle velocities and positions
To update the particle positions in the optimization method, the velocity of each particle is calculated; the velocity and new position of each particle are computed according to the following equations:
xi(t+1)=xi(t)+vi(t+1)
where c1 and c2 are learning factors, rand1() and rand2() are two random numbers between 0 and 1, and pbest(t) and gbest(t) are the individual and global best values.
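The velocity and position equations are likewise given only as images; the sketch below assumes a PSO-style hybrid update that combines the gravitational acceleration with personal-best and global-best attractors, which is one plausible reading of the symbols named in the text (c1, c2, rand1(), rand2(), pbest(t), gbest(t)) and should not be taken as the patent's exact formula.

```python
import numpy as np

def update_velocity_position(X, V, acc, pbest, gbest, c1, c2, rng=None):
    """Assumed hybrid update: random inertia, GSA acceleration, and pbest/gbest
    attraction terms, followed by x_i(t+1) = x_i(t) + v_i(t+1)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = rng.random(X.shape) * V + acc + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return X + V, V
```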
Step 8: steps two to seven are repeated until the stopping criterion is reached.
Steps two to seven are repeated until the best fitness value no longer shows noticeable improvement or a set number of iterations has been completed. The solution with the highest fitness value is selected as the best set of learning rates for the recurrent wavelet Elman neural network.
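Tying the eight steps together, the following Python sketch runs the modified gravitational search loop to tune the five learning rates, reusing the helper functions sketched in the previous steps. It is an illustrative assumption throughout: the fitness function `evaluate` (for example one based on the closed-loop speed-tracking error), the wavelet-mutation perturbation, the shrinking of Kbest and all parameter values are choices made here, not values taken from the patent.

```python
import numpy as np

def mgsa_tune_learning_rates(evaluate, n_particles=20, n_iters=100,
                             G0=100.0, alpha=20.0, c1=0.5, c2=1.5,
                             bounds=(1e-4, 1.0), seed=1):
    """Sketch of the modified gravitational search loop for the learning rates.
    `evaluate(position) -> fitness` must be supplied by the user (larger = better)."""
    rng = np.random.default_rng(seed)
    D, (lo, hi) = 5, bounds                               # one dimension per learning rate
    X = rng.uniform(lo, hi, size=(n_particles, D))        # step 1: initialisation
    V = np.zeros_like(X)
    pbest, pbest_fit = X.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_fit = X[0].copy(), -np.inf

    for it in range(n_iters):                             # step 8: repeat until stop
        fit = np.array([evaluate(x) for x in X])          # step 2: fitness evaluation
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = X[better], fit[better]
        if fit.max() > gbest_fit:
            gbest, gbest_fit = X[fit.argmax()].copy(), fit.max()

        # step 3: wavelet mutation of a few randomly chosen coordinates
        # (Morlet-shaped perturbation; the exact operator is an assumption)
        for i in rng.choice(n_particles, size=max(1, n_particles // 10), replace=False):
            d = rng.integers(D)
            phi = rng.random() * 2.5
            s = np.exp(-0.5 * phi**2) * np.cos(5.0 * phi)
            X[i, d] = np.clip(X[i, d] + s * ((hi - X[i, d]) if s > 0 else (X[i, d] - lo)),
                              lo, hi)

        G = gravitational_constant(G0, alpha, it, n_iters)     # step 4: population update
        M = masses(fit)
        K = max(1, int(n_particles * (1.0 - it / n_iters)))    # Kbest shrinks over time
        kbest = np.argsort(M)[-K:]
        acc = forces_and_accelerations(X, M, G, kbest, rng=rng)        # steps 5 and 6
        X, V = update_velocity_position(X, V, acc, pbest, gbest, c1, c2, rng)  # step 7
        X = np.clip(X, lo, hi)

    return gbest        # best (eta1, eta2, eta3, eta_a, eta_b) found by the search
```

The returned vector can then be passed through decode_learning_rates (sketched earlier) to obtain the dictionary of learning rates consumed by apply_updates.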
In the description of this specification, reference to the terms "one embodiment", "example", "specific example" and the like means that a particular feature, structure, material or characteristic described in conjunction with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the above embodiments; the above embodiments and the description merely illustrate the principles of the invention. Various changes and improvements may be made to the invention without departing from its spirit and scope, and all such changes and improvements fall within the protection scope of the claimed invention.
Claims (8)
1. A recurrent wavelet Elman neural network based on a modified gravitational search method, the recurrent wavelet Elman neural network having a five-layer configuration, namely an input layer, a hidden layer, a context layer, an output-feedback layer and an output layer, characterized in that the recurrent wavelet Elman neural network is initialized, supervised learning based on gradient descent is used to train the network, and the parameters of the recurrent wavelet Elman neural network are adjusted using training patterns; through recursive application of the chain rule, the error term of each layer is calculated first, and the energy function E is expressed as:
The training process of the network is as follows. Update of the fourth-layer weight Wlj: the propagation of the fourth-layer error term is calculated as follows:
The update formula for Wlj is:
Wlj(N+1)=Wlj(N)+η1ΔWlj
η1 is the learning rate of the fourth-layer weight Wlj;
Update of the third-layer weight Wkj: in the third layer, by applying the chain rule, the update rule for Wkj is:
Wkj(N+1)=Wkj(N)+η2ΔWkj
η2 is the learning rate of the third-layer weight Wkj;
Update of the second-layer weight Wij: in the second layer, by applying the chain rule, the update rule for Wij is:
Wij(N+1)=Wij(N)+η3ΔWij
η3 is the learning rate of the second-layer weight Wij;
The update rules for the translation parameter aj and the dilation parameter bj are:
aj(N+1)=aj(N)+ηaΔaj
bj(N+1)=bj(N)+ηbΔbj
ηa and ηb are the learning rates of the translation and dilation parameters;
The modified gravitational search method adjusts the learning rates (η1, η2, η3, ηa, ηb) online to optimize the recurrent wavelet Elman neural network, and comprises the following steps:
Step 1: initialization
The position of the i-th particle is defined by a vector whose d-th component is the value of the d-th dimension of the i-th particle; the population size is N;
Step 2: fitness function
The fitness value of each particle is evaluated; FIT denotes the fitness function used for this evaluation;
Step 3: wavelet mutation
Based on the wavelet transform, the vectors in the population are mutated, yielding a wavelet mutation operation F with good fine-tuning properties; the Morlet wavelet is used as the mother wavelet and substituted into the following formula:
Step 4: population update
The gravitational constant G is a function of the initial value G0 and the time t; it is initialized at the start and decreases over time to control the search precision, as follows:
where α is a constant, Iter is the iteration count, Itermax is the maximum number of iterations, and the gravitational constant and the masses are calculated from the fitness evaluation;
Step 5: calculation of the total force in different directions
After the mass of each particle has been calculated, its acceleration is obtained; according to Newton's law, the total force exerted on each object by the set of heavier masses is calculated as follows:
Step 6: acceleration calculation
Based on Newton's law, the acceleration is calculated as follows:
Rij(t) is the Euclidean distance between particles i and j, and Kbest is the set of the first K objects;
Step 7: update of particle velocities and positions
To update the particle positions in the optimization method, the velocity of each particle is calculated; the velocity and new position of each particle are computed according to the following equations:
xi(t+1)=xi(t)+vi(t+1)
c1 and c2 are learning factors, rand1() and rand2() are two random numbers between 0 and 1, and pbest(t) and gbest(t) are the individual and global best values;
Step 8: steps two to seven are repeated until the stopping criterion is reached.
2. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the first layer of the recurrent wavelet Elman neural network is the input layer, where the quantities given are the i-th input and output values of the first layer, respectively, and N denotes the N-th iteration.
3. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the second layer of the recurrent wavelet Elman neural network is the hidden layer, whose input and output are as follows:
where the quantities given are the j-th input and output values of the second layer, respectively; the translation and dilation parameters of the wavelet function are aj and bj; Wij, Wkj and Wlj are the connection weights between the input neurons and the hidden neurons, between the context neurons and the hidden neurons, and between the output-feedback neurons and the hidden neurons, respectively; the sum of the activation strengths of the input layer, the context layer and the output-feedback layer is denoted by θj:
and the Mexican hat wavelet, derived from the Gaussian function, is selected as the mother wavelet φ(x),
φ(x) = (1 - x²)exp[-(1/2)x²]
4. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the third layer is the context layer, where the quantities given are the k-th input and output values of the third layer, respectively, and α is the self-connection feedback gain.
5. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the fourth layer is the output-feedback layer; the output-feedback neurons are nonlinear neurons with an activation function of the form exp(-x²), which gives a higher learning efficiency, and the input and output of the fourth-layer nodes are expressed as follows:
where the quantities given are the l-th input and output values of the fourth layer, respectively, and the remaining quantity is the output of the fifth layer.
6. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the fifth layer is the output layer, where the quantities given are the o-th input and output values of the fifth layer, respectively; Wjo, the connection weight between the hidden layer and the output layer, is set to 1; and the output of the recurrent wavelet Elman neural network is the control force of the proposed method.
7. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the quantities given are the reference rotational speed and the actual rotational speed ωr of the generator, and e is the error of the speed response.
8. The recurrent wavelet Elman neural network based on a modified gravitational search method according to claim 1, characterized in that the population update is performed by the following formulas:
where Mi(t) and FITi(t) are the mass function and fitness function of the i-th particle, respectively, and best and worst are the maximum and minimum fitness values of the population, expressed respectively as:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910185284.9A CN109782606A (en) | 2019-03-12 | 2019-03-12 | Recurrent wavelet Elman neural network based on a modified gravitational search method
Publications (1)
Publication Number | Publication Date |
---|---|
CN109782606A true CN109782606A (en) | 2019-05-21 |
Family
ID=66489071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910185284.9A Withdrawn CN109782606A (en) | 2019-03-12 | 2019-03-12 | Recursive small echo Ai Erman neural network based on modified form gravitation searching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109782606A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105844332A (en) * | 2016-03-10 | 2016-08-10 | 中国石油大学(华东) | Fast recursive Elman neural network modeling and learning algorithm |
CN106779177A (en) * | 2016-11-28 | 2017-05-31 | 国网冀北电力有限公司唐山供电公司 | Multiresolution wavelet neutral net electricity demand forecasting method based on particle group optimizing |
CN108983181A (en) * | 2018-06-28 | 2018-12-11 | 浙江大学 | A kind of radar marine target detection system of gunz optimizing |
Non-Patent Citations (2)
Title |
---|
KAI-HUNG LU et al.: "Recurrent wavelet-based Elman neural network with modified gravitational search algorithm control for integrated offshore wind and wave power generation systems", Energy *
袁杰 et al.: "Research on speech emotion recognition methods based on the fusion of ANN and GMM", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111680783A (en) * | 2020-05-25 | 2020-09-18 | 江门市华恩电子研究院有限公司 | Deep learning training and optimizing method and system based on novel wavelet excitation function |
CN111829786A (en) * | 2020-06-12 | 2020-10-27 | 上海电力大学 | Gas turbine fault diagnosis method based on GSA (generalized likelihood analysis) optimized BP (back propagation) neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Lin et al. | Intelligent approach to maximum power point tracking control strategy for variable-speed wind turbine generation system | |
WO2021073090A1 (en) | Real-time robust variable-pitch wind turbine generator control system and method employing reinforcement learning | |
CN105913150A (en) | BP neural network photovoltaic power station generating capacity prediction method based on genetic algorithm | |
CN105913151A (en) | Photovoltaic power station power generation amount predication method based on adaptive mutation particle swarm and BP network | |
CN106875041A (en) | A kind of short-term wind speed forecasting method | |
CN106532778A (en) | Method for calculating distributed photovoltaic grid connected maximum penetration level | |
CN109782606A (en) | Recurrent wavelet Elman neural network based on a modified gravitational search method | |
CN109737008A (en) | Wind turbines intelligence variable blade control system and method, Wind turbines | |
CN104314755A (en) | IPSO (Immune Particle Swarm Optimization)-based DFIG (Doubly-fed Induction Generator) variable pitch LADRC (Linear Active Disturbance Rejection Control) method and system | |
CN103425867A (en) | Short-term wind power combination prediction method | |
CN113489015A (en) | Power distribution network multi-time scale reactive voltage control method based on reinforcement learning | |
CN110445127A (en) | A kind of var Optimization Method in Network Distribution and system towards multiple stochastic uncertainty | |
CN111798037B (en) | Data-driven optimal power flow calculation method based on stacked extreme learning machine framework | |
CN114512995B (en) | Multi-device cooperative broadband oscillation suppression method for offshore wind power flexible direct grid-connected system | |
CN113469332A (en) | Virtual synchronous generator inertia damping self-adaptive control method based on fuzzy nerves | |
CN108468622A (en) | Wind turbines blade root load method of estimation based on extreme learning machine | |
CN114725944A (en) | Power electronic distribution network source and network load optimization operation control method and system | |
CN116757446A (en) | Cascade hydropower station scheduling method and system based on improved particle swarm optimization | |
CN114566971A (en) | Real-time optimal power flow calculation method based on near-end strategy optimization algorithm | |
Farhane et al. | Smart algorithms to control a variable speed wind turbine | |
CN118472964A (en) | Reactive power/voltage control method of power distribution network with self-adaptive topology change | |
Abatari et al. | Application of bat optimization algorithm in optimal power flow | |
Vohra et al. | End-to-end learning with multiple modalities for system-optimised renewables nowcasting | |
CN108223274A (en) | Large Scale Variable Pitch Wind Turbine System discrimination method based on optimization RBF neural | |
Arora et al. | AI based MPPT methods for grid connected PV systems under non linear changing solar irradiation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20190521 |