CN104102838A - Transformer noise prediction method based on wavelet neural network and wavelet technology - Google Patents


Info

Publication number
CN104102838A
CN104102838A (application CN201410334009.6A)
Authority
CN
China
Prior art keywords: wavelet, neural network, neuron, output, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410334009.6A
Other languages
Chinese (zh)
Inventor
姜鸿羽
李凯
许洪华
马宏忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Nanjing Power Supply Co of Jiangsu Electric Power Co
Original Assignee
Hohai University HHU
Nanjing Power Supply Co of Jiangsu Electric Power Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU, Nanjing Power Supply Co of Jiangsu Electric Power Co filed Critical Hohai University HHU
Priority to CN201410334009.6A
Publication of CN104102838A
Legal status: Pending


Abstract

The invention discloses a transformer noise prediction method based on a wavelet neural network and wavelet techniques. The hyperbolic-tangent sigmoid activation function of the hidden-layer neurons in a traditional BP (back-propagation) neural network is replaced with a wavelet basis function, and a momentum factor is introduced when the network parameters are adjusted, so that the prediction model converges faster and reaches a higher error precision. The vibration and noise digital signals are decomposed with the wavelet decomposition technique, and the resulting low-frequency wavelet coefficients serve as the input-output pairs of the prediction model; after modeling, the predicted low-frequency wavelet coefficients are reconstructed with the wavelet reconstruction technique to obtain the predicted noise digital signal. The method requires fewer training samples, shortens the time needed to train the neurons of the neural network, and avoids the poor prediction performance caused by high-frequency environmental interference noise contained in measured transformer noise data.

Description

Transformer noise prediction method based on a wavelet neural network and wavelet techniques
Technical field
The present invention relates to a transformer noise prediction method, in particular to a transformer noise prediction method based on a wavelet neural network and wavelet techniques, and belongs to the technical field of environmental protection for electric power.
Background technology
As high-power transformers move into residential areas, the low-frequency noise they generate seriously affects the physical and mental health of residents. Traditional passive noise control for transformers is effective against mid- and high-frequency noise but performs poorly against low-frequency noise. To control transformer low-frequency noise effectively, many scholars at home and abroad have applied active noise control to transformer noise suppression. Although these studies achieve a certain amount of noise reduction, the overall effect remains unsatisfactory. One main reason is that, with active noise control, feedback from the secondary sound source and the noise caused by changes in the transformer's surroundings severely interfere with the primary noise signal, greatly degrading the stability and effectiveness of the whole system.
Transformer noise prediction can remedy exactly this deficiency. The predicted transformer noise can be fed directly into the active noise control system as the initial noise digital signal, so an initial acoustic sensor is no longer needed to acquire it; this effectively avoids the interference from the secondary sound source and the transformer's ambient noise that would otherwise corrupt the initial noise signal collected by the acoustic sensor.
At present, little software exists for noise prediction; the main products abroad are CadnaA and SoundPLAN from Germany and Lima from Denmark, and no comparable software has yet emerged domestically. Technical schemes for substation noise prediction still rely mainly on experience and theoretical calculation, and no general method has been established. One substation noise prediction software first subtracts, from the sound pressure level of the noise source, the attenuation caused by the various factors along the noise propagation path to obtain the sound pressure level at a given receiving point; the sound pressure level at one receiving point then represents that of a small zone, and several small zones are combined into a large region whose noise distribution is represented by the zone levels. Li Yongming et al. proposed a substation noise prediction and simulation analysis method that predicts substation noise with the grey GM(1,1) model and a radial basis function neural network, respectively. Although these methods can predict the noise level in the substation region fairly accurately, they are not suited to transformer noise prediction, and the predicted noise level is not the initial noise digital signal required by an active noise control system.
Summary of the invention
The technical problem to be solved by the present invention is to provide a transformer noise prediction method based on a wavelet neural network and wavelet techniques. A wavelet basis function replaces the hyperbolic-tangent sigmoid activation function of the hidden-layer neurons in a traditional BP neural network, and a momentum factor is introduced when the network parameters are adjusted, giving the prediction model a faster convergence speed and a higher error precision. The vibration and noise digital signals are decomposed by wavelet decomposition, and the resulting low-frequency wavelet coefficients serve as the input-output pairs of the prediction model; after modeling, the predicted low-frequency wavelet coefficients are reconstructed by wavelet reconstruction to obtain the predicted noise digital signal.
To solve the above technical problem, the present invention adopts the following technical solution:
The invention provides a transformer noise prediction method based on a wavelet neural network and wavelet techniques, which predicts the noise digital signal at a given point on one side of the transformer directly from the vibration digital signals at different positions on the transformer surface. The concrete implementation steps are as follows:
Step 1: collect the vibration digital signals at I different positions on the transformer surface and the noise digital signal at a given point outside the transformer, where I ≥ 2.
Step 2: apply wavelet decomposition with a wavelet function to the I collected vibration digital signals, normalize the resulting low-frequency wavelet coefficients, and use them as the input of the wavelet neural network; likewise decompose the collected noise digital signal, normalize the resulting low-frequency wavelet coefficients, and use them as the output of the wavelet neural network.
Step 3: train the neurons of the wavelet neural network to establish the mapping between input and output, and thereby build a prediction model based on a three-layer wavelet neural network.
Step 4: use the prediction model built in Step 3 to predict the transformer noise, specifically:
A. apply wavelet decomposition to the vibration digital signals collected at the I positions on the transformer surface, normalize the resulting low-frequency wavelet coefficients, and feed the normalized data into the prediction model;
B. the prediction model computes on the received data and outputs the result;
C. denormalize the output of the prediction model, then apply wavelet reconstruction to the resulting low-frequency wavelet coefficients of the noise, obtaining the predicted noise digital signal.
As a further optimization of the present invention, the training of the neurons in the wavelet neural network in Step 3 proceeds as follows:
(1) Let the total number of training samples described in Step 2 be M. When the n-th training sample is input, the outputs of each layer of the wavelet neural network are as follows:

Output of the i-th input-layer neuron:

x_{in} = X_i(n)

where n is a positive integer with n ∈ [1, M]; i is the index of the input-layer neuron, i.e. of the vibration acquisition position, i = 1, 2, ..., I; I is the total number of input-layer neurons, i.e. of vibration acquisition positions; X_i(n) is the datum at the i-th acquisition position in the n-th sample.
Output of the h-th hidden-layer neuron:

Y_{hn} = F\left(\frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h}\right)

where h is the index of the hidden-layer neuron, h = 1, 2, ..., H; H is the total number of hidden-layer neurons; F(\cdot) is the Morlet wavelet function; w_{hi} is the connection weight between the h-th hidden-layer neuron and the i-th input-layer neuron; a_h is the dilation factor of the wavelet function of each hidden-layer neuron; b_h is the translation factor of the wavelet function of each hidden-layer neuron; c_h is the threshold of each hidden-layer neuron.
Output of the j-th output-layer neuron:

y_{jn} = g\left(\sum_{h=1}^{H} W_{jh} Y_{hn} - d_j\right)

where j is the index of the output-layer neuron, i.e. of the noise acquisition position, j = 1, 2, ..., J; J is the total number of output-layer neurons, i.e. of noise measurement points; g(\cdot) is the log-sigmoid function; W_{jh} is the connection weight between the j-th output-layer neuron and the h-th hidden-layer neuron; d_j is the threshold of each output-layer neuron.
(2) The error function for the n-th training sample is

E_n = \frac{1}{2}\sum_{j=1}^{J}(y_{jn} - O_{jn})^2

where O_{jn} is the actual (target) sample output of the j-th output-layer neuron.
The network parameters are adjusted according to the gradient descent principle; the adjustment amounts are

\Delta w_{hi} = -\eta \frac{\partial E_n}{\partial w_{hi}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{x_{in}}{a_h}

\Delta W_{jh} = -\eta \frac{\partial E_n}{\partial W_{jh}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} Y_{hn}

\Delta a_h = -\eta \frac{\partial E_n}{\partial a_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h^2}

\Delta b_h = -\eta \frac{\partial E_n}{\partial b_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta c_h = -\eta \frac{\partial E_n}{\partial c_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta d_j = -\eta \frac{\partial E_n}{\partial d_j} = \eta (y_{jn} - O_{jn})\, y'_{jn}
where η is the learning rate; Δw_{hi} is the correction to the weight between the h-th hidden-layer neuron and the i-th input-layer neuron; ΔW_{jh} is the correction to the weight between the j-th output-layer neuron and the h-th hidden-layer neuron; Δa_h, Δb_h and Δc_h are the corrections to the dilation factor, translation factor and threshold of each hidden-layer neuron; Δd_j is the correction to the threshold of each output-layer neuron; y'_{jn} is the derivative of the output function of the j-th output-layer neuron when the n-th training sample is input; Y'_{hn} is the derivative of the output function of the h-th hidden-layer neuron when the n-th training sample is input.
The network parameters are updated with a momentum factor α; the iteration formulas are:

w_{hi} \leftarrow w_{hi} + (1+\alpha)\,\Delta w_{hi}
W_{jh} \leftarrow W_{jh} + (1+\alpha)\,\Delta W_{jh}
a_h \leftarrow a_h + (1+\alpha)\,\Delta a_h
b_h \leftarrow b_h + (1+\alpha)\,\Delta b_h
c_h \leftarrow c_h + (1+\alpha)\,\Delta c_h
d_j \leftarrow d_j + (1+\alpha)\,\Delta d_j
(3) Repeat (1) and (2) until the error functions of all samples have been computed, then average the accumulated sample errors:

E = \frac{1}{M}\sum_{n=1}^{M} E_n

where E is the average error over the training samples.
(4) Check whether the average error E of all training samples exceeds the preset error precision; if it does not, training is complete; otherwise return to (1).
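The outer loop of steps (1)-(4) can be summarized in a short Python sketch; `train_one_sample` is a hypothetical helper standing in for the per-sample forward pass and parameter update described above, and all names are illustrative rather than part of the patent:

```python
def fit(samples, train_one_sample, target_error, max_epochs=100000):
    """Repeat the per-sample updates until the average error E over all
    M samples drops to the preset error precision (step (4))."""
    for epoch in range(max_epochs):
        errors = [train_one_sample(s) for s in samples]  # E_n for each sample
        E = sum(errors) / len(errors)                    # E = (1/M) * sum(E_n)
        if E <= target_error:                            # converged
            return epoch + 1, E
    return max_epochs, E

# toy demonstration: a stand-in "sample error" that shrinks each call
state = {"e": 1.0}
def fake_sample_error(_):
    state["e"] *= 0.5
    return state["e"]

epochs, E = fit([0, 1], fake_sample_error, target_error=0.01)
print(epochs, E)  # stops once the running average error falls below 0.01
```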
As a further optimization of the present invention, the vibration digital signals in Step 1 are collected by vibration sensors, and the noise digital signal is collected by an acoustic sensor.
As a further optimization of the present invention, the wavelet decomposition and wavelet reconstruction are implemented with MATLAB software.
Compared with the prior art, the above technical solution of the present invention has the following technical effects:
1) the low-frequency wavelet coefficients used to train the neurons are only 1/2^m of the length of the original data (m being the wavelet decomposition level), so using them instead of the original data as training samples shortens the time needed to build the prediction model, and the noise signal at a given receiving point in front of the transformer can therefore be predicted more quickly;
2) the collected transformer noise data easily pick up interference noise from the transformer's surroundings; using such noise data directly as training samples would make the prediction model unreliable, whereas the wavelet technique filters out the high-frequency interference noise and retains only the low-frequency signal that characterizes the original data, making the prediction model more accurate and reliable.
Brief description of the drawings
Fig. 1 is a structural diagram of one embodiment of the invention.
Fig. 2 is a flow chart of the basic noise prediction process of the embodiment shown in Fig. 1.
Fig. 3 is the average-error curve during training of a BP neural network prediction model for the embodiment shown in Fig. 1.
Fig. 4 is the average-error curve during training of the wavelet neural network prediction model for the embodiment shown in Fig. 1.
Fig. 5 shows the low-frequency wavelet coefficients of the predicted noise for the embodiment shown in Fig. 1.
Fig. 6 shows the low-frequency wavelet coefficients of the test noise for the embodiment shown in Fig. 1.
Fig. 7 is the time-domain plot of the noise signal obtained by wavelet reconstruction for the embodiment shown in Fig. 1.
Fig. 8 is the time-domain plot of the test noise signal for the embodiment shown in Fig. 1.
Fig. 9 is the frequency-domain plot of the noise signal obtained by wavelet reconstruction for the embodiment shown in Fig. 1.
Fig. 10 is the frequency-domain plot of the test noise signal for the embodiment shown in Fig. 1.
Detailed description of the embodiments
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
This embodiment is an implementation of the transformer noise prediction method based on a wavelet neural network and wavelet techniques. As shown in Fig. 1, the system mainly comprises: six C-YD-103 vibration sensors (placed at positions No. 1 to No. 6, of which No. 5 and No. 6 are located on the radiator), one PCM6110 microphone serving as the acoustic sensor, a NICOLET 7700 data acquisition instrument, and a Lenovo Y470 notebook. The signals picked up by the vibration sensors are processed by the data acquisition instrument and then fed into the notebook, while the signal picked up by the microphone is fed into the notebook directly. Specifically, at the transformer site, the transformer's cooling fans were switched off while the vibration and noise signals were collected; the sampling frequency was set to 5000 Hz; the BP neural network structure was set to 6-20-1; the learning precision of the network was 0.01, the learning rate 0.04, and the momentum factor 0.9. The wavelet function was db3, and the number of wavelet decomposition levels was 1. The acoustic sensor was placed at the middle of the transformer front, 0.5 m from the transformer and at the same height as points No. 1 and No. 4.
In operation, the noise prediction is carried out in the following steps, as shown in Fig. 2:
Step 1: for simplicity, the data collected during a certain period one morning by the six vibration sensors at points No. 1 to No. 6 on the transformer surface, and by the acoustic sensor 0.5 m in front of the transformer (at the middle of the transformer front, level with points No. 1 and No. 4), are taken as the vibration and noise sample data.
Step 2: considering the convergence performance of the prediction model, 2000 corresponding groups of data are chosen from the collected vibration and noise samples; the first 1300 groups serve as offline modeling data and the last 700 groups as online test data. Part of the data is shown in Table 1.
Table 1: Amplitudes of the data collected at different positions
Step 3: use MATLAB to apply wavelet decomposition to the vibration and noise data in Table 1 and extract the wavelet decomposition coefficients; the call is:

[C,L] = wavedec(x,m,'db3')

where x is the sample data; m is the number of decomposition levels; db3 is the wavelet function; C is the vector of wavelet coefficients of the decomposed sample data; L is the bookkeeping vector recording the length of each coefficient segment.
Step 4: extract the level-m low-frequency wavelet coefficients from the decomposition coefficients; the call is:

cAm = appcoef(C,L,'db3',m)

where cAm is the extracted approximation coefficient vector, i.e. the low-frequency wavelet coefficients.
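What `wavedec`/`appcoef` do can be illustrated with a minimal single-level decomposition in Python. The sketch below uses the Haar wavelet instead of db3 purely for brevity (its filter has only two taps), and the function names are illustrative, not part of the original method; it also shows why the approximation coefficients halve the training-data length, as noted in the technical effects above.

```python
import numpy as np

def haar_dwt_level1(x):
    """Single-level Haar DWT: returns (approximation, detail) coefficients.

    The approximation (low-frequency) part is half the length of the
    input, which is why training on it shrinks the training set.
    """
    x = np.asarray(x, dtype=float)
    assert len(x) % 2 == 0, "even-length input expected for this sketch"
    even, odd = x[0::2], x[1::2]
    cA = (even + odd) / np.sqrt(2.0)  # low-frequency (approximation)
    cD = (even - odd) / np.sqrt(2.0)  # high-frequency (detail)
    return cA, cD

signal = np.sin(2 * np.pi * np.arange(8) / 8)
cA, cD = haar_dwt_level1(signal)
print(len(signal), len(cA))  # the approximation is half as long
```

Because the Haar transform is orthonormal, the energy of the signal is preserved across the two coefficient sets, mirroring what `wavedec` does level by level.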
Step 5: in the modeling phase, the level-m low-frequency wavelet coefficients of the multi-channel vibration data and of the noise data serve as the training samples; after normalization they form the input-output pairs of the wavelet neural network. By training the neurons of the wavelet neural network, the mapping between input and output is established, and a prediction model based on a three-layer wavelet neural network is finally built.
The normalization formula is

\bar{x} = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}}

where x_{\min} and x_{\max} are the minimum and maximum values in the sample, respectively; x_i is a sample datum; the normalized samples lie in the interval [0, 1].
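A minimal sketch of this min-max normalization, together with the corresponding denormalization used later when mapping the model output back to coefficient units (Python with NumPy; the function names are illustrative):

```python
import numpy as np

def normalize(x):
    """Min-max normalization to [0, 1]; returns the scaled data plus the
    (min, max) pair needed to undo the scaling later."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), (x_min, x_max)

def denormalize(x_bar, bounds):
    """Inverse of normalize(); maps [0, 1] data back to original units."""
    x_min, x_max = bounds
    return np.asarray(x_bar) * (x_max - x_min) + x_min

coeffs = np.array([2.0, 4.0, 6.0, 10.0])
scaled, bounds = normalize(coeffs)
restored = denormalize(scaled, bounds)
print(scaled)    # values in [0, 1]
print(restored)  # original coefficients recovered
```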
The neurons of the wavelet neural network are trained as follows:
(1) Let the total number of training samples described in Step 5 be M. When the n-th training sample is input, the outputs of each layer of the wavelet neural network are as follows:

Output of the i-th input-layer neuron:

x_{in} = X_i(n)

where n is a positive integer with n ∈ [1, M]; i is the index of the input-layer neuron, i.e. of the vibration acquisition position, i = 1, 2, ..., I; I is the total number of input-layer neurons, i.e. of vibration acquisition positions; X_i(n) is the datum at the i-th acquisition position in the n-th sample.
Output of the h-th hidden-layer neuron:

Y_{hn} = F\left(\frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h}\right)

where h is the index of the hidden-layer neuron, h = 1, 2, ..., H; H is the total number of hidden-layer neurons; F(\cdot) is the Morlet wavelet function; w_{hi} is the connection weight between the h-th hidden-layer neuron and the i-th input-layer neuron; a_h is the dilation factor of the wavelet function of each hidden-layer neuron; b_h is the translation factor of the wavelet function of each hidden-layer neuron; c_h is the threshold of each hidden-layer neuron.
Output of the j-th output-layer neuron:

y_{jn} = g\left(\sum_{h=1}^{H} W_{jh} Y_{hn} - d_j\right)

where j is the index of the output-layer neuron, i.e. of the noise acquisition position, j = 1, 2, ..., J; J is the total number of output-layer neurons, i.e. of noise measurement points; g(\cdot) is the log-sigmoid function; W_{jh} is the connection weight between the j-th output-layer neuron and the h-th hidden-layer neuron; d_j is the threshold of each output-layer neuron.
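To make the two layer equations concrete, here is a minimal NumPy sketch of one forward pass. The Morlet expression cos(1.75t)·exp(−t²/2) and the log-sigmoid 1/(1 + e^(−t)) are common choices in the wavelet-neural-network literature and are assumptions here, since the text names the functions without reproducing their formulas; all array names are illustrative.

```python
import numpy as np

def morlet(t):
    # Morlet wavelet as commonly used in wavelet neural networks
    # (assumed form; the text only names the function).
    return np.cos(1.75 * t) * np.exp(-t**2 / 2.0)

def logsig(t):
    # Log-sigmoid activation for the output layer (assumed form).
    return 1.0 / (1.0 + np.exp(-t))

def forward(x, w, c, b, a, W, d):
    """One forward pass: input x (I,) -> hidden Y (H,) -> output y (J,)."""
    arg = (w @ x - c - b) / a   # argument of each hidden wavelet neuron
    Y = morlet(arg)             # Y_hn
    y = logsig(W @ Y - d)       # y_jn
    return Y, y

rng = np.random.default_rng(0)
I_, H, J = 6, 20, 1             # the embodiment's 6-20-1 structure
x = rng.normal(size=I_)
w = rng.normal(size=(H, I_)); c = rng.normal(size=H)
b = rng.normal(size=H); a = np.ones(H)
W = rng.normal(size=(J, H)); d = rng.normal(size=J)
Y, y = forward(x, w, c, b, a, W, d)
print(Y.shape, y.shape)
```

Note that the log-sigmoid output lies in (0, 1), which matches the [0, 1] normalization of the training targets.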
(2) The error function for the n-th training sample is

E_n = \frac{1}{2}\sum_{j=1}^{J}(y_{jn} - O_{jn})^2

where O_{jn} is the actual (target) sample output of the j-th output-layer neuron.
The network parameters are adjusted according to the gradient descent principle; the adjustment amounts are

\Delta w_{hi} = -\eta \frac{\partial E_n}{\partial w_{hi}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{x_{in}}{a_h}

\Delta W_{jh} = -\eta \frac{\partial E_n}{\partial W_{jh}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} Y_{hn}

\Delta a_h = -\eta \frac{\partial E_n}{\partial a_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h^2}

\Delta b_h = -\eta \frac{\partial E_n}{\partial b_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta c_h = -\eta \frac{\partial E_n}{\partial c_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta d_j = -\eta \frac{\partial E_n}{\partial d_j} = \eta (y_{jn} - O_{jn})\, y'_{jn}
where η is the learning rate; Δw_{hi} is the correction to the weight between the h-th hidden-layer neuron and the i-th input-layer neuron; ΔW_{jh} is the correction to the weight between the j-th output-layer neuron and the h-th hidden-layer neuron; Δa_h, Δb_h and Δc_h are the corrections to the dilation factor, translation factor and threshold of each hidden-layer neuron; Δd_j is the correction to the threshold of each output-layer neuron; y'_{jn} is the derivative of the output function of the j-th output-layer neuron when the n-th training sample is input; Y'_{hn} is the derivative of the output function of the h-th hidden-layer neuron when the n-th training sample is input.
To speed up convergence and keep the network from falling into a local minimum, the network parameters are updated with a momentum factor α; the iteration formulas are:

w_{hi} \leftarrow w_{hi} + (1+\alpha)\,\Delta w_{hi}
W_{jh} \leftarrow W_{jh} + (1+\alpha)\,\Delta W_{jh}
a_h \leftarrow a_h + (1+\alpha)\,\Delta a_h
b_h \leftarrow b_h + (1+\alpha)\,\Delta b_h
c_h \leftarrow c_h + (1+\alpha)\,\Delta c_h
d_j \leftarrow d_j + (1+\alpha)\,\Delta d_j

where α is the momentum factor; the remaining variables are as defined above.
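A sketch of the momentum-scaled update for one parameter matrix, in NumPy. The (1 + α) scaling follows the iteration formulas above (note it differs from classical momentum, which accumulates the previous increment); the gradient step here is a placeholder value, not taken from the patent:

```python
import numpy as np

def momentum_step(param, delta, alpha=0.9):
    """Apply the update rule param <- param + (1 + alpha) * delta,
    where delta = -eta * dE/dparam has already been computed."""
    return param + (1.0 + alpha) * delta

w = np.array([[0.5, -0.2], [0.1, 0.3]])
delta_w = np.array([[0.01, 0.02], [-0.01, 0.0]])  # placeholder gradient step
w_new = momentum_step(w, delta_w, alpha=0.9)      # alpha=0.9 as in the embodiment
print(w_new)
```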
(3) Repeat (1) and (2) until the error functions of all samples have been computed, then average the accumulated sample errors:

E = \frac{1}{M}\sum_{n=1}^{M} E_n

where E is the average error over the training samples.
(4) Check whether the average error E of all training samples exceeds the preset error precision; if it does not, training is complete; otherwise return to (1).
Step 6: in the testing phase, the normalized low-frequency wavelet coefficients of the multi-channel vibration test data are fed into the prediction model, which predicts the low-frequency wavelet coefficients of the corresponding noise; wavelet reconstruction then restores them to a noise digital signal, which is the predicted noise digital signal. The wavelet reconstruction call is:

am = wrcoef('a',C,L,'db3',m)

where am is the predicted noise digital signal after wavelet reconstruction.
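The filtering effect of reconstructing from the approximation coefficients alone, which is what `wrcoef('a',...)` does, can be illustrated with the single-level Haar case in Python; Haar stands in for db3 purely to keep the sketch short, and the function names are illustrative:

```python
import numpy as np

def haar_idwt_level1(cA, cD):
    """Inverse of a single-level Haar DWT."""
    even = (cA + cD) / np.sqrt(2.0)
    odd = (cA - cD) / np.sqrt(2.0)
    out = np.empty(2 * len(cA))
    out[0::2], out[1::2] = even, odd
    return out

# a signal with a high-frequency disturbance riding on a slow trend
x = np.array([1.0, 1.2, 2.0, 1.8, 3.0, 3.2, 4.0, 3.8])
even, odd = x[0::2], x[1::2]
cA = (even + odd) / np.sqrt(2.0)   # approximation (low-frequency)
cD = (even - odd) / np.sqrt(2.0)   # detail (high-frequency)

# reconstruct from the approximation only: zero the detail coefficients
smoothed = haar_idwt_level1(cA, np.zeros_like(cD))
print(smoothed)  # pairwise averages: the high-frequency part is gone
```

For Haar, dropping the details reduces each sample pair to its average, which is exactly the sense in which the reconstruction retains the low-frequency part and discards high-frequency interference.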
In testing, the wavelet neural network prediction model with momentum factor used in the present invention reached the preset convergence error precision after 22119 training iterations, whereas a traditional BP neural network used as the prediction model required 31851 training iterations to reach the same precision. The training error curves of the two prediction models are shown in Fig. 3 and Fig. 4, respectively; clearly, the prediction model used in the present invention effectively improves the convergence speed and reaches a higher error precision.
After training, the level-1 low-frequency wavelet coefficients of the vibration test data are fed into the prediction model to predict the level-1 low-frequency wavelet coefficients of the noise at the receiving point, and the noise digital signal is then obtained by wavelet reconstruction. The comparison of the predicted noise obtained with the present invention against the test noise is shown in Figs. 5 to 10. Figs. 5 and 6 show that the low-frequency wavelet coefficients of the predicted noise and of the test noise are almost identical in overall trend and amplitude, indicating that the prediction model is highly reliable. The time-domain plots in Figs. 7 and 8 and the frequency-domain plots in Figs. 9 and 10 show that the predicted noise waveform obtained by wavelet reconstruction is smoother than the test noise waveform and contains only the useful low-frequency part, indicating that the reconstructed noise effectively removes the high-frequency environmental interference noise and retains the true signal.
It can thus be seen that the present invention provides a feasible transformer noise prediction method. The wavelet function and momentum factor used in the BP neural network effectively improve the convergence speed and error precision of the prediction model, while wavelet decomposition and reconstruction reduce the training computation of the prediction model and overcome the poor prediction performance caused by high-frequency environmental interference noise contained in measured noise data.
The above is only an embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement that a person familiar with this technology can conceive within the technical scope disclosed by the present invention shall be encompassed within the scope of the present invention; therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (4)

1. A transformer noise prediction method based on a wavelet neural network and wavelet techniques, characterized in that the concrete implementation steps are as follows:
Step 1: collect the vibration digital signals at I different positions on the transformer surface and the noise digital signal at a given point outside the transformer, where I ≥ 2;
Step 2: apply wavelet decomposition with a wavelet function to the I collected vibration digital signals, normalize the resulting low-frequency wavelet coefficients, and use them as the input of the wavelet neural network; likewise decompose the collected noise digital signal, normalize the resulting low-frequency wavelet coefficients, and use them as the output of the wavelet neural network;
Step 3: train the neurons of the wavelet neural network to establish the mapping between input and output, and thereby build a prediction model based on a three-layer wavelet neural network;
Step 4: use the prediction model built in Step 3 to predict the transformer noise, specifically:
A. apply wavelet decomposition to the vibration digital signals collected at the I positions on the transformer surface, normalize the resulting low-frequency wavelet coefficients, and feed the normalized data into the prediction model;
B. the prediction model computes on the received data and outputs the result;
C. denormalize the output of the prediction model, then apply wavelet reconstruction to the resulting low-frequency wavelet coefficients of the noise, obtaining the predicted noise digital signal.
2. The transformer noise prediction method based on a wavelet neural network and wavelet techniques according to claim 1, characterized in that the training of the neurons in the wavelet neural network in Step 3 proceeds as follows:
(1) Let the total number of training samples described in Step 2 be M. When the n-th training sample is input, the outputs of each layer of the wavelet neural network are as follows:

Output of the i-th input-layer neuron:

x_{in} = X_i(n)

where n is a positive integer with n ∈ [1, M]; i is the index of the input-layer neuron, i.e. of the vibration acquisition position, i = 1, 2, ..., I; I is the total number of input-layer neurons, i.e. of vibration acquisition positions; X_i(n) is the datum at the i-th acquisition position in the n-th sample.
Output of the h-th hidden-layer neuron:

Y_{hn} = F\left(\frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h}\right)

where h is the index of the hidden-layer neuron, h = 1, 2, ..., H; H is the total number of hidden-layer neurons; F(\cdot) is the Morlet wavelet function; w_{hi} is the connection weight between the h-th hidden-layer neuron and the i-th input-layer neuron; a_h is the dilation factor of the wavelet function of each hidden-layer neuron; b_h is the translation factor of the wavelet function of each hidden-layer neuron; c_h is the threshold of each hidden-layer neuron.
Output of the j-th output-layer neuron:

y_{jn} = g\left(\sum_{h=1}^{H} W_{jh} Y_{hn} - d_j\right)

where j is the index of the output-layer neuron, i.e. of the noise acquisition position, j = 1, 2, ..., J; J is the total number of output-layer neurons, i.e. of noise measurement points; g(\cdot) is the log-sigmoid function; W_{jh} is the connection weight between the j-th output-layer neuron and the h-th hidden-layer neuron; d_j is the threshold of each output-layer neuron.
(2) The error function for the n-th training sample is

E_n = \frac{1}{2}\sum_{j=1}^{J}(y_{jn} - O_{jn})^2

where O_{jn} is the actual (target) sample output of the j-th output-layer neuron.
The network parameters are adjusted according to the gradient descent principle; the adjustment amounts are

\Delta w_{hi} = -\eta \frac{\partial E_n}{\partial w_{hi}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{x_{in}}{a_h}

\Delta W_{jh} = -\eta \frac{\partial E_n}{\partial W_{jh}} = -\eta (y_{jn} - O_{jn})\, y'_{jn} Y_{hn}

\Delta a_h = -\eta \frac{\partial E_n}{\partial a_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} Y'_{hn} \frac{\sum_{i=1}^{I} w_{hi} x_{in} - c_h - b_h}{a_h^2}

\Delta b_h = -\eta \frac{\partial E_n}{\partial b_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta c_h = -\eta \frac{\partial E_n}{\partial c_h} = \eta (y_{jn} - O_{jn})\, y'_{jn} W_{jh} \frac{Y'_{hn}}{a_h}

\Delta d_j = -\eta \frac{\partial E_n}{\partial d_j} = \eta (y_{jn} - O_{jn})\, y'_{jn}
where η is the learning rate; Δw_{hi} is the correction to the weight between the h-th hidden-layer neuron and the i-th input-layer neuron; ΔW_{jh} is the correction to the weight between the j-th output-layer neuron and the h-th hidden-layer neuron; Δa_h, Δb_h and Δc_h are the corrections to the dilation factor, translation factor and threshold of each hidden-layer neuron; Δd_j is the correction to the threshold of each output-layer neuron; y'_{jn} is the derivative of the output function of the j-th output-layer neuron when the n-th training sample is input; Y'_{hn} is the derivative of the output function of the h-th hidden-layer neuron when the n-th training sample is input.
The network parameters are then updated using the momentum factor α; the iterative formulas are as follows:
w_hi = w_hi + (1+α)·Δw_hi
W_jh = W_jh + (1+α)·ΔW_jh
a_h = a_h + (1+α)·Δa_h
b_h = b_h + (1+α)·Δb_h
c_h = c_h + (1+α)·Δc_h
d_j = d_j + (1+α)·Δd_j
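A one-line sketch of this update rule (parameter names and values hypothetical): each parameter moves by (1 + α) times its gradient-descent correction, so the momentum factor α enlarges the step.

```python
def momentum_update(param, delta, alpha):
    # Iterative update: param <- param + (1 + alpha) * delta
    return param + (1.0 + alpha) * delta

alpha = 0.5                                  # momentum factor (hypothetical)
w_hi = momentum_update(0.40, -0.02, alpha)   # 0.40 + 1.5 * (-0.02) = 0.37
```

Note that, as written in the claims, the factor scales the current correction; classical momentum instead adds α times the previous correction.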
(3) Repeat (1) and (2) until the error function of every sample has been computed, then sum the per-sample errors and take the mean:

$$E = \frac{1}{M}\sum_{n=1}^{M}E_n$$

In the formula, E is the average error over the M training data samples;
(4) Judge whether the average error E of all training data samples is greater than the preset error precision; if it is not, training is complete; otherwise, return to (1).
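Steps (1)-(4) form a standard train-until-converged loop. The sketch below illustrates only the control flow, with a stand-in one-parameter linear model (all names and values hypothetical); in the patent's method the inner update would apply the wavelet-network corrections of step (2):

```python
def train(X, Y, eps=1e-3, max_epochs=1000, eta=0.05):
    M = len(X)
    w = 0.0                                  # stand-in model parameter
    E = float("inf")
    for _ in range(max_epochs):
        E = 0.0
        for x, o in zip(X, Y):               # steps (1)-(2), sample by sample
            y = w * x
            E += 0.5 * (y - o) ** 2          # per-sample error E_n
            w -= eta * (y - o) * x           # gradient-descent correction
        E /= M                               # step (3): average error
        if E <= eps:                         # step (4): stopping criterion
            break
    return w, E

X = [1.0, 2.0, 3.0]
Y = [2.0, 4.0, 6.0]                          # target relation y = 2x
w, E = train(X, Y)
```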
3. The transformer noise prediction method based on a wavelet neural network and wavelet technology according to claim 1, characterized in that the vibration digital signal described in step 1 is acquired by a vibration sensor, and the noise digital signal is acquired by an acoustic sensor.
4. The transformer noise prediction method based on a wavelet neural network and wavelet technology according to claim 1, characterized in that the wavelet decomposition and wavelet reconstruction are implemented with MATLAB software.
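Claim 4 specifies MATLAB (e.g. its `wavedec`/`waverec` functions) for the decomposition and reconstruction steps. As a language-agnostic illustration of the same idea, here is a minimal self-contained sketch of one Haar decomposition level and its perfect reconstruction (signal values hypothetical):

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: approximation (low-frequency) and
    detail (high-frequency) coefficients."""
    x = np.asarray(x, dtype=float)
    s = 1.0 / np.sqrt(2.0)
    cA = s * (x[0::2] + x[1::2])     # low-frequency (approximation) coefficients
    cD = s * (x[0::2] - x[1::2])     # high-frequency (detail) coefficients
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse single-level Haar DWT (perfect reconstruction)."""
    s = 1.0 / np.sqrt(2.0)
    x = np.empty(2 * len(cA))
    x[0::2] = s * (cA + cD)
    x[1::2] = s * (cA - cD)
    return x

signal = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
cA, cD = haar_dwt(signal)            # cA would feed the prediction model
recon = haar_idwt(cA, cD)            # reconstruction from the coefficients
```

In the patented method the high-frequency coefficients carry the ambient interference, which is why only the low-frequency coefficients are fed to the prediction model before reconstruction.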
CN201410334009.6A 2014-07-14 2014-07-14 Transformer noise prediction method based on wavelet neural network and wavelet technology Pending CN104102838A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410334009.6A CN104102838A (en) 2014-07-14 2014-07-14 Transformer noise prediction method based on wavelet neural network and wavelet technology


Publications (1)

Publication Number Publication Date
CN104102838A true CN104102838A (en) 2014-10-15

Family

ID=51670983


Country Status (1)

Country Link
CN (1) CN104102838A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101832471A (en) * 2010-04-19 2010-09-15 哈尔滨工程大学 Signal identification and classification method
US8725669B1 (en) * 2010-08-02 2014-05-13 Chi Yung Fu Signal processing method and apparatus
CN102542133A (en) * 2010-12-10 2012-07-04 中国科学院深圳先进技术研究院 Short-time wind speed forecasting method and system for wind power plant
CN103136598A (en) * 2013-02-26 2013-06-05 福建省电力有限公司 Monthly electrical load computer forecasting method based on wavelet analysis

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
姜鸿羽 et al.: "Active noise control algorithm for transformer noise based on an adaptive RBF neural network", Electric Power (《中国电力》) *
宋传学 et al.: "Application of neural network technology to vehicle interior noise prediction", Automotive Engineering (《汽车工程》) *
王春宁 et al.: "Transformer fault diagnosis based on vibration signals and a wavelet neural network", Electric Power (《中国电力》) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105841961A (en) * 2016-03-29 2016-08-10 中国石油大学(华东) Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network
CN106468751A (en) * 2016-09-29 2017-03-01 河海大学 Transformer winding state recognition method based on a fuzzy adaptive resonance neural network
CN107180273B (en) * 2017-05-09 2020-05-22 国网内蒙古东部电力有限公司电力科学研究院 Substation boundary noise prediction and evaluation method based on big data statistical analysis
CN107180273A (en) * 2017-05-09 2017-09-19 国网内蒙古东部电力有限公司电力科学研究院 A kind of transformer station's factory outside noise prediction and evaluation method based on big data statistical analysis
CN107798077A (en) * 2017-10-09 2018-03-13 中国电子科技集团公司第二十八研究所 A kind of Population surveillance method and system
CN108562837A (en) * 2018-04-19 2018-09-21 江苏方天电力技术有限公司 A kind of power plant's partial discharge of switchgear ultrasonic signal noise-reduction method
CN108921124A (en) * 2018-07-17 2018-11-30 河海大学 A kind of on-load tap changers of transformers mechanical breakdown on-line monitoring method
CN109684742A (en) * 2018-12-27 2019-04-26 上海理工大学 A kind of frictional noise prediction technique based on BP neural network
CN110717468A (en) * 2019-10-16 2020-01-21 电子科技大学 Band-pass filtering method based on six-order radix spline wavelet network
CN110717468B (en) * 2019-10-16 2022-08-02 电子科技大学 Band-pass filtering method based on six-order radix spline wavelet network
CN111721401A (en) * 2020-06-17 2020-09-29 广州广电计量检测股份有限公司 Low-frequency noise analysis system and method
CN111721401B (en) * 2020-06-17 2022-03-08 广州广电计量检测股份有限公司 Low-frequency noise analysis system and method
CN112710988A (en) * 2020-12-30 2021-04-27 中国人民解放军32212部队 Tank armored vehicle sound vibration artificial intelligence detection positioning method
CN112710988B (en) * 2020-12-30 2022-06-21 中国人民解放军32212部队 Tank armored vehicle sound vibration artificial intelligence detection positioning method
CN117537951A (en) * 2024-01-10 2024-02-09 西南交通大学 Method and device for detecting internal temperature rise of superconducting suspension based on deep learning
CN117537951B (en) * 2024-01-10 2024-03-26 西南交通大学 Method and device for detecting internal temperature rise of superconducting suspension based on deep learning

Similar Documents

Publication Publication Date Title
CN104102838A (en) Transformer noise prediction method based on wavelet neural network and wavelet technology
CN109214575B (en) Ultra-short-term wind power prediction method based on a wavelet long short-term memory network
CN103226741B (en) Public supply mains tube explosion prediction method
CN103077267B (en) Parameter sound source modeling method based on improved BP (Back Propagation) neural network
CN106197999B (en) A kind of planetary gear method for diagnosing faults
CN103034757B (en) Wind energy turbine set time-frequency domain modeling method based on empirical mode decomposition
CN105424359A (en) Sparse-decomposition-based hybrid fault feature extraction method of gear wheel and bearing
CN101900789B (en) Tolerance analog circuit fault diagnosing method based on wavelet transform and fractal dimension
CN106599520A (en) LSTM-RNN model-based air pollutant concentration forecast method
CN104792522A (en) Intelligent gear defect analysis method based on fractional wavelet transform and BP neutral network
CN102542133B (en) Short-time wind speed forecasting method and system for wind power plant
CN101604356A (en) A kind of method for building up of uncertain mid-and-long term hydrologic forecast model
CN104819846A (en) Rolling bearing sound signal fault diagnosis method based on short-time Fourier transform and sparse laminated automatic encoder
CN103995237A (en) Satellite power supply system online fault diagnosis method
CN106920007A (en) PM based on second order Self-organized Fuzzy Neural Network2.5Intelligent Forecasting
CN104614991A (en) Method for improving robot parameter identification accuracy
CN103268525B (en) A kind of Hydrological Time Series simulating and predicting method based on WD-RBF
CN106447086A (en) Wind electricity power combined prediction method based on wind farm data pre-processing
CN101975825B (en) Method for extracting characteristic parameters of acoustic emission signals of drawing part cracks
CN113609955A (en) Three-phase inverter parameter identification method and system based on deep learning and digital twinning
CN106897704A (en) A kind of microseismic signals noise-reduction method
CN104182914A (en) Wind power output time series modeling method based on fluctuation characteristics
CN104008644A (en) Urban road traffic noise measurement method based on gradient descent
CN104915515A (en) BP neural network based GFET modeling method
CN105678397A (en) Short-term photovoltaic power prediction method based on improved EMD algorithm and Elman algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141015
