CN110415709A - Transformer working condition recognition method based on a voiceprint recognition model - Google Patents

Transformer working condition recognition method based on a voiceprint recognition model

Info

Publication number
CN110415709A
CN110415709A (application number CN201910561468.0A)
Authority
CN
China
Prior art keywords
sound
transformer
gray level
application
measured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910561468.0A
Other languages
Chinese (zh)
Other versions
CN110415709B (en)
Inventor
张欣
吕启深
党晓婧
刘顺桂
王丰华
周东旭
解颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
Shenzhen Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Power Supply Bureau Co Ltd
Priority to CN201910561468.0A
Publication of CN110415709A
Application granted
Publication of CN110415709B
Active legal status
Anticipated expiration legal status

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01H - MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H17/00 - Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/04 - Training, enrolment or model building
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/57 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for processing of video signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to a transformer working condition recognition method based on a voiceprint recognition model. The method trains a convolutional neural network by using multiple sound gray-level images as its input parameters and the multiple pieces of working-state information in one-to-one correspondence with those images as its output parameters, thereby establishing the voiceprint recognition model. Compared with the prior art, the sound gray-level images amplify the time-frequency characteristics of the transformer sound signal and improve the recognizability of the sound to be measured. When the sound gray-level image to be measured of a transformer to be identified is used as the input parameter of the voiceprint recognition model, the model can accurately identify the working condition of the transformer. The method thus effectively extracts the time-frequency features of transformer sound signals and improves the accuracy of transformer working-condition recognition.

Description

Transformer working condition recognition method based on a voiceprint recognition model
Technical field
The present application relates to the field of detection technology, and in particular to a transformer working condition recognition method based on a voiceprint recognition model.
Background technique
Power transformers carry out the important functions of voltage conversion and electric energy transmission. Transformers are used in large numbers in the power system, with many capacity grades and specifications and long operating times, so their accident rate rises accordingly. Once a transformer fails, it may bring huge economic losses to the power grid and endanger the personal safety of operation and maintenance personnel. Therefore, effectively monitoring the transformer working condition and discovering hidden incipient faults early has become an issue that power industry researchers need to pay special attention to.
The vibration of a running transformer mainly includes winding vibration, core vibration and cooling-system vibration. The vibration generates mechanical waves that radiate outward as acoustic signals through media such as the solid structural parts, insulating oil and air of the transformer. These acoustic signals contain a large amount of transformer working-state information. The range from 20 Hz to 20 kHz is audible to the human ear, and experienced substation personnel can judge whether a running transformer is in a normal state simply by listening to its sound. The acoustic signal of a running transformer is rich in equipment status information: when abnormal operations such as overload, core loosening, DC magnetic bias or ferromagnetic resonance occur, the characteristics of the acoustic signal emitted by the transformer change accordingly.
Time-frequency analysis is a conventional means in the field of acoustic signal processing. However, the acoustic signal of a running transformer is inevitably affected by load current, noise interference and other factors, so the signals monitored at different times change accordingly and present broadband non-stationary characteristics. Their time-frequency properties show a certain complexity, which makes it difficult to distinguish the different working conditions of a transformer by direct analysis. How to improve the accuracy of transformer working-condition recognition is therefore a problem to be solved.
Summary of the invention
Based on this, in view of the problem of how to improve the accuracy of transformer working-condition recognition, it is necessary to provide a transformer working condition recognition method based on a voiceprint recognition model.
A transformer working condition recognition method based on a voiceprint recognition model includes:
S100: select multiple sound gray-level images under multiple working conditions of a transformer to train a convolutional neural network and establish a voiceprint recognition model, wherein the multiple sound gray-level images serve as the input parameters of the convolutional neural network, and multiple pieces of working-state information in one-to-one correspondence with the sound gray-level images serve as its output parameters.
S200: input the sound gray-level image to be measured of a transformer to be identified into the voiceprint recognition model, and obtain the working-state information corresponding to the sound gray-level image to be measured.
In one embodiment, step S100 includes:
S110: randomly select the multiple sound gray-level images under the multiple working conditions of the transformer, and divide them into a training sample set and a test sample set; the multiple working conditions are correspondingly provided with multiple pieces of working-state information in one-to-one correspondence with the multiple sound gray-level images.
S120: take the multiple sound gray-level images in the training sample set as the input of the convolutional neural network and the multiple pieces of working-state information as its output, and train the convolutional neural network.
In one embodiment, after step S120 the recognition method further includes:
S130: input the multiple sound gray-level images in the test sample set into the trained convolutional neural network.
S140: record the multiple pieces of test working-state information output by the voiceprint recognition model in one-to-one correspondence with the multiple sound gray-level images, and calculate the recognition rate of the trained convolutional neural network from the multiple pieces of test working-state information.
S150: if the change rate of the recognition rate of the convolutional neural network is less than a set value, establish the voiceprint recognition model from the trained convolutional neural network.
In one embodiment, after step S140 the recognition method further includes:
S141: if the change rate of the recognition rate of the convolutional neural network is greater than the set value, execute steps S120 to S150.
In one embodiment, before step S100 the recognition method further includes:
S01: acquire multiple first sound signals under the multiple working conditions, with each working condition corresponding to multiple first sound signals, and process the multiple first sound signals to obtain multiple sound gray-level images in one-to-one correspondence with them.
In one embodiment, in step S01, processing the multiple first sound signals includes:
S010: set the sampling frequency and sampling duration of the transformer sound signal, and acquire the multiple first sound signals under the multiple working conditions.
S020: perform segmented windowing on each first sound signal to obtain multiple second sound signals.
S030: perform a Fourier transform on each of the multiple second sound signals to obtain the spectrum distribution of each second sound signal.
S040: perform a wavelet transform on each second sound signal according to its spectrum distribution, and obtain multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals.
S050: perform a gray-level transformation on each of the multiple wavelet coefficient matrices to obtain the multiple sound gray-level images under the multiple working conditions.
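The acquisition-and-processing chain of steps S010-S030 (segment, window, Fourier-transform) can be sketched as follows. This is a minimal illustration under assumed values: the 8 kHz sampling frequency, 1024-sample segments, Hann window and synthetic 100 Hz test tone are not specified by the patent.

```python
import numpy as np

def segment_and_window(signal, seg_len):
    """Split a 1-D sound signal into fixed-length segments (S020)
    and apply a Hann window to each segment."""
    n_seg = len(signal) // seg_len
    segments = signal[:n_seg * seg_len].reshape(n_seg, seg_len)
    return segments * np.hanning(seg_len)

def spectra(segments, fs):
    """Per-segment amplitude spectrum via the FFT (S030)."""
    freqs = np.fft.rfftfreq(segments.shape[1], d=1.0 / fs)
    mags = np.abs(np.fft.rfft(segments, axis=1))
    return freqs, mags

fs = 8000                        # assumed sampling frequency (S010)
t = np.arange(fs) / fs           # 1 s of signal
x = np.sin(2 * np.pi * 100 * t)  # stand-in for a 100 Hz transformer hum
segs = segment_and_window(x, seg_len=1024)
freqs, mags = spectra(segs, fs)
peak_hz = freqs[np.argmax(mags[0])]
print(peak_hz)  # dominant frequency of the first segment, close to 100 Hz
```

The spectrum distribution obtained here is what step S040 then uses to parameterize the wavelet transform of each segment.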
In one embodiment, step S040 includes:
S041: select multiple wavelet basis functions in one-to-one correspondence with the multiple second sound signals.
S042: divide the frequency bandwidth of each second sound signal at equal intervals to obtain multiple frequency-band subintervals, and construct multiple two-dimensional grids in one-to-one correspondence with the multiple second sound signals from the multiple frequency-band subintervals.
S043: perform the wavelet transform on each second sound signal according to the multiple wavelet basis functions and the multiple two-dimensional grids, and obtain the multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals.
In one embodiment, after step S043 the recognition method further includes:
S044: calculate the Shannon entropy of each of the multiple wavelet coefficient matrices, and determine multiple optimal wavelet basis functions according to the multiple Shannon entropies.
S045: perform the wavelet transform again on the second sound signals in one-to-one correspondence with the multiple optimal wavelet basis functions, and obtain multiple optimized wavelet coefficient matrices.
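Step S044's entropy criterion can be illustrated as below: the wavelet coefficient matrix whose normalized energy distribution has the lowest Shannon entropy is taken as coming from the best-matched basis. The two candidate matrices are synthetic stand-ins for coefficients produced by different wavelet bases.

```python
import numpy as np

def shannon_entropy(coeffs):
    """Shannon entropy of a wavelet coefficient matrix (S044):
    normalize squared coefficients to a probability distribution."""
    energy = coeffs ** 2
    p = energy / energy.sum()
    p = p[p > 0]                      # avoid log(0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
# Candidate 1: energy concentrated in a few coefficients (good basis match)
sparse = np.zeros((8, 8)); sparse[0, 0] = 5.0; sparse[1, 1] = 1.0
# Candidate 2: energy spread evenly (poor basis match)
diffuse = rng.normal(size=(8, 8))

entropies = [shannon_entropy(m) for m in (sparse, diffuse)]
best = int(np.argmin(entropies))      # lower entropy = more concentrated
print(best)  # → 0: the sparse matrix wins
```

Choosing the minimum-entropy basis favors transforms that pack the segment's energy into few coefficients, which is what makes the later gray-level image discriminative.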
In one embodiment, step S050 includes:
S051: normalize each of the multiple wavelet coefficient matrices column by column to obtain multiple normalized wavelet coefficient matrices.
S052: perform a gray-level transformation on each normalized wavelet coefficient matrix to obtain multiple initial gray-level images.
S053: perform smoothing filtering on each of the multiple initial gray-level images to obtain multiple smoothed initial gray-level images.
S054: perform sharpening and gray-level correction on each smoothed initial gray-level image to obtain the multiple sound gray-level images.
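Steps S051-S052 can be sketched as a column-wise min-max normalization followed by a linear mapping to 8-bit gray levels; the linear mapping is an assumed form of the gray-level transformation, which the patent does not spell out.

```python
import numpy as np

def to_gray_image(W):
    """Column-wise normalization (S051) followed by a linear
    gray-level transformation to 8-bit values (S052)."""
    mins = W.min(axis=0, keepdims=True)
    spans = W.max(axis=0, keepdims=True) - mins
    spans[spans == 0] = 1.0            # guard constant columns
    normalized = (W - mins) / spans    # each column now spans [0, 1]
    return np.round(normalized * 255).astype(np.uint8)

W = np.array([[0.0, 10.0],
              [0.5, 20.0],
              [1.0, 30.0]])            # toy wavelet coefficient matrix
img = to_gray_image(W)
print(img)
```

Normalizing per column keeps each frequency band's dynamic range visible regardless of how energy differs across bands.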
In one embodiment, in step S053 a Gaussian blur operator is used to perform the smoothing filtering on each of the multiple initial gray-level images.
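A minimal sketch of the Gaussian-blur smoothing of step S053, implemented as a separable convolution in plain NumPy; the kernel radius of 3σ and the edge padding are implementation assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Smooth a gray-level image with a separable Gaussian operator (S053):
    convolve rows, then columns, with edge padding."""
    r = int(3 * sigma)
    k = gaussian_kernel1d(sigma, r)
    padded = np.pad(img.astype(float), r, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, padded)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, rows)
    return out[r:-r, r:-r]

img = np.zeros((9, 9)); img[4, 4] = 255.0   # single bright pixel
smooth = gaussian_blur(img, sigma=1.0)
print(smooth[4, 4])  # the peak is spread out, so well below 255
```

Because the separable kernel sums to 1, the total image energy is preserved while isolated speckle is suppressed before the sharpening of S054.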
In one embodiment, before step S200 the recognition method further includes:
S02: acquire the sound signal to be measured of the transformer to be identified, and process the sound signal to be measured to obtain the corresponding sound gray-level image to be measured.
In one embodiment, step S02 includes:
S021: acquire the sound signal to be measured of the transformer to be identified according to the set sampling frequency and sampling duration.
S022: perform segmented windowing on the sound signal to be measured to obtain multiple segmented sound signals to be measured.
S023: perform a Fourier transform on each segmented sound signal to be measured to obtain multiple spectrum distributions in one-to-one correspondence with the multiple segmented sound signals to be measured.
S024: perform a wavelet transform on each segmented sound signal to be measured according to its spectrum distribution, and obtain multiple wavelet coefficient matrices to be measured in one-to-one correspondence with the multiple segmented sound signals to be measured.
S025: perform a gray-level transformation on each of the multiple wavelet coefficient matrices to be measured to obtain the multiple sound gray-level images to be measured.
In the transformer working condition recognition method based on a voiceprint recognition model provided by the present application, the convolutional neural network is trained with the multiple sound gray-level images as its input parameters and the multiple pieces of working-state information in one-to-one correspondence with those images as its output parameters, and the voiceprint recognition model is thereby established. Compared with the prior art, the sound gray-level images amplify the time-frequency characteristics of the transformer sound signal and improve the recognizability of the sound to be measured. When the sound gray-level image to be measured of a transformer to be identified is used as the input parameter of the voiceprint recognition model, the model can accurately identify the working condition of the transformer. The method can thus effectively extract the time-frequency features of transformer sound signals and improve the accuracy of transformer working-condition recognition.
Detailed description of the invention
Fig. 1 is a flow diagram of the transformer working condition recognition method based on a voiceprint recognition model provided in one embodiment of the present application;
Fig. 2 is a flow diagram of the transformer working condition recognition method based on a voiceprint recognition model provided in another embodiment of the present application;
Fig. 3 is a flow diagram of the transformer working condition recognition method based on a voiceprint recognition model provided in yet another embodiment of the present application.
Specific embodiment
In order to make the above objects, features and advantages of the present application more apparent, specific embodiments of the application are described in detail below with reference to the accompanying drawings. Many details are set out in the following description in order to provide a full understanding of the application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar improvements without departing from its meaning; the application is therefore not limited by the specific implementations disclosed below.
Serial numbers such as "first" and "second" used herein for components are only used to distinguish the objects described and carry no ordinal or technical meaning. "Connection" and "coupling" in this application, unless otherwise specified, include both direct and indirect connection (coupling). In the description of the application, it should be understood that orientation or position terms such as "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" are based on the orientations or positional relationships shown in the drawings, are used merely for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be understood as limiting the application.
In this application, unless expressly specified and limited otherwise, a first feature being "on" or "under" a second feature may mean that the two features are in direct contact, or in indirect contact through an intermediary. Moreover, a first feature being "on", "over" or "above" a second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that its horizontal height is greater than that of the second feature. A first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that its horizontal height is smaller than that of the second feature.
Referring to Fig. 1, an embodiment of the present application provides a transformer working condition recognition method based on a voiceprint recognition model, which includes:
S100: select multiple sound gray-level images under multiple working conditions of a transformer to train a convolutional neural network and establish a voiceprint recognition model, wherein the multiple sound gray-level images serve as the input parameters of the convolutional neural network, and multiple pieces of working-state information in one-to-one correspondence with the sound gray-level images serve as its output parameters.
S200: input the sound gray-level image to be measured of the transformer to be identified into the voiceprint recognition model, and obtain the working-state information corresponding to the sound gray-level image to be measured.
In the transformer working condition recognition method based on a voiceprint recognition model provided by this embodiment, the convolutional neural network is trained with the multiple sound gray-level images as its input parameters and the multiple pieces of working-state information in one-to-one correspondence with those images as its output parameters, and the voiceprint recognition model is thereby established. Compared with the prior art, the sound gray-level images amplify the time-frequency characteristics of the transformer sound signal and improve the recognizability of the sound to be measured. When the sound gray-level image to be measured of the transformer to be identified is used as the input parameter of the voiceprint recognition model, the model can accurately identify the working condition of the transformer. The method can thus effectively extract the time-frequency features of transformer sound signals and improve the accuracy of transformer working-condition recognition.
Step S100 trains the convolutional neural network and establishes the voiceprint recognition model. Step S200 identifies the working condition of the transformer.
Referring also to Fig. 2, in one embodiment step S100 includes:
S110: randomly select the multiple sound gray-level images under the multiple working conditions of the transformer, and divide them into a training sample set and a test sample set; the multiple working conditions are correspondingly provided with multiple pieces of working-state information in one-to-one correspondence with the multiple sound gray-level images.
In one embodiment, gray-level images under M working conditions of the transformer are randomly selected as the input of the convolutional neural network and are divided into its training sample set I and test sample set I'. The M working conditions of the transformer to be identified constitute the desired output Y of the convolutional neural network.
Here N_x and N_y are the numbers of sound-signal gray-level images under a given transformer state assigned to the training sample set I and the test sample set I' respectively, with N_x + N_y = N and usually N_y < N_x.
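Forming the training sample set I and test sample set I' of step S110 can be sketched as a random split; the 80/20 fraction (so that N_y < N_x), the seed and the placeholder image arrays are assumptions for illustration.

```python
import numpy as np

def split_samples(images, labels, train_fraction=0.8, seed=42):
    """Randomly divide images into a training set I and a test set I'
    (S110), keeping labels aligned; N_y < N_x when train_fraction > 0.5."""
    idx = np.random.default_rng(seed).permutation(len(images))
    n_train = int(train_fraction * len(images))
    tr, te = idx[:n_train], idx[n_train:]
    return images[tr], labels[tr], images[te], labels[te]

# 10 placeholder "gray-level images" per working condition, M = 3 conditions
images = np.arange(30 * 4 * 4).reshape(30, 4, 4)
labels = np.repeat(np.arange(3), 10)          # working-condition index
I_x, y_x, I_y, y_y = split_samples(images, labels)
print(len(I_x), len(I_y))  # → 24 6
```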
The convolutional neural network is used for image recognition and includes an input layer, convolutional layers, activation layers, pooling layers, a fully connected layer and an output layer. The input layer receives the input image. The convolutional layers extract local information from the image. The activation layers regularize the output of the convolutional layers to facilitate training of the network. The pooling layers simplify the image information by extracting its main content, reducing the data volume and improving the computational performance of the network. The fully connected layer makes full use of the image information and, through network training, produces the required output characteristics. The output layer outputs the working condition of the transformer to be identified.
S120: take the multiple sound gray-level images in the training sample set as the input of the convolutional neural network and the multiple pieces of working-state information as its output, and train the convolutional neural network.
The training process of the convolutional neural network includes:
S121: the multiple sound gray-level images of the training samples are input to the input layer.
S122: in the convolutional layer of the convolutional neural network, c convolution kernels of size h × h are applied with stride s to the multiple sound gray-level images of the training samples output by the input layer, and the result of the convolution operation is input to the activation layer.
S123: in the activation layer, the output of the convolutional layer is transformed using an activation function σ, and the result is input to the pooling layer.
S124: in the pooling layer, the output of the activation layer is resampled to reduce the data dimension of the activation-layer output, and the resampling result is output to the fully connected layer.
S125: the fully connected layer comprises a connection layer and a softmax classification layer. The connection layer is a feedforward neural network containing l layers of neurons, whose first layer is the output of the pooling layer; the inputs and outputs of adjacent layers of the feedforward network are connected through weights. The output of the feedforward neural network is the input of the softmax classification layer, which processes the data with the softmax function to obtain the output of the softmax classification layer. This output is compared with the desired output Y to update the connection weights of the feedforward neural network. The softmax function is:
y_m = exp(Q_m) / Σ_{k=1}^{M} exp(Q_k)
where Q_m is the m-th element of the array Q output by the connection layer.
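The softmax classification of step S125 can be written directly from its formula; the max-subtraction below is a standard numerical-stability detail, not part of the patent.

```python
import numpy as np

def softmax(Q):
    """softmax: y_m = exp(Q_m) / sum_k exp(Q_k), computed stably
    by shifting Q by its maximum before exponentiating."""
    e = np.exp(Q - np.max(Q))
    return e / e.sum()

Q = np.array([2.0, 1.0, 0.1])     # connection-layer output for M = 3 states
y = softmax(Q)
predicted_state = int(np.argmax(y))
print(predicted_state)  # → 0, the most probable working condition
```

The output y sums to one, so each component can be read as the probability of one of the M working conditions.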
In one embodiment, after step S120 the recognition method further includes:
S130: input the multiple sound gray-level images in the test sample set into the trained convolutional neural network.
S140: record the multiple pieces of test working-state information output by the voiceprint recognition model in one-to-one correspondence with the multiple sound gray-level images, and calculate the recognition rate of the trained convolutional neural network from the multiple pieces of test working-state information.
S150: if the change rate of the recognition rate of the convolutional neural network is less than a set value, establish the voiceprint recognition model from the trained convolutional neural network.
In one embodiment, after step S140 the recognition method further includes:
S141: if the change rate of the recognition rate of the convolutional neural network is greater than the set value, execute steps S120 to S150.
In steps S141 and S150, the change rate of the recognition rate of the convolutional neural network refers to the difference between the recognition rates obtained when the same group of test samples is input to the same convolutional neural network in two successive rounds.
In one embodiment, the change rate of the recognition rate is obtained as follows:
S1: take the multiple sound gray-level images in the training sample set as the input of the convolutional neural network and the multiple pieces of working-state information as its output, and train the convolutional neural network.
S2: input the multiple sound gray-level images in the test sample set into the trained convolutional neural network.
S3: record the multiple pieces of test working-state information output by the voiceprint recognition model in one-to-one correspondence with the multiple sound gray-level images, and calculate a first recognition rate of the trained convolutional neural network from them.
S4: take the multiple sound gray-level images in the same training sample set as in step S1 as the input of the convolutional neural network and the multiple pieces of working-state information as its output, and train the convolutional neural network again.
S5: input the multiple sound gray-level images in the same test sample set as in step S2 into the retrained convolutional neural network.
S6: record the multiple pieces of test working-state information output by the voiceprint recognition model in one-to-one correspondence with the multiple sound gray-level images, and calculate a second recognition rate of the retrained convolutional neural network from them.
S7: calculate the difference between the first recognition rate and the second recognition rate to obtain the change rate of the recognition rate.
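Steps S1-S7 together with the stopping rule of S150/S141 amount to the loop sketched below; the list of simulated per-round recognition rates and the set value of 0.01 are placeholders for the real CNN training and test-set evaluation.

```python
def training_loop(rates, set_value=0.01):
    """Iterate training rounds until the change in test recognition rate
    between two successive rounds falls below the set value (S150).
    `rates` stands in for the recognition rate measured after each round."""
    prev = None
    for rounds, rate in enumerate(rates, start=1):
        if prev is not None and abs(rate - prev) < set_value:  # S7: change rate
            return rounds, rate
        prev = rate
    return len(rates), prev

# Simulated per-round recognition rates on the same test sample set
rates = [0.62, 0.75, 0.83, 0.87, 0.874, 0.876]
rounds, final = training_loop(rates)
print(rounds, final)  # → 5 0.874
```

Comparing rates on the same test sample set across rounds is what makes the difference in S7 a meaningful convergence signal.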
Referring also to Fig. 3, in one embodiment, before step S100 the recognition method further includes:
S01: acquire the first sound signals under the multiple working conditions of the transformer, with each working condition corresponding to multiple first sound signals, and process the multiple first sound signals to obtain multiple sound gray-level images in one-to-one correspondence with the multiple first sound signals.
In one embodiment, step S01 includes:
S010: set the sampling frequency and sampling duration of the transformer sound signal, and acquire the multiple first sound signals under the multiple working conditions of the transformer.
S020: perform segmented windowing on each first sound signal to obtain multiple second sound signals.
S030: perform a Fourier transform on each of the multiple second sound signals to obtain multiple spectrum distributions in one-to-one correspondence with the multiple second sound signals.
S040: perform a wavelet transform on each second sound signal according to its spectrum distribution, and obtain multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals.
S050: perform a gray-level transformation on each of the multiple wavelet coefficient matrices to obtain the multiple sound gray-level images under the multiple working conditions.
In one embodiment, the step S040 includes:
S041, selects multiple wavelet basis functions in one-to-one correspondence with the multiple second sound signals. In the wavelet basis function, xi(t) is the i-th second sound signal, ψi(t) is the wavelet basis function of the i-th signal segment, fbi is the bandwidth of the wavelet basis function, and fci is its center frequency.
The choice of wavelet basis determines the soundness of the wavelet transform. A wavelet basis function has two important parameters, bandwidth and center frequency: the bandwidth determines the range of the frequency analysis, and the target frequency to be analyzed is generally taken as the center frequency. The present application uses an exponentially decaying cosine function as the wavelet basis function. Its bandwidth is moderate, so the frequencies under analysis do not overlap, and its convergence means that an accurate frequency response can be obtained when it is used as the wavelet basis function.
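The exponentially decaying cosine described above is consistent with a complex Morlet wavelet; since the formula itself is not reproduced in this text, the sketch below assumes that standard form, with fb the bandwidth parameter and fc the center frequency:

```python
import numpy as np

def cmor_wavelet(t, fb, fc):
    """Complex Morlet wavelet: an exponentially decaying (complex) cosine.

    fb is the bandwidth parameter and fc the center frequency; this exact
    form is an assumption, the patent's own formula is not shown here.
    """
    return (np.pi * fb) ** -0.5 * np.exp(-t**2 / fb) * np.exp(2j * np.pi * fc * t)

t = np.linspace(-1, 1, 2001)
psi = cmor_wavelet(t, fb=1.0, fc=5.0)
# The envelope decays with exp(-t**2/fb), so the wavelet is localized in time
print(abs(psi[0]) < abs(psi[len(t) // 2]))  # True: the edges decay below the center
```

The peak magnitude at t = 0 is (π·fb)^(-1/2), and the oscillation frequency is set by fc, which is what lets the grid search later tune both parameters independently.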
S042, divides the frequency bandwidth of each second sound signal at equal intervals, obtaining multiple frequency band subintervals, and constructs multiple two-dimensional grids in one-to-one correspondence with the multiple second sound signals according to the multiple frequency band subintervals.
The frequency range of the i-th second sound signal is [fimin, fimax], where i = 1, 2, ..., N. This frequency range is divided at equal intervals of length Δf, yielding Bi frequency band subintervals, and a two-dimensional grid of size Bi×Bi is constructed. The number of frequency band subintervals is Bi = [(fimax − fimin)/Δf], where "[·]" denotes rounding.
Step S042 searches for the optimal bandwidth and center frequency over this two-dimensional grid.
S043, performs the wavelet transform on each second sound signal according to the multiple wavelet basis functions and the multiple two-dimensional grids, and obtains multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals.
All nodes (g1, g2) of the two-dimensional grid of size Bi×Bi are traversed row by row, where g1 = 1, 2, ..., Bi+1 and g2 = 1, 2, ..., Bi+1. The nodes of each row of the two-dimensional grid index the bandwidth of the wavelet basis function of the i-th second sound signal: at the node in row g1 and column g2, the bandwidth is fbi(g1,g2) = fimin + (g1 − 1)Δf. The nodes of each column of the two-dimensional grid index the center frequency of the wavelet basis function of the i-th second sound signal: at the node in row g1 and column g2, the center frequency is fci(g1,g2) = fimin + (g2 − 1)Δf. The wavelet transform of the i-th second sound signal xi(t) is then calculated, with j = 1, ..., H; k = 1, ..., L; g1 = 1, 2, ..., Bi+1; g2 = 1, 2, ..., Bi+1.    (8)
Here, Wi,jk(g1,g2) is the element in row j and column k of the wavelet coefficient matrix Wi of the i-th second sound signal, ψi(g1,g2)(t) is the wavelet basis function of the i-th second sound signal, H is the number of rows of the wavelet coefficient matrix Wi, and L is the number of columns of Wi.
aj is the scale factor. Its value must ensure that the corresponding actual frequency faj covers the frequency range [fimin, fimax] of the sound signal.
The bandwidth parameter and the center frequency are optimized by a grid search over the two-dimensional grid: each node of the constructed grid serves as one combination of bandwidth and center frequency. A continuous wavelet transform is computed with each combination, and the optimal parameter pair is selected according to the objective function set out below.
In one embodiment, after the step S043, the recognition method further includes:
S044, calculates the Shannon entropy of each of the multiple wavelet coefficient matrices, and determines multiple optimal wavelet basis functions according to the multiple Shannon entropies.
The Shannon entropy of the wavelet coefficient matrix Wi(g1,g2) is calculated, and the bandwidth and center frequency corresponding to the two-dimensional grid node (g1, g2) at which the Shannon entropy is minimal are chosen as the bandwidth and center frequency of the optimal wavelet basis function. In the Shannon entropy calculation, Si(g1, g2) is the Shannon entropy and pj is the ratio of the energy of the j-th row of the wavelet coefficient matrix Wi(g1,g2) to the total energy of the matrix.
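The grid search with the Shannon-entropy objective can be sketched as follows (a minimal illustration, assuming a complex-Morlet basis and a simple discretized CWT; the function names, the test tone, and the scale list are illustrative, not from the patent):

```python
import numpy as np

def cmor(t, fb, fc):
    # Complex Morlet wavelet (assumed form of the exponentially decaying cosine basis)
    return (np.pi * fb) ** -0.5 * np.exp(-t**2 / fb) * np.exp(2j * np.pi * fc * t)

def cwt_rows(x, fs, fb, fc, scales):
    # One row of wavelet coefficients per scale: correlate x with the scaled wavelet
    n = len(x)
    t = (np.arange(n) - n // 2) / fs
    rows = [np.convolve(x, np.conj(cmor(t / a, fb, fc)[::-1]) / np.sqrt(a), mode="same")
            for a in scales]
    return np.abs(np.array(rows))

def shannon_entropy(W):
    # p_j: energy of row j over total energy; S = -sum p_j * log(p_j)
    row_energy = (W ** 2).sum(axis=1)
    p = row_energy / row_energy.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def grid_search(x, fs, f_lo, f_hi, n_bins, scales):
    # Each grid node (g1, g2) is one (bandwidth, center-frequency) combination;
    # the pair with the smallest Shannon entropy is kept as optimal.
    df = (f_hi - f_lo) / n_bins
    best = (np.inf, None)
    for g1 in range(n_bins + 1):
        for g2 in range(n_bins + 1):
            fb, fc = f_lo + g1 * df, f_lo + g2 * df
            S = shannon_entropy(cwt_rows(x, fs, fb, fc, scales))
            if S < best[0]:
                best = (S, (fb, fc))
    return best

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 100 * t)  # a 100 Hz test tone
entropy, (fb, fc) = grid_search(x, fs, f_lo=50, f_hi=150, n_bins=4, scales=[0.5, 1.0, 2.0])
print(fb, fc)
```

A low Shannon entropy means the coefficient energy concentrates in few rows, i.e. the basis resolves the signal sharply, which is why the minimum-entropy node is taken as the optimum.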
S045, performs the wavelet transform again on each second sound signal with its corresponding optimal wavelet basis function, and obtains multiple optimized wavelet coefficient matrices.
Performing the wavelet transform on the second sound signals of the transformer extracts the time-frequency features of the signal well; the time-frequency features reflect how the frequency-domain characteristics of the signal change over time. The quality of a wavelet analysis depends on a sound choice of the wavelet basis function and the analysis band. The present application extracts the time-frequency features of the acoustic signal with an optimized continuous wavelet transform, which balances accuracy in the time and frequency domains and improves the accuracy of transformer working state recognition.
In one embodiment, the step S050 includes:
S051, normalizes each of the multiple wavelet coefficient matrices column by column, obtaining multiple normalized wavelet coefficient matrices.
In the normalization of the i-th second sound signal, Ui(:, k) is the mean of the k-th column of the wavelet coefficient matrix of the i-th second sound signal, δi(:, k) is the variance of the k-th column of that matrix, and Wi(g1,g2) is the wavelet coefficient matrix.
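The column-wise normalization can be sketched as follows (a zero-mean, unit-deviation form is assumed here, treating the δ term as a standard deviation; the patent's exact formula is not reproduced):

```python
import numpy as np

def normalize_columns(W):
    """Column-wise normalization of a wavelet coefficient matrix.

    Assumed z-score form: subtract each column's mean U and divide by its
    standard deviation (the delta term in the patent).
    """
    mean = W.mean(axis=0, keepdims=True)
    std = W.std(axis=0, keepdims=True)
    std[std == 0] = 1.0  # guard against constant columns
    return (W - mean) / std

W = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
Wn = normalize_columns(W)
print(Wn.mean(axis=0))  # each column now has zero mean
```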
S052, performs a grayscale transform on each normalized wavelet coefficient matrix, obtaining multiple initial grayscale images.
In the grayscale transform producing the initial grayscale image G'i of the i-th second sound signal, G'i(j, k) is the gray level of the element in row j and column k of G'i, the ceil function rounds up, p is the gray-level bit depth, and the transform is applied to the k-th column of the wavelet transform matrix of the i-th second sound signal.
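The grayscale transform can be sketched as follows (the patent's exact expression is not reproduced, so a min-max scaling to 2^p gray levels with the ceil rounding is assumed; p = 8 is illustrative):

```python
import numpy as np

def to_gray(W, p=8):
    """Map a (normalized) wavelet coefficient matrix to 2**p gray levels.

    Assumed form: min-max scale each value to [0, 1], then ceil to an integer
    gray level; p is the gray-level bit depth from the patent.
    """
    lo, hi = W.min(), W.max()
    scaled = (W - lo) / (hi - lo) if hi > lo else np.zeros_like(W)
    return np.ceil(scaled * (2 ** p - 1)).astype(np.uint8)

W = np.array([[0.0, 0.5], [1.0, 0.25]])
G = to_gray(W, p=8)
print(G.max())  # 255 for an 8-bit depth
```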
S053, performs smoothing filtering on each of the multiple initial grayscale images, obtaining multiple smoothed initial grayscale images.
In one embodiment, in the step S053, a Gaussian blur operator is used to smooth each of the multiple initial grayscale images.
In the smoothing filtering of the initial grayscale image G' of the i-th second sound signal, G'1 is the grayscale image obtained by smoothing the initial grayscale image G', F is the Gaussian blur operator, and c1 and c2 are the length and width of the rectangular filter region.
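The Gaussian smoothing can be sketched as follows (the σ of the Gaussian blur operator is not given in the patent, so the value below is an assumption chosen to give roughly a 3 × 3 support, matching c1 = c2 = 3 in the embodiment):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smooth an initial grayscale image with a Gaussian blur; sigma=1.0 with
# truncate=1.0 gives a kernel radius of 1 pixel, i.e. about a 3x3 support.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32)).astype(float)
smoothed = gaussian_filter(img, sigma=1.0, truncate=1.0)
print(smoothed.std() < img.std())  # True: smoothing reduces local variation
```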
S054, sharpens and gray-level-corrects each smoothed initial grayscale image, obtaining multiple sound grayscale images.
In the sharpening of the grayscale image G'1i of the i-th second sound signal, G'2 is the grayscale image obtained by sharpening G'1.
In the gray-level correction of the grayscale image G'2 of the i-th second sound signal, γ is the gray-level correction coefficient, and typically γ < 1.
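The sharpening and gray-level correction can be sketched as follows (the patent's sharpening formula is not reproduced, so a Laplacian-based sharpening is assumed; the gamma correction follows the embodiment's γ = 0.5, which brightens darker regions since γ < 1):

```python
import numpy as np
from scipy.ndimage import laplace

def sharpen_and_correct(g1, gamma=0.5):
    """Sharpen, then apply gray-level (gamma) correction.

    Sharpening by subtracting the Laplacian is an assumption; gamma
    correction raises the normalized image to the power gamma.
    """
    g2 = g1 - laplace(g1)            # assumed sharpening step
    g2 = np.clip(g2, 0, None)
    return (g2 / g2.max()) ** gamma if g2.max() > 0 else g2

img = np.random.default_rng(1).random((16, 16))
out = sharpen_and_correct(img)
print(out.max())  # normalized output peaks at 1.0
```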
In one embodiment, before the step S200, the recognition method further includes:
S02, acquires the sound signal to be measured of the transformer to be identified, and processes the sound signal to be measured to obtain the sound grayscale image to be measured corresponding to it. The sound signal to be measured is processed with reference to the step S01 and its sub-steps.
In one embodiment, the step S02 includes:
S021, acquires the sound signal to be measured of the transformer to be identified according to the set sampling frequency and sampling duration.
S022, performs segmentation and windowing on the sound signal to be measured, obtaining multiple segmented sound signals to be measured.
S023, performs a Fourier transform on each segmented sound signal to be measured, obtaining multiple spectral distributions in one-to-one correspondence with the segmented sound signals to be measured.
S024, performs a wavelet transform on each segmented sound signal to be measured according to its spectral distribution, obtaining multiple wavelet coefficient matrices to be measured in one-to-one correspondence with the multiple segmented sound signals to be measured.
S025, performs a grayscale transform on each of the multiple wavelet coefficient matrices to be measured, obtaining multiple sound grayscale images to be measured under the multiple working states.
In one embodiment, in the step S010, the sampling frequency is fs and the acquisition time of the sound signal for each state is T. The number of different working states is denoted M. Here fs = 50 kHz, T = 10 min, and M = 4; the four corresponding working states of the transformer are normal, winding loosening, core loosening, and current overload. Each working state is represented by a number: "normal" is denoted "1", "winding loosening" is "2", "core loosening" is "3", and "current overload" is "4".
In the step S020, the first sound signal x(t) is segmented and windowed, yielding N second sound signals. The length of each second sound signal is L and the overlap length of adjacent second sound signals is O; L is chosen such that each second sound signal of length L can be regarded as stationary. Here N = 800, L = 51200, and O = 10240.
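The segmentation with overlap can be sketched as follows (the window type is not specified in the patent, so a Hanning window is assumed; the one-minute test signal is illustrative):

```python
import numpy as np

def segment(x, L=51200, O=10240):
    """Split x into overlapping, windowed frames of length L with overlap O.

    The hop between frames is L - O; a Hanning window is an assumption.
    """
    step = L - O
    n = (len(x) - O) // step  # number of full frames that fit
    w = np.hanning(L)
    return np.array([x[i * step : i * step + L] * w for i in range(n)])

fs = 50_000
x = np.random.default_rng(2).standard_normal(fs * 60)  # 1 minute of signal
frames = segment(x)
print(frames.shape)
```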
In the step S053, c1 and c2 are the length and width of the rectangular region, with c1 = c2 = 3.
In the step S054, γ = 0.5.
In the step S122, c = 8, h = 4, s = 2.
In the step S123, the ReLU function is selected as the activation function: f(x) = max(0, x).
In the step S124, the pooling layer of the convolutional neural network resamples the output of the excitation layer to reduce the data dimension of the excitation layer output, and the result serves as the output of the pooling layer. Max pooling is used here: each 4 × 4 region of elements is replaced by its maximum value.
In the step S125, the fully connected layer is a feedforward neural network comprising l layers of neurons; here l = 4.
In the step S150, the set value is δ = 1%.
In one embodiment, the step S200 includes:
S210, inputs the sound grayscale image to be measured of the transformer to be identified into the voiceprint recognition model, obtaining multiple numbers corresponding to the working states.
S220, selects the maximum value among the multiple numbers and obtains the working state corresponding to that maximum value, which is the working state of the transformer to be identified.
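The decision in steps S210–S220 amounts to an argmax over the model's per-state outputs; a minimal sketch, using the state labels of the embodiment (the score vector is illustrative, not real model output):

```python
import numpy as np

# One score per working state; the state with the largest score is taken as
# the identified state. Labels per the embodiment: 1 = normal,
# 2 = winding loosening, 3 = core loosening, 4 = current overload.
STATES = {1: "normal", 2: "winding loosening", 3: "core loosening", 4: "current overload"}

scores = np.array([0.05, 0.10, 0.80, 0.05])  # illustrative model output
label = int(np.argmax(scores)) + 1           # +1: labels are 1-based
print(STATES[label])
```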
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not every possible combination of these technical features has been described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of this specification.
The embodiments above express only several implementations of the present application and must not be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, all of which fall within its scope of protection. The scope of protection of this patent is therefore defined by the appended claims.

Claims (12)

1. A transformer working state recognition method based on a voiceprint recognition model, characterized by comprising:
S100, selecting multiple sound grayscale images of a transformer under multiple working states to train a convolutional neural network and establish a voiceprint recognition model, wherein the multiple sound grayscale images serve as the input parameters of the convolutional neural network, and multiple pieces of work state information in one-to-one correspondence with the multiple sound grayscale images serve as the output parameters of the convolutional neural network;
S200, inputting a sound grayscale image to be measured of a transformer to be identified into the voiceprint recognition model, obtaining the work state information corresponding to the sound grayscale image to be measured.
2. The transformer working state recognition method based on a voiceprint recognition model according to claim 1, characterized in that the step S100 comprises:
S110, randomly selecting the multiple sound grayscale images of the transformer under the multiple working states, dividing the multiple sound grayscale images into a training sample set and a test sample set, and setting the multiple working states as multiple pieces of work state information in one-to-one correspondence with the multiple sound grayscale images;
S120, training the convolutional neural network with the multiple sound grayscale images of the training sample set as the input of the convolutional neural network and the multiple pieces of work state information as the output of the convolutional neural network.
3. The transformer working state recognition method based on a voiceprint recognition model according to claim 2, characterized by further comprising, after the step S120:
S130, inputting the multiple sound grayscale images of the test sample set into the trained convolutional neural network;
S140, recording multiple pieces of test work state information output by the voiceprint recognition model in one-to-one correspondence with the multiple sound grayscale images, and calculating the recognition rate of the trained convolutional neural network according to the multiple pieces of test work state information;
S150, if the change rate of the recognition rate of the convolutional neural network is less than a set value, establishing the voiceprint recognition model according to the trained convolutional neural network.
4. The transformer working state recognition method based on a voiceprint recognition model according to claim 3, characterized by further comprising, after the step S140:
S141, if the change rate of the recognition rate of the convolutional neural network is greater than the set value, executing the steps S120 to S150.
5. The transformer working state recognition method based on a voiceprint recognition model according to claim 1, characterized by further comprising, before the step S100:
S01, acquiring first sound signals under multiple working states, each working state corresponding to multiple first sound signals, and processing the multiple first sound signals to obtain multiple sound grayscale images in one-to-one correspondence with the multiple first sound signals.
6. The transformer working state recognition method based on a voiceprint recognition model according to claim 5, characterized in that in the step S01, the processing of the multiple first sound signals comprises:
S010, setting the sampling frequency and sampling duration of the sound signal of the transformer, and acquiring multiple first sound signals under the multiple working states;
S020, performing segmentation and windowing on each first sound signal to obtain multiple second sound signals;
S030, performing a Fourier transform on each of the multiple second sound signals to obtain the spectral distribution of each second sound signal;
S040, performing a wavelet transform on each second sound signal according to its spectral distribution to obtain multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals;
S050, performing a grayscale transform on each of the multiple wavelet coefficient matrices to obtain the multiple sound grayscale images under the multiple working states.
7. The transformer working state recognition method based on a voiceprint recognition model according to claim 6, characterized in that the step S040 comprises:
S041, selecting multiple wavelet basis functions in one-to-one correspondence with the multiple second sound signals;
S042, dividing the frequency bandwidth of each second sound signal at equal intervals to obtain multiple frequency band subintervals, and constructing multiple two-dimensional grids in one-to-one correspondence with the multiple second sound signals according to the multiple frequency band subintervals;
S043, performing a wavelet transform on each second sound signal according to the multiple wavelet basis functions and the multiple two-dimensional grids, and obtaining multiple wavelet coefficient matrices in one-to-one correspondence with the multiple second sound signals.
8. The transformer working state recognition method based on a voiceprint recognition model according to claim 7, characterized by further comprising, after the step S043:
S044, calculating the Shannon entropy of each of the multiple wavelet coefficient matrices, and determining multiple optimal wavelet basis functions according to the multiple Shannon entropies;
S045, performing a wavelet transform on each second sound signal with its corresponding optimal wavelet basis function according to the multiple optimal wavelet basis functions, and obtaining multiple optimized wavelet coefficient matrices.
9. The transformer working state recognition method based on a voiceprint recognition model according to claim 6, characterized in that the step S050 comprises:
S051, normalizing each of the multiple wavelet coefficient matrices column by column to obtain multiple normalized wavelet coefficient matrices;
S052, performing a grayscale transform on each normalized wavelet coefficient matrix to obtain multiple initial grayscale images;
S053, performing smoothing filtering on each of the multiple initial grayscale images to obtain multiple smoothed initial grayscale images;
S054, sharpening and gray-level-correcting each smoothed initial grayscale image to obtain multiple sound grayscale images.
10. The transformer working state recognition method based on a voiceprint recognition model according to claim 9, characterized in that in the step S053 a Gaussian blur operator is used to perform the smoothing filtering on each of the multiple initial grayscale images.
11. The transformer working state recognition method based on a voiceprint recognition model according to claim 1, characterized by further comprising, before the step S200:
S02, acquiring the sound signal to be measured of the transformer to be identified, and processing the sound signal to be measured to obtain the sound grayscale image to be measured corresponding to the sound signal to be measured.
12. The transformer working state recognition method based on a voiceprint recognition model according to claim 11, characterized in that the step S02 comprises:
S021, acquiring the sound signal to be measured of the transformer to be identified according to the set sampling frequency and sampling duration;
S022, performing segmentation and windowing on the sound signal to be measured to obtain multiple segmented sound signals to be measured;
S023, performing a Fourier transform on each segmented sound signal to be measured to obtain multiple spectral distributions in one-to-one correspondence with the multiple segmented sound signals to be measured;
S024, performing a wavelet transform on each segmented sound signal to be measured according to its spectral distribution to obtain multiple wavelet coefficient matrices to be measured in one-to-one correspondence with the multiple segmented sound signals to be measured;
S025, performing a grayscale transform on each of the multiple wavelet coefficient matrices to be measured to obtain multiple sound grayscale images to be measured under the multiple working states.
CN201910561468.0A 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model Active CN110415709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910561468.0A CN110415709B (en) 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model

Publications (2)

Publication Number Publication Date
CN110415709A true CN110415709A (en) 2019-11-05
CN110415709B CN110415709B (en) 2022-01-25

Family

ID=68359737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910561468.0A Active CN110415709B (en) 2019-06-26 2019-06-26 Transformer working state identification method based on voiceprint identification model

Country Status (1)

Country Link
CN (1) CN110415709B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6839673B1 (en) * 1999-03-29 2005-01-04 Markany Inc. Digital watermarking method and apparatus for audio data
JP2008281898A (en) * 2007-05-14 2008-11-20 Univ Of Tokyo Signal processing method and device
CN107330405A (en) * 2017-06-30 2017-11-07 上海海事大学 Remote sensing images Aircraft Target Recognition based on convolutional neural networks
CN108846323A (en) * 2018-05-28 2018-11-20 哈尔滨工程大学 A kind of convolutional neural networks optimization method towards Underwater Targets Recognition
CN109740523A (en) * 2018-12-29 2019-05-10 国网陕西省电力公司电力科学研究院 A kind of method for diagnosing fault of power transformer based on acoustic feature and neural network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Li Jingjiao et al., "Research on Acoustic Fault Diagnosis of Rolling Bearings Based on Adaptive Morlet Wavelet Transform", Journal of Shijiazhuang Tiedao University (Natural Science Edition) *
Su Shiwei et al., "Transformer Vibration Signal Analysis Based on Convolutional Neural Networks", Guangdong Electric Power *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222285A (en) * 2019-12-31 2020-06-02 国网安徽省电力有限公司 Transformer high active value prediction method based on voiceprint and neural network
CN111579056A (en) * 2020-05-19 2020-08-25 北京快鱼电子股份公司 Transformer direct-current magnetic bias prediction method and system
CN111735533A (en) * 2020-06-08 2020-10-02 贵州电网有限责任公司 Transformer direct-current magnetic bias judgment method based on vibration signal wavelet energy spectrum characteristics
CN111735533B (en) * 2020-06-08 2022-05-13 贵州电网有限责任公司 Transformer direct-current magnetic bias judgment method based on vibration signal wavelet energy spectrum characteristics
CN111929542A (en) * 2020-07-03 2020-11-13 北京国网富达科技发展有限责任公司 Power equipment diagnosis method and system
CN111929542B (en) * 2020-07-03 2023-05-26 北京国网富达科技发展有限责任公司 Power equipment diagnosis method and system
CN112420055A (en) * 2020-09-22 2021-02-26 甘肃同兴智能科技发展有限公司 Substation state identification method and device based on voiceprint characteristics
CN112735436A (en) * 2021-01-21 2021-04-30 国网新疆电力有限公司信息通信公司 Voiceprint recognition method and voiceprint recognition system
CN113985156A (en) * 2021-09-07 2022-01-28 绍兴电力局柯桥供电分局 Intelligent fault identification method based on transformer voiceprint big data
CN115728612A (en) * 2022-12-01 2023-03-03 广州广电计量检测股份有限公司 Transformer discharge fault diagnosis method and device

Also Published As

Publication number Publication date
CN110415709B (en) 2022-01-25

Similar Documents

Publication Publication Date Title
CN110415709A (en) Transformer working condition recognition methods based on Application on Voiceprint Recognition model
Zhu et al. Intelligent logging lithological interpretation with convolution neural networks
CN104751169B (en) High ferro rail defects and failures sorting technique
CN113707176B (en) Transformer fault detection method based on acoustic signal and deep learning technology
CN109858408B (en) Ultrasonic signal processing method based on self-encoder
CN108648188A (en) A kind of non-reference picture quality appraisement method based on generation confrontation network
CN103398769B (en) Transformer on-line fault detecting method based on sampling integrated SVM (support vector machine) under wavelet GGD (general Gaussian distribution) feather and unbalanced K-mean value
CN106909784A (en) Epileptic electroencephalogram (eeg) recognition methods based on two-dimentional time-frequency image depth convolutional neural networks
CN106556781A (en) Shelf depreciation defect image diagnostic method and system based on deep learning
CN108682007B (en) JPEG image resampling automatic detection method based on depth random forest
CN110398647B (en) Transformer state monitoring method
CN111723701B (en) Underwater target identification method
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN114595732B (en) Radar radiation source sorting method based on depth clustering
CN112668527B (en) Ultrasonic guided wave semi-supervised imaging detection method
CN111914705A (en) Signal generation method and device for improving health state evaluation accuracy of reactor
CN102279358A (en) MCSKPCA based neural network fault diagnosis method for analog circuits
CN106950475A (en) A kind of local discharge signal extracting method and device based on wavelet transformation
CN115728612A (en) Transformer discharge fault diagnosis method and device
CN104156628A (en) Ship radiation signal recognition method based on multi-kernel learning and discriminant analysis
CN112785539A (en) Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive
CN115565019A (en) Single-channel high-resolution SAR image ground object classification method based on deep self-supervision generation countermeasure
CN115600088A (en) Distribution transformer fault diagnosis method based on vibration signals
CN108648180B (en) Full-reference image quality objective evaluation method based on visual multi-feature depth fusion processing
CN108550152B (en) Full-reference image quality objective evaluation method based on depth feature perception inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant