CN117520926B - Electric energy quality analysis method and device - Google Patents


Info

Publication number
CN117520926B
CN117520926B (application CN202410010799.6A)
Authority
CN
China
Prior art keywords
layer
neural network
convolution
electric energy
wavelet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410010799.6A
Other languages
Chinese (zh)
Other versions
CN117520926A (en)
Inventor
姚东方
田曙光
易怀权
郭鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Youyou Software Technology Co ltd
Nanjing Shining Electric Automation Co ltd
Original Assignee
Nanjing Youyou Software Technology Co ltd
Nanjing Shining Electric Automation Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Youyou Software Technology Co ltd and Nanjing Shining Electric Automation Co ltd
Priority to CN202410010799.6A
Publication of CN117520926A
Application granted
Publication of CN117520926B
Legal status: Active

Classifications

    • G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/10 — Pattern recognition; pre-processing; data cleansing
    • G06F18/2431 — Classification techniques relating to the number of classes; multiple classes
    • G06N3/045 — Neural network architectures; combinations of networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/082 — Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06Q10/06395 — Quality analysis or management
    • G06Q50/06 — ICT specially adapted for energy or water supply
    • G06F2218/06 — Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • Y02P90/82 — Energy audits or management systems therefor


Abstract

The invention discloses a power quality analysis method and device. The method comprises the following steps: collecting electric energy signals over a plurality of time periods, with the sampling frequency set to 5 kHz; performing n-layer wavelet decomposition on the electric energy signal and calculating its wavelet packet energy values at the (n-2)-th and n-th layers; normalizing the wavelet packet energy values to obtain a feature vector; inputting the feature vector into a first neural network to train it, thereby obtaining a second neural network; and inputting an electric energy signal into the second neural network to identify its disturbance type. By combining wavelet decomposition with neural network techniques, the invention avoids a complex preprocessing stage and can rapidly and accurately identify multiple disturbance types in electric energy signals.

Description

Electric energy quality analysis method and device
Technical Field
The invention relates to the technical field of power quality analysis, and in particular to a power quality analysis method and device.
Background
As the number of electrically sensitive devices grows, so does the demand of precision equipment for high power quality. To improve power quality, the various signals in the power system that affect it must be analyzed and processed. Disturbance detection is a key link in power quality analysis; as power systems impose ever stricter power quality requirements, new information technologies and optimized mathematical transform methods have been applied to the detection of power quality disturbance signals, with substantial research results.
At present, detection methods for power quality disturbance signals fall mainly into the following categories: time-domain analysis methods, such as instantaneous reactive power theory and dynamic measurement theory; frequency-domain analysis methods, such as frequency scanning and harmonic power-flow calculation; and transform-domain methods, represented by the short-time Fourier transform, the S-transform, and the wavelet transform. The short-time Fourier transform offers temporal locality but requires choosing a window type and width, and its fixed window width cannot adapt to changing signals. The Gaussian window width of the S-transform varies with frequency, but it is always inversely proportional to frequency, so its time-frequency resolution cannot be adjusted freely. The wavelet transform is time-frequency localized and particularly well suited to analyzing abrupt and non-stationary signals, but its results depend strongly on the choice of wavelet basis, it lacks adaptivity, it is computationally expensive, and it cannot extract a signal of an arbitrary frequency in isolation.
Disclosure of Invention
The present invention has been made in view of the above-described problems occurring in the prior art.
In order to solve the above technical problems, the invention provides the following technical scheme: collecting electric energy signals over a plurality of time periods, with the sampling frequency set to 5 kHz; performing n-layer wavelet decomposition on the electric energy signal and calculating its wavelet packet energy values at the (n-2)-th and n-th layers; normalizing the wavelet packet energy values to obtain a feature vector; inputting the feature vector into a first neural network to train it, obtaining a second neural network; and inputting the electric energy signal into the second neural network to identify its disturbance type. The disturbance types include: voltage dip, transient pulse, voltage interruption, notch, brief harmonic, and no disturbance.
As a preferred embodiment of the power quality analysis method according to the present invention: the wavelet decomposition includes performing a convolution operation between the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients at different frequencies, and calculating the wavelet packet energy values of the electric energy signal at the (n-2)-th and n-th layers:

E_n = Σ a_n;

where E_n is the wavelet packet energy value of the n-th layer and a_n are the wavelet decomposition coefficients of the n-th layer;

E_{n-2} = Σ b_{n-2};

where E_{n-2} is the wavelet packet energy value of the (n-2)-th layer and b_{n-2} are the wavelet decomposition coefficients of the (n-2)-th layer.
As a preferred embodiment of the power quality analysis method according to the present invention: the first neural network comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module, and an output layer. The plurality of convolution layers includes 1 convolution layer with a 3×3 convolution kernel and 3 convolution layers with 1×1 convolution kernels. The channel attention module comprises a global average pooling layer, a global maximum pooling layer, and a multi-layer perceptron. The output of the 3×3 convolution layer is connected to the channel attention module, and the output of the channel attention module is connected to the 1×1 convolution layers. The ASPP module comprises a first, second, third, and fourth atrous (dilated) convolution layer with dilation rates of 6, 12, 18, and 24, respectively, each with a 3×3 convolution kernel.
As a preferred embodiment of the power quality analysis method according to the present invention, the method further comprises: the relationship between the input and output of the first neural network is as follows:

y_i = f(Σ w·x_i + b);

where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal over which the sum runs, w is the weight of the inter-neuron connections, x_i is the i-th input of the first neural network, b is the bias, and f(·) is the activation function, chosen as the Softplus function.
As a preferred embodiment of the power quality analysis method according to the present invention: the training comprises selecting the categorical cross-entropy loss function as the target loss function L of the first neural network, iteratively training the first neural network with the Adagrad optimizer, setting an initial learning rate, and continuously adjusting the weight w and the bias b until the target loss function value reaches its minimum, where the loss function L is:

L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];

where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
As a preferred embodiment of the power quality analysis device according to the present invention, the device comprises: an acquisition unit configured to collect electric energy signals over a plurality of time periods, with the sampling frequency set to 5 kHz; a feature extraction unit configured to perform n-layer wavelet decomposition on the electric energy signal, calculate its wavelet packet energy values at the (n-2)-th and n-th layers, and normalize the wavelet packet energy values to obtain a feature vector; a network training unit configured to input the feature vector into a first neural network to train it, obtaining a second neural network; and an identification unit configured to input the electric energy signal into the second neural network to identify its disturbance type. The disturbance types include: voltage dip, transient pulse, voltage interruption, notch, brief harmonic, and no disturbance.
As a preferred embodiment of the power quality analysis device according to the present invention: the feature extraction unit is specifically configured to perform a convolution operation between the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients at different frequencies, and to calculate the wavelet packet energy values of the electric energy signal at the (n-2)-th and n-th layers:

E_n = Σ a_n;

where E_n is the wavelet packet energy value of the n-th layer and a_n are the wavelet decomposition coefficients of the n-th layer;

E_{n-2} = Σ b_{n-2};

where E_{n-2} is the wavelet packet energy value of the (n-2)-th layer and b_{n-2} are the wavelet decomposition coefficients of the (n-2)-th layer.
As a preferred embodiment of the power quality analysis device according to the present invention, the network training unit is specifically configured such that: the first neural network comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module, and an output layer. The plurality of convolution layers includes 1 convolution layer with a 3×3 convolution kernel and 3 convolution layers with 1×1 convolution kernels. The channel attention module comprises a global average pooling layer, a global maximum pooling layer, and a multi-layer perceptron. The output of the 3×3 convolution layer is connected to the channel attention module, and the output of the channel attention module is connected to the 1×1 convolution layers. The ASPP module comprises a first, second, third, and fourth atrous (dilated) convolution layer with dilation rates of 6, 12, 18, and 24, respectively, each with a 3×3 convolution kernel.
As a preferred embodiment of the power quality analysis device according to the present invention, the network training unit is specifically configured such that the relationship between the input and output of the first neural network is as follows:

y_i = f(Σ w·x_i + b);

where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal over which the sum runs, w is the weight of the inter-neuron connections, x_i is the i-th input of the first neural network, b is the bias, and f(·) is the activation function, chosen as the Softplus function.
As a preferred embodiment of the power quality analysis device according to the present invention, the network training unit is specifically configured to: select the categorical cross-entropy loss function as the target loss function L of the first neural network, iteratively train the first neural network with the Adagrad optimizer, set an initial learning rate, and continuously adjust the weight w and the bias b until the target loss function value reaches its minimum, where the loss function L is:

L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];

where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
The invention has the following beneficial effects: compared with the prior art, by combining wavelet decomposition with neural network techniques, the invention dispenses with a complex preprocessing stage and can rapidly and accurately identify multiple disturbance types in electric energy signals.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of a method for analyzing electric energy quality according to a first embodiment of the present invention;
fig. 2 is a schematic structural diagram of a first neural network according to a first embodiment of the present invention.
Detailed Description
In order that the above-described objects, features, and advantages of the present invention may be more clearly understood, specific embodiments of the invention are described in detail below with reference to the accompanying drawings. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without inventive effort shall fall within the scope of protection of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
Further, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic can be included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
While embodiments of the invention are illustrated and described in detail in the drawings, the structural views are, for ease of illustration, not drawn to scale; the drawings are merely examples and should not be construed as limiting the scope of the invention. In actual fabrication, the three dimensions of length, width, and depth should all be taken into account.
Also in the description of the present invention, it should be noted that the orientation or positional relationship indicated by the terms "upper, lower, inner and outer", etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first, second, or third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected, and coupled" should be construed broadly in this disclosure unless otherwise specifically indicated and defined, such as: can be fixed connection, detachable connection or integral connection; it may also be a mechanical connection, an electrical connection, or a direct connection, or may be indirectly connected through an intermediate medium, or may be a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Example 1
Referring to fig. 1 to 2, a first embodiment of the present invention provides a power quality analysis method, which includes:
s1: the power signal is collected over a plurality of time periods.
To satisfy the sampling theorem while ensuring accuracy, the sampling frequency is set to 5 kHz.
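As a quick sanity check on this choice (assuming a 50 Hz power-system fundamental, which the patent does not state), a 5 kHz sampling rate gives a Nyquist frequency of 2.5 kHz and therefore covers harmonics up to the 50th order. A minimal sketch, not part of the patented method:

```python
fs = 5000.0  # sampling frequency from this embodiment, in Hz
f0 = 50.0    # assumed power-system fundamental (not stated in the patent)

nyquist = fs / 2.0                  # highest frequency representable without aliasing
max_harmonic = int(nyquist // f0)   # highest harmonic order below the Nyquist limit

print(nyquist, max_harmonic)  # 2500.0 50
```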
S2: and carrying out n-layer wavelet decomposition on the electric energy signal, and respectively calculating wavelet packet energy values of the electric energy signal in the n-2 layer, the n-1 layer and the n layer.
Wavelet decomposition splits the signal into a high-frequency part and a low-frequency part; the low-frequency part has high frequency resolution, while the high-frequency part has high time resolution. In a power system, the instant a disturbance occurs the frequency rises sharply, forming an abrupt-change point. Separating out the high-frequency part of the signal by wavelet decomposition makes such abrupt-change points easier to detect.
The advantage of wavelet decomposition over the Fourier and short-time Fourier transforms is that its adaptive window can follow the local characteristics of the signal in both time and frequency. However, the quality of the decomposition result depends on whether a suitable wavelet basis function is chosen. In this embodiment, the wavelet function is selected to perform wavelet decomposition of the electric energy signal, specifically as follows:
convolution operation is carried out on the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients of different frequencies, and wavelet packet energy values of the electric energy signal in the n-2 layer and the n layer are calculated respectively:
E n =∑a n
wherein: e (E) n A is the wavelet packet energy value of the n layer n Wavelet decomposition coefficient of the n layer;
E n-2 =∑b n-2
wherein: e (E) n-2 Wavelet packet energy value, b, for layer n-2 n-2 Is the wavelet decomposition coefficient of the n-2 layer.
Preferably, the Morlet wavelet function offers good locality and multi-resolution properties and is suitable for a variety of signal processing scenarios.
S3: and carrying out normalization processing on the wavelet packet energy value to obtain a feature vector.
S4: the feature vector is input to the first neural network to train the first neural network to obtain the second neural network.
(1) Referring to fig. 2, which shows the structure of the first neural network: the first neural network is constructed by optimizing the convolution and pooling layers of a convolutional neural network, and comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module, and an output layer.
The plurality of convolution layers includes 1 convolution layer with a 3×3 kernel and 3 convolution layers with 1×1 kernels. Preferably, replacing a single large convolution kernel with several small ones effectively reduces the number of trainable parameters while helping to avoid overfitting.
The channel attention module includes a global average pooling layer, a global maximum pooling layer, and a multi-layer perceptron (MLP). The output of the 3×3 convolution layer is connected to the channel attention module, and the output of the channel attention module is connected to 3 consecutive 1×1 convolution layers. Inside the channel attention module, maximum pooling and average pooling are applied to the input feature vectors to generate two different spatial-information descriptors; both are fed into the multi-layer perceptron to compute the channel attention features, and the two vectors output by the multi-layer perceptron are finally merged by element-wise addition.
Preferably, the channel attention module focuses on inter-channel relationships of the features, mainly extracting the meaningful content of the feature vectors while compressing their spatial information. Global average pooling and global maximum pooling are used together: average pooling reflects global information, while maximum pooling highlights salient features, so combining the two poolings extracts richer high-level features.
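The pooling, MLP, and addition pipeline described above can be sketched in NumPy. The sigmoid gating at the end and the shared two-layer MLP are conventions borrowed from common channel-attention designs (e.g. CBAM) and are assumptions here; the patent itself only specifies the two poolings, the multi-layer perceptron, and the element-wise addition:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    # feat: (C, H, W) feature map; w1: (hidden, C) and w2: (C, hidden)
    # are the weights of a shared two-layer MLP (assumed design).
    avg = feat.mean(axis=(1, 2))                 # global average pooling -> (C,)
    mx = feat.max(axis=(1, 2))                   # global maximum pooling -> (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared MLP, ReLU hidden layer
    s = mlp(avg) + mlp(mx)                       # element-wise addition of branches
    gate = 1.0 / (1.0 + np.exp(-s))              # per-channel weights in (0, 1)
    return feat * gate[:, None, None]            # re-weight each channel
```

The module preserves the feature-map shape: it only rescales each channel by a learned weight between 0 and 1.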
The ASPP module is composed of several atrous (dilated) convolution layers, specifically a first, second, third, and fourth atrous convolution layer with dilation rates of 6, 12, 18, and 24, respectively, each with a 3×3 convolution kernel.
The ASPP module works as follows: several parallel atrous convolution layers with different dilation rates sample the features output by the channel attention module at different rates, i.e. they extract input features at different scales, and the resulting features are then fused to obtain the final feature extraction result.
Preferably, atrous convolution enlarges the receptive field while preserving resolution. Meanwhile, using different dilation rates gives the first neural network different receptive fields, thereby capturing multi-scale context information.
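The receptive-field claim can be made concrete: a k×k kernel with dilation rate r spans k + (k-1)(r-1) input positions along each axis, so the four 3×3 dilated kernels of this embodiment span 13, 25, 37, and 49 positions respectively. A quick illustrative check, not taken from the patent text:

```python
def effective_kernel_size(k, rate):
    # A k-tap dilated kernel inserts (rate - 1) zeros between taps,
    # so it spans k + (k - 1) * (rate - 1) input positions per axis.
    return k + (k - 1) * (rate - 1)

spans = [effective_kernel_size(3, r) for r in (6, 12, 18, 24)]
print(spans)  # [13, 25, 37, 49]
```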
(2) The relationship between the input and output of the first neural network is as follows:

y_i = f(Σ w·x_i + b);

where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal over which the sum runs, w is the weight of the inter-neuron connections, x_i is the i-th input of the first neural network, b is the bias, and f(·) is the activation function, chosen as the Softplus function.
The Softplus function is defined as:

Softplus(x) = log(1 + e^x);

where x is the input.
Preferably, by using the Softplus function as the activation function the invention alleviates the vanishing-gradient problem of the ReLU function. The Softplus curve is similar in shape to ReLU but smoother: as the input approaches negative infinity the output approaches 0, and as the input approaches positive infinity the output approaches the input itself. This effectively improves the training speed and accuracy of the neural network model.
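The limiting behaviour described above can be verified directly. The naive log(1 + e^x) overflows for large x, so the sketch below uses the algebraically equivalent stable form max(x, 0) + log1p(e^{-|x|}) — an illustration, not the patent's code:

```python
import numpy as np

def softplus(x):
    # Numerically stable Softplus: equals log(1 + e^x) for all x,
    # but avoids overflow for large positive inputs.
    x = np.asarray(x, dtype=float)
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

# Limits match the description: -> 0 for very negative inputs, -> x for very positive ones.
print(softplus([-50.0, 0.0, 50.0]))  # approx [0., 0.693, 50.]
```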
(3) Training the first neural network specifically comprises: selecting the categorical cross-entropy loss function as the target loss function L of the first neural network, iteratively training the first neural network with the Adagrad optimizer at an initial learning rate of 0.001, and continuously adjusting the weight w and the bias b until the target loss function value reaches its minimum, where the loss function L is:

L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];

where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
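The loss L above can be evaluated directly. The sketch below (an illustration with hypothetical values, averaged over samples) clips probabilities away from 0 and 1 to keep the logarithms finite:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    # L = -[y*log(p) + (1 - y)*log(1 - p)], averaged over the samples.
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# Confident correct predictions give a small loss; confident wrong ones a large loss.
print(cross_entropy([1, 0], [0.9, 0.1]))  # ~0.105
print(cross_entropy([1, 0], [0.1, 0.9]))  # ~2.303
```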
Preferably, Adagrad (Adaptive Gradient) is an adaptive learning-rate optimization algorithm that adjusts the learning rate individually for each parameter. This makes training with sparse gradients more efficient and thus speeds up neural network training.
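The per-parameter scaling Adagrad performs can be sketched in a few lines: each parameter divides its step by the root of its accumulated squared gradients, so frequently-updated parameters receive smaller steps. A generic illustration (minimizing a toy quadratic), not the patent's training code:

```python
import numpy as np

def adagrad_step(param, grad, accum, lr=0.001, eps=1e-8):
    # Accumulate squared gradients, then scale the update per parameter.
    accum = accum + grad ** 2
    param = param - lr * grad / (np.sqrt(accum) + eps)
    return param, accum

# Minimizing f(w) = (w - 3)^2; the gradient is 2 * (w - 3).
w, acc = 0.0, 0.0
for _ in range(20000):
    w, acc = adagrad_step(w, 2.0 * (w - 3.0), acc, lr=0.1)
```

After the loop, w has converged close to the minimizer 3; note that the step size shrinks automatically as the squared-gradient accumulator grows.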
S5: the electrical energy signal is input to a second neural network, the disturbance type of which is identified.
In terms of dynamic behavior, power quality disturbances fall into two broad classes, steady-state and transient. Specifically, the disturbance types include: voltage dip, transient pulse, voltage interruption, notch, brief harmonic, and no disturbance. Based on these disturbance types, the invention sets the number of nodes in the input and output layers to 7.
The electric energy signal is fed into the trained first neural network, i.e. the second neural network, which automatically analyzes the signal and outputs its disturbance type.
Example 2
The embodiment provides an electric energy quality analysis device, which specifically comprises,
an acquisition unit configured to perform acquisition of the electric energy signals for a plurality of time periods, the sampling frequency being set to 5 kHz;
the characteristic extraction unit is configured to perform n-layer wavelet decomposition on the electric energy signal, respectively calculate wavelet packet energy values of the electric energy signal in the n-2 layer, the n-1 layer and the n layer, and then normalize the wavelet packet energy values to obtain a characteristic vector;
a network training unit configured to perform inputting of the feature vector to the first neural network to train the first neural network to obtain a second neural network;
an identification unit configured to input the electric energy signal into the second neural network and identify its disturbance type; the disturbance types include: voltage sag, transient pulse, voltage interruption, notch, short-duration harmonic, and no disturbance.
Further, the feature extraction unit is specifically configured to perform:
performing convolution operation on the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients of different frequencies, and calculating wavelet packet energy values of the electric energy signal in the n-2 layer and the n layer:
E_n = Σa_n;
where E_n is the wavelet packet energy value of the n-th layer and a_n is the wavelet decomposition coefficient of the n-th layer;
E_(n-2) = Σb_(n-2);
where E_(n-2) is the wavelet packet energy value of the (n-2)-th layer and b_(n-2) is the wavelet decomposition coefficient of the (n-2)-th layer.
Further, the network training unit is specifically configured to perform:
the first neural network comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module and an output layer;
the plurality of convolution layers includes 1 convolution layer having a convolution kernel size of 3*3 and 3 convolution layers having a convolution kernel size of 1*1; preferably, the original large-size convolution kernel is replaced by a plurality of small-size convolution kernels, so that the parameter number of network training can be effectively reduced, and meanwhile, the phenomenon of fitting exceeding can be avoided.
The channel attention module includes a global average pooling layer, a global max pooling layer, and a multi-layer perceptron (Multilayer Perceptron, MLP). The output of the convolution layer with the 3×3 kernel is connected to the channel attention module, and the output of the channel attention module is connected to 3 consecutive convolution layers with 1×1 kernels. Within the channel attention module, the input feature vectors undergo max pooling and average pooling respectively to generate two different spatial-information description features; the two features are then fed into the multi-layer perceptron to compute the channel attention features; finally, the two vectors output by the multi-layer perceptron are merged by element-wise addition.
Preferably, the channel attention module focuses on the relationships of features across channels, mainly extracting meaningful content information from the feature vectors while compressing the spatial information of the input. Global average pooling and global max pooling are used simultaneously: average pooling reflects global information, while max pooling reflects the salient features in the feature vectors; using the two different poolings together extracts richer high-level features.
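A minimal NumPy sketch of the channel attention computation just described (the shared-MLP weights, the channel-reduction ratio, and the sigmoid gate are our assumptions about details the patent leaves unstated):

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x, w1, w2):
    # x: (C, H, W) feature map.
    avg = x.mean(axis=(1, 2))        # global average pooling -> (C,)
    mx = x.max(axis=(1, 2))          # global max pooling -> (C,)

    def mlp(v):                      # shared two-layer perceptron
        return np.maximum(v @ w1, 0.0) @ w2

    # Merge the two MLP outputs by element-wise addition, then gate channels
    # with a sigmoid so each channel is scaled by a weight in (0, 1).
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))
    return x * att[:, None, None]

C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // 2)) * 0.1   # channel-reduction layer
w2 = rng.standard_normal((C // 2, C)) * 0.1   # channel-restoration layer
out = channel_attention(x, w1, w2)
```

Because the gate is a sigmoid, the output keeps the input's shape and every channel is attenuated rather than amplified, which is easy to verify numerically.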
The ASPP module is composed of several dilated (atrous) convolution layers, specifically a first, a second, a third, and a fourth dilated convolution layer, whose sampling (dilation) rates are 6, 12, 18, and 24 respectively; each uses a 3×3 convolution kernel.
The working principle of the ASPP module is as follows: several parallel dilated convolution layers with different sampling rates sample the features output by the channel attention module at different rates, i.e., extract input features at different scales, and the resulting features are then fused to obtain the final feature extraction result.
Preferably, dilated convolution expands the receptive field while preserving resolution.
Meanwhile, setting different sampling rates gives the first neural network different receptive fields, thereby capturing multi-scale context information.
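The multi-rate sampling described above can be sketched with a plain NumPy dilated convolution (single-channel for brevity; fusing the branches by summation is our simplification — concatenation followed by a 1×1 convolution is also common):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    # 'same'-padded single-channel convolution with dilation `rate`:
    # kernel taps are spaced `rate` pixels apart, enlarging the receptive
    # field without reducing the output resolution.
    kh, kw = kernel.shape
    pad = rate * (kh // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * xp[i * rate:i * rate + x.shape[0],
                                     j * rate:j * rate + x.shape[1]]
    return out

def aspp(x, kernels, rates=(6, 12, 18, 24)):
    # Four parallel 3x3 dilated branches at rates 6/12/18/24, fused by summation.
    return sum(dilated_conv2d(x, k, r) for k, r in zip(kernels, rates))
```

With an identity kernel (a 3×3 kernel whose centre is 1), every branch reproduces the input, so the fused output is 4× the input — a quick sanity check that the dilation indexing is correct.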
Further, the network training unit is specifically configured to perform:
the relationship between the input and the output of the first neural network is as follows:
y_i = f(Σ_{i=1}^{k} w·x_i + b);
where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal, w is the weight of the connections between neurons, x_i is the i-th input of the first neural network, b is the bias, and f(x) is the activation function, which is the Softplus function.
Further, the network training unit is specifically configured to perform:
selecting a categorical cross-entropy loss function as the target loss function L of the first neural network, performing iterative training on the first neural network with the Adagrad optimizer, setting an initial learning rate, and continuously adjusting the weight w and the bias b until the target loss function value reaches its minimum, wherein the loss function L is:
L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];
where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
It should be appreciated that embodiments of the invention may be implemented or realized by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer readable storage medium configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, in accordance with the methods and drawings described in the specific embodiments. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Furthermore, the operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes (or variations and/or combinations thereof) described herein may be performed under control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications), by hardware, or combinations thereof, collectively executing on one or more processors. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable computing platform, including, but not limited to, a personal computer, mini-computer, mainframe, workstation, network or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and so forth. Aspects of the invention may be implemented in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optical read and/or write storage medium, RAM, ROM, etc., such that it is readable by a programmable computer, which when read by a computer, is operable to configure and operate the computer to perform the processes described herein. Further, the machine readable code, or portions thereof, may be transmitted over a wired or wireless network. When such media includes instructions or programs that, in conjunction with a microprocessor or other data processor, implement the steps described above, the invention described herein includes these and other different types of non-transitory computer-readable storage media. The invention also includes the computer itself when programmed according to the methods and techniques of the present invention. The computer program can be applied to the input data to perform the functions described herein, thereby converting the input data to generate output data that is stored to the non-volatile memory. The output information may also be applied to one or more output devices such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including specific visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, the components may be, but are not limited to: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that the technical solution of the present invention may be modified or substituted without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered in the scope of the claims of the present invention.

Claims (4)

1. A power quality analysis method, comprising:
collecting electric energy signals over a plurality of time periods, wherein the sampling frequency is set to 5 kHz;
performing n-layer wavelet decomposition on the electric energy signal, and respectively calculating wavelet packet energy values of the electric energy signal at the n-2 layer and the n layer;
normalizing the wavelet packet energy value to obtain a feature vector;
inputting the feature vector into a first neural network to train the first neural network to obtain a second neural network;
inputting the electric energy signal into the second neural network and identifying its disturbance type; the disturbance types include: voltage sag, transient pulse, voltage interruption, notch, short-duration harmonic, and no disturbance;
wherein the wavelet decomposition comprises:
performing convolution operation on the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients of different frequencies, and respectively calculating wavelet packet energy values of the electric energy signal in the n-2 layer and the n layer:
E_n = Σa_n;
where E_n is the wavelet packet energy value of the n-th layer and a_n is the wavelet decomposition coefficient of the n-th layer;
E_(n-2) = Σb_(n-2);
where E_(n-2) is the wavelet packet energy value of the (n-2)-th layer and b_(n-2) is the wavelet decomposition coefficient of the (n-2)-th layer;
the first neural network comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module, and an output layer;
the plurality of convolution layers includes 1 convolution layer with a 3×3 convolution kernel and 3 convolution layers with 1×1 convolution kernels;
the channel attention module comprises a global average pooling layer, a global max pooling layer, and a multi-layer perceptron; the output of the convolution layer with the 3×3 kernel is connected to the channel attention module, and the output of the channel attention module is connected to the convolution layers with 1×1 kernels;
the ASPP module comprises a first, a second, a third, and a fourth dilated convolution layer, whose sampling rates are 6, 12, 18, and 24 respectively, each with a 3×3 convolution kernel;
wherein the relationship between the input and the output of the first neural network is as follows:
y_i = f(Σ_{i=1}^{k} w·x_i + b);
where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal, w is the weight of the connections between neurons, x_i is the i-th input of the first neural network, b is the bias, and f(x) is the activation function, which is the Softplus function.
2. The power quality analysis method of claim 1, wherein the training comprises:
selecting a categorical cross-entropy loss function as a target loss function L of the first neural network, performing iterative training on the first neural network with the Adagrad optimizer, setting an initial learning rate, and continuously adjusting the weight w and the bias b until the target loss function value reaches its minimum, wherein the loss function L is:
L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];
where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
3. A power quality analysis device, comprising:
an acquisition unit configured to perform acquisition of the electric energy signals for a plurality of time periods, the sampling frequency being set to 5 kHz;
the characteristic extraction unit is configured to perform n-layer wavelet decomposition on the electric energy signal, respectively calculate wavelet packet energy values of the electric energy signal at the n-2 th layer and the n-th layer, and perform normalization processing on the wavelet packet energy values to obtain a characteristic vector;
a network training unit configured to perform inputting the feature vector to a first neural network to train the first neural network to obtain a second neural network;
an identification unit configured to input the electric energy signal into the second neural network and identify its disturbance type; the disturbance types include: voltage sag, transient pulse, voltage interruption, notch, short-duration harmonic, and no disturbance;
wherein the feature extraction unit is specifically configured to perform:
performing convolution operation on the Morlet wavelet function and the electric energy signal to obtain wavelet decomposition coefficients of different frequencies, and calculating wavelet packet energy values of the electric energy signal in the n-2 layer and the n layer:
E_n = Σa_n;
where E_n is the wavelet packet energy value of the n-th layer and a_n is the wavelet decomposition coefficient of the n-th layer;
E_(n-2) = Σb_(n-2);
where E_(n-2) is the wavelet packet energy value of the (n-2)-th layer and b_(n-2) is the wavelet decomposition coefficient of the (n-2)-th layer;
wherein the network training unit is specifically configured to perform:
the first neural network comprises an input layer, a plurality of convolution layers, a channel attention module, an ASPP module, and an output layer;
the plurality of convolution layers includes 1 convolution layer with a 3×3 convolution kernel and 3 convolution layers with 1×1 convolution kernels;
the channel attention module comprises a global average pooling layer, a global max pooling layer, and a multi-layer perceptron; the output of the convolution layer with the 3×3 kernel is connected to the channel attention module, and the output of the channel attention module is connected to the convolution layers with 1×1 kernels;
the ASPP module comprises a first, a second, a third, and a fourth dilated convolution layer, whose sampling rates are 6, 12, 18, and 24 respectively, each with a 3×3 convolution kernel;
wherein the network training unit is specifically configured to perform:
the relationship between the input and the output of the first neural network is as follows:
y_i = f(Σ_{i=1}^{k} w·x_i + b);
where y_i is the i-th output of the first neural network, k is the number of samples of the electric energy signal, w is the weight of the connections between neurons, x_i is the i-th input of the first neural network, b is the bias, and f(x) is the activation function, which is the Softplus function.
4. The power quality analysis device of claim 3, wherein the network training unit is specifically configured to perform:
selecting a categorical cross-entropy loss function as a target loss function L of the first neural network, performing iterative training on the first neural network with the Adagrad optimizer, setting an initial learning rate, and continuously adjusting the weight w and the bias b until the target loss function value reaches its minimum, wherein the loss function L is:
L = -[y_i·log(p_i) + (1 - y_i)·log(1 - p_i)];
where y_i is the i-th output of the first neural network and p_i is the i-th desired output of the first neural network.
CN202410010799.6A 2024-01-04 2024-01-04 Electric energy quality analysis method and device Active CN117520926B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410010799.6A CN117520926B (en) 2024-01-04 2024-01-04 Electric energy quality analysis method and device


Publications (2)

Publication Number Publication Date
CN117520926A CN117520926A (en) 2024-02-06
CN117520926B true CN117520926B (en) 2024-04-16

Family

ID=89742357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410010799.6A Active CN117520926B (en) 2024-01-04 2024-01-04 Electric energy quality analysis method and device

Country Status (1)

Country Link
CN (1) CN117520926B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113670610A (en) * 2021-06-09 2021-11-19 广州大学 Fault detection method, system and medium based on wavelet transformation and neural network
CN113934972A (en) * 2021-10-22 2022-01-14 温州大学 Power quality disturbance identification method based on improved wavelet threshold method and MPA (Multi-Point application) optimized PNN (Passive network)
CN114533086A (en) * 2022-02-21 2022-05-27 昆明理工大学 Motor imagery electroencephalogram decoding method based on spatial domain characteristic time-frequency transformation
CN115663784A (en) * 2021-12-15 2023-01-31 三峡大学 Power quality disturbance classification method for power distribution network based on HATF and E-CNN
CN117312902A (en) * 2023-07-16 2023-12-29 国网青海省电力公司电力科学研究院 Power grid power quality on-line state estimation method based on wavelet change and convolutional neural network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant