CN118194141A - Power consumption behavior discriminating method and system - Google Patents

Power consumption behavior discriminating method and system Download PDF

Info

Publication number
CN118194141A
CN118194141A CN202410613784.9A CN202410613784A CN118194141A CN 118194141 A CN118194141 A CN 118194141A CN 202410613784 A CN202410613784 A CN 202410613784A CN 118194141 A CN118194141 A CN 118194141A
Authority
CN
China
Prior art keywords
representing
electric equipment
vector
sequence
transducer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410613784.9A
Other languages
Chinese (zh)
Inventor
蔺菲
刘辉舟
孙伟
孙建
丁建顺
陈征
高寅
嵇爱琼
刘景姝
张悦
马昆
张文琪
李双双
李欣然
郭慧珠
庄磊
梁晓伟
王凯
刘单华
常乐
任民
冯欣
孙伟红
李红艳
李奇越
李帷韬
张志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Marketing Service Center of State Grid Anhui Electric Power Co Ltd
Original Assignee
Hefei University of Technology
Marketing Service Center of State Grid Anhui Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology, Marketing Service Center of State Grid Anhui Electric Power Co Ltd filed Critical Hefei University of Technology
Priority to CN202410613784.9A priority Critical patent/CN118194141A/en
Publication of CN118194141A publication Critical patent/CN118194141A/en
Pending legal-status Critical Current

Links

Landscapes

  • Complex Calculations (AREA)

Abstract

The invention discloses a method and a system for discriminating electricity consumption behaviors, wherein the method comprises the following steps: collecting the electric energy parameter variation of a bus when each electric equipment of a user is accessed to construct a data set; performing adaptive wavelet threshold denoising on the data in the normalized data set; performing dimension reduction on the denoised data set to obtain a vector for feature classification; inputting the vector for feature classification into a multi-classification task model based on a transducer, training the multi-classification task model based on the transducer, and judging the electricity consumption behavior by using the trained multi-classification task model based on the transducer; the invention has the advantages that: the calculation efficiency is improved, and the accuracy of judging the electricity utilization behavior is improved.

Description

Power consumption behavior discriminating method and system
Technical Field
The invention relates to the field of electricity behavior analysis, in particular to an electricity behavior judging method and system.
Background
The electricity consumption behavior discrimination aims to realize the targets of intelligent energy management, abnormal detection and the like through the identification and analysis of the electricity consumption behavior of a user, so that the energy utilization efficiency is improved, the safe and stable operation of a power system is ensured, the energy conservation and emission reduction of the user are promoted, and the green low-carbon development is promoted.
The traditional electricity behavior analysis method is often dependent on manually formulated rules or simple statistical models, has the problems of low precision and weak generalization capability, and is difficult to process large-scale and diversified electricity data when the electricity data size is increased and the complexity is increased, so that the analysis efficiency is low. With the development of artificial intelligence, more and more researches start to try to utilize a deep learning model to perform feature learning on user electricity data so as to realize more accurate and efficient electricity behavior discrimination, for example, an abnormal electricity utilization detection method of a power grid based on an attention mechanism and a residual network, which is disclosed in Chinese patent publication No. CN 114676742A. However, since the extraction of the complete user features requires analysis by using the power parameter data in a longer period, the analysis time and the calculation complexity of the power consumption behavior corresponding to the data set in a longer period are greatly increased. And the electric energy parameters often contain noise, which affects the accuracy of the judgment of the electric power consumption behavior.
In order to solve the above problems, the prior art generally adopts LSTM and RNN as classification models to classify data, thereby reducing computational complexity and time cost. However, existing LSTM and RNN have some drawbacks as classification models. They are susceptible to gradient extinction or explosion when dealing with long-term dependencies of data, and it is difficult to capture key information in long sequences, resulting in low accuracy in discrimination of power consumption behavior. In addition, the parameter adjusting process is complicated, effective parallel calculation is difficult to perform, the efficiency is low, the length of an input sequence is limited to a certain extent, part of important information can be lost, and the accuracy of judging the power utilization behavior is further low.
Disclosure of Invention
The invention aims to solve the technical problems of low calculation efficiency and low accuracy of the electricity behavior judging method in the prior art.
The invention solves the technical problems by the following technical means: the electricity consumption behavior distinguishing method comprises the following steps:
Step one, collecting the electric energy parameter variation of a bus when each electric equipment of a user is accessed to construct a data set;
Step two, performing adaptive wavelet threshold denoising on the data in the normalized data set;
step three, performing dimension reduction on the denoised data set to obtain a vector for feature classification;
And fourthly, inputting the vector for feature classification into a multi-classification task model based on a transducer, training the multi-classification task model based on the transducer, and judging the electricity utilization behavior by using the trained multi-classification task model based on the transducer.
Further, the first step includes:
collecting the electric energy parameter variation of a bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result of (2).
Further, the second step includes:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition;
Step 2.3, adopting the formula (1) to pair coefficient sequences Denoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain,/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
Still further, the third step includes:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA technique will/>The component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is/>,/>Representing low-dimensional feature set/>An ith row vector in (a);
Step 3.2, obtaining Gaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix;
step 3.3, defining the current iteration sequence number as ,/>Wherein/>The total iteration times; initialization/>; When/>Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In the first placeLow-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>For/>Middle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, orderWill/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
Still further, the workflow of the transducer-based multi-classification task model is:
step 4.1, vector for feature classification Performing input embedding operation to obtain sequence/>
(7)
Wherein,Is a weight matrix;
step 4.2, in sequence Embedded category coding information/>To obtain the expression sequence/>Category sequence of belonging category/>Weighting the category sequence to obtain a sequence/>
(8)
Step 4.3, multi-head attention Module sequencingThe three groups of independent linear layers are used for linear transformation to generate a query vector Q, a key vector K and a value vector V with the same dimension, and the linear transformation process is expressed as follows: /(I),/>,/>、/>、/>The weights of the three sets of parameter independent linear layers are represented, respectively, and the outputs of the multi-headed attention module are mapped as follows:
(9)
(10)
(11)
Wherein, Representing a multi-headed attention module,/>Is/>Function,/>Dimensions of query vector Q and key vector K in the attention header representing a multi-headed attention module,/>The representation represents the/>Output results of the attention heads,/>For the weight of the query matrix,/>Is the weight of the key matrix,/>Is the weight of the value matrix,/>Is a fine-grained feature vector,/>Representing a multi-headed attentiveness mechanism,/>Is the number of attention heads,/>Representing data fusion operations,/>Is a matrix of attention weights;
the multi-layer perceptron module receives the fine-grained feature vector Output hidden layer feature/>Expressed as by the formula of
(12)
Wherein,Is a multi-layer perceptron module,/>To activate the function,/>And/>Weight matrix of multi-layer perceptron module,/>And/>Is an offset term of the multi-layer perceptron module;
By means of Function pair implicit layer feature/>Processing to obtain feature vector/>The following formula (13)
(13)
Wherein,Is a standard Gaussian distribution function,/>Representing an error function;
then to the feature vector Normalization operation is carried out to obtain output characteristics/>As formula (14)
(14)
Wherein,Representing feature vectors/>Mean value of/(I)Representing feature vectors/>Variance of/>Is a positive number,/>Is a scaling parameter,/>Is a bias parameter;
Step 4.4, outputting the characteristics Inputting into the full-connection layer to obtain the output characteristics/>, of the full-connection layer
(15)
Wherein,Is the weight matrix of the full connection layer,/>Bias terms for the full connection layer; output features of fully connected layersThe power consumption behavior judgment result is a power consumption behavior judgment result based on the multi-classification task model prediction of a transducer.
Further, the training process of the multi-classification task model based on the transducer is as follows:
Inputting the vector for feature classification into a multi-classification task model based on a transducer, calculating the value of a loss function, judging whether the loss function converges or reaches the iteration times, if yes, stopping iteration to obtain a trained multi-classification task model based on the transducer, if no, continuously adjusting the parameters of the multi-classification task model based on the transducer, and recalculating the value of the loss function until the loss function converges or reaches the iteration times to obtain the trained multi-classification task model based on the transducer, wherein the formula of the loss function is as follows
(16)
Wherein,Representing the total number of samples used for training,/>Representing category number,/>Representation of samples/>Whether or not it belongs to category/>
The invention also provides a power consumption behavior judging system, which comprises:
The data set construction module is used for collecting the electric energy parameter variation of the bus when each electric equipment of a user is accessed to construct a data set;
the data denoising module is used for performing adaptive wavelet threshold denoising on the data in the normalized data set;
The data dimension reduction module is used for reducing dimension of the denoised data set to obtain a vector for feature classification;
the model building module is used for inputting the vector for feature classification into the multi-classification task model based on the transducer, training the multi-classification task model based on the transducer, and judging the electricity consumption behavior by using the trained multi-classification task model based on the transducer.
Further, the data set construction module is further configured to:
collecting the electric energy parameter variation of a bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result of (2).
Still further, the data denoising module is further configured to:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition;
Step 2.3, adopting the formula (1) to pair coefficient sequences Denoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
Still further, the data dimension reduction module is further configured to:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA technique will/>The component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is/>,/>Representing low-dimensional feature set/>An ith row vector in (a);
Step 3.2, obtaining Gaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix;
step 3.3, defining the current iteration sequence number as ,/>Wherein/>The total iteration times; initialization/>; When/>Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In the first placeLow-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>For/>Middle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, orderWill/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
Still further, the workflow of the transducer-based multi-classification task model is:
step 4.1, vector for feature classification Performing input embedding operation to obtain sequence/>
(7)
Wherein,Is a weight matrix;
step 4.2, in sequence Embedded category coding information/>To obtain the expression sequence/>Category sequence of belonging category/>Weighting the category sequence to obtain a sequence/>
(8)
Step 4.3, multi-head attention Module sequencingThe three groups of independent linear layers are used for linear transformation to generate a query vector Q, a key vector K and a value vector V with the same dimension, and the linear transformation process is expressed as follows: /(I),/>,/>、/>、/>The weights of the three sets of parameter independent linear layers are represented, respectively, and the outputs of the multi-headed attention module are mapped as follows:
(9)
(10)
(11)
Wherein, Representing a multi-headed attention module,/>Is/>Function,/>Dimensions of query vector Q and key vector K in the attention header representing a multi-headed attention module,/>The representation represents the/>Output results of the attention heads,/>For the weight of the query matrix,/>Is the weight of the key matrix,/>Is the weight of the value matrix,/>Is a fine-grained feature vector,/>Representing a multi-headed attentiveness mechanism,/>Is the number of attention heads,/>Representing data fusion operations,/>Is a matrix of attention weights;
the multi-layer perceptron module receives the fine-grained feature vector Output hidden layer feature/>Expressed as by the formula of
(12)
Wherein,Is a multi-layer perceptron module,/>To activate the function,/>And/>Weight matrix of multi-layer perceptron module,/>And/>Is an offset term of the multi-layer perceptron module;
By means of Function pair implicit layer feature/>Processing to obtain feature vector/>The following formula (13)
(13)
Wherein,Is a standard Gaussian distribution function,/>Representing an error function;
then to the feature vector Normalization operation is carried out to obtain output characteristics/>As formula (14)
(14)
Wherein,Representing feature vectors/>Mean value of/(I)Representing feature vectors/>Variance of/>Is a positive number,/>Is a scaling parameter,/>Is a bias parameter;
Step 4.4, outputting the characteristics Inputting into the full-connection layer to obtain the output characteristics/>, of the full-connection layer
(15)
Wherein,Is the weight matrix of the full connection layer,/>Bias terms for the full connection layer; output features of fully connected layersThe power consumption behavior judgment result is a power consumption behavior judgment result based on the multi-classification task model prediction of a transducer.
Further, the training process of the multi-classification task model based on the transducer is as follows:
Inputting the vector for feature classification into a multi-classification task model based on a transducer, calculating the value of a loss function, judging whether the loss function converges or reaches the iteration times, if yes, stopping iteration to obtain a trained multi-classification task model based on the transducer, if no, continuously adjusting the parameters of the multi-classification task model based on the transducer, and recalculating the value of the loss function until the loss function converges or reaches the iteration times to obtain the trained multi-classification task model based on the transducer, wherein the formula of the loss function is as follows
(16)
Wherein,Representing the total number of samples used for training,/>Representing category number,/>Representation of samples/>Whether or not it belongs to category/>
The invention has the advantages that:
(1) Because the transducer can capture long-distance dependency relations through a self-attention mechanism, the parameter adjustment process is simplified, vectors for feature classification are input into the multi-classification task model based on the transducer, the multi-classification task model based on the transducer is trained, and the trained multi-classification task model based on the transducer is utilized to conduct electricity consumption behavior judgment, so that key information in a long sequence can be captured, the accuracy of electricity consumption behavior judgment is improved, the parameter adjustment process is simplified, and the calculation efficiency is improved.
(2) In dealing with long-term dependencies, gradient disappearance or explosion is a problem faced by conventional LSTM and RNN. This is because the gradients multiply or add over time during the back propagation process, resulting in gradients that gradually go to zero or infinity. In order to solve the problem, the invention introduces a network structure based on a multi-classification task model of a transducer, and the transducer captures long-distance dependence in a sequence by using a self-attention mechanism without involving gradual computation, so that the problem of gradient disappearance or explosion is avoided. The transducer can better capture key information in a long sequence, and accuracy of electricity utilization behavior discrimination is improved.
(3) The kernel of the multi-classification task model based on the transducer is self-attention mechanism and position coding, the super parameters are relatively less, and gradual calculation is not involved, so that the parameter adjusting process is simpler. In addition, due to parallelism of the attention mechanism, the multi-classification task model based on the transducer can effectively perform parallel computation, and the computation efficiency is improved. Second, conventional LSTM and RNN models have some limitations on the length of the input sequence, while the transducer-based multi-classification task model employs a self-attention mechanism that can efficiently process sequence data of arbitrary length without losing important information. In addition, the multi-classification task model based on the transducer also introduces position codes, which is beneficial to the model to better understand the information of different positions in the sequence, thereby further reducing the information loss and improving the accuracy of the judgment of the power utilization behavior.
(4) The invention firstly adopts wavelet threshold denoising and dimension reduction technology to process electric energy parameters and provides a high-quality data set for the subsequent judgment of electricity utilization behaviors. The traditional wavelet threshold denoising method adopts a universal threshold, and the universal threshold is fixed on each scale and does not have self-adaptive capacity, so that the denoising effect is greatly influenced. The invention provides a self-adaptive wavelet threshold denoising method, which can be seen from formulas from step 2.1 to step 2.5,Is a set threshold value and/>The threshold value is adaptively changed according to different sampling points and standard deviation of noise, so that the self-adaptive noise-removing device has self-adaptive capacity and improves noise-removing effect.
(5) As can be seen from equation (1) of the adaptive wavelet denoising process of the present invention,At/>The time is continuous, so that abrupt truncation is avoided, and the smoothness of the denoised signal is ensured. When/>Greater than the threshold and increasing gradually,/>And/>The difference value is approximately 0, so that a fixed offset of the whole denoised signal is avoided, and the distortion problem of the signal after the wavelet transformation process is reduced.
(6) The invention steps 3.2-3.6 are iterative algorithms, the result of which can be influenced by randomness in the initial value selection and optimization process, the invention uses PCA technology to reduce the dimension of data before the steps 3.2-3.6 are carried out, the result of PCA dimension reduction is used as the initial value of the steps 3.2-3.6, so that the steps 3.2-3.6 can search local structures from a better starting point, which helps to accelerate the convergence process, thereby obtaining the vector for feature classification more quickly.
Drawings
Fig. 1 is a flowchart of a power consumption behavior discriminating method according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions in the embodiments of the present invention will be clearly and completely described in the following in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, the invention provides a power consumption behavior discriminating method, which comprises the following steps:
s1, collecting the electric energy parameter variation of a bus when each electric equipment of a user is accessed to construct a data set; the specific process is as follows:
collecting the electric energy parameter variation of a bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result of (2).
S2, performing adaptive wavelet threshold denoising on the data in the normalized data set; the specific process is as follows:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition; among them, wavelet transformation is a wavelet transformation method described in the prior art, specific references Wen Li, liu Zhengshi, ge Yunjian, several methods of wavelet denoising [ J ]. University of synthetic fertilizer industry journal (Nature science edition), 2002, (02): 167-172.
Step 2.3, adopting the formula (1) to pair coefficient sequencesDenoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
S3, performing dimension reduction on the denoised data set to obtain a vector for feature classification; the specific process is as follows:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA technique will/>The component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is/>,/>Representing low-dimensional feature set/>An ith row vector in (a); the PCA technology belongs to the prior art, and particularly relates to a PCA dimension reduction method recorded in the references of Baiyun, yan Hua, weiyuan, temperature field reconstruction [ J ] automation and instrumentation, 2023,38 (09): 21-26.DOI:10.19557/j.cnki.1001-9944.2023.09.005.
Step 3.2, obtainingGaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix; /(I)
Step 3.3, defining the current iteration sequence number as,/>Wherein/>The total iteration times; initialization/>; When/>Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In the first placeLow-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>For/>Middle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, orderWill/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
S4, inputting the vector for feature classification into a multi-classification task model based on a transducer, training the multi-classification task model based on the transducer, and judging electricity utilization behaviors by using the trained multi-classification task model based on the transducer; the workflow of the multi-classification task model based on the transducer is as follows:
step 4.1, vector for feature classification Performing input embedding operation to obtain sequence/>
(7)
Wherein,Is a weight matrix;
step 4.2, in sequence Embedded category coding information/>To obtain the expression sequence/>Category sequence of belonging category/>Weighting the category sequence to obtain a sequence/>
(8)
Step 4.3, multi-head attention Module sequencingThe three groups of independent linear layers are used for linear transformation to generate a query vector Q, a key vector K and a value vector V with the same dimension, and the linear transformation process is expressed as follows: /(I),/>,/>、/>、/>The weights of the three sets of parameter independent linear layers are represented respectively, while the output map of the multi-headed attention module is as follows:/>
(9)
(10)
(11)
Wherein,Representing a multi-headed attention module,/>Is/>Function,/>Dimensions of query vector Q and key vector K in the attention header representing a multi-headed attention module,/>The representation represents the/>Output results of the attention heads,/>For the weight of the query matrix,/>Is the weight of the key matrix,/>Is the weight of the value matrix,/>Is a fine-grained feature vector,/>Representing a multi-headed attentiveness mechanism,/>Is the number of attention heads,/>Representing data fusion operations,/>Is a matrix of attention weights;
the multi-layer perceptron module receives the fine-grained feature vector Output hidden layer feature/>Expressed as by the formula of
(12)
Wherein,Is a multi-layer perceptron module,/>To activate the function,/>And/>Weight matrix of multi-layer perceptron module,/>And/>Is an offset term of the multi-layer perceptron module;
By means of Function pair implicit layer feature/>Processing to obtain feature vector/>The following formula (13)
(13)
Wherein,Is a standard Gaussian distribution function,/>Representing an error function;
then to the feature vector Normalization operation is carried out to obtain output characteristics/>As formula (14)
(14)
Wherein,Representing feature vectors/>Mean value of/(I)Representing feature vectors/>Variance of/>Is a positive number,/>Is a scaling parameter,/>Is a bias parameter;
Step 4.4, outputting the characteristics Inputting into the full-connection layer to obtain the output characteristics/>, of the full-connection layer
(15)
Wherein,Is the weight matrix of the full connection layer,/>Bias terms for the full connection layer; output features of fully connected layersThe power consumption behavior judgment result is a power consumption behavior judgment result based on the multi-classification task model prediction of a transducer.
The training process of the multi-classification task model based on the Transformer is as follows:
Inputting the vector for feature classification into a multi-classification task model based on a transducer, calculating the value of a loss function, judging whether the loss function converges or reaches the iteration times, if yes, stopping iteration to obtain a trained multi-classification task model based on the transducer, if no, continuously adjusting the parameters of the multi-classification task model based on the transducer, and recalculating the value of the loss function until the loss function converges or reaches the iteration times to obtain the trained multi-classification task model based on the transducer, wherein the formula of the loss function is as follows
(16)
Wherein,Representing the total number of samples used for training,/>Representing category number,/>Representation of samples/>Whether or not it belongs to category/>
According to the technical scheme, the electric energy parameters are processed by adopting the adaptive wavelet threshold denoising and dimension reduction technology, a high-quality data set is provided for subsequent electric behavior discrimination, then vectors for feature classification are input into a multi-classification task model based on a transducer, the multi-classification task model based on the transducer is trained, and the trained multi-classification task model based on the transducer is utilized for electric behavior discrimination, so that key information in a long sequence can be captured, the accuracy of electric behavior discrimination is improved, the parameter adjustment process is simplified, and the calculation efficiency is improved.
Example 2
Based on embodiment 1, embodiment 2 of the present invention further provides an electricity behavior discrimination system, including:
The data set construction module is used for collecting the electric energy parameter variation of the bus when each electric equipment of a user is accessed to construct a data set;
the data denoising module is used for performing adaptive wavelet threshold denoising on the data in the normalized data set;
The data dimension reduction module is used for reducing dimension of the denoised data set to obtain a vector for feature classification;
the model building module is used for inputting the vector for feature classification into the multi-classification task model based on the transducer, training the multi-classification task model based on the transducer, and judging the electricity consumption behavior by using the trained multi-classification task model based on the transducer.
Specifically, the data set construction module is further configured to:
collecting the electric energy parameter variation of a bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result of (2).
More specifically, the data denoising module is further configured to:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition;
Step 2.3, adopting the formula (1) to pair coefficient sequences Denoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain/>,/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
More specifically, the data dimension reduction module is further configured to:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA technique will/>The component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is/>,/>Representing low-dimensional feature set/>An ith row vector in (a);
Step 3.2, obtaining Gaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix; /(I)
Step 3.3, defining the current iteration sequence number as,/>Wherein/>The total iteration times; initialization/>; When/>Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In the first placeLow-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>For/>Middle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, orderWill/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
More specifically, the workflow of the multi-classification task model based on the transducer is as follows:
step 4.1, vector for feature classification Performing input embedding operation to obtain sequence/>
(7)
Wherein,Is a weight matrix;
step 4.2, in sequence Embedded category coding information/>To obtain the expression sequence/>Category sequence of belonging category/>Weighting the category sequence to obtain a sequence/>
(8)
Step 4.3, multi-head attention Module sequencingThe three groups of independent linear layers are used for linear transformation to generate a query vector Q, a key vector K and a value vector V with the same dimension, and the linear transformation process is expressed as follows: /(I),/>,/>、/>、/>The weights of the three sets of parameter independent linear layers are represented, respectively, and the outputs of the multi-headed attention module are mapped as follows:
(9)/>
(10)
(11)
Wherein, Representing a multi-headed attention module,/>Is/>Function,/>Dimensions of query vector Q and key vector K in the attention header representing a multi-headed attention module,/>The representation represents the/>Output results of the attention heads,/>For the weight of the query matrix,/>Is the weight of the key matrix,/>Is the weight of the value matrix,/>Is a fine-grained feature vector,/>Representing a multi-headed attentiveness mechanism,/>Is the number of attention heads,/>Representing data fusion operations,/>Is a matrix of attention weights;
the multi-layer perceptron module receives the fine-grained feature vector Output hidden layer feature/>Expressed as by the formula of
(12)
Wherein,Is a multi-layer perceptron module,/>To activate the function,/>And/>Weight matrix of multi-layer perceptron module,/>And/>Is an offset term of the multi-layer perceptron module;
By means of Function pair implicit layer feature/>Processing to obtain feature vector/>The following formula (13)
(13)
Wherein,Is a standard Gaussian distribution function,/>Representing an error function;
then to the feature vector Normalization operation is carried out to obtain output characteristics/>As formula (14)
(14)
Wherein,Representing feature vectors/>Mean value of/(I)Representing feature vectors/>Variance of/>Is a positive number,/>Is a scaling parameter,/>Is a bias parameter;
Step 4.4, outputting the characteristics Inputting into the full-connection layer to obtain the output characteristics/>, of the full-connection layer
(15)
Wherein,Is the weight matrix of the full connection layer,/>Bias terms for the full connection layer; output features of fully connected layersThe power consumption behavior judgment result is a power consumption behavior judgment result based on the multi-classification task model prediction of a transducer.
More specifically, the training process of the multi-classification task model based on the transducer is as follows:
Inputting the vector for feature classification into a multi-classification task model based on a transducer, calculating the value of a loss function, judging whether the loss function converges or reaches the iteration times, if yes, stopping iteration to obtain a trained multi-classification task model based on the transducer, if no, continuously adjusting the parameters of the multi-classification task model based on the transducer, and recalculating the value of the loss function until the loss function converges or reaches the iteration times to obtain the trained multi-classification task model based on the transducer, wherein the formula of the loss function is as follows
(16)/>
Wherein,Representing the total number of samples used for training,/>Representing category number,/>Representation of samples/>Whether or not it belongs to category/>
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. The electricity consumption behavior distinguishing method is characterized by comprising the following steps of:
Step one, collecting the electric energy parameter variation of a bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result;
Step two, performing adaptive wavelet threshold denoising on the data in the normalized data set;
step three, performing dimension reduction on the denoised data set to obtain a vector for feature classification;
And fourthly, inputting the vector for feature classification into a multi-classification task model based on a transducer, training the multi-classification task model based on the transducer, and judging the electricity utilization behavior by using the trained multi-classification task model based on the transducer.
2. The electricity consumption behavior discrimination method according to claim 1, wherein the step two includes:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition;
Step 2.3, adopting the formula (1) to pair coefficient sequences Denoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
3. The electricity consumption behavior discrimination method according to claim 2, wherein said step three includes:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA techniqueThe component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is of the dimension of,/>Representing low-dimensional feature set/>An ith row vector in (a);
Step 3.2, obtaining Gaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix;
step 3.3, defining the current iteration sequence number as ,/>Wherein/>The total iteration times; initialization/>; When (when)Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In/>Low-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>Is thatMiddle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, let/>Will/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
4. A method for discriminating electricity consumption behavior according to claim 3 wherein said transform-based multi-classification task model workflow is:
step 4.1, vector for feature classification Performing input embedding operation to obtain sequence/>
(7)
Wherein,Is a weight matrix;
step 4.2, in sequence Embedded category coding information/>To obtain the expression sequence/>Category sequence of belonging category/>Weighting the category sequence to obtain a sequence/>
(8)
Step 4.3, multi-head attention Module sequencingThe three groups of independent linear layers are used for linear transformation to generate a query vector Q, a key vector K and a value vector V with the same dimension, and the linear transformation process is expressed as follows: /(I),/>,/>、/>、/>The weights of the three sets of parameter independent linear layers are represented, respectively, and the outputs of the multi-headed attention module are mapped as follows:
(9)
(10)
(11)
Wherein, Representing a multi-headed attention module,/>Is/>Function,/>Dimensions of query vector Q and key vector K in the attention header representing a multi-headed attention module,/>The representation represents the/>The output results of the individual attention heads,For the weight of the query matrix,/>Is the weight of the key matrix,/>Is the weight of the value matrix,/>Is a feature vector of a fine-grained degree,Representing a multi-headed attentiveness mechanism,/>Is the number of attention heads,/>Representing data fusion operations,/>Is a matrix of attention weights;
the multi-layer perceptron module receives the fine-grained feature vector Output hidden layer feature/>Expressed as by the formula of
(12)
Wherein,Is a multi-layer perceptron module,/>To activate the function,/>And/>Weight matrix of multi-layer perceptron module,/>And/>Is an offset term of the multi-layer perceptron module;
By means of Function pair implicit layer feature/>Processing to obtain feature vector/>The following formula (13)
(13)
Wherein,Is a standard Gaussian distribution function,/>Representing an error function;
then to the feature vector Normalization operation is carried out to obtain output characteristics/>As formula (14)
(14)
Wherein,Representing feature vectors/>Mean value of/(I)Representing feature vectors/>Variance of/>Is a positive number,/>Is a scaling parameter,/>Is a bias parameter;
Step 4.4, outputting the characteristics Inputting into the full-connection layer to obtain the output characteristics/>, of the full-connection layer
(15)
Wherein,Is the weight matrix of the full connection layer,/>Bias terms for the full connection layer; output features of fully connected layer/>The power consumption behavior judgment result is a power consumption behavior judgment result based on the multi-classification task model prediction of a transducer.
5. The method for discriminating electric behavior according to claim 4 wherein said training process of said transducer-based multi-classification task model is:
Inputting the vector for feature classification into a multi-classification task model based on a transducer, calculating the value of a loss function, judging whether the loss function converges or reaches the iteration times, if yes, stopping iteration to obtain a trained multi-classification task model based on the transducer, if no, continuously adjusting the parameters of the multi-classification task model based on the transducer, and recalculating the value of the loss function until the loss function converges or reaches the iteration times to obtain the trained multi-classification task model based on the transducer, wherein the formula of the loss function is as follows
(16)
Wherein,Representing the total number of samples used for training,/>Representing category number,/>Representation of samples/>Whether or not it belongs to category/>
6. An electricity consumption behavior discrimination system, comprising:
the data set construction module is used for collecting the electric energy parameter variation of the bus when each electric equipment of a certain user is accessed to construct a data set Wherein/>Representing the transpose of the matrix,/>Representing the total number of consumers of the user,/>Represents the/>Electric energy parameter variation quantity of bus when individual electric equipment is accessed and/>、/>、/>、/>Respectively represent the/>The variation of the voltage, current, active power and reactive power of the bus when the electric equipment is connected, and/>、/>、/>,/>Representing the number of sampling points,/>Represents the/>The voltage of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The current of the bus is at the/>, when the individual electric equipment is connectedVariation of the individual sample points,/>Represents the/>The active power of the bus is at the/>, when the individual electric equipment is accessedVariation of the individual sample points,/>Represents the/>Reactive power of bus is in/>, when individual electric equipment is accessedThe amount of change in the individual sampling points; for data set/>Normalized data set/>Wherein/>Is/>Is a line normalization result;
the data denoising module is used for performing adaptive wavelet threshold denoising on the data in the normalized data set;
The data dimension reduction module is used for reducing dimension of the denoised data set to obtain a vector for feature classification;
the model building module is used for inputting the vector for feature classification into the multi-classification task model based on the transducer, training the multi-classification task model based on the transducer, and judging the electricity consumption behavior by using the trained multi-classification task model based on the transducer.
7. The electrical behavior discrimination system of claim 6, wherein the data denoising module is further configured to:
step 2.1, initializing m=1, m representing the normalized dataset />A row;
step 2.2 normalizing the data set using wavelet transform Middle/>Line vector/>Go/>Layer decomposition to obtain/>Personal coefficient sequence/>Wherein/>Representation/>Go/>Layer decomposition of the first layer/>Sequence of detail coefficients,/>Representation/>Go/>An approximation coefficient sequence of layer decomposition;
Step 2.3, adopting the formula (1) to pair coefficient sequences Denoising, formula (1) is as follows
(1)
Wherein,Representing the coefficient sequence/>Middle/>The/>, of the individual sequencesElement,/>Representation/>Absolute value of/>Representing the coefficient sequence/>Post denoising/>The/>, of the individual sequencesElement,/>Representing the coefficient sequence/>Results after denoising,/>Representing a sign function,/>Exponential function representing the base of the natural constant e,/>As a regulatory factor,/>Is a set threshold value and
(2)
Wherein,Represents the standard deviation of noise;
step 2.4, pair Inverse wavelet transform to obtain/>,/>Representation/>The denoised signal;
Step 2.5, if Let/>Returning to step 2.2 until/>Stopping at the time to obtain/>Representing normalized dataset/>Results after denoising and/>For/>Is/>The denoised signal.
8. The electricity consumption behavior discrimination system of claim 7, wherein the data dimension reduction module is further configured to:
Step 3.1, normalized data set Signal after denoising/>Principal component analysis by PCA techniqueThe component characteristics of (a) are arranged in descending order according to the variance, and the/>The r component features with medium variance ranked in front form a low-dimensional feature set/>Wherein/>Is/>,/>Is of the dimension of,/>Representing low-dimensional feature set/>An ith row vector in (a);
Step 3.2, obtaining Gaussian probability distribution matrix/>,/>Is the ith row vector of the Gaussian probability distribution matrix and/>,/>Is electric equipment/>With electric equipment/>Is obtained from the following formula (3):
(3)
In the formula (3), the amino acid sequence of the compound, Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Representing the electric equipment/>With electric equipment/>Single-sided probability of/>Obtained from the formula (4):
(4)
In the formula (4), the amino acid sequence of the compound, Is the standard deviation of the Gaussian probability distribution matrix;
step 3.3, defining the current iteration sequence number as ,/>Wherein/>The total iteration times; initialization/>; When (when)Let the first/>Low-dimensional feature set/>, of the next iteration
Step 3.4, obtainingT distribution probability matrix/>Wherein/>I-th row vector and/>, of t-distribution probability matrix;/>Electric equipment/>, respectivelyWith electric equipment/>In/>Low-dimensional feature set/>, of the next iterationThe similarity of (3) is obtained by the following formula (5):
(5)
Wherein, Is L1 norm sign,/>For/>Middle/>A row vector;
Step 3.5, obtaining Gradient vector/>,/>Is thatMiddle/>Row vectors and are obtained from equation (6):
(6)
step 3.6, calculating the correction of the low-dimensional feature set When/>Time, let/>Will/>As/>Returning to the steps 3.4 to 3.6; when/>Stopping iteration and obtaining the iteration result/>, of the last low-dimensional feature setNoted as vector/>, for feature classificationWherein/>And the low-dimensional electric energy parameter variation quantity of the ith electric equipment is represented.
CN202410613784.9A 2024-05-17 2024-05-17 Power consumption behavior discriminating method and system Pending CN118194141A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410613784.9A CN118194141A (en) 2024-05-17 2024-05-17 Power consumption behavior discriminating method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410613784.9A CN118194141A (en) 2024-05-17 2024-05-17 Power consumption behavior discriminating method and system

Publications (1)

Publication Number Publication Date
CN118194141A true CN118194141A (en) 2024-06-14

Family

ID=91400267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410613784.9A Pending CN118194141A (en) 2024-05-17 2024-05-17 Power consumption behavior discriminating method and system

Country Status (1)

Country Link
CN (1) CN118194141A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111564842A (en) * 2020-06-03 2020-08-21 吉林大学 Method for statistical estimation of medium-and-long-term energy consumption in non-invasive electric load monitoring
CN113343876A (en) * 2021-06-18 2021-09-03 上海梦象智能科技有限公司 Household equipment appliance fingerprint data generation method based on countermeasure generation network
CN114004162A (en) * 2021-11-03 2022-02-01 国网重庆市电力公司电力科学研究院 Modeling method for smelting load harmonic emission level under multi-working-condition scene
CN115018512A (en) * 2022-04-21 2022-09-06 国网湖南省电力有限公司 Electricity stealing detection method and device based on Transformer neural network
CN115905857A (en) * 2022-10-19 2023-04-04 华南理工大学 Non-invasive load decomposition method based on mathematical morphology and improved Transformer
CN116881639A (en) * 2023-07-10 2023-10-13 国网四川省电力公司营销服务中心 Electricity larceny data synthesis method based on generation countermeasure network
WO2023236977A1 (en) * 2022-06-08 2023-12-14 华为技术有限公司 Data processing method and related device
CN117452063A (en) * 2023-10-25 2024-01-26 福州大学 Semi-supervised electricity stealing time positioning method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111564842A (en) * 2020-06-03 2020-08-21 吉林大学 Method for statistical estimation of medium-and-long-term energy consumption in non-invasive electric load monitoring
CN113343876A (en) * 2021-06-18 2021-09-03 上海梦象智能科技有限公司 Household equipment appliance fingerprint data generation method based on countermeasure generation network
CN114004162A (en) * 2021-11-03 2022-02-01 国网重庆市电力公司电力科学研究院 Modeling method for smelting load harmonic emission level under multi-working-condition scene
CN115018512A (en) * 2022-04-21 2022-09-06 国网湖南省电力有限公司 Electricity stealing detection method and device based on Transformer neural network
WO2023236977A1 (en) * 2022-06-08 2023-12-14 华为技术有限公司 Data processing method and related device
CN115905857A (en) * 2022-10-19 2023-04-04 华南理工大学 Non-invasive load decomposition method based on mathematical morphology and improved Transformer
CN116881639A (en) * 2023-07-10 2023-10-13 国网四川省电力公司营销服务中心 Electricity larceny data synthesis method based on generation countermeasure network
CN117452063A (en) * 2023-10-25 2024-01-26 福州大学 Semi-supervised electricity stealing time positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
马洪亮: "基于深度学习的电力营销精益化服务研究", 《万方硕士论文库》, 31 December 2022 (2022-12-31) *

Similar Documents

Publication Publication Date Title
CN115018021B (en) Machine room abnormity detection method and device based on graph structure and abnormity attention mechanism
CN109884419B (en) Smart power grid power quality online fault diagnosis method
CN111078895B (en) Remote supervision entity relation extraction method based on denoising convolutional neural network
Cresswell et al. CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds
CN113065704B (en) Super-parameter optimization and post-processing method of non-invasive load decomposition model
CN111222689A (en) LSTM load prediction method, medium, and electronic device based on multi-scale temporal features
CN115659254A (en) Power quality disturbance analysis method for power distribution network with bimodal feature fusion
CN116451117A (en) Power data anomaly detection method based on federal learning
CN115694985A (en) TMB-based hybrid network traffic attack prediction method
Wang et al. Time-weighted kernel-sparse-representation-based real-time nonlinear multimode process monitoring
CN114925938A (en) Electric energy meter running state prediction method and device based on self-adaptive SVM model
CN117452063A (en) Semi-supervised electricity stealing time positioning method
Wan et al. A new weakly supervised discrete discriminant hashing for robust data representation
CN115100466A (en) Non-invasive load monitoring method, device and medium
CN113762591B (en) Short-term electric quantity prediction method and system based on GRU and multi-core SVM countermeasure learning
CN108388918B (en) Data feature selection method with structure retention characteristics
CN116910573B (en) Training method and device for abnormality diagnosis model, electronic equipment and storage medium
CN111090679B (en) Time sequence data representation learning method based on time sequence influence and graph embedding
CN117151770A (en) Attention mechanism-based LSTM carbon price prediction method and system
Wang et al. A convolutional neural network image classification based on extreme learning machine
CN118194141A (en) Power consumption behavior discriminating method and system
Liu et al. Enhancing short-term wind power forecasting accuracy for reliable and safe integration into power systems: A gray relational analysis and optimized support vector regression machine approach
Wu et al. A unified framework for age invariant face recognition and age estimation
CN118211031A (en) Non-invasive load identification method based on linear complexity self-attention mechanism
Zhang et al. Load prediction based on depthwise separable convolution model

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination