CN114091536A - Load decomposition method based on variational self-encoder - Google Patents
- Publication number
- CN114091536A (application number CN202111384868.2A)
- Authority
- CN
- China
- Prior art keywords
- network
- load
- encoder
- data
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F2218/08: Pattern recognition adapted for signal processing; feature extraction
- G06F18/214: Pattern recognition; generating training patterns, bootstrap methods, e.g. bagging or boosting
- G06N3/045: Neural networks; combinations of networks
- G06N3/048: Neural networks; activation functions
- G06N3/088: Neural network learning methods; non-supervised learning
- G06Q10/04: Forecasting or optimisation for administrative or management purposes
- G06Q50/06: Systems adapted for specific business sectors; electricity, gas or water supply
Abstract
The invention belongs to the technical field of non-intrusive load decomposition, and specifically relates to a load decomposition method based on a variational autoencoder. The invention uses an instance-batch normalization network (IBN-Net) to build the two main components of the network: an encoder and a decoder. IBN-Net enhances the extraction of deep features from the aggregate load data, so that the encoder, which maps information into a latent space, and the decoder, which reconstructs the target device's load signal from the latent representation, both perform better. Skip connections are added between the encoder and the decoder, so that the decoder obtains more information from the encoder's feature maps; this improves the decoder's global view of the total power consumption and yields a better reconstruction of the target device's load information. The regularized latent space provided by the variational autoencoder helps the network encode the relevant features of the aggregate load signal, giving the network model stronger generalization ability and higher accuracy when reconstructing the load signals of multi-state devices.
Description
Technical Field
The invention belongs to the technical field of non-intrusive load decomposition, and specifically relates to a load decomposition method based on a variational autoencoder.
Background
Buildings account for roughly 20% of global energy consumption, and this consumption can be reduced through energy-saving measures. Non-intrusive load monitoring, or energy disaggregation, aims to save energy by resolving the power of each individual appliance from the total power measured at the whole-house meter, which enables appliance-usage strategies. Inferring the power load of a specific device and decomposing the total load is therefore a major current research objective, and improving decomposition accuracy with deep-learning methods is popular. Although many solutions have been proposed, problems remain: many models are trained on an existing data set for a particular appliance and generalize poorly to unseen aggregate load data, and many models consider only devices with simple on/off states, ignoring the load decomposition of multi-state devices.
Disclosure of Invention
The invention aims to provide a load decomposition method based on a variational autoencoder that improves the generalization performance of the network model and effectively reconstructs the load signals of multi-state devices.
The invention uses an instance-batch normalization network (abbreviated IBN-Net) to build the two main components of the network, an encoder and a decoder. IBN-Net enhances the extraction of deep features from the aggregate load data, so that the encoder, which maps information into a latent space, and the decoder, which reconstructs the target device's load signal from the latent representation, both perform better. Skip connections are added between the encoder and the decoder, so that the decoder obtains more information from the encoder's feature maps; this improves the decoder's global view of the total power consumption and yields a better reconstruction of the target device's load information. The regularized latent space provided by the variational autoencoder helps the network encode the relevant features of the aggregate load signal, giving the network model stronger generalization ability and higher accuracy when reconstructing the load signals of multi-state devices.
The invention provides a load decomposition method based on a variational autoencoder, which comprises the following specific steps:
step 1: determining the input data;
step 2: constructing the instance-batch normalization network;
step 3: determining the overall structure of the network;
step 4: determining the evaluation indices and training the network;
step 5: reconstructing the load signal of the target device and judging the operating state of the appliance.
each step is described in further detail below.
Step 1, determining input data
The device load signals targeted by the load decomposition task fall into two types. The first is high-frequency sampled data, signal waveforms sampled thousands or tens of thousands of times per second, which generally capture the switching between steady and transient states of the appliance as well as its sustained load states. The second is low-frequency sampled data, mainly expressed as root-mean-square values of quantities such as current, voltage and power while the appliance operates.
Although non-intrusive load decomposition can extract more useful features from high-frequency sampled data and thus achieve better performance, high-frequency sampling requires an expensive high-frequency smart meter. To keep the method practical and widely applicable, the invention therefore processes only data under low-frequency sampling: the input to the adopted network is the total power recorded by the meter while the appliances operate, sampled 6 times per second. Since the network's goal is to reconstruct the load signal of the target device, the output data length equals the input data length, and to ensure the output can represent the operating states of different devices, the data length T is set to 512 seconds. Long aggregate load sequences are divided into input segments with a sliding window whose length equals the input data length.
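At 6 samples per second, a 512-second window holds 512 × 6 = 3072 samples. The sliding-window segmentation above can be sketched as follows; this is a minimal illustration, and the function name and the non-overlapping stride (our reading of "sliding window length equal to the input length") are assumptions:

```python
import numpy as np

def segment_aggregate_load(power, window_len=3072):
    """Split a long aggregate-power sequence into fixed-length windows.

    The stride equals the window length, so consecutive windows do not
    overlap; a trailing remainder shorter than one window is discarded.
    3072 samples = 512 s at 6 samples per second.
    """
    n = len(power) // window_len
    trimmed = np.asarray(power[: n * window_len], dtype=float)
    return trimmed.reshape(n, window_len)

# Example: six hours of mains power sampled at 6 Hz
mains = np.random.rand(6 * 3600 * 6)
windows = segment_aggregate_load(mains)
print(windows.shape)  # (42, 3072)
```

Each row of `windows` is then one input sequence for the network.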
Step 2, constructing the instance-batch normalization network
One instance-batch normalization (IBN) layer mainly comprises two parts, an instance normalization network and a batch normalization network; the specific structure is shown in FIG. 2. From the input end to the output end there are first three batch-normalization blocks connected in sequence: the first two each contain a convolutional layer followed by ReLU activation, while the third contains a convolutional layer without ReLU activation. The instance normalization layer follows: the input of the IBN layer is added to the output of the third batch-normalization block through a residual connection and fed into the instance normalization layer, and a final ReLU activation yields the output of the IBN layer.
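The two normalization types combined in IBN-Net differ only in the axes over which statistics are computed. The NumPy sketch below is our own illustration (not the patent's implementation; learnable scale and shift parameters are omitted) for 1-D feature maps of shape (batch, channels, time):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize per channel over batch and time: statistics are shared
    across all samples in the batch."""
    mean = x.mean(axis=(0, 2), keepdims=True)
    var = x.var(axis=(0, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    """Normalize over time only, separately per sample and per channel."""
    mean = x.mean(axis=2, keepdims=True)
    var = x.var(axis=2, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 8, 512)  # (batch, channels, time)
bn, inn = batch_norm(x), instance_norm(x)
# Instance norm zero-centres every (sample, channel) series individually;
# batch norm only guarantees zero mean per channel across the whole batch.
```

This is why instance normalization helps generalization across circuits (it removes per-sequence level shifts) while batch normalization preserves discriminative batch-level statistics.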
Step 3, determining the overall network structure
The network structure of the invention comprises two main parts, an encoder and a decoder, both built from the instance-batch normalization network constructed in step 2 and linked by skip connections.
Step 3.1: the encoder portion includes 7 layers of the example-batch normalization network, each layer of the network being followed by a maximum pooling layer except for the last layer, so that the number of nodes of each layer of the example-batch normalization network decreases from input to output. The role of the max-pooling layer is to reduce the dimensionality of the input data in time, thereby encouraging the network to learn higher-level features of the target device load signal. The last layer of example, a batch normalization network, is followed by a full connection layer; the output of the full connection layer is divided into two same distribution parameter matrixes which are respectively marked as mu and sigma, the two distribution parameter matrixes mu and sigma are the same and are both the output obtained by the full connection layer of the encoder, and the two distribution parameter matrixes are divided into two parts for carrying out the re-parameterization processing; randomly sampling the category distribution of the electric appliances to obtain a parameter matrix which is marked as epsilon; performing Hadamard product calculation on the parameter matrix epsilon and the distribution parameter matrix sigma, and connecting the distribution parameter matrix mu to form an output potential space matrix z of the encoder, wherein the relational expression between the output potential space matrix z and the distribution parameter matrix sigma is as follows:
z=μ+σ⊙ε
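The reparameterization relation above can be written directly. In a standard variational autoencoder ε is drawn from a standard normal distribution, whereas the patent obtains it by sampling the appliance category distribution; the sampling choice and the latent size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, sigma, rng):
    """z = mu + sigma ⊙ eps: a differentiable sample from the latent
    distribution (the elementwise product is the Hadamard product).

    eps is drawn from a standard normal here, the usual VAE choice; the
    patent samples the appliance category distribution instead.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

mu = np.zeros((1, 16))   # illustrative latent size
sigma = np.ones((1, 16))
z = reparameterize(mu, sigma, rng)
print(z.shape)  # (1, 16)
```

Because the randomness is isolated in ε, gradients can flow through μ and σ during training.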
step 3.2: the decoder mirrors the encoder and likewise comprises 7 instance-batch normalization layers; except for the last, each layer is followed by a convolutional layer and a deconvolution layer, gradually restoring the temporal dimension of the data so as to reconstruct the target device's load signal. The last instance-batch normalization layer is followed by a single convolutional layer. The latent matrix z is passed through a convolutional layer and fed into the decoder, whose final output is the load decomposition data y of the target device.
Step 4, determining evaluation index and training network
Step 4.1: the load decomposition method aims at load decomposition tasks of aggregated load signals in different circuit environments, a constructed network model needs to be trained by power data collected in a circuit, the adopted network is a regression network, and load decomposition is realized by reconstructing a load signal of target equipment, so that the standard of evaluation in the network training process is the Error between generated data and real data, the method adopts an evaluation index of Mean Absolute Error (MAE), and the smaller the numerical value is, the better the model decomposition effect is.
Since a device runs for only a small fraction of the time in a normal working environment, an additional metric MAE_ON is defined, which computes the mean absolute error only while the device is switched on, using a threshold δ determined per target device (the main selection criterion being the lowest power at which it operates):

MAE = (1/T) Σ_{t=1..T} |ŷ_t − y_t|,  MAE_ON = (1/N_ON) Σ_{t: y_t ≥ δ} |ŷ_t − y_t|

where T is the number of time nodes, ŷ_t and y_t are the predicted and real power at time node t, t ∈ [1, T], and N_ON is the number of time nodes at which the device is running.
step 4.2: meanwhile, to improve performance in reconstructing the load signals of multi-state devices, the model's ability to predict device states is measured with the state-based F1 score. In each time step, the device is considered to be in the ON state if its power exceeds a preset power threshold ρ. The threshold ρ is chosen per appliance type from representative power levels, for example 50, 2000, 200, 20 and 10 watts for refrigerators, kettles, microwave ovens, washing machines and dishwashers, respectively. Once the device states are determined, the F1 score is computed from the precision P and the recall R. With TP the number of correctly predicted positive samples, FP the number of negative samples predicted positive, and FN the number of positive samples predicted negative:

P = TP/(TP+FP),  R = TP/(TP+FN),  F1 = 2·P·R/(P+R)
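The evaluation quantities MAE, MAE_ON and the state-based F1 can be sketched directly in NumPy. The appliance values below are illustrative, and the guard-free divisions assume at least one predicted and one true ON sample:

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error over all time nodes."""
    return np.mean(np.abs(pred - true))

def mae_on(pred, true, delta):
    """MAE restricted to time nodes where the true power is at or above
    the appliance's on-threshold delta."""
    on = true >= delta
    return np.mean(np.abs(pred[on] - true[on]))

def f1_state(pred, true, rho):
    """State-based F1: a device is ON when its power exceeds threshold rho."""
    p_on, t_on = pred > rho, true > rho
    tp = np.sum(p_on & t_on)
    fp = np.sum(p_on & ~t_on)
    fn = np.sum(~p_on & t_on)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

true = np.array([0.0, 0.0, 2100.0, 2050.0, 0.0, 1980.0])  # e.g. a kettle
pred = np.array([10.0, 0.0, 2050.0, 1900.0, 30.0, 0.0])
print(mae(pred, true), f1_state(pred, true, rho=2000.0))  # MAE = 370.0 W, F1 = 2/3
```

The on-state error `mae_on(pred, true, 2000.0)` averages only the two ON nodes and equals 100 W here, much larger than the overall MAE would suggest for the ON periods alone.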
step 5, reconstructing a load signal of the target equipment and judging the running state information of the electric appliance
With the network model obtained by the above steps and training procedure, load decomposition can be performed on the aggregate appliance load signal. The network receives input power data and outputs a load signal of equal duration: the reconstructed load signal of the target device. The device's current operating state can be judged from this signal, and the device is determined to be running during the periods in which the signal exceeds a set threshold. From household meter data one can thus judge the operating state of known appliances and identify which appliances are running under the current aggregate load.
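The final on/off judgment on a reconstructed load signal reduces to thresholding plus locating the on-intervals. A minimal sketch follows; the interval-extraction helper is our addition, not part of the patent:

```python
import numpy as np

def on_intervals(load, threshold):
    """Return (start, end) index pairs of runs where the reconstructed
    load signal exceeds the appliance's on-threshold (end exclusive)."""
    on = np.asarray(load) > threshold
    # Detect rising/falling edges by padding with "off" on both sides
    edges = np.diff(np.concatenate(([False], on, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    return list(zip(starts, ends))

signal = [0, 0, 120, 130, 125, 0, 0, 110, 0]
print(on_intervals(signal, threshold=50))  # [(2, 5), (7, 8)]
```

Each returned pair marks a period during which the target device is judged to be running.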
In summary, the invention uses an instance-batch normalization network (IBN-Net) to build the two main components of the network, an encoder and a decoder. IBN-Net enhances the extraction of deep features from the aggregate load data, so that the encoder mapping information into the latent space and the decoder reconstructing the target device's load signal from the latent representation both perform better. Skip connections between the encoder and the decoder let the decoder obtain more information from the encoder's feature maps, improving its global view of the total power consumption and the reconstruction of the target device's load information. The regularized latent space provided by the variational autoencoder helps the network encode the relevant features of the aggregate load signal, so the network model generalizes better and reconstructs the load signals of multi-state devices more accurately.
Drawings
FIG. 1 is a flow chart of the load decomposition method based on the variational autoencoder of the present invention.
FIG. 2 is a block diagram of the instance-batch normalization network used by the present invention.
Fig. 3 is a diagram of a network architecture of the present invention.
Detailed Description
The technical solution of the invention is explained in detail below with reference to the drawings and an embodiment.
Embodiment:
the invention provides a load decomposition method based on a variational autoencoder; its flow chart is shown in FIG. 1, and it can be divided into the following steps:
step 1: determining the input data;
step 2: constructing the instance-batch normalization network;
step 3: determining the overall structure of the network;
step 4: determining the evaluation indices and training the network;
step 5: reconstructing the load signal of the target device and judging the operating state of the appliance.
the steps are further specifically described below.
1. Determining input data
The device load signals targeted by the load decomposition task fall into two types. The first is high-frequency sampled data, signal waveforms sampled thousands or tens of thousands of times per second, which generally capture the switching between steady and transient states of the appliance as well as its sustained load states. The second is low-frequency sampled data, mainly expressed as root-mean-square values of quantities such as current, voltage and power while the appliance operates.
Although non-intrusive load decomposition can extract more useful features from high-frequency sampled data and thus achieve better performance, high-frequency sampling requires an expensive high-frequency smart meter. To keep the method practical and widely applicable, the method therefore processes only data under low-frequency sampling: the input to the adopted network is the total power recorded by the meter while the appliances operate, sampled 6 times per second. Since the network's goal is to reconstruct the load signal of the target device, the output data length equals the input data length, and to ensure the output can represent the operating states of different devices, the data length T is set to 512 seconds. Long aggregate load sequences are divided into input segments with a sliding window whose length equals the input data length.
2. Constructing the instance-batch normalization network
The instance-batch normalization (IBN) network mainly integrates two parts, an instance normalization network and a batch normalization network; the specific structure is shown in FIG. 2. From input to output there are first three batch-normalization blocks connected in sequence: the first two each contain a convolutional layer and ReLU activation, while the third contains no ReLU activation. The input of the IBN layer is added to the output of the third batch-normalization block through a residual connection, fed into the instance normalization layer, and a final ReLU activation yields the output of the IBN layer. Instance normalization improves the generalization performance of the network, while batch normalization improves the discriminative power of the learned features, giving the encoder more relevant features to map into the latent space.
3. Determining the overall network structure
The network structure comprises two main parts, an encoder and a decoder, both built from the instance-batch normalization network constructed in step 2 and linked by skip connections.
The encoder comprises 7 instance-batch normalization layers, each followed by a max-pooling layer except the last, so that the number of nodes per layer decreases from input to output. The max-pooling layers reduce the temporal dimension of the input data, encouraging the network to learn higher-level features of the target device's load signal. The last instance-batch normalization layer is followed by a fully connected layer whose output is split into two identical distribution parameter matrices μ and σ, both produced by the encoder's fully connected layer; the split into two parts serves the reparameterization step. The parameter matrix ε is obtained by randomly sampling the appliance category distribution; the Hadamard product of ε and σ is computed and added to μ to form the encoder's output latent matrix z, according to the relation:
z=μ+σ⊙ε
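Assuming each max-pooling layer halves the temporal dimension (the pooling factor is not stated in the patent, so factor 2 is our assumption), the six pooled encoder layers shrink a 3072-sample window (512 s at 6 samples per second) to 3072 / 2^6 = 48 time steps before the fully connected layer:

```python
def encoder_time_steps(input_len, n_pooled_layers=6, pool=2):
    """Temporal length remaining after the pooled IBN layers.

    The pooling factor is an assumption; the patent only states that
    6 of the 7 encoder layers are followed by max-pooling."""
    for _ in range(n_pooled_layers):
        input_len //= pool
    return input_len

print(encoder_time_steps(3072))  # 48
```

This geometric reduction is what forces the latent matrix z to summarize the window rather than copy it.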
the decoder part is similar to the encoder and also comprises 7 layers of example-batch processing normalization networks, except the last layer, each layer of network is followed by a convolution layer and a deconvolution layer, so that the time dimension of data is gradually recovered, and the aim of reconstructing a load signal of a target device is fulfilled. And finally, the example of the last layer, namely the batch processing normalized network, is only connected with one convolution layer, then passes through one convolution layer, the potential space matrix z is input into the encoder, and the final output is obtained through the decoder part and is the load decomposition data y of the target equipment.
4. Determining the evaluation indices and training the network
The method targets load decomposition of aggregate load signals in different circuit environments, so the constructed network model must be trained with power data collected in the circuit. Because the network is a regression network that realizes load decomposition by reconstructing the target device's load signal, the evaluation criterion during training is the error between generated and real data; the method adopts the mean absolute error (MAE), where a smaller value indicates a better decomposition. Since a device runs for only a short time in a normal working environment, an additional metric MAE_ON is defined, which computes the mean absolute error only while the device is switched on, using a threshold δ determined per target device (the main selection criterion being the lowest power at which it operates):

MAE = (1/T) Σ_{t=1..T} |ŷ_t − y_t|,  MAE_ON = (1/N_ON) Σ_{t: y_t ≥ δ} |ŷ_t − y_t|

where T is the number of time nodes, ŷ_t and y_t are the predicted and real power at time node t, t ∈ [1, T], and N_ON is the number of time nodes at which the device is running.
Meanwhile, to improve performance in reconstructing the load signals of multi-state devices, the model's ability to predict device states is measured with the state-based F1 score. In each time step, the device is considered to be in the ON state if its power exceeds a preset power threshold ρ, chosen per appliance type from representative power levels, for example 50, 2000, 200, 20 and 10 watts for refrigerators, kettles, microwave ovens, washing machines and dishwashers, respectively. Once the device states are determined, the F1 score is computed from the precision P and the recall R, where TP is the number of correctly predicted positive samples, FP the number of negative samples predicted positive, and FN the number of positive samples predicted negative:

P = TP/(TP+FP),  R = TP/(TP+FN),  F1 = 2·P·R/(P+R)
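The per-appliance power thresholds quoted above can be kept in a small lookup table; the threshold values are those listed in the description, while the names and structure are our illustration:

```python
# Representative on-thresholds in watts, as listed in the description
POWER_THRESHOLD_W = {
    "fridge": 50,
    "kettle": 2000,
    "microwave": 200,
    "washing machine": 20,
    "dishwasher": 10,
}

def device_state(power_w, appliance):
    """ON if instantaneous power exceeds the appliance's threshold."""
    return power_w > POWER_THRESHOLD_W[appliance]

print(device_state(2100, "kettle"), device_state(15, "washing machine"))  # True False
```

The thresholds differ by two orders of magnitude across appliances, which is why a single global threshold would misclassify low-power devices such as dishwashers.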
5. Reconstructing the target device's load signal and judging the appliance's operating state
With the network model obtained by the above steps and training procedure, load decomposition can be performed on the aggregate appliance load signal. The network receives input power data and outputs a load signal of equal duration: the reconstructed load signal of the target device. The device's current operating state can be judged from this signal, and the device is determined to be running during the periods in which the signal exceeds a set threshold. From household meter data one can thus judge the operating state of known appliances and identify which appliances are running under the current aggregate load.
In this embodiment, the method is compared on the public power data set UK-DALE with the sequence-to-point (S2P) method proposed by Chaoyun Zhang et al. (Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, and Charles Sutton, "Sequence-to-point learning with neural networks for non-intrusive load monitoring"), including the accuracy of judging appliance states from the load decomposition results. As shown in Table 1, three indices are used for evaluation. The method of the invention performs excellently on most indices but still has shortcomings; a multi-task learning approach should be considered further to improve the network's generalization ability.
TABLE 1 comparison of load decomposition results
Claims (3)
1. A load decomposition method based on a variational autoencoder, characterized in that the two main components of the network are built with an instance-batch normalization network (IBN-Net): an encoder and a decoder; the IBN-Net is used to enhance the extraction of deep features from the aggregate load data, so that the encoder, which maps information into a latent space, and the decoder, which reconstructs the target device's load signal from the latent representation, perform better; meanwhile, skip connections are added between the encoder and the decoder, so that the decoder obtains more information from the encoder's feature maps, improving the decoder's global view of the total power consumption and the reconstruction of the target device's load information; the regularized latent space provided by the variational autoencoder is used by the network to encode the relevant features of the aggregate load signal, giving the network model stronger generalization ability and higher precision in reconstructing the load signals of multi-state devices; the method comprises the following specific steps:
step 1, determining input data
The device load signals targeted by the load decomposition task are divided into two types: first, high-frequency sampled data, mainly signal waveforms sampled thousands or tens of thousands of times per second, comprising the switching between steady and transient states of the appliance and its sustained load states; second, low-frequency sampled data, expressed as root-mean-square values of quantities such as current, voltage and power while the appliance operates;
aiming at a network adopted for data processing under low-frequency sampling, input data of the network is the total power recorded by an ammeter when an electric appliance operates, and the sampling number is 6 times per second; since the network achieves the goal of reconstructing the load signal of the target device, the output data length is equal to the input data length, and in order to ensure that the output data can represent the operating states of different devices, the data length T is set to 512 seconds; for aggregated load data of a longer time, dividing the aggregated load data sequence into input data in a sliding window mode, wherein the length of the sliding window is equal to that of the input data;
step 2, constructing an example-batch processing normalization network
One instance-batch normalization layer mainly comprises two parts, an instance normalization network and a batch normalization network, with the following specific structure: from the input end to the output end there are first three batch-normalization blocks connected in sequence; the first two each comprise a convolutional layer and ReLU activation, while the third comprises only a convolutional layer; the input of the instance-batch normalization layer is added to the output of the third batch-normalization block through a residual connection and fed into the instance normalization layer, and ReLU activation then yields the output of the instance-batch normalization layer;
step 3, determining the whole structure of the network
The overall network structure is divided into two parts, an encoder and a decoder, whose main building blocks are the instance-batch normalization networks constructed in step 2; the two parts are connected stage by stage; wherein:
the encoder comprises 7 instance-batch normalization layers, each followed by a max-pooling layer except the last, so that the number of nodes decreases from the input to the output; the max-pooling layers reduce the temporal dimension of the data, encouraging the network to learn higher-level features of the target device's load signal; the last instance-batch normalization layer is followed by a fully connected layer, whose output is split into two distribution parameter matrices of the same size, denoted μ and σ; a parameter matrix, denoted ε, is obtained by randomly sampling the appliance category distribution; the Hadamard product of ε and σ is computed and added to the distribution parameter matrix μ to form the encoder's output latent space matrix z, with the relation:
z=μ+σ⊙ε
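The relation z = μ + σ⊙ε is the standard reparameterization trick; a minimal NumPy sketch follows (predicting log σ rather than σ is a numerical-stability assumption of this sketch, not something stated in the patent):

```python
import numpy as np

def reparameterize(mu, log_sigma, rng):
    """Reparameterization: z = mu + sigma ⊙ eps, where eps is obtained
    by random sampling, so z is stochastic while gradients can still
    flow through mu and sigma during training."""
    eps = rng.standard_normal(mu.shape)   # sampled parameter matrix ε
    return mu + np.exp(log_sigma) * eps   # Hadamard product, added to mu

rng = np.random.default_rng(0)
mu = np.zeros((2, 16))
z = reparameterize(mu, np.full((2, 16), -20.0), rng)  # sigma ≈ 0, so z ≈ mu
```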
the decoder mirrors the encoder and likewise comprises 7 instance-batch normalization layers; each layer except the last is followed by a convolution layer and a deconvolution layer, so that the temporal dimension of the data is gradually restored and the load signal of the target device is reconstructed; the last instance-batch normalization layer is followed by a single convolution layer; the latent space matrix z is fed into the decoder, whose final output is the target device load decomposition data y;
step 4, determining evaluation index and training network
Step 4.1: since the load decomposition task is performed on aggregated load signals from different circuit environments, the constructed network model must be trained with power data collected in the circuit; because the adopted network is a regression network that realizes load decomposition by reconstructing the target device's load signal, the error between generated data and real data is adopted as the evaluation criterion during training, specifically the mean absolute error (MAE): the smaller its value, the better the model's decomposition effect;
in addition, an extra metric MAE_ON is defined, which uses a threshold δ to compute the mean absolute error only over the periods when the device is turned on; let T be the number of time nodes, ŷ_t and y_t the predicted and the real power at time node t, with t ∈ [1, T], and N_ON the set of time nodes at which the device is running; then:

MAE = (1/T) Σ_{t=1}^{T} |ŷ_t − y_t|

MAE_ON = (1/|N_ON|) Σ_{t∈N_ON} |ŷ_t − y_t|
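A minimal NumPy implementation of the two error metrics, assuming N_ON is taken to be the time nodes where the real power exceeds δ (consistent with claim 2's choice of δ as the lowest operating power):

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error over all T time nodes."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(true))))

def mae_on(pred, true, delta):
    """MAE restricted to N_ON, the time nodes where the real power
    exceeds the on-threshold delta, i.e. the device is running."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    on = true > delta
    return float(np.mean(np.abs(pred[on] - true[on])))

true = np.array([0.0, 0.0, 100.0, 100.0])
pred = np.array([0.0, 10.0, 90.0, 110.0])
err = mae(pred, true)             # (0 + 10 + 10 + 10) / 4 = 7.5
err_on = mae_on(pred, true, 5.0)  # (10 + 10) / 2 = 10.0
```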
Step 4.2: meanwhile, to improve the network's performance in reconstructing the load signals of multi-state devices, the model's ability to predict device states is measured with the state-based F1 score; at each time step, the device is considered to be in the ON state if its power exceeds a preset power threshold ρ; once the states are determined, the F1 score is computed from the precision P and the recall R; let TP be the number of correctly predicted positive samples, FP the number of negative samples predicted as positive, and FN the number of positive samples predicted as negative; then:

P = TP/(TP+FP), R = TP/(TP+FN), F1 = 2PR/(P+R)
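The state-based F1 score can be sketched in plain Python as follows (the zero-division fallbacks are assumptions of this sketch):

```python
def f1_score(pred_power, true_power, rho):
    """State-based F1: a device is ON at a time step when its power
    exceeds rho; P = TP/(TP+FP), R = TP/(TP+FN), F1 = 2PR/(P+R)."""
    pred_on = [p > rho for p in pred_power]
    true_on = [p > rho for p in true_power]
    tp = sum(p and t for p, t in zip(pred_on, true_on))
    fp = sum(p and not t for p, t in zip(pred_on, true_on))
    fn = sum(t and not p for p, t in zip(pred_on, true_on))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# one true positive, one false positive, one false negative → P = R = F1 = 0.5
score = f1_score([0, 60, 60, 0], [0, 60, 0, 60], rho=50)
```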
step 5, reconstructing a load signal of the target equipment and judging the running state information of the electric appliance
The network model obtained in the above steps performs load decomposition on the aggregated appliance load signal; the network receives input power data and outputs a load signal of equal time length, which is the reconstructed load signal of the target device; from this signal the current operating state of the device can be judged, and any period in which the signal exceeds the set threshold is taken as a period in which the device is running.
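To illustrate this final judgment step, a sketch that turns a reconstructed load signal into operating intervals by thresholding (the function name and the half-open interval convention are assumptions of this sketch):

```python
import numpy as np

def run_intervals(load, rho):
    """Return half-open index intervals [start, end) during which the
    reconstructed target-device load exceeds the threshold rho, i.e.
    the periods in which the device is judged to be running."""
    on = (np.asarray(load) > rho).astype(int)
    padded = np.concatenate(([0], on, [0]))  # sentinel off-states at both ends
    edges = np.flatnonzero(np.diff(padded))  # positions of rise/fall transitions
    return list(zip(edges[::2], edges[1::2]))

# two runs: samples 2-3 and sample 5 exceed the 50 W threshold
intervals = run_intervals([0, 0, 120, 130, 0, 80, 0], rho=50)
```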
2. The method of claim 1, wherein the threshold δ in step 4.1 is determined according to the target device, and the selection criterion is the lowest power at which the target device operates.
3. The method of claim 1, wherein the power threshold ρ in step 4.2 is determined according to the type of appliance and its typical power level.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111384868.2A CN114091536A (en) | 2021-11-19 | 2021-11-19 | Load decomposition method based on variational self-encoder |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114091536A true CN114091536A (en) | 2022-02-25 |
Family
ID=80302628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111384868.2A Pending CN114091536A (en) | 2021-11-19 | 2021-11-19 | Load decomposition method based on variational self-encoder |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114091536A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021208516A1 (en) * | 2020-04-17 | 2021-10-21 | 贵州电网有限责任公司 | Non-intrusive load disaggregation method |
CN112215405A (en) * | 2020-09-23 | 2021-01-12 | 国网甘肃省电力公司营销服务中心 | Non-invasive type residential electricity load decomposition method based on DANN domain adaptive learning |
CN113393025A (en) * | 2021-06-07 | 2021-09-14 | 浙江大学 | Non-invasive load decomposition method based on Informer model coding structure |
Non-Patent Citations (2)
Title |
---|
ANTOINE LANGEVIN ET AL.: "Energy disaggregation using variational autoencoders", 《ENERGY & BUILDINGS》 * |
MIN XIA ET AL.: "Non-intrusive load disaggregation based on deep dilated residual network", 《ELECTRIC POWER SYSTEMS RESEARCH》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114511159A (en) * | 2022-04-20 | 2022-05-17 | 广东电网有限责任公司佛山供电局 | Power load probability prediction method and system based on conditional variational self-encoder |
CN114511159B (en) * | 2022-04-20 | 2022-07-12 | 广东电网有限责任公司佛山供电局 | Power load probability prediction method and system based on conditional variational self-encoder |
CN115018011A (en) * | 2022-07-19 | 2022-09-06 | 深圳江行联加智能科技有限公司 | Power load type identification method, device, equipment and storage medium |
CN115018011B (en) * | 2022-07-19 | 2022-11-29 | 深圳江行联加智能科技有限公司 | Power load type identification method, device, equipment and storage medium |
WO2024007849A1 (en) * | 2023-04-26 | 2024-01-11 | 之江实验室 | Distributed training container scheduling for intelligent computing |
CN117472591A (en) * | 2023-12-27 | 2024-01-30 | 北京壁仞科技开发有限公司 | Method for data calculation, electronic device and storage medium |
CN117472591B (en) * | 2023-12-27 | 2024-03-22 | 北京壁仞科技开发有限公司 | Method for data calculation, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114091536A (en) | Load decomposition method based on variational self-encoder | |
CN113009349B (en) | Lithium ion battery health state diagnosis method based on deep learning model | |
CN108616120B (en) | Non-invasive power load decomposition method based on RBF neural network | |
CN112946484B (en) | SOC estimation method, system, terminal equipment and readable storage medium based on BP neural network | |
CN113406521B (en) | Lithium battery health state online estimation method based on feature analysis | |
CN113109715B (en) | Battery health condition prediction method based on feature selection and support vector regression | |
CN113064093A (en) | Energy storage battery state of charge and state of health joint estimation method and system | |
CN111639586B (en) | Non-invasive load identification model construction method, load identification method and system | |
CN111257754B (en) | Battery SOC robust evaluation method based on PLSTM sequence mapping | |
CN112684363A (en) | Lithium ion battery health state estimation method based on discharge process | |
CN113805138B (en) | Smart electric meter error estimation method and device based on directed parameter traversal | |
CN114740360A (en) | Lithium ion battery residual service life prediction method based on SAE-CEEMDAN-LSTM | |
CN108918928B (en) | Power signal self-adaptive reconstruction method in load decomposition | |
CN113076689B (en) | Battery state evaluation method based on automatic encoder | |
CN113689039A (en) | Lithium ion battery RUL time sequence prediction method | |
CN112287597B (en) | Lead-acid storage battery SOH estimation method based on VPGA-GPR algorithm | |
CN115859190A (en) | Non-invasive household electrical classification method based on causal relationship | |
CN116167654A (en) | Non-invasive load monitoring method based on multitasking learning | |
CN116754960A (en) | Cloud SOH estimation and residual life prediction method, device and program for sodium ion battery | |
CN114510992A (en) | Equipment switch state detection method based on deep learning | |
Herath et al. | Comprehensive analysis of convolutional neural network models for non-instructive load monitoring | |
CN116031453A (en) | On-line estimation method for characteristic frequency impedance of proton exchange membrane fuel cell | |
Zheng et al. | Non-intrusive load monitoring based on the graph least squares reconstruction method | |
CN113920362A (en) | Non-invasive load decomposition method based on attention | |
Li et al. | Adaptive Fusion Feature Transfer Learning Method For NILM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20220225 |